diff --git a/static/docs/.nojekyll b/static/docs/.nojekyll new file mode 100644 index 0000000000000000000000000000000000000000..8b137891791fe96927ad78e64b0aad7bded08bdc --- /dev/null +++ b/static/docs/.nojekyll @@ -0,0 +1 @@ + diff --git a/static/docs/404.html b/static/docs/404.html new file mode 100644 index 0000000000000000000000000000000000000000..b7d16e7ab94dce2097726ce3c55dead1b1efc3ab --- /dev/null +++ b/static/docs/404.html @@ -0,0 +1,223 @@ + + + + + + + + +Page not found (404) • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +Content not found. Please use links in the navbar. + +
+ + + +
+ + + + +
+ + + + + + + + diff --git a/static/docs/CONTRIBUTING.html b/static/docs/CONTRIBUTING.html new file mode 100644 index 0000000000000000000000000000000000000000..32e2bbed79bb308d8ba5213360f4d3797b08ba74 --- /dev/null +++ b/static/docs/CONTRIBUTING.html @@ -0,0 +1,260 @@ + + + + + + + + +Contributing to torch • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +
+ +

This document outlines how to propose a change to torch. For more detailed information about contributing to this and other tidyverse packages, please see the development contributing guide.

+
+

+Fixing typos

+

You can fix typos, spelling mistakes, or grammatical errors in the documentation directly using the GitHub web interface, as long as the changes are made in the source file. This generally means you’ll need to edit roxygen2 comments in an .R file, not an .Rd file. You can find the .R file that generates the .Rd by reading the comment in its first line.
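For reference, roxygen2 documentation lives in comment lines starting with `#'` directly above the function in the `.R` file. A minimal, hypothetical example (the function name and tags below are illustrative, not taken from the package):

```r
#' Add two numbers
#'
#' A tiny example of a roxygen2 comment block: the tags below
#' generate the .Rd file, so edits belong here, not in man/.
#'
#' @param x,y Numeric values to add.
#' @return The sum of `x` and `y`.
#' @export
add_numbers <- function(x, y) {
  x + y
}
```

Running `roxygen2::roxygenise()` (or, in this repo, the custom document script) then regenerates the corresponding `.Rd` file.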

+

See also the Documentation section below.

+
+
+

+Filing bugs

+

If you find a bug in torch, please open an issue here with detailed information on how to reproduce it. It would be great to also provide a reprex.

+
+
+

+Feature requests

+

Feel free to open issues here and add the feature-request tag. First search whether there’s already an open issue for your feature request; if so, it’s better to comment on or upvote it instead of opening a new one.

+
+
+

+Examples

+

We welcome contributed examples. Feel free to open a PR with new examples; they should be placed in the vignettes/examples folder.

+

Each example should consist of an .R file and an .Rmd file with the same name; the .Rmd simply renders the code.
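A minimal sketch of such an .Rmd wrapper, assuming the paired file is named `mnist-mlp.R` (knitr can pull the chunk code from the file via the `code` chunk option; the header fields here are illustrative):

````md
---
title: "mnist-mlp"
---

```{r, code = readLines("mnist-mlp.R"), eval = FALSE}
```
````

This keeps the code in a single .R source of truth while letting pkgdown render the .Rmd as an article.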

+

See mnist-mlp.R and mnist-mlp.Rmd for a reference.

+

It must be possible to run the example without manually downloading any dataset or file. You should also add an entry to the _pkgdown.yaml file.
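A hedged sketch of what the corresponding `_pkgdown.yaml` entry might look like (the section title and path below are assumptions based on typical pkgdown article listings, not the repository's actual file):

```yaml
articles:
  - title: Examples
    contents:
      - examples/mnist-mlp
```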

+
+
+

+Code contributions

+

We have many open issues in the GitHub repo. If there’s an item you want to work on, you can comment on it and ask for directions.

+
+
+

+Documentation

+

We use roxygen2 to generate the documentation. To update the docs, edit the corresponding file in the R directory. To regenerate and preview the docs, use the custom tools/document.R script, as we need to patch roxygen2 to avoid running the examples on CRAN.

+
+
+ +
+ + + +
+ + + + +
+ + + + + + + + diff --git a/static/docs/LICENSE-text.html b/static/docs/LICENSE-text.html new file mode 100644 index 0000000000000000000000000000000000000000..df7874b73c33e746100d584e0a60bad77de6758c --- /dev/null +++ b/static/docs/LICENSE-text.html @@ -0,0 +1,225 @@ + + + + + + + + +License • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +
YEAR: 2020
+COPYRIGHT HOLDER: Daniel Falbel
+
+ +
+ + + +
+ + + + +
+ + + + + + + + diff --git a/static/docs/LICENSE.html b/static/docs/LICENSE.html new file mode 100644 index 0000000000000000000000000000000000000000..4ab810144163105ae0a5e4039a54e6fc0bbcb99b --- /dev/null +++ b/static/docs/LICENSE.html @@ -0,0 +1,229 @@ + + + + + + + + +MIT License • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +
+ +

Copyright (c) 2020 Daniel Falbel

+

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

+

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

+

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

+
+ +
+ + + +
+ + + + +
+ + + + + + + + diff --git a/static/docs/articles/examples/mnist-cnn.html b/static/docs/articles/examples/mnist-cnn.html new file mode 100644 index 0000000000000000000000000000000000000000..8c102afbd76efd8e1f23f2d48283a0e3a160eaf3 --- /dev/null +++ b/static/docs/articles/examples/mnist-cnn.html @@ -0,0 +1,259 @@ + + + + + + + +mnist-cnn • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
dir <- "~/Downloads/mnist"
+
+ds <- mnist_dataset(
+  dir,
+  download = TRUE,
+  transform = function(x) {
+    x <- x$to(dtype = torch_float())/256
+    x[newaxis,..]
+  }
+)
+dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
+
+net <- nn_module(
+  "Net",
+  initialize = function() {
+    self$conv1 <- nn_conv2d(1, 32, 3, 1)
+    self$conv2 <- nn_conv2d(32, 64, 3, 1)
+    self$dropout1 <- nn_dropout2d(0.25)
+    self$dropout2 <- nn_dropout2d(0.5)
+    self$fc1 <- nn_linear(9216, 128)
+    self$fc2 <- nn_linear(128, 10)
+  },
+  forward = function(x) {
+    x <- self$conv1(x)
+    x <- nnf_relu(x)
+    x <- self$conv2(x)
+    x <- nnf_relu(x)
+    x <- nnf_max_pool2d(x, 2)
+    x <- self$dropout1(x)
+    x <- torch_flatten(x, start_dim = 2)
+    x <- self$fc1(x)
+    x <- nnf_relu(x)
+    x <- self$dropout2(x)
+    x <- self$fc2(x)
+    output <- nnf_log_softmax(x, dim=1)
+    output
+  }
+)
+
+model <- net()
+optimizer <- optim_sgd(model$parameters, lr = 0.01)
+
+epochs <- 10
+
+for (epoch in seq_len(epochs)) {
+
+  pb <- progress::progress_bar$new(
+    total = length(dl),
+    format = "[:bar] :eta Loss: :loss"
+  )
+  l <- c()
+
+  for (b in enumerate(dl)) {
+    optimizer$zero_grad()
+    output <- model(b[[1]])
+    loss <- nnf_nll_loss(output, b[[2]])
+    loss$backward()
+    optimizer$step()
+    l <- c(l, loss$item())
+    pb$tick(tokens = list(loss = mean(l)))
+  }
+
+  cat(sprintf("Loss at epoch %d: %.3f\n", epoch, mean(l)))
+}
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/examples/mnist-cnn_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/examples/mnist-cnn_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/examples/mnist-cnn_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/examples/mnist-dcgan.html b/static/docs/articles/examples/mnist-dcgan.html new file mode 100644 index 0000000000000000000000000000000000000000..a7a103c5d56851d1fc1fd3615b0922e3c62c964d --- /dev/null +++ b/static/docs/articles/examples/mnist-dcgan.html @@ -0,0 +1,340 @@ + + + + + + + +mnist-dcgan • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
library(torch)
+
+dir <- "~/Downloads/mnist"
+
+ds <- mnist_dataset(
+  dir,
+  download = TRUE,
+  transform = function(x) {
+    x <- x$to(dtype = torch_float())/256
+    x <- 2*(x - 0.5)
+    x[newaxis,..]
+  }
+)
+dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
+
+generator <- nn_module(
+  "generator",
+  initialize = function(latent_dim, out_channels) {
+    self$main <- nn_sequential(
+      nn_conv_transpose2d(latent_dim, 512, kernel_size = 4,
+                          stride = 1, padding = 0, bias = FALSE),
+      nn_batch_norm2d(512),
+      nn_relu(),
+      nn_conv_transpose2d(512, 256, kernel_size = 4,
+                          stride = 2, padding = 1, bias = FALSE),
+      nn_batch_norm2d(256),
+      nn_relu(),
+      nn_conv_transpose2d(256, 128, kernel_size = 4,
+                          stride = 2, padding = 1, bias = FALSE),
+      nn_batch_norm2d(128),
+      nn_relu(),
+      nn_conv_transpose2d(128, out_channels, kernel_size = 4,
+                          stride = 2, padding = 3, bias = FALSE),
+      nn_tanh()
+    )
+  },
+  forward = function(input) {
+    self$main(input)
+  }
+)
+
+discriminator <- nn_module(
+  "discriminator",
+  initialize = function(in_channels) {
+    self$main <- nn_sequential(
+      nn_conv2d(in_channels, 16, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
+      nn_leaky_relu(0.2, inplace = TRUE),
+      nn_conv2d(16, 32, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
+      nn_batch_norm2d(32),
+      nn_leaky_relu(0.2, inplace = TRUE),
+      nn_conv2d(32, 64, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
+      nn_batch_norm2d(64),
+      nn_leaky_relu(0.2, inplace = TRUE),
+      nn_conv2d(64, 128, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
+      nn_leaky_relu(0.2, inplace = TRUE)
+    )
+    self$linear <- nn_linear(128, 1)
+    self$sigmoid <- nn_sigmoid()
+  },
+  forward = function(input) {
+    x <- self$main(input)
+    x <- torch_flatten(x, start_dim = 2)
+    x <- self$linear(x)
+    self$sigmoid(x)
+  }
+)
+
+plot_gen <- function(noise) {
+  img <- G(noise)
+  img <- img$cpu()
+  img <- img[1,1,,,newaxis]/2 + 0.5
+  img <- torch_stack(list(img, img, img), dim = 2)[..,1]
+  img <- as.raster(as_array(img))
+  plot(img)
+}
+
+device <- torch_device(ifelse(cuda_is_available(),  "cuda", "cpu"))
+
+G <- generator(latent_dim = 100, out_channels = 1)
+D <- discriminator(in_channels = 1)
+
+init_weights <- function(m) {
+  if (grepl("conv", m$.classes[[1]])) {
+    nn_init_normal_(m$weight$data(), 0.0, 0.02)
+  } else if (grepl("batch_norm", m$.classes[[1]])) {
+    nn_init_normal_(m$weight$data(), 1.0, 0.02)
+    nn_init_constant_(m$bias$data(), 0)
+  }
+}
+
+G[[1]]$apply(init_weights)
+D[[1]]$apply(init_weights)
+
+G$to(device = device)
+D$to(device = device)
+
+G_optimizer <- optim_adam(G$parameters, lr = 2 * 1e-4, betas = c(0.5, 0.999))
+D_optimizer <- optim_adam(D$parameters, lr = 2 * 1e-4, betas = c(0.5, 0.999))
+
+fixed_noise <- torch_randn(1, 100, 1, 1, device = device)
+
+loss <- nn_bce_loss()
+
+for (epoch in 1:10) {
+
+  pb <- progress::progress_bar$new(
+    total = length(dl),
+    format = "[:bar] :eta Loss D: :lossd Loss G: :lossg"
+  )
+  lossg <- c()
+  lossd <- c()
+
+  for (b in enumerate(dl)) {
+
+    y_real <- torch_ones(32, device = device)
+    y_fake <- torch_zeros(32, device = device)
+
+    noise <- torch_randn(32, 100, 1, 1, device = device)
+    fake <- G(noise)
+
+    img <- b[[1]]$to(device = device)
+
+    # train the discriminator ---
+    D_loss <- loss(D(img), y_real) + loss(D(fake$detach()), y_fake)
+
+    D_optimizer$zero_grad()
+    D_loss$backward()
+    D_optimizer$step()
+
+    # train the generator ---
+
+    G_loss <- loss(D(fake), y_real)
+
+    G_optimizer$zero_grad()
+    G_loss$backward()
+    G_optimizer$step()
+
+    lossd <- c(lossd, D_loss$item())
+    lossg <- c(lossg, G_loss$item())
+    pb$tick(tokens = list(lossd = mean(lossd), lossg = mean(lossg)))
+  }
+  plot_gen(fixed_noise)
+
+  cat(sprintf("Epoch %d - Loss D: %.3f Loss G: %.3f\n", epoch, mean(lossd), mean(lossg)))
+}
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/examples/mnist-dcgan_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/examples/mnist-dcgan_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/examples/mnist-dcgan_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/examples/mnist-mlp.html b/static/docs/articles/examples/mnist-mlp.html new file mode 100644 index 0000000000000000000000000000000000000000..df53285e9cd0f55ede1887148ea1c8e1f9ef74bc --- /dev/null +++ b/static/docs/articles/examples/mnist-mlp.html @@ -0,0 +1,247 @@ + + + + + + + +mnist-mlp • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
dir <- "~/Downloads/mnist"
+
+ds <- mnist_dataset(
+  dir,
+  download = TRUE,
+  transform = function(x) {
+    x$to(dtype = torch_float())/256
+  }
+)
+dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
+
+net <- nn_module(
+  "Net",
+  initialize = function() {
+    self$fc1 <- nn_linear(784, 128)
+    self$fc2 <- nn_linear(128, 10)
+  },
+  forward = function(x) {
+    x %>%
+      torch_flatten(start_dim = 2) %>%
+      self$fc1() %>%
+      nnf_relu() %>%
+      self$fc2() %>%
+      nnf_log_softmax(dim = 1)
+  }
+)
+
+model <- net()
+optimizer <- optim_sgd(model$parameters, lr = 0.01)
+
+epochs <- 10
+
+for (epoch in seq_len(epochs)) {
+
+  pb <- progress::progress_bar$new(
+    total = length(dl),
+    format = "[:bar] :eta Loss: :loss"
+  )
+  l <- c()
+
+  for (b in enumerate(dl)) {
+    optimizer$zero_grad()
+    output <- model(b[[1]])
+    loss <- nnf_nll_loss(output, b[[2]])
+    loss$backward()
+    optimizer$step()
+    l <- c(l, loss$item())
+    pb$tick(tokens = list(loss = mean(l)))
+  }
+
+  cat(sprintf("Loss at epoch %d: %.3f\n", epoch, mean(l)))
+}
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/examples/mnist-mlp_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/examples/mnist-mlp_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/examples/mnist-mlp_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/extending-autograd.html b/static/docs/articles/extending-autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..05802b50daf77ef49b30610c97c9b02ac2cf7687 --- /dev/null +++ b/static/docs/articles/extending-autograd.html @@ -0,0 +1,265 @@ + + + + + + + +Extending Autograd • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + + +

Adding operations to autograd requires implementing a new autograd_function for each operation. Recall that autograd_functions are what autograd uses to compute the results and gradients, and encode the operation history. Every new function requires you to implement two methods:

+
    +
  • forward() - the code that performs the operation. It can take as many arguments as you want, some of them optional if you specify default values. All kinds of R objects are accepted here. Tensor arguments that track history (i.e., with requires_grad=TRUE) will be converted to ones that don’t track history before the call, and their use will be registered in the graph. Note that this logic won’t traverse lists or any other data structures and will only consider Tensors that are direct arguments to the call. You can return either a single Tensor output, or a list of Tensors if there are multiple outputs. Also, please refer to the docs of autograd_function for descriptions of useful methods that can be called only from forward().

  • +
  • backward() - the gradient formula. It will be given as many Tensor arguments as there were outputs, with each of them representing the gradient w.r.t. that output. It should return as many Tensors as there were Tensors that required gradients in forward, with each of them containing the gradient w.r.t. its corresponding input.

  • +
+
+

+Note

+

It’s the user’s responsibility to use the special functions in the forward’s ctx properly in order to ensure that the new autograd_function works properly with the autograd engine.

+
    +
  • save_for_backward() must be used when saving input or output of the forward to be used later in the backward.

  • +
  • mark_dirty() must be used to mark any input that is modified in-place by the forward function.

  • +
  • mark_non_differentiable() must be used to tell the engine if an output is not differentiable.

  • +
+
+
+

+Examples

+

Below you can find code for a linear function:

+
+linear <- autograd_function(
+  forward = function(ctx, input, weight, bias = NULL) {
+    ctx$save_for_backward(input = input, weight = weight, bias = bias)
+    output <- input$mm(weight$t())
+    if (!is.null(bias))
+      output <- output + bias$unsqueeze(0)$expand_as(output)
+    
+    output
+  },
+  backward = function(ctx, grad_output) {
+    
+    s <- ctx$saved_variables
+    
+    grads <- list(
+      input = NULL,
+      weight = NULL,
+      bias = NULL
+    )
+    
+    if (ctx$needs_input_grad$input)
+      grads$input <- grad_output$mm(s$weight)
+    
+    if (ctx$needs_input_grad$weight)
+      grads$weight <- grad_output$t()$mm(s$input)
+    
+    if (!is.null(s$bias) && ctx$needs_input_grad$bias)
+      grads$bias <- grad_output$sum(dim = 0)
+    
+    grads
+  }
+)
+
+

Here, we give an additional example of a function that is parametrized by non-Tensor arguments:

+
+mul_constant <- autograd_function(
+  forward = function(ctx, tensor, constant) {
+    ctx$save_for_backward(constant = constant)
+    tensor * constant
+  },
+  backward = function(ctx, grad_output) {
+    v <- ctx$saved_variables
+    list(
+      tensor = grad_output * v$constant
+    )
+  }
+)
+
+
+x <- torch_tensor(1, requires_grad = TRUE)
+o <- mul_constant(x, 2)
+o$backward()
+x$grad
+#> torch_tensor 
+#>  2
+#> [ CPUFloatType{1} ]
+
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/extending-autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/extending-autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/extending-autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/assets/mnist.png b/static/docs/articles/getting-started/assets/mnist.png new file mode 100644 index 0000000000000000000000000000000000000000..53c876a89d53ccb3ae4fb5167460e84248ad3672 Binary files /dev/null and b/static/docs/articles/getting-started/assets/mnist.png differ diff --git a/static/docs/articles/getting-started/autograd.html b/static/docs/articles/getting-started/autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..ea2410c48218b84cbb5080787904deceb58a3963 --- /dev/null +++ b/static/docs/articles/getting-started/autograd.html @@ -0,0 +1,350 @@ + + + + + + + +Autograd: automatic differentiation • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Soumith Chintala.

+
+ +

Central to all neural networks in torch is the autograd functionality. Let’s first briefly visit this, and we will then move on to training our first neural network.

+

Autograd provides automatic differentiation for all operations on Tensors. It is a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different.

+

Let us see this in simpler terms with some examples.

+
+

+Tensor

+

torch_tensor is the central class of the package. If you set its attribute $requires_grad to TRUE, it starts to track all operations on it. When you finish your computation you can call $backward() and have all the gradients computed automatically. The gradient for this tensor will be accumulated into the $grad attribute.

+

To stop a tensor from tracking history, you can call $detach() to detach it from the computation history, and to prevent future computation from being tracked.

+

To prevent tracking history (and using memory), you can also wrap the code block in with_no_grad({<code>}). This can be particularly helpful when evaluating a model because the model may have trainable parameters with requires_grad=TRUE, but for which we don’t need the gradients.

+

There’s one more class which is very important for the autograd implementation: autograd_function.

+

Tensor and Function are interconnected and build up an acyclic graph that encodes a complete history of computation. Each tensor has a $grad_fn attribute that references the autograd_function that created the Tensor (except for Tensors created by the user - their grad_fn is NULL).

+

If you want to compute the derivatives, you can call $backward() on a Tensor. If the tensor is a scalar (i.e., it holds a single element), you don’t need to specify any arguments to backward(); however, if it has more elements, you need to specify a gradient argument, a tensor of matching shape.

+

Create a tensor and set requires_grad=TRUE to track computation with it:

+
+x <- torch_ones(2, 2, requires_grad = TRUE)
+x
+#> torch_tensor 
+#>  1  1
+#>  1  1
+#> [ CPUFloatType{2,2} ]
+
+

Do a tensor operation:

+
+y <- x + 2
+y
+#> torch_tensor 
+#>  3  3
+#>  3  3
+#> [ CPUFloatType{2,2} ]
+
+

y was created as a result of an operation, so it has a grad_fn.

+
+y$grad_fn
+#> AddBackward1
+
+

Do more operations on y

+
+z <- y * y * 3
+z
+#> torch_tensor 
+#>  27  27
+#>  27  27
+#> [ CPUFloatType{2,2} ]
+out <- z$mean()
+out
+#> torch_tensor 
+#> 27
+#> [ CPUFloatType{} ]
+
+

$requires_grad_( ... ) changes an existing Tensor’s requires_grad flag in-place. The input flag defaults to FALSE if not given.

+
+a <- torch_randn(2, 2)
+a <- (a * 3) / (a - 1)
+a$requires_grad
+#> [1] FALSE
+a$requires_grad_(TRUE)
+#> torch_tensor 
+#> -0.4350  1.4882
+#> -0.5849  9.3457
+#> [ CPUFloatType{2,2} ]
+a$requires_grad
+#> [1] TRUE
+b <- (a * a)$sum()
+b$grad_fn
+#> SumBackward0
+
+
+
+

+Gradients

+

Let’s backprop now. Because out contains a single scalar, out$backward() is equivalent to out$backward(torch_tensor(1)).

+
+out$backward()
+
+

Print gradients d(out)/dx

+
+x$grad
+#> torch_tensor 
+#>  4.5000  4.5000
+#>  4.5000  4.5000
+#> [ CPUFloatType{2,2} ]
+
+

You should have gotten a matrix filled with 4.5. Let’s call the out Tensor \(o\).

+

We have that \(o = \frac{1}{4}\sum_i z_i\), \(z_i = 3(x_i+2)^2\) and \(z_i\bigr\rvert_{x_i=1} = 27\). Therefore, \(\frac{\partial o}{\partial x_i} = \frac{3}{2}(x_i+2)\), hence \(\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5\).

+

Mathematically, if you have a vector valued function \(\vec{y}=f(\vec{x})\), then the gradient of \(\vec{y}\) with respect to \(\vec{x}\) is a Jacobian matrix:

+

\[ + J=\left(\begin{array}{ccc} + \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\ + \vdots & \ddots & \vdots\\ + \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} + \end{array}\right) +\]

+

Generally speaking, autograd is an engine for computing vector-Jacobian product. That is, given any vector \(v=\left(\begin{array}{cccc} v_{1} & v_{2} & \cdots & v_{m}\end{array}\right)^{T}\), compute the product \(v^{T}\cdot J\). If \(v\) happens to be the gradient of a scalar function \(l=g\left(\vec{y}\right)\), that is, \(v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}\), then by the chain rule, the vector-Jacobian product would be the gradient of \(l\) with respect to \(\vec{x}\):

+

\[ + J^{T}\cdot v=\left(\begin{array}{ccc} + \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\ + \vdots & \ddots & \vdots\\ + \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} + \end{array}\right)\left(\begin{array}{c} + \frac{\partial l}{\partial y_{1}}\\ + \vdots\\ + \frac{\partial l}{\partial y_{m}} + \end{array}\right)=\left(\begin{array}{c} + \frac{\partial l}{\partial x_{1}}\\ + \vdots\\ + \frac{\partial l}{\partial x_{n}} + \end{array}\right) +\]

+

(Note that \(v^{T}\cdot J\) gives a row vector which can be treated as a column vector by taking \(J^{T}\cdot v\).)

+

This characteristic of vector-Jacobian product makes it very convenient to feed external gradients into a model that has non-scalar output.

+

Now let’s take a look at an example of vector-Jacobian product:

+
+x <- torch_randn(3, requires_grad=TRUE)
+y <- 100 * x
+y
+#> torch_tensor 
+#>  -50.4960
+#>  -28.4113
+#>  101.7135
+#> [ CPUFloatType{3} ]
+
+

Now in this case y is no longer a scalar. autograd cannot compute the full Jacobian directly, but if we just want the vector-Jacobian product, we simply pass the vector to backward() as an argument:

+
+v <- torch_tensor(c(0.1, 1.0, 0.0001))
+y$backward(v)
+
+x$grad
+#> torch_tensor 
+#>  1.0000e+01
+#>  1.0000e+02
+#>  1.0000e-02
+#> [ CPUFloatType{3} ]
+
+

You can also stop autograd from tracking history on Tensors with $requires_grad=TRUE, either by wrapping the code block in with_no_grad() or by using $detach():

+
+x$requires_grad
+#> [1] TRUE
+(x ** 2)$requires_grad
+#> [1] TRUE
+
+with_no_grad({
+  print((x ** 2)$requires_grad)
+})
+#> [1] FALSE
+
+
+x$requires_grad
+#> [1] TRUE
+y <- x$detach()
+y$requires_grad
+#> [1] FALSE
+x$eq(y)$all()
+#> torch_tensor 
+#> 1
+#> [ CPUBoolType{} ]
+
+

Read Later:

+

See help(autograd_function), vignette("using-autograd"), and vignette("extending-autograd").

+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/control-flow-and-weight-sharing.html b/static/docs/articles/getting-started/control-flow-and-weight-sharing.html new file mode 100644 index 0000000000000000000000000000000000000000..89a34409145381dd4425ef518ac5bd79d7df9a32 --- /dev/null +++ b/static/docs/articles/getting-started/control-flow-and-weight-sharing.html @@ -0,0 +1,292 @@ + + + + + + + +Control flow & Weight sharing • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

As an example of dynamic graphs and weight sharing, we implement a very strange model: a fully-connected ReLU network that on each forward pass chooses a random number between 1 and 4 and uses that many hidden layers, reusing the same weights multiple times to compute the innermost hidden layers.

+

For this model we can use normal R flow control to implement the loop, and we can implement weight sharing among the innermost layers by simply reusing the same Module multiple times when defining the forward pass.

+

We can easily implement this model using nn_module:

+
+dynamic_net <- nn_module(
+   "dynamic_net",
+   # In the constructor we construct three nn_linear instances that we will use
+   # in the forward pass.
+   initialize = function(D_in, H, D_out) {
+      self$input_linear <- nn_linear(D_in, H)
+      self$middle_linear <- nn_linear(H, H)
+      self$output_linear <- nn_linear(H, D_out)
+   },
+   # For the forward pass of the model, we randomly choose either 1, 2, 3, or 4
+   # and reuse the middle_linear Module that many times to compute hidden layer
+   # representations.
+   # 
+   # Since each forward pass builds a dynamic computation graph, we can use normal
+   # R control-flow operators like loops or conditional statements when
+   # defining the forward pass of the model.
+   # 
+   # Here we also see that it is perfectly safe to reuse the same Module many
+   # times when defining a computational graph. This is a big improvement from Lua
+   # Torch, where each Module could be used only once.
+   forward = function(x) {
+      h_relu <- self$input_linear(x)$clamp(min = 0)
+      for (i in seq_len(sample.int(4, size = 1))) {
+         h_relu <- self$middle_linear(h_relu)$clamp(min=0)
+      }
+      y_pred <- self$output_linear(h_relu)
+      y_pred
+   }
+)
+
+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+# Setting requires_grad=FALSE (the default) indicates that we do not need to 
+# compute gradients with respect to these Tensors during the backward pass.
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Construct our model by instantiating the class defined above
+model <- dynamic_net(D_in, H, D_out)
+
+# The nn package also contains definitions of popular loss functions; in this
+# case we will use Mean Squared Error (MSE) as our loss function.
+loss_fn <- nnf_mse_loss
+
+# Use the optim package to define an Optimizer that will update the weights of
+# the model for us. Here we will use SGD with momentum; the optim package
+# contains many other optimization algorithms. The first argument to the
+# optimizer constructor tells it which Tensors it should update.
+learning_rate <- 1e-4
+optimizer <- optim_sgd(model$parameters, lr=learning_rate, momentum = 0.9)
+
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y by passing x to the model. Module objects
+   # can be called like functions. When doing so you pass a Tensor of input
+   # data to the Module and it produces a Tensor of output data.
+   y_pred <- model(x)
+   
+   # Compute and print loss. We pass Tensors containing the predicted and true
+   # values of y, and the loss function returns a Tensor containing the
+   # loss.
+   loss <- loss_fn(y_pred, y)
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", as.numeric(loss), "\n")
+   
+   # Before the backward pass, use the optimizer object to zero all of the
+   # gradients for the variables it will update (which are the learnable
+   # weights of the model). This is because, by default, gradients are
+   # accumulated in buffers (i.e., not overwritten) whenever $backward()
+   # is called. Check out the docs of `autograd_backward` for more details.
+   optimizer$zero_grad()
+
+   # Backward pass: compute gradient of the loss with respect to model
+   # parameters
+   loss$backward()
+
+   # Calling the step function on an Optimizer makes an update to its
+   # parameters
+   optimizer$step()
+}
+#> Step: 1 : 1.054659 
+#> Step: 100 : 1.05705 
+#> Step: 200 : 1.048708 
+#> Step: 300 : 1.052647 
+#> Step: 400 : 1.042869 
+#> Step: 500 : 1.039991
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/control-flow-and-weight-sharing_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/control-flow-and-weight-sharing_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/control-flow-and-weight-sharing_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/custom-nn.html b/static/docs/articles/getting-started/custom-nn.html new file mode 100644 index 0000000000000000000000000000000000000000..671685baa502976c6a1e57318439c18ef641601e --- /dev/null +++ b/static/docs/articles/getting-started/custom-nn.html @@ -0,0 +1,276 @@ + + + + + + + +Custom nn modules • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

Sometimes you will want to specify models that are more complex than a sequence of existing Modules; for these cases you can define your own Modules using the nn_module function, defining a forward method that receives input Tensors and produces output Tensors using other modules or other autograd operations on Tensors.

+

In this example we implement our two-layer network as a custom Module subclass:

+
+two_layer_net <- nn_module(
+   "two_layer_net",
+   initialize = function(D_in, H, D_out) {
+      self$linear1 <- nn_linear(D_in, H)
+      self$linear2 <- nn_linear(H, D_out)
+   },
+   forward = function(x) {
+      x %>% 
+         self$linear1() %>% 
+         nnf_relu() %>% 
+         self$linear2()
+   }
+)
+
+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+# Setting requires_grad=FALSE (the default) indicates that we do not need to 
+# compute gradients with respect to these Tensors during the backward pass.
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Construct our model by instantiating the class defined above
+model <- two_layer_net(D_in, H, D_out)
+
+# The nn package also contains definitions of popular loss functions; in this
+# case we will use Mean Squared Error (MSE) as our loss function.
+loss_fn <- nnf_mse_loss
+
+# Use the optim package to define an Optimizer that will update the weights of
+# the model for us. Here we will use SGD; the optim package contains many
+# other optimization algorithms. The first argument to the optimizer
+# constructor tells it which Tensors it should update.
+learning_rate <- 1e-4
+optimizer <- optim_sgd(model$parameters, lr=learning_rate)
+
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y by passing x to the model. Module objects
+   # can be called like functions. When doing so you pass a Tensor of input
+   # data to the Module and it produces a Tensor of output data.
+   y_pred <- model(x)
+   
+   # Compute and print loss. We pass Tensors containing the predicted and true
+   # values of y, and the loss function returns a Tensor containing the
+   # loss.
+   loss <- loss_fn(y_pred, y)
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", as.numeric(loss), "\n")
+   
+   # Before the backward pass, use the optimizer object to zero all of the
+   # gradients for the variables it will update (which are the learnable
+   # weights of the model). This is because, by default, gradients are
+   # accumulated in buffers (i.e., not overwritten) whenever $backward()
+   # is called. Check out the docs of `autograd_backward` for more details.
+   optimizer$zero_grad()
+
+   # Backward pass: compute gradient of the loss with respect to model
+   # parameters
+   loss$backward()
+
+   # Calling the step function on an Optimizer makes an update to its
+   # parameters
+   optimizer$step()
+}
+#> Step: 1 : 1.04065 
+#> Step: 100 : 1.026708 
+#> Step: 200 : 1.013019 
+#> Step: 300 : 0.9996911 
+#> Step: 400 : 0.986709 
+#> Step: 500 : 0.9740159
+
+

In the next example we will learn about dynamic graphs in torch.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/custom-nn_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/custom-nn_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/custom-nn_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/neural-networks.html b/static/docs/articles/getting-started/neural-networks.html new file mode 100644 index 0000000000000000000000000000000000000000..9c40455a7655242b62260f8840edffbc81a1e8e8 --- /dev/null +++ b/static/docs/articles/getting-started/neural-networks.html @@ -0,0 +1,409 @@ + + + + + + + +Neural networks • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Soumith Chintala.

+
+ +

Neural networks can be constructed using the nn functionality.

+

Now that you have had a glimpse of autograd, note that nn depends on autograd to define models and differentiate them. An nn_module contains layers and a forward(input) method that returns the output.

+

For example, look at this network that classifies digit images:

+
+

Convnet for MNIST classification

+
+

It is a simple feed-forward network. It takes the input, feeds it through several layers one after the other, and then finally gives the output.

+

A typical training procedure for a neural network is as follows:

+
    +
  • Define the neural network that has some learnable parameters (or weights)
  • +
  • Iterate over a dataset of inputs
  • +
  • Process input through the network
  • +
  • Compute the loss (how far is the output from being correct)
  • +
  • Propagate gradients back into the network’s parameters
  • +
  • Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient.
  • +
+
+

+Define the network

+

Let’s define this network:

+
+Net <- nn_module(
+  initialize = function() {
+    self$conv1 = nn_conv2d(1, 6, 3)
+    self$conv2 = nn_conv2d(6, 16, 3)
+    # an affine operation: y = Wx + b
+    self$fc1 = nn_linear(16 * 6 * 6, 120)  # 6*6 from image dimension
+    self$fc2 = nn_linear(120, 84)
+    self$fc3 = nn_linear(84, 10)
+  },
+  forward = function(x) {
+    x %>% 
+      
+      self$conv1() %>% 
+      nnf_relu() %>% 
+      nnf_max_pool2d(c(2,2)) %>% 
+      
+      self$conv2() %>% 
+      nnf_relu() %>% 
+      nnf_max_pool2d(c(2,2)) %>% 
+      
+      torch_flatten(start_dim = 2) %>% 
+      
+      self$fc1() %>% 
+      nnf_relu() %>% 
+      
+      self$fc2() %>% 
+      nnf_relu() %>% 
+      
+      self$fc3()
+  }
+)
+
+net <- Net()
+
+

You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. You can use any of the Tensor operations in the forward function.

+
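As an illustration (this scale_net module is a hypothetical example, not part of the network above), forward can freely mix submodules with arbitrary tensor operations:

```r
library(torch)

# A hypothetical module whose forward pass combines a linear layer
# with arbitrary tensor operations (tanh and scaling).
scale_net <- nn_module(
  "scale_net",
  initialize = function() {
    self$fc <- nn_linear(4, 2)
  },
  forward = function(x) {
    # Any tensor operation is allowed here, not just submodule calls.
    torch_tanh(self$fc(x)) * 2
  }
)

net2 <- scale_net()
net2(torch_randn(3, 4))  # a 3 x 2 tensor with values in (-2, 2)
```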

The learnable parameters of a model are returned by net$parameters.

+
+str(net$parameters)
+#> List of 10
+#>  $ conv1.weight:Float [1:6, 1:1, 1:3, 1:3]
+#>  $ conv1.bias  :Float [1:6]
+#>  $ conv2.weight:Float [1:16, 1:6, 1:3, 1:3]
+#>  $ conv2.bias  :Float [1:16]
+#>  $ fc1.weight  :Float [1:120, 1:576]
+#>  $ fc1.bias    :Float [1:120]
+#>  $ fc2.weight  :Float [1:84, 1:120]
+#>  $ fc2.bias    :Float [1:84]
+#>  $ fc3.weight  :Float [1:10, 1:84]
+#>  $ fc3.bias    :Float [1:10]
+
+

Let’s try a random 32x32 input. Note: expected input size of this net (LeNet) is 32x32. To use this net on the MNIST dataset, please resize the images from the dataset to 32x32.

+
+input <- torch_randn(1, 1, 32, 32)
+out <- net(input)
+out
+#> torch_tensor 
+#> -0.0560  0.0916  0.0401 -0.1081 -0.0183 -0.0508  0.1250 -0.0574  0.0058  0.0025
+#> [ CPUFloatType{1,10} ]
+
+

Zero the gradient buffers of all parameters and backprops with random gradients:

+
+net$zero_grad()
+out$backward(torch_randn(1, 10))
+
+
+

Note: nn only supports mini-batches. The entire nn API supports only inputs that are a mini-batch of samples, not a single sample. For example, nn_conv2d takes in a 4D Tensor of nSamples x nChannels x Height x Width. If you have a single sample, just use input$unsqueeze(1) to add a fake batch dimension.

+
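For instance (a sketch, reusing the net defined above), a single image can be given a batch dimension like this:

```r
# A single 32x32 grayscale image: 3D tensor of shape (channels, H, W)
img <- torch_randn(1, 32, 32)

# unsqueeze(1) inserts a new dimension at position 1, producing the
# 4D (nSamples, nChannels, Height, Width) shape that nn modules expect.
batch <- img$unsqueeze(1)
batch$shape  # 1 1 32 32

out <- net(batch)  # now a valid mini-batch of size 1
```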
+

Before proceeding further, let’s recap all the classes you’ve seen so far.

+
+

+Recap

+
    +
  • torch_tensor - A multi-dimensional array with support for autograd operations like backward(). Also holds the gradient w.r.t. the tensor.

  • +
  • nn_module - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.

  • +
  • nn_parameter - A kind of Tensor, that is automatically registered as a parameter when assigned as an attribute to a Module.

  • +
  • autograd_function - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node that connects to functions that created a Tensor and encodes its history.

  • +
+
+
+

+At this point, we covered

+
    +
  • Defining a neural network
  • +
  • Processing inputs and calling backward
  • +
+
+
+

+Still left

+
    +
  • Computing the loss
  • +
  • Updating the weights of the network
  • +
+
+
+
+

+Loss function

+

A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.

+

There are several different loss functions under the nn package. A simple loss is nnf_mse_loss, which computes the mean squared error between the input and the target.

+

For example:

+
+output <- net(input)
+target <- torch_randn(10)  # a dummy target, for example
+target <- target$view(c(1, -1))  # make it the same shape as output
+
+loss <- nnf_mse_loss(output, target)
+loss
+#> torch_tensor 
+#> 0.388282
+#> [ CPUFloatType{} ]
+
+

Now, if you follow loss in the backward direction, using its $grad_fn attribute, you will see a graph of computations that looks like this:

+
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
+      -> view -> linear -> relu -> linear -> relu -> linear
+      -> MSELoss
+      -> loss
+

So, when we call loss$backward(), the whole graph is differentiated w.r.t. the loss, and all Tensors in the graph that have requires_grad=TRUE will have their $grad Tensor accumulated with the gradient.

+

For illustration, let us follow a few steps backward:

+
+loss$grad_fn
+#> MseLossBackward
+loss$grad_fn$next_functions[[1]]
+#> AddmmBackward
+loss$grad_fn$next_functions[[1]]$next_functions[[1]]
+#> torch::autograd::AccumulateGrad
+
+
+
+

+Backprop

+

To backpropagate the error all we have to do is call loss$backward(). You need to clear the existing gradients first, though; otherwise gradients will be accumulated on top of the existing ones.

+

Now we shall call loss$backward(), and have a look at conv1’s bias gradients before and after the backward.

+
+net$zero_grad()     # zeroes the gradient buffers of all parameters
+
+# conv1.bias.grad before backward
+net$conv1$bias$grad
+#> torch_tensor 
+#>  0
+#>  0
+#>  0
+#>  0
+#>  0
+#>  0
+#> [ CPUFloatType{6} ]
+
+loss$backward()
+
+# conv1.bias.grad after backward
+net$conv1$bias$grad
+#> torch_tensor 
+#> 0.001 *
+#>  2.3567
+#> -1.3589
+#> -0.6749
+#>  5.5939
+#> -4.2062
+#>  0.6161
+#> [ CPUFloatType{6} ]
+
+

Now, we have seen how to use loss functions.

+
+
+

+Update the weights

+

The simplest update rule used in practice is Stochastic Gradient Descent (SGD):

+

\[weight = weight - learning\_rate \times gradient\]

+

We can implement this using simple R code:

+
+learning_rate <- 0.01
+for (f in net$parameters) {
+  with_no_grad({
+    f$sub_(f$grad * learning_rate)
+  })
+}
+
+
+

Note: The weight updates here are wrapped in with_no_grad because we don't want the updates to be tracked by the autograd engine.

+
+

However, as you use neural networks, you will want to use various update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. The optim package implements these for you:

+
+# create your optimizer
+optimizer <- optim_sgd(net$parameters, lr = 0.01)
+
+# in your training loop:
+optimizer$zero_grad()   # zero the gradient buffers
+output <- net(input)
+loss <- nnf_mse_loss(output, target)
+loss$backward()
+optimizer$step()    # Does the update
+#> NULL
+
+
+

Note: Observe how gradient buffers had to be manually set to zero using optimizer$zero_grad(). This is because gradients are accumulated as explained in the Backprop section.

+
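A minimal standalone sketch of this accumulation behaviour (not using the network above):

```r
# Gradients accumulate across backward() calls until explicitly zeroed.
w <- torch_ones(2, requires_grad = TRUE)

(w * 3)$sum()$backward()
w$grad  # each entry is 3

(w * 3)$sum()$backward()
w$grad  # each entry is now 6: accumulated, not overwritten

w$grad$zero_()  # what optimizer$zero_grad() does for every parameter
```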
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/neural-networks_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/neural-networks_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/neural-networks_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/new-autograd-functions.html b/static/docs/articles/getting-started/new-autograd-functions.html new file mode 100644 index 0000000000000000000000000000000000000000..7b118773a0cfe232f3322afaec1308f95279d6a9 --- /dev/null +++ b/static/docs/articles/getting-started/new-autograd-functions.html @@ -0,0 +1,284 @@ + + + + + + + +Defining new autograd functions • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.

+

In torch we can easily define our own autograd operator by using autograd_function and implementing the forward and backward functions. We can then use our new autograd operator by calling it like a function, passing Tensors containing input data.

+

In this example we define our own custom autograd function for performing the ReLU nonlinearity, and use it to implement our two-layer network:

+
+# We can implement our own custom autograd Functions by subclassing
+# autograd_function and implementing the forward and backward passes
+# which operate on Tensors.
+my_relu <- autograd_function(
+   # In the forward pass we receive a Tensor containing the input and return
+   # a Tensor containing the output. ctx is a context object that can be used
+   # to stash information for backward computation. You can cache arbitrary
+   # objects for use in the backward pass using the ctx$save_for_backward method.
+   forward = function(ctx, input) {
+      ctx$save_for_backward(input = input)
+      input$clamp(min = 0)
+   },
+   # In the backward pass we receive a Tensor containing the gradient of the loss
+   # with respect to the output, and we need to compute the gradient of the loss
+   # with respect to the input.
+   backward = function(ctx, grad_output) {
+      v <- ctx$saved_variables
+      grad_input <- grad_output$clone()
+      grad_input[v$input < 0] <- 0
+      list(input = grad_input)
+   }
+)
+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+# Setting requires_grad=FALSE (the default) indicates that we do not need to 
+# compute gradients with respect to these Tensors during the backward pass.
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Randomly initialize weights
+# Setting requires_grad=TRUE indicates that we want to compute gradients with
+# respect to these Tensors during the backward pass.
+w1 <- torch_randn(D_in, H, device=device, requires_grad = TRUE)
+w2 <- torch_randn(H, D_out, device=device, requires_grad = TRUE)
+
+learning_rate <- 1e-6
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y using operations on Tensors; these
+   # are exactly the same operations we used to compute the forward pass using
+   # Tensors, but we do not need to keep references to intermediate values since
+   # we are not implementing the backward pass by hand.
+   y_pred <- my_relu(x$mm(w1))$mm(w2)
+   
+   # Compute and print loss using operations on Tensors.
+   # Now loss is a Tensor of shape (1,)
+   loss <- (y_pred - y)$pow(2)$sum()
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", as.numeric(loss), "\n")
+   
+   # Use autograd to compute the backward pass. This call will compute the
+   # gradient of loss with respect to all Tensors with requires_grad=TRUE.
+   # After this call w1$grad and w2$grad will be Tensors holding the gradient
+   # of the loss with respect to w1 and w2 respectively.
+   loss$backward()
+   
+   # Manually update weights using gradient descent. Wrap in `with_no_grad`
+   # because weights have requires_grad=TRUE, but we don't need to track this
+   # in autograd.
+   # You can also use optim_sgd to achieve this.
+   with_no_grad({
+      
+      # operations suffixed with an `_` operate in-place on the tensor.
+      w1$sub_(learning_rate * w1$grad)
+      w2$sub_(learning_rate * w2$grad)
+      
+      # Manually zero the gradients after updating weights
+      w1$grad$zero_()
+      w2$grad$zero_()
+   })
+}
+#> Step: 1 : 25332368 
+#> Step: 100 : 473.5124 
+#> Step: 200 : 2.001738 
+#> Step: 300 : 0.01279241 
+#> Step: 400 : 0.0002553065 
+#> Step: 500 : 4.130635e-05
+
+

In the next example we will learn how to use the neural networks abstractions in torch.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/new-autograd-functions_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/new-autograd-functions_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/new-autograd-functions_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/nn.html b/static/docs/articles/getting-started/nn.html new file mode 100644 index 0000000000000000000000000000000000000000..f2d0e7bc110a7ea6ed8d67a0deb8f664b5f050ad --- /dev/null +++ b/static/docs/articles/getting-started/nn.html @@ -0,0 +1,267 @@ + + + + + + + +nn: neural networks with torch • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however, for large neural networks raw autograd can be a bit too low-level.

+

When building neural networks we frequently think of arranging the computation into layers, some of which have learnable parameters which will be optimized during learning.

+

In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.

+

In torch, the nn functionality serves this same purpose. The nn feature defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but may also hold internal state such as Tensors containing learnable parameters. The nn collection also defines a set of useful loss functions that are commonly used when training neural networks.

+

In this example we use nn to implement our two-layer network:

+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+# Setting requires_grad=FALSE (the default) indicates that we do not need to 
+# compute gradients with respect to these Tensors during the backward pass.
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Use the nn package to define our model as a sequence of layers. nn_sequential
+# is a Module which contains other Modules, and applies them in sequence to
+# produce its output. Each Linear Module computes output from input using a
+# linear function, and holds internal Tensors for its weight and bias.
+model <- nn_sequential(
+    nn_linear(D_in, H),
+    nn_relu(),
+    nn_linear(H, D_out)
+)
+
+# The nn package also contains definitions of popular loss functions; in this
+# case we will use Mean Squared Error (MSE) as our loss function.
+loss_fn <- nnf_mse_loss
+
+learning_rate <- 1e-6
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y by passing x to the model. Module objects
+   # can be called like functions. When doing so you pass a Tensor of input
+   # data to the Module and it produces a Tensor of output data.
+   y_pred <- model(x)
+   
+   # Compute and print loss. We pass Tensors containing the predicted and true
+   # values of y, and the loss function returns a Tensor containing the
+   # loss.
+   loss <- loss_fn(y_pred, y)
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", as.numeric(loss), "\n")
+   
+   # Zero the gradients before running the backward pass.
+   model$zero_grad()
+
+   # Backward pass: compute gradient of the loss with respect to all the learnable
+   # parameters of the model. Internally, the parameters of each Module are stored
+   # in Tensors with requires_grad=TRUE, so this call will compute gradients for
+   # all learnable parameters in the model.
+   loss$backward()
+   
+   # Update the weights using gradient descent. Each parameter is a Tensor, so
+   # we can access its gradients like we did before.
+   with_no_grad({
+      for (param in model$parameters) {
+         param$sub_(learning_rate * param$grad)
+      }
+   })
+}
+#> Step: 1 : 1.04115 
+#> Step: 100 : 1.041026 
+#> Step: 200 : 1.040901 
+#> Step: 300 : 1.040776 
+#> Step: 400 : 1.04065 
+#> Step: 500 : 1.040525
+
+

In the next example we will learn how to use optimizers implemented in torch.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/nn_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/nn_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/nn_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/optim.html b/static/docs/articles/getting-started/optim.html new file mode 100644 index 0000000000000000000000000000000000000000..d5bcc6308a6f28f8b766ec2bf8fa624b8fbbf4e6 --- /dev/null +++ b/static/docs/articles/getting-started/optim.html @@ -0,0 +1,269 @@ + + + + + + + +optim: optimizers in torch • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

Up to this point we have updated the weights of our models by manually mutating the Tensors holding learnable parameters (with with_no_grad to avoid tracking history in autograd). This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers like AdaGrad, RMSProp, Adam, etc.

+

The optim package in torch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.

+

In this example we will use the nn package to define our model as before, but we will optimize the model using the Adam algorithm provided by optim:

+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+# Setting requires_grad=FALSE (the default) indicates that we do not need to 
+# compute gradients with respect to these Tensors during the backward pass.
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Use the nn package to define our model as a sequence of layers. nn_sequential
+# is a Module which contains other Modules, and applies them in sequence to
+# produce its output. Each Linear Module computes output from input using a
+# linear function, and holds internal Tensors for its weight and bias.
+model <- nn_sequential(
+    nn_linear(D_in, H),
+    nn_relu(),
+    nn_linear(H, D_out)
+)
+
+# The nn package also contains definitions of popular loss functions; in this
+# case we will use Mean Squared Error (MSE) as our loss function.
+loss_fn <- nnf_mse_loss
+
+# Use the optim package to define an Optimizer that will update the weights of
+# the model for us. Here we will use Adam; the optim package contains many other
+# optimization algorithms. The first argument to the Adam constructor tells the
+# optimizer which Tensors it should update.
+learning_rate <- 1e-4
+optimizer <- optim_adam(model$parameters, lr=learning_rate)
+
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y by passing x to the model. Module objects
+   # can be called like functions. When doing so you pass a Tensor of input
+   # data to the Module and it produces a Tensor of output data.
+   y_pred <- model(x)
+   
+   # Compute and print loss. We pass Tensors containing the predicted and true
+   # values of y, and the loss function returns a Tensor containing the
+   # loss.
+   loss <- loss_fn(y_pred, y)
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", as.numeric(loss), "\n")
+   
+   # Before the backward pass, use the optimizer object to zero all of the
+   # gradients for the variables it will update (which are the learnable
+   # weights of the model). This is because by default, gradients are
+   # accumulated in buffers (i.e., not overwritten) whenever $backward()
+   # is called. Check out the docs of `autograd_backward` for more details.
+   optimizer$zero_grad()
+
+   # Backward pass: compute gradient of the loss with respect to model
+   # parameters
+   loss$backward()
+
+   # Calling the step function on an Optimizer makes an update to its
+   # parameters
+   optimizer$step()
+}
+#> Step: 1 : 1.03194 
+#> Step: 100 : 0.08338322 
+#> Step: 200 : 0.001254716 
+#> Step: 300 : 3.605265e-06 
+#> Step: 400 : 2.155708e-09 
+#> Step: 500 : 5.439427e-13
+
+

In the next example we will learn how to create custom nn_modules.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/optim_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/optim_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/optim_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/tensors-and-autograd.html b/static/docs/articles/getting-started/tensors-and-autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..b26b93941a340acc42402bad6372a302ad2f2529 --- /dev/null +++ b/static/docs/articles/getting-started/tensors-and-autograd.html @@ -0,0 +1,262 @@ + + + + + + + +Tensors and autograd • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

In the previous examples, we had to manually implement both the forward and backward passes of our neural network. Manually implementing the backward pass is not a big deal for a small two-layer network, but can quickly get very hairy for large complex networks.

+

Thankfully, we can use automatic differentiation to automate the computation of backward passes in neural networks. The autograd feature in torch provides exactly this functionality. When using autograd, the forward pass of your network will define a computational graph; nodes in the graph will be Tensors, and edges will be functions that produce output Tensors from input Tensors. Backpropagating through this graph then allows you to easily compute gradients.

+

This sounds complicated, but it’s pretty simple to use in practice. Each Tensor represents a node in a computational graph. If x is a Tensor that has x$requires_grad=TRUE then x$grad is another Tensor holding the gradient of x with respect to some scalar value.

+
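As a minimal sketch of this mechanism (the variable names here are illustrative):

+x <- torch_tensor(2, requires_grad = TRUE)
+y <- x$pow(2)$sum()
+y$backward()
+x$grad # gradient of y with respect to x, i.e. 2 * x
+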

Here we use torch Tensors and autograd to implement our two-layer network; now we no longer need to manually implement the backward pass through the network:

+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+# Setting requires_grad=FALSE (the default) indicates that we do not need to 
+# compute gradients with respect to these Tensors during the backward pass.
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Randomly initialize weights
+# Setting requires_grad=TRUE indicates that we want to compute gradients with
+# respect to these Tensors during the backward pass.
+w1 <- torch_randn(D_in, H, device=device, requires_grad = TRUE)
+w2 <- torch_randn(H, D_out, device=device, requires_grad = TRUE)
+
+learning_rate <- 1e-6
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y using operations on Tensors; these
+   # are exactly the same operations we used to compute the forward pass using
+   # Tensors, but we do not need to keep references to intermediate values since
+   # we are not implementing the backward pass by hand.
+   y_pred <- x$mm(w1)$clamp(min=0)$mm(w2)
+   
+   # Compute and print loss using operations on Tensors.
+   # Now loss is a Tensor of shape (1,)
+   loss <- (y_pred - y)$pow(2)$sum()
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", as.numeric(loss), "\n")
+   
+   # Use autograd to compute the backward pass. This call will compute the
+   # gradient of loss with respect to all Tensors with requires_grad=TRUE.
+   # After this call w1$grad and w2$grad will be Tensors holding the gradient
+   # of the loss with respect to w1 and w2 respectively.
+   loss$backward()
+   
+   # Manually update weights using gradient descent. Wrap in `with_no_grad`
+   # because weights have requires_grad=TRUE, but we don't need to track this
+   # in autograd.
+   # You can also use optim_sgd to achieve this.
+   with_no_grad({
+      
+      # operations suffixed with an `_` operate in-place on the tensor.
+      w1$sub_(learning_rate * w1$grad)
+      w2$sub_(learning_rate * w2$grad)
+      
+      # Manually zero the gradients after updating weights
+      w1$grad$zero_()
+      w2$grad$zero_()
+   })
+}
+#> Step: 1 : 27399256 
+#> Step: 100 : 756.7294 
+#> Step: 200 : 9.271971 
+#> Step: 300 : 0.205474 
+#> Step: 400 : 0.00579866 
+#> Step: 500 : 0.0003981641
+
+

In the next example we will learn how to create new autograd functions.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/tensors-and-autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/tensors-and-autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/tensors-and-autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/tensors.html b/static/docs/articles/getting-started/tensors.html new file mode 100644 index 0000000000000000000000000000000000000000..f79bad2fb5585214b7da72e4d251bef9b4ecb4e0 --- /dev/null +++ b/static/docs/articles/getting-started/tensors.html @@ -0,0 +1,247 @@ + + + + + + + +torch Tensors • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

R arrays are great, but they cannot utilize GPUs to accelerate their numerical computations. For modern deep neural networks, GPUs often provide speedups of 50x or greater, so unfortunately pure R won’t be enough for modern deep learning.

+

Here we introduce the most fundamental torch concept: the Tensor. A torch Tensor is conceptually similar to an R array: a Tensor is an n-dimensional array, and torch provides many functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they’re also useful as a generic tool for scientific computing.

+

Also unlike R, torch Tensors can utilize GPUs to accelerate their numeric computations. To run a torch Tensor on GPU, you simply need to cast it to a new datatype.

+

Here we use torch Tensors to fit a two-layer network to random data. As in the pure R example before, we need to manually implement the forward and backward passes through the network:

+
+if (cuda_is_available()) {
+   device <- torch_device("cuda")
+} else {
+   device <- torch_device("cpu")
+}
+   
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+x <- torch_randn(N, D_in, device=device)
+y <- torch_randn(N, D_out, device=device)
+
+# Randomly initialize weights
+w1 <- torch_randn(D_in, H, device=device)
+w2 <- torch_randn(H, D_out, device=device)
+
+learning_rate <- 1e-6
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y
+   h <- x$mm(w1)
+   h_relu <- h$clamp(min=0)
+   y_pred <- h_relu$mm(w2)
+   
+   # Compute and print loss
+   loss <- as.numeric((y_pred - y)$pow(2)$sum())
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", loss, "\n")
+   
+   # Backprop to compute gradients of w1 and w2 with respect to loss
+   grad_y_pred <- 2.0 * (y_pred - y)
+   grad_w2 <- h_relu$t()$mm(grad_y_pred)
+   grad_h_relu <- grad_y_pred$mm(w2$t())
+   grad_h <- grad_h_relu$clone()
+   grad_h[h < 0] <- 0
+   grad_w1 <- x$t()$mm(grad_h)
+   
+   # Update weights using gradient descent
+   w1 <- w1 - learning_rate * grad_w1
+   w2 <- w2 - learning_rate * grad_w2
+}
+#> Step: 1 : 31418640 
+#> Step: 100 : 959.8534 
+#> Step: 200 : 34.41588 
+#> Step: 300 : 1.953276 
+#> Step: 400 : 0.1213879 
+#> Step: 500 : 0.008213471
+
+

In the next example we will use autograd instead of computing the gradients manually.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/tensors_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/tensors_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/tensors_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/warmup.html b/static/docs/articles/getting-started/warmup.html new file mode 100644 index 0000000000000000000000000000000000000000..f85e2bf0630bcd80636c187ec9b3e909eea037df --- /dev/null +++ b/static/docs/articles/getting-started/warmup.html @@ -0,0 +1,240 @@ + + + + + + + +Warm-up • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

+
+ +

A fully-connected ReLU network with one hidden layer and no biases, trained to predict y from x using Euclidean error.

+

This implementation uses pure R to manually compute the forward pass, loss, and backward pass.

+

An R array is a generic n-dimensional array; it does not know anything about deep learning or gradients or computational graphs, and is just a way to perform generic numeric computations.

+
+# N is batch size; D_in is input dimension;
+# H is hidden dimension; D_out is output dimension.
+N <- 64
+D_in <- 1000
+H <- 100
+D_out <- 10
+
+# Create random input and output data
+x <- array(rnorm(N*D_in), dim = c(N, D_in))
+y <- array(rnorm(N*D_out), dim = c(N, D_out))
+
+# Randomly initialize weights
+w1 <- array(rnorm(D_in*H), dim = c(D_in, H))
+w2 <- array(rnorm(H*D_out), dim = c(H, D_out))
+
+learning_rate <- 1e-6
+for (t in seq_len(500)) {
+   # Forward pass: compute predicted y
+   h <- x %*% w1
+   h_relu <- ifelse(h < 0, 0, h)
+   y_pred <- h_relu %*% w2
+   
+   # Compute and print loss
+   loss <- sum((y_pred - y)^2)
+   if (t %% 100 == 0 || t == 1)
+      cat("Step:", t, ":", loss, "\n")
+   
+   # Backprop to compute gradients of w1 and w2 with respect to loss
+   grad_y_pred <- 2 * (y_pred - y)
+   grad_w2 <- t(h_relu) %*% grad_y_pred
+   grad_h_relu <- grad_y_pred %*% t(w2)
+   grad_h <- grad_h_relu
+   grad_h[h < 0] <- 0
+   grad_w1 <- t(x) %*% grad_h
+   
+   # Update weights
+   w1 <- w1 - learning_rate * grad_w1
+   w2 <- w2 - learning_rate * grad_w2
+}
+#> Step: 1 : 28115720 
+#> Step: 100 : 536.8496 
+#> Step: 200 : 2.748443 
+#> Step: 300 : 0.01913319 
+#> Step: 400 : 0.0001405911 
+#> Step: 500 : 1.056453e-06
+
+

In the next example we will replace the R array with a torch Tensor.

+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/warmup_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/warmup_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/warmup_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/getting-started/what-is-torch.html b/static/docs/articles/getting-started/what-is-torch.html new file mode 100644 index 0000000000000000000000000000000000000000..cf83c6ada393e32104e3af08ea96f1f344cf429a --- /dev/null +++ b/static/docs/articles/getting-started/what-is-torch.html @@ -0,0 +1,411 @@ + + + + + + + +What is torch? • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + +
+

Note: This is an R port of the official tutorial available here. All credit goes to Soumith Chintala.

+
+ +

It’s a scientific computing package targeted at two sets of audiences:

+
    +
  • An array library to use the power of GPUs
  • +
  • A deep learning research platform that provides maximum flexibility and speed
  • +
+
+

+Getting started

+
+

+Tensors

+

Tensors are similar to R arrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.

+
+

Note: An uninitialized matrix is declared, but does not contain definite known values before it is used. When an uninitialized matrix is created, whatever values were in the allocated memory at the time will appear as the initial values.

+
+

Construct a 5x3 matrix, uninitialized:

+
+x <- torch_empty(5, 3)
+x
+#> torch_tensor 
+#>  0.0000e+00  1.5846e+29  1.7045e+23
+#> -1.5849e+29  7.0065e-45  0.0000e+00
+#>  0.0000e+00  0.0000e+00  0.0000e+00
+#>  0.0000e+00  0.0000e+00  0.0000e+00
+#>  0.0000e+00  0.0000e+00  0.0000e+00
+#> [ CPUFloatType{5,3} ]
+
+

Construct a randomly initialized matrix:

+
+x <- torch_rand(5, 3)
+x
+#> torch_tensor 
+#>  0.2655  0.4783  0.8396
+#>  0.2444  0.6980  0.7983
+#>  0.6614  0.9127  0.8649
+#>  0.8563  0.3856  0.0944
+#>  0.8246  0.9818  0.8342
+#> [ CPUFloatType{5,3} ]
+
+

Construct a matrix filled with zeros and of dtype long:

+
+x <- torch_zeros(5, 3, dtype = torch_long())
+x
+#> torch_tensor 
+#>  0  0  0
+#>  0  0  0
+#>  0  0  0
+#>  0  0  0
+#>  0  0  0
+#> [ CPULongType{5,3} ]
+
+

Construct a tensor directly from data:

+
+x <- torch_tensor(c(5.5, 3))
+x
+#> torch_tensor 
+#>  5.5000
+#>  3.0000
+#> [ CPUFloatType{2} ]
+
+

or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. dtype, unless new values are provided by the user.

+
+x <- torch_randn_like(x, dtype = torch_float()) # override dtype!
+x                                               # result has the same size
+#> torch_tensor 
+#>  0.7172
+#>  0.9112
+#> [ CPUFloatType{2} ]
+
+

Get its size:

+
+x$size()
+#> [1] 2
+
+
+
+

+Operations

+

There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.

+

Addition: syntax 1

+
+x <- torch_rand(5, 3)
+y <- torch_rand(5, 3)
+x + y
+#> torch_tensor 
+#>  0.7737  0.8053  0.3150
+#>  1.3053  1.0479  0.5301
+#>  1.6027  0.7272  0.9115
+#>  1.3239  0.8749  0.7270
+#>  1.1298  1.3922  0.7527
+#> [ CPUFloatType{5,3} ]
+
+

Addition: syntax 2

+
+torch_add(x, y)
+#> torch_tensor 
+#>  0.7737  0.8053  0.3150
+#>  1.3053  1.0479  0.5301
+#>  1.6027  0.7272  0.9115
+#>  1.3239  0.8749  0.7270
+#>  1.1298  1.3922  0.7527
+#> [ CPUFloatType{5,3} ]
+
+

Addition: in-place

+
+y$add_(x)
+#> torch_tensor 
+#>  0.7737  0.8053  0.3150
+#>  1.3053  1.0479  0.5301
+#>  1.6027  0.7272  0.9115
+#>  1.3239  0.8749  0.7270
+#>  1.1298  1.3922  0.7527
+#> [ CPUFloatType{5,3} ]
+y
+#> torch_tensor 
+#>  0.7737  0.8053  0.3150
+#>  1.3053  1.0479  0.5301
+#>  1.6027  0.7272  0.9115
+#>  1.3239  0.8749  0.7270
+#>  1.1298  1.3922  0.7527
+#> [ CPUFloatType{5,3} ]
+
+
+

Note: Any operation that mutates a tensor in-place is post-fixed with an _. For example: x$copy_(y), x$t_(), will change x.

+
+

You can use standard R-like indexing with all bells and whistles! See more about indexing with vignette("indexing").

+
+x[, 1]
+#> torch_tensor 
+#>  0.4454
+#>  0.5480
+#>  0.7439
+#>  0.7984
+#>  0.9449
+#> [ CPUFloatType{5} ]
+
+

Resizing: if you want to resize/reshape a tensor, you can use the $view() method:

+
+x <- torch_randn(4, 4)
+y <- x$view(16)
+z <- x$view(size = c(-1, 8))  # the size -1 is inferred from other dimensions
+x$size()
+#> [1] 4 4
+y$size()
+#> [1] 16
+z$size()
+#> [1] 2 8
+
+

If you have a one element tensor, use $item() to get the value as an R number

+
+x <- torch_randn(1)
+x
+#> torch_tensor 
+#>  1.1464
+#> [ CPUFloatType{1} ]
+x$item()
+#> [1] 1.146403
+
+

You can find a complete list of operations in the reference page.

+
+
+
+

+R bridge

+

Converting a Torch Tensor to an R array and vice versa is a breeze.

+
+

+Converting a torch tensor into an R array

+
+a <- torch_ones(5)
+a
+#> torch_tensor 
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#> [ CPUFloatType{5} ]
+
+
+b <- as_array(a)
+b
+#> [1] 1 1 1 1 1
+
+
+
+

+Converting R arrays to torch tensors

+
+a <- rep(1, 5)
+a
+#> [1] 1 1 1 1 1
+b <- torch_tensor(a)
+b
+#> torch_tensor 
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#> [ CPUFloatType{5} ]
+
+

Currently supported types are numeric and boolean.

+
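As a small illustrative sketch of the boolean case:

+b <- torch_tensor(c(TRUE, FALSE, TRUE))
+b # a boolean tensor of length 3
+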
+
+
+

+CUDA tensors

+

Tensors can be moved onto any device using the $to method.

+
+if (cuda_is_available()) {
+  device <- torch_device("cuda")
+  y <- torch_ones_like(x, device = device)  # directly create a tensor on GPU
+  x <- x$to(device)                       # or just use a string: x$to(device = "cuda")
+  z <- x + y
+  print(z)
+  print(z$to(device = "cpu", torch_double())) # `$to` can also change dtype together!
+}
+
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/getting-started/what-is-torch_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/getting-started/what-is-torch_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/getting-started/what-is-torch_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/index.html b/static/docs/articles/index.html new file mode 100644 index 0000000000000000000000000000000000000000..6ff786a38768fd5f9d3d4f4946d791a36e815624 --- /dev/null +++ b/static/docs/articles/index.html @@ -0,0 +1,254 @@ + + + + + + + + +Articles • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+ +
+ + + +
+ + + + + + + + diff --git a/static/docs/articles/indexing.html b/static/docs/articles/indexing.html new file mode 100644 index 0000000000000000000000000000000000000000..cf9232fa03e7f4e3316fb6edf8b5d3ce83cce59b --- /dev/null +++ b/static/docs/articles/indexing.html @@ -0,0 +1,379 @@ + + + + + + + +Indexing tensors • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + + +

In this article we describe the indexing operator for torch tensors and how it compares to the R indexing operator for arrays.

+

Torch’s indexing semantics are closer to numpy’s semantics than R’s. You will find a lot of similarities between this article and the numpy indexing article available here.

+
+

+Single element indexing

+

Single element indexing for 1-D tensors works mostly as expected. Like R, it is 1-based. Unlike R though, it accepts negative indices for indexing from the end of the array. (In R, negative indices are used to remove elements.)

+
+x <- torch_tensor(1:10)
+x[1]
+#> torch_tensor 
+#> 1
+#> [ CPULongType{} ]
+x[-1]
+#> torch_tensor 
+#> 10
+#> [ CPULongType{} ]
+
+

You can also subset matrices and higher dimensions arrays using the same syntax:

+
+x <- x$reshape(shape = c(2,5))
+x
+#> torch_tensor 
+#>   1   2   3   4   5
+#>   6   7   8   9  10
+#> [ CPULongType{2,5} ]
+x[1,3]
+#> torch_tensor 
+#> 3
+#> [ CPULongType{} ]
+x[1,-1]
+#> torch_tensor 
+#> 5
+#> [ CPULongType{} ]
+
+

Note that if one indexes a multidimensional tensor with fewer indices than dimensions, one gets the sub-tensor made up of the remaining dimensions, unlike in R, where the array would be flattened and a single element returned. For example:

+
+x[1]
+#> torch_tensor 
+#>  1
+#>  2
+#>  3
+#>  4
+#>  5
+#> [ CPULongType{5} ]
+
+
+
+

+Slicing and striding

+

It is possible to slice and stride arrays to extract sub-arrays of the same number of dimensions, but of different sizes than the original. This is best illustrated by a few examples:

+
+x <- torch_tensor(1:10)
+x
+#> torch_tensor 
+#>   1
+#>   2
+#>   3
+#>   4
+#>   5
+#>   6
+#>   7
+#>   8
+#>   9
+#>  10
+#> [ CPULongType{10} ]
+x[2:5]
+#> torch_tensor 
+#>  2
+#>  3
+#>  4
+#>  5
+#> [ CPULongType{4} ]
+x[1:(-7)]
+#> torch_tensor 
+#>  1
+#>  2
+#>  3
+#>  4
+#> [ CPULongType{4} ]
+
+

You can also use the 1:10:2 syntax, which means: in the range from 1 to 10, take every second item. For example:

+
+x[1:5:2]
+#> torch_tensor 
+#>  1
+#>  3
+#>  5
+#> [ CPULongType{3} ]
+
+

Another special syntax is N, meaning the size of the specified dimension.

+
+x[5:N]
+#> torch_tensor 
+#>   5
+#>   6
+#>   7
+#>   8
+#>   9
+#>  10
+#> [ CPULongType{6} ]
+
+
+
+

+Getting the complete dimension

+

Like in R, you can take all elements in a dimension by leaving an index empty.

+

Consider a matrix:

+
+x <- torch_randn(2, 3)
+x
+#> torch_tensor 
+#>  1.4158  0.9219 -0.1461
+#>  0.9801  0.7556  0.2140
+#> [ CPUFloatType{2,3} ]
+
+

The following syntax will give you the first row:

+
+x[1,]
+#> torch_tensor 
+#>  1.4158
+#>  0.9219
+#> -0.1461
+#> [ CPUFloatType{3} ]
+
+

And this would give you the first 2 columns:

+
+x[,1:2]
+#> torch_tensor 
+#>  1.4158  0.9219
+#>  0.9801  0.7556
+#> [ CPUFloatType{2,2} ]
+
+
+
+

+Dropping dimensions

+

By default, when indexing by a single integer, this dimension will be dropped to avoid the singleton dimension:

+
+x <- torch_randn(2, 3)
+x[1,]$shape
+#> [1] 3
+
+

You can optionally use the drop = FALSE argument to avoid dropping the dimension.

+
+x[1,,drop = FALSE]$shape
+#> [1] 1 3
+
+
+
+

+Adding a new dimension

+

It’s possible to add a new dimension to a tensor using index-like syntax:

+
+x <- torch_tensor(c(10))
+x$shape
+#> [1] 1
+x[, newaxis]$shape
+#> [1] 1 1
+x[, newaxis, newaxis]$shape
+#> [1] 1 1 1
+
+

You can also use NULL instead of newaxis:

+
+x[,NULL]$shape
+#> [1] 1 1
+
+
+
+

+Dealing with variable number of indices

+

Sometimes we don’t know how many dimensions a tensor has, but we do know what to do with the last available dimension, or the first one. To subsume all others, we can use ..:

+
+z <- torch_tensor(1:125)$reshape(c(5,5,5))
+z[1,..]
+#> torch_tensor 
+#>   1   2   3   4   5
+#>   6   7   8   9  10
+#>  11  12  13  14  15
+#>  16  17  18  19  20
+#>  21  22  23  24  25
+#> [ CPULongType{5,5} ]
+z[..,1]
+#> torch_tensor 
+#>    1    6   11   16   21
+#>   26   31   36   41   46
+#>   51   56   61   66   71
+#>   76   81   86   91   96
+#>  101  106  111  116  121
+#> [ CPULongType{5,5} ]
+
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/indexing_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/indexing_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/indexing_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/loading-data.html b/static/docs/articles/loading-data.html new file mode 100644 index 0000000000000000000000000000000000000000..a00e41ea32c08c39db7d215ad228b5a623d11a83 --- /dev/null +++ b/static/docs/articles/loading-data.html @@ -0,0 +1,390 @@ + + + + + + + +Loading data • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + + +
+

+Datasets and data loaders

+

Central to data ingestion and preprocessing are datasets and data loaders.

+

torch comes equipped with a number of datasets, mostly related to image recognition and natural language processing (e.g., mnist_dataset()), which can be iterated over by means of data loaders:

+
# ...
+ds <- mnist_dataset(
+  dir, 
+  download = TRUE, 
+  transform = function(x) {
+    x <- x$to(dtype = torch_float())/256
+    x[newaxis,..]
+  }
+)
+
+dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
+
+for (b in enumerate(dl)) {
+  # ...
+

Cf. vignettes/examples/mnist-cnn.R for a complete example.

+

What if you want to train on a different dataset? In that case, you subclass dataset, an abstract container that needs to know how to iterate over the given data. To that end, your subclass needs to implement .getitem() and say what should be returned when the data loader asks for the next batch.

+

In .getitem(), you can implement whatever preprocessing you require. Additionally, you should implement .length(), so users can find out how many items there are in the dataset.

+
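Abstractly, such a subclass is just a minimal skeleton like the following (all names here are illustrative, assuming the data is a tensor whose first dimension indexes observations):

+# illustrative skeleton only
+my_dataset <- dataset(
+  name = "my_dataset",
+  initialize = function(data) {
+    self$data <- data
+  },
+  .getitem = function(index) {
+    self$data[index, ]
+  },
+  .length = function() {
+    self$data$size()[[1]]
+  }
+)
+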

While this may sound complicated, it is not at all. The base logic is straightforward – complexity will, naturally, correlate with how involved your preprocessing is. To provide you with a simple but functional prototype, here we show how to create your own dataset to train on Allison Horst's penguins.

+
+
+

+A custom dataset

+
+library(palmerpenguins)
+library(magrittr)
+
+penguins
+#> # A tibble: 344 x 8
+#>    species island bill_length_mm bill_depth_mm flipper_length_… body_mass_g
+#>    <fct>   <fct>           <dbl>         <dbl>            <int>       <int>
+#>  1 Adelie  Torge…           39.1          18.7              181        3750
+#>  2 Adelie  Torge…           39.5          17.4              186        3800
+#>  3 Adelie  Torge…           40.3          18                195        3250
+#>  4 Adelie  Torge…           NA            NA                 NA          NA
+#>  5 Adelie  Torge…           36.7          19.3              193        3450
+#>  6 Adelie  Torge…           39.3          20.6              190        3650
+#>  7 Adelie  Torge…           38.9          17.8              181        3625
+#>  8 Adelie  Torge…           39.2          19.6              195        4675
+#>  9 Adelie  Torge…           34.1          18.1              193        3475
+#> 10 Adelie  Torge…           42            20.2              190        4250
+#> # … with 334 more rows, and 2 more variables: sex <fct>, year <int>
+
+

Datasets are R6 classes created using the dataset() constructor. You can pass a name and various member functions. Among those should be initialize(), to create instance variables, .getitem(), to indicate how the data should be returned, and .length(), to say how many items we have.

+

In addition, any number of helper functions can be defined.

+

Here, we assume the penguins have already been loaded, and all preprocessing consists of removing rows with NA values, transforming factors to numbers, and converting from R data types to torch tensors.

+

In .getitem, we essentially decide how this data is going to be used: All variables besides species go into x, the predictor, and species will constitute y, the target. Predictor and target are returned in a list, to be accessed as batch[[1]] and batch[[2]] during training.

+
+penguins_dataset <- dataset(
+  
+  name = "penguins_dataset",
+  
+  initialize = function() {
+    self$data <- self$prepare_penguin_data()
+  },
+  
+  .getitem = function(index) {
+    
+    x <- self$data[index, 2:-1]
+    y <- self$data[index, 1]$to(torch_long())
+    
+    list(x, y)
+  },
+  
+  .length = function() {
+    self$data$size()[[1]]
+  },
+  
+  prepare_penguin_data = function() {
+    
+    input <- na.omit(penguins) 
+    # conveniently, the categorical data are already factors
+    input$species <- as.numeric(input$species)
+    input$island <- as.numeric(input$island)
+    input$sex <- as.numeric(input$sex)
+    
+    input <- as.matrix(input)
+    torch_tensor(input)
+  }
+)
+
+

Let’s create the dataset, query its length, and look at its first item:

+
+tuxes <- penguins_dataset()
+tuxes$.length()
+#> [1] 333
+tuxes$.getitem(1)
+#> [[1]]
+#> torch_tensor 
+#>     3.0000
+#>    39.1000
+#>    18.7000
+#>   181.0000
+#>  3750.0000
+#>     2.0000
+#>  2007.0000
+#> [ CPUFloatType{7} ]
+#> 
+#> [[2]]
+#> torch_tensor 
+#> 1
+#> [ CPULongType{} ]
+
+

To be able to iterate over tuxes, we need a data loader (we override the default batch size of 1):

+
+dl <- tuxes %>% dataloader(batch_size = 8)
+
+

Calling .length() on a data loader (as opposed to a dataset) will return the number of batches we have:

+
+dl$.length()
+#> [1] 42
+
+

And we can create an iterator to inspect the first batch:

+
+iter <- dl$.iter()
+b <- iter$.next()
+b
+#> [[1]]
+#> torch_tensor 
+#>     3.0000    39.1000    18.7000   181.0000  3750.0000     2.0000  2007.0000
+#>     3.0000    39.5000    17.4000   186.0000  3800.0000     1.0000  2007.0000
+#>     3.0000    40.3000    18.0000   195.0000  3250.0000     1.0000  2007.0000
+#>     3.0000    36.7000    19.3000   193.0000  3450.0000     1.0000  2007.0000
+#>     3.0000    39.3000    20.6000   190.0000  3650.0000     2.0000  2007.0000
+#>     3.0000    38.9000    17.8000   181.0000  3625.0000     1.0000  2007.0000
+#>     3.0000    39.2000    19.6000   195.0000  4675.0000     2.0000  2007.0000
+#>     3.0000    41.1000    17.6000   182.0000  3200.0000     1.0000  2007.0000
+#> [ CPUFloatType{8,7} ]
+#> 
+#> [[2]]
+#> torch_tensor 
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#> [ CPULongType{8} ]
+
+

To train a network, we can use enumerate to iterate over batches.

+
+
+

+Training with data loaders

+

Our example network is very simple. (In reality, we would want to treat island as the categorical variable it is, and either one-hot-encode or embed it.)

+
+net <- nn_module(
+  "PenguinNet",
+  initialize = function() {
+    self$fc1 <- nn_linear(7, 32)
+    self$fc2 <- nn_linear(32, 3)
+  },
+  forward = function(x) {
+    x %>% 
+      self$fc1() %>% 
+      nnf_relu() %>% 
+      self$fc2() %>% 
+      nnf_log_softmax(dim = 1)
+  }
+)
+
+model <- net()
+
+

We still need an optimizer:

+
+optimizer <- optim_sgd(model$parameters, lr = 0.01)
+
+

And we’re ready to train:

+
+for (epoch in 1:10) {
+  
+  l <- c()
+  
+  for (b in enumerate(dl)) {
+    optimizer$zero_grad()
+    output <- model(b[[1]])
+    loss <- nnf_nll_loss(output, b[[2]])
+    loss$backward()
+    optimizer$step()
+    l <- c(l, loss$item())
+  }
+  
+  cat(sprintf("Loss at epoch %d: %3f\n", epoch, mean(l)))
+}
+#> Loss at epoch 1: 51.747068
+#> Loss at epoch 2: 2.068251
+#> Loss at epoch 3: 2.068251
+#> Loss at epoch 4: 2.068251
+#> Loss at epoch 5: 2.068251
+#> Loss at epoch 6: 2.068251
+#> Loss at epoch 7: 2.068251
+#> Loss at epoch 8: 2.068251
+#> Loss at epoch 9: 2.068251
+#> Loss at epoch 10: 2.068251
+
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/loading-data_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/loading-data_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/loading-data_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/tensor-creation.html b/static/docs/articles/tensor-creation.html new file mode 100644 index 0000000000000000000000000000000000000000..a60aa0f72ca338da0ad0faf631597570798e9a62 --- /dev/null +++ b/static/docs/articles/tensor-creation.html @@ -0,0 +1,316 @@ + + + + + + + +Creating tensors • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + + +

In this article we describe various ways of creating torch tensors in R.

+
+

+From R objects

+

You can create tensors from R objects using the torch_tensor function. The torch_tensor function takes an R vector, matrix or array and creates an equivalent torch_tensor.

+

You can see a few examples below:

+
+torch_tensor(c(1,2,3))
+#> torch_tensor 
+#>  1
+#>  2
+#>  3
+#> [ CPUFloatType{3} ]
+
+# conform to row-major indexing used in torch
+torch_tensor(matrix(1:10, ncol = 5, nrow = 2, byrow = TRUE))
+#> torch_tensor 
+#>   1   2   3   4   5
+#>   6   7   8   9  10
+#> [ CPULongType{2,5} ]
+torch_tensor(array(runif(12), dim = c(2, 2, 3)))
+#> torch_tensor 
+#> (1,.,.) = 
+#>   0.5612  0.4325  0.6571
+#>   0.4899  0.4636  0.9910
+#> 
+#> (2,.,.) = 
+#>   0.7073  0.8791  0.3117
+#>   0.9112  0.6383  0.1045
+#> [ CPUFloatType{2,2,3} ]
+
+

By default, tensors are created on the CPU device, converting the R data type to the corresponding torch dtype.

+
+

Note that currently, only numeric and logical types are supported.

+
+

You can always modify dtype and device when converting an R object to a torch tensor. For example:

+
+torch_tensor(1, dtype = torch_long())
+#> torch_tensor 
+#>  1
+#> [ CPULongType{1} ]
+torch_tensor(1, device = "cpu", dtype = torch_float64())
+#> torch_tensor 
+#>  1
+#> [ CPUDoubleType{1} ]
+
+

Other options available when creating a tensor are:

+
    +
  • +requires_grad: boolean indicating whether autograd should record operations on the returned tensor for automatic differentiation.
  • +
  • +pin_memory: if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors.
  • +
+

These options are available for all functions that can be used to create new tensors, including the factory functions listed in the next section.
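For instance, requires_grad can be set directly at creation time. A minimal sketch, assuming the torch package is installed and loaded:

```r
library(torch)

# create a tensor that autograd will track
x <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)
x$requires_grad
#> [1] TRUE
```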

+
+
+

+Using creation functions

+

You can also use the torch_* functions listed below to create torch tensors using some algorithm.

+

For example, the torch_randn function will create tensors using the normal distribution with mean 0 and standard deviation 1. You can use the ... argument to pass the size of the dimensions. For example, the code below will create a normally distributed tensor with shape 5x3.

+
+x <- torch_randn(5, 3)
+x
+#> torch_tensor 
+#> -1.5887 -0.0033  1.0389
+#>  0.0472 -1.0173 -1.5143
+#>  1.9183 -0.6090 -0.9197
+#>  1.7162 -1.8687  0.8053
+#>  1.0018  0.6406 -0.5853
+#> [ CPUFloatType{5,3} ]
+
+

Another example is torch_ones, which creates a tensor filled with ones.

+
+x <- torch_ones(2, 4, dtype = torch_int64(), device = "cpu")
+x
+#> torch_tensor 
+#>  1  1  1  1
+#>  1  1  1  1
+#> [ CPULongType{2,4} ]
+
+

Here is the full list of functions that can be used to bulk-create tensors in torch:

+
    +
  • +torch_arange: Returns a tensor with a sequence of integers,
  • +
  • +torch_empty: Returns a tensor with uninitialized values,
  • +
  • +torch_eye: Returns an identity matrix,
  • +
  • +torch_full: Returns a tensor filled with a single value,
  • +
  • +torch_linspace: Returns a tensor with values linearly spaced in some interval,
  • +
  • +torch_logspace: Returns a tensor with values logarithmically spaced in some interval,
  • +
  • +torch_ones: Returns a tensor filled with all ones,
  • +
  • +torch_rand: Returns a tensor filled with values drawn from a uniform distribution on [0, 1),
  • +
  • +torch_randint: Returns a tensor with integers randomly drawn from an interval,
  • +
  • +torch_randn: Returns a tensor filled with values drawn from a unit normal distribution,
  • +
  • +torch_randperm: Returns a tensor filled with a random permutation of integers in some interval,
  • +
  • +torch_zeros: Returns a tensor filled with all zeros.
  • +
+
+
+

+Conversion

+

Once a tensor exists, you can convert between dtypes and move it to a different device with the $to() method. For example:

+
+x <- torch_tensor(1)
+y <- x$to(dtype = torch_int32())
+x
+#> torch_tensor 
+#>  1
+#> [ CPUFloatType{1} ]
+y
+#> torch_tensor 
+#>  1
+#> [ CPUIntType{1} ]
+
+

You can also copy a tensor to the GPU using:

+
x <- torch_tensor(1)
+y <- x$cuda()
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/tensor-creation_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/tensor-creation_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/tensor-creation_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/tensor/index.html b/static/docs/articles/tensor/index.html new file mode 100644 index 0000000000000000000000000000000000000000..ac0649d5127f07c228ec3f4dc2199f9fb497542b --- /dev/null +++ b/static/docs/articles/tensor/index.html @@ -0,0 +1,3612 @@ + + + + + + + +Tensor objects • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + + +

Central to torch are torch_tensor objects. torch_tensors are R objects very similar to R6 instances. Tensors have a large number of methods that can be called using the $ operator.

+

Following is a list of all methods that can be called on tensor objects, with their documentation. You can also look at PyTorch’s documentation for additional details.

+
+

+T

+

Is this Tensor with its dimensions reversed.

+

If n is the number of dimensions in x, x$T is equivalent to x$permute(n-1, n-2, ..., 0).
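As a quick illustration (a sketch, assuming the torch package is loaded), $T reverses the dimension order of a matrix:

```r
library(torch)

x <- torch_randn(2, 3)
x$shape
#> [1] 2 3
# $T reverses the dimension order
x$T$shape
#> [1] 3 2
```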

+
+
+

+abs

+

abs() -> Tensor

+

See ?torch_abs

+
+
+

+abs_

+

abs_() -> Tensor

+

In-place version of $abs

+
+
+

+absolute

+

absolute() -> Tensor

+

Alias for [$abs()]

+
+
+

+absolute_

+

absolute_() -> Tensor

+

In-place version of $absolute. Alias for [$abs_()]

+
+
+

+acos

+

acos() -> Tensor

+

See ?torch_acos

+
+
+

+acos_

+

acos_() -> Tensor

+

In-place version of $acos

+
+
+

+acosh

+

acosh() -> Tensor

+

See ?torch_acosh

+
+
+

+acosh_

+

acosh_() -> Tensor

+

In-place version of $acosh

+
+
+

+add

+

add(other, *, alpha=1) -> Tensor

+

Add a scalar or tensor to self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used.

+

When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.

+

See ?torch_add
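For example, scaling other by alpha before adding; a sketch following the signature above:

```r
library(torch)

x <- torch_tensor(c(1, 2))
y <- torch_tensor(c(10, 20))
# each element of y is multiplied by alpha = 2 before the addition,
# so the result is c(1 + 20, 2 + 40) = c(21, 42)
x$add(y, alpha = 2)
```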

+
+
+

+add_

+

add_(other, *, alpha=1) -> Tensor

+

In-place version of $add

+
+
+

+addbmm

+

addbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor

+

See ?torch_addbmm

+
+
+

+addbmm_

+

addbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor

+

In-place version of $addbmm

+
+
+

+addcdiv

+

addcdiv(tensor1, tensor2, *, value=1) -> Tensor

+

See ?torch_addcdiv

+
+
+

+addcdiv_

+

addcdiv_(tensor1, tensor2, *, value=1) -> Tensor

+

In-place version of $addcdiv

+
+
+

+addcmul

+

addcmul(tensor1, tensor2, *, value=1) -> Tensor

+

See ?torch_addcmul

+
+
+

+addcmul_

+

addcmul_(tensor1, tensor2, *, value=1) -> Tensor

+

In-place version of $addcmul

+
+
+

+addmm

+

addmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor

+

See ?torch_addmm

+
+
+

+addmm_

+

addmm_(mat1, mat2, *, beta=1, alpha=1) -> Tensor

+

In-place version of $addmm

+
+
+

+addmv

+

addmv(mat, vec, *, beta=1, alpha=1) -> Tensor

+

See ?torch_addmv

+
+
+

+addmv_

+

addmv_(mat, vec, *, beta=1, alpha=1) -> Tensor

+

In-place version of $addmv

+
+
+

+addr

+

addr(vec1, vec2, *, beta=1, alpha=1) -> Tensor

+

See ?torch_addr

+
+
+

+addr_

+

addr_(vec1, vec2, *, beta=1, alpha=1) -> Tensor

+

In-place version of $addr

+
+
+

+align_as

+

align_as(other) -> Tensor

+

Permutes the dimensions of the self tensor to match the dimension order in the other tensor, adding size-one dims for any new names.

+

This operation is useful for explicit broadcasting by names (see examples).

+

All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor.

+

All dimension names of self must be present in other$names. other may contain named dimensions that are not in self$names; the output tensor has a size-one dimension for each of those new names.

+

To align a tensor to a specific order, use $align_to.

+
+

+Examples:

+
+# Example 1: Applying a mask
+mask <- torch_randint(low = 0, high = 2, size = c(127, 128), dtype=torch_bool())$refine_names(c('W', 'H'))
+imgs <- torch_randn(32, 128, 127, 3, names=c('N', 'H', 'W', 'C'))
+imgs$masked_fill_(mask$align_as(imgs), 0)
+
+# Example 2: Applying a per-channel-scale
+scale_channels <- function(input, scale) {
+  scale <- scale$refine_names("C")
+  input * scale$align_as(input)
+}
+
+num_channels <- 3
+scale <- torch_randn(num_channels, names='C')
+imgs <- torch_rand(32, 128, 128, num_channels, names=c('N', 'H', 'W', 'C'))
+more_imgs <- torch_rand(32, num_channels, 128, 128, names=c('N', 'C', 'H', 'W'))
+videos <- torch_randn(3, num_channels, 128, 128, 128, names=c('N', 'C', 'H', 'W', 'D'))
+
+# scale_channels is agnostic to the dimension order of the input
+scale_channels(imgs, scale)
+scale_channels(more_imgs, scale)
+scale_channels(videos, scale)
+
+
+
+

+Warning:

+

The named tensor API is experimental and subject to change.

+
+
+
+

+align_to

+

Permutes the dimensions of the self tensor to match the order specified in names, adding size-one dims for any new names.

+

All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor.

+

All dimension names of self must be present in names. names may contain additional names that are not in self$names; the output tensor has a size-one dimension for each of those new names.

+
+

+Arguments:

+
    +
  • names (iterable of str): The desired dimension ordering of the output tensor. May contain up to one Ellipsis that is expanded to all unmentioned dim names of self.
  • +
+
+
+

+Examples:

+
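A hypothetical sketch of align_to, mirroring the Python API (the exact argument form in R may differ; the dimension names here are illustrative):

```r
library(torch)

x <- torch_randn(2, 3, names = c('N', 'C'))
# reorder the existing dims and append a new size-one 'H' dim
y <- x$align_to('C', 'N', 'H')
y$names
y$shape
```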
+

+Warning:

+

The named tensor API is experimental and subject to change.

+
+
+
+
+

+all

+

all() -> bool

+

Returns TRUE if all elements in the tensor are TRUE, FALSE otherwise.

+
+

+Examples:

+
+a <- torch_rand(1, 2)$to(dtype = torch_bool())
+a
+a$all()
+
+

all(dim, keepdim=FALSE, out=NULL) -> Tensor

+

Returns TRUE if all elements in each row of the tensor in the given dimension dim are TRUE, FALSE otherwise.

+

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ?torch_squeeze()), resulting in the output tensor having 1 fewer dimension than input.

+
+
+

+Arguments:

+
    +
  • dim (int): the dimension to reduce
  • +
  • keepdim (bool): whether the output tensor has dim retained or not
  • +
  • out (Tensor, optional): the output tensor
  • +
+
+
+

+Examples:

+
+a <- torch_rand(4, 2)$to(dtype = torch_bool())
+a
+a$all(dim=2)
+a$all(dim=1)
+
+
+
+
+

+allclose

+

allclose(other, rtol=1e-05, atol=1e-08, equal_nan=FALSE) -> Tensor

+

See ?torch_allclose

+
+
+

+angle

+

angle() -> Tensor

+

See ?torch_angle

+
+
+

+any

+

any() -> bool

+

Returns TRUE if any elements in the tensor are TRUE, FALSE otherwise.

+
+

+Examples:

+
+a <- torch_rand(1, 2)$to(dtype = torch_bool())
+a
+a$any()
+
+

any(dim, keepdim=FALSE, out=NULL) -> Tensor

+

Returns TRUE if any elements in each row of the tensor in the given dimension dim are TRUE, FALSE otherwise.

+

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ?torch_squeeze()), resulting in the output tensor having 1 fewer dimension than input.

+
+
+

+Arguments:

+
    +
  • dim (int): the dimension to reduce
  • +
  • keepdim (bool): whether the output tensor has dim retained or not
  • +
  • out (Tensor, optional): the output tensor
  • +
+
+
+

+Examples:

+
+a <- torch_randn(4, 2) < 0
+a
+a$any(2)
+a$any(1)
+
+
+
+
+

+apply_

+

apply_(callable) -> Tensor

+

Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.

+
+

+Note:

+

This function only works with CPU tensors and should not be used in code sections that require high performance.

+
+
+
+

+argmax

+

argmax(dim=NULL, keepdim=FALSE) -> LongTensor

+

See ?torch_argmax

+
+
+

+argmin

+

argmin(dim=NULL, keepdim=FALSE) -> LongTensor

+

See ?torch_argmin

+
+
+

+argsort

+

argsort(dim=-1, descending=FALSE) -> LongTensor

+

See ?torch_argsort

+
+
+

+as_strided

+

as_strided(size, stride, storage_offset=0) -> Tensor

+

See [torch_as_strided()]

+
+
+

+as_subclass

+

as_subclass(cls) -> Tensor

+

Makes a cls instance with the same data pointer as self. Changes in the output mirror changes in self, and the output stays attached to the autograd graph. cls must be a subclass of Tensor.

+
+
+

+asin

+

asin() -> Tensor

+

See ?torch_asin

+
+
+

+asin_

+

asin_() -> Tensor

+

In-place version of $asin

+
+
+

+asinh

+

asinh() -> Tensor

+

See ?torch_asinh

+
+
+

+asinh_

+

asinh_() -> Tensor

+

In-place version of $asinh

+
+
+

+atan

+

atan() -> Tensor

+

See ?torch_atan

+
+
+

+atan2

+

atan2(other) -> Tensor

+

See [torch_atan2()]

+
+
+

+atan2_

+

atan2_(other) -> Tensor

+

In-place version of $atan2

+
+
+

+atan_

+

atan_() -> Tensor

+

In-place version of $atan

+
+
+

+atanh

+

atanh() -> Tensor

+

See ?torch_atanh

+
+
+

+atanh_

+

In-place version of $atanh

+
+
+

+backward

+

Computes the gradient of current tensor w.r.t. graph leaves.

+

The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self.

+

This function accumulates gradients in the leaves - you might need to zero $grad attributes or set them to NULL before calling it. See the PyTorch notes on default gradient layouts for details on the memory layout of accumulated gradients.

+
+

+Arguments:

+
    +
  • gradient (Tensor or NULL): Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is TRUE. NULL values can be specified for scalar Tensors or ones that don’t require grad. If a NULL value would be acceptable then this argument is optional.
  • +
  • retain_graph (bool, optional): If FALSE, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to TRUE is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
  • +
  • create_graph (bool, optional): If TRUE, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to FALSE.
  • +
+
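A minimal sketch of calling backward on a scalar result (assuming the torch package is loaded):

```r
library(torch)

x <- torch_tensor(2, requires_grad = TRUE)
y <- x * x
# d(x^2)/dx = 2x, which is 4 at x = 2
y$backward()
x$grad
```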
+
+
+

+baddbmm

+

baddbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor

+

See ?torch_baddbmm

+
+
+

+baddbmm_

+

baddbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor

+

In-place version of $baddbmm

+
+
+

+bernoulli

+

bernoulli(*, generator=NULL) -> Tensor

+

Returns a result tensor where each \(\texttt{result[i]}\) is independently sampled from \(\text{Bernoulli}(\texttt{self[i]})\). self must have floating point dtype, and the result will have the same dtype.

+

See ?torch_bernoulli
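For instance (a sketch), each element of the result is drawn from a Bernoulli with the corresponding probability; probabilities of 0 and 1 are deterministic:

```r
library(torch)

p <- torch_tensor(c(0, 0.5, 1))
# first element is always 0, last is always 1, the middle is random
p$bernoulli()
```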

+
+
+

+bernoulli_

+

bernoulli_(p=0.5, *, generator=NULL) -> Tensor

+

Fills each location of self with an independent sample from \(\text{Bernoulli}(\texttt{p})\). self can have integral dtype.

+

bernoulli_(p_tensor, *, generator=NULL) -> Tensor

+

p_tensor should be a tensor containing probabilities to be used for drawing the binary random number.

+

The \(\text{i}^{th}\) element of self tensor will be set to a value sampled from \(\text{Bernoulli}(\texttt{p\_tensor[i]})\).

+

self can have integral dtype, but p_tensor must have floating point dtype.

+

See also $bernoulli and ?torch_bernoulli

+
+
+

+bfloat16

+

bfloat16(memory_format=torch_preserve_format) -> Tensor

self$bfloat16() is equivalent to self$to(torch_bfloat16). See [to()].

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+bincount

+

bincount(weights=NULL, minlength=0) -> Tensor

+

See ?torch_bincount

+
+
+

+bitwise_and

+

bitwise_and() -> Tensor

+

See [torch_bitwise_and()]

+
+
+

+bitwise_and_

+

bitwise_and_() -> Tensor

+

In-place version of $bitwise_and

+
+
+

+bitwise_not

+

bitwise_not() -> Tensor

+

See [torch_bitwise_not()]

+
+
+

+bitwise_not_

+

bitwise_not_() -> Tensor

+

In-place version of $bitwise_not

+
+
+

+bitwise_or

+

bitwise_or() -> Tensor

+

See [torch_bitwise_or()]

+
+
+

+bitwise_or_

+

bitwise_or_() -> Tensor

+

In-place version of $bitwise_or

+
+
+

+bitwise_xor

+

bitwise_xor() -> Tensor

+

See [torch_bitwise_xor()]

+
+
+

+bitwise_xor_

+

bitwise_xor_() -> Tensor

+

In-place version of $bitwise_xor

+
+
+

+bmm

+

bmm(batch2) -> Tensor

+

See ?torch_bmm

+
+
+

+bool

+

bool(memory_format=torch_preserve_format) -> Tensor

+

self$bool() is equivalent to self$to(torch_bool). See [to()].

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+byte

+

byte(memory_format=torch_preserve_format) -> Tensor

+

self$byte() is equivalent to self$to(torch_uint8). See [to()].

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+cauchy_

+

cauchy_(median=0, sigma=1, *, generator=NULL) -> Tensor

+

Fills the tensor with numbers drawn from the Cauchy distribution:

+

\[ +f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2} +\]
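As a sketch, filling an uninitialized tensor with Cauchy draws (argument names follow the signature above):

```r
library(torch)

# fill a 5-element tensor in place with Cauchy(median = 0, sigma = 1) samples
x <- torch_empty(5)$cauchy_(median = 0, sigma = 1)
x
```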

+
+
+

+ceil

+

ceil() -> Tensor

+

See ?torch_ceil

+
+
+

+ceil_

+

ceil_() -> Tensor

+

In-place version of $ceil

+
+
+

+char

+

char(memory_format=torch_preserve_format) -> Tensor

+

self$char() is equivalent to self$to(torch_int8). See [to()].

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+cholesky

+

cholesky(upper=FALSE) -> Tensor

+

See ?torch_cholesky

+
+
+

+cholesky_inverse

+

cholesky_inverse(upper=FALSE) -> Tensor

+

See [torch_cholesky_inverse()]

+
+
+

+cholesky_solve

+

cholesky_solve(input2, upper=FALSE) -> Tensor

+

See [torch_cholesky_solve()]

+
+
+

+chunk

+

chunk(chunks, dim=0) -> List of Tensors

+

See ?torch_chunk

+
+
+

+clamp

+

clamp(min, max) -> Tensor

+

See ?torch_clamp

+
+
+

+clamp_

+

clamp_(min, max) -> Tensor

+

In-place version of $clamp

+
+
+

+clone

+

clone(memory_format=torch_preserve_format) -> Tensor

+

Returns a copy of the self tensor. The copy has the same size and data type as self.

+
+

+Note:

+

Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.

+
+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+conj

+

conj() -> Tensor

+

See ?torch_conj

+
+
+

+contiguous

+

contiguous(memory_format=torch_contiguous_format) -> Tensor

+

Returns a contiguous in memory tensor containing the same data as self tensor. If self tensor is already in the specified memory format, this function returns the self tensor.

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_contiguous_format.
  • +
+
+
+
+

+copy_

+

copy_(src, non_blocking=FALSE) -> Tensor

+

Copies the elements from src into self tensor and returns self.

+

The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.

+
+

+Arguments:

+
    +
  • src (Tensor): the source tensor to copy from
  • +
  • +non_blocking (bool): if TRUE and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
  • +
+
+
+
+

+cos

+

cos() -> Tensor

+

See ?torch_cos

+
+
+

+cos_

+

cos_() -> Tensor

+

In-place version of $cos

+
+
+

+cosh

+

cosh() -> Tensor

+

See ?torch_cosh

+
+
+

+cosh_

+

cosh_() -> Tensor

+

In-place version of $cosh

+
+
+

+cpu

+

cpu(memory_format=torch_preserve_format) -> Tensor

+

Returns a copy of this object in CPU memory.

+

If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+cross

+

cross(other, dim=-1) -> Tensor

+

See ?torch_cross

+
+
+

+cuda

+

cuda(device=NULL, non_blocking=FALSE, memory_format=torch_preserve_format) -> Tensor

+

Returns a copy of this object in CUDA memory.

+

If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

+
+

+Arguments:

+
    +
  • device (torch_device): The destination GPU device. Defaults to the current CUDA device.
  • +
  • non_blocking (bool): If TRUE and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: FALSE.
  • +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+cummax

+

cummax(dim) -> (Tensor, Tensor)

+

See ?torch_cummax

+
+
+

+cummin

+

cummin(dim) -> (Tensor, Tensor)

+

See ?torch_cummin

+
+
+

+cumprod

+

cumprod(dim, dtype=NULL) -> Tensor

+

See ?torch_cumprod

+
+
+

+cumsum

+

cumsum(dim, dtype=NULL) -> Tensor

+

See ?torch_cumsum

+
+
+

+data_ptr

+

data_ptr() -> int

+

Returns the address of the first element of self tensor.

+
+
+

+deg2rad

+

deg2rad() -> Tensor

+

See [torch_deg2rad()]

+
+
+

+deg2rad_

+

deg2rad_() -> Tensor

+

In-place version of $deg2rad

+
+
+

+dense_dim

+

dense_dim() -> int

+

If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns the number of dense dimensions. Otherwise, this throws an error.

+

See also $sparse_dim.

+
+
+

+dequantize

+

dequantize() -> Tensor

+

Given a quantized Tensor, dequantize it and return the dequantized float Tensor.

+
+
+

+det

+

det() -> Tensor

+

See ?torch_det

+
+
+

+detach

+

Returns a new Tensor, detached from the current graph.

+

The result will never require gradient.

+
+

+Note:

+

Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.

+
+
+
+

+detach_

+

Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.

+
+
+

+device

+

Is the torch_device where this Tensor is.

+
+
+

+diag

+

diag(diagonal=0) -> Tensor

+

See ?torch_diag

+
+
+

+diag_embed

+

diag_embed(offset=0, dim1=-2, dim2=-1) -> Tensor

+

See [torch_diag_embed()]

+
+
+

+diagflat

+

diagflat(offset=0) -> Tensor

+

See ?torch_diagflat

+
+
+

+diagonal

+

diagonal(offset=0, dim1=0, dim2=1) -> Tensor

+

See ?torch_diagonal

+
+
+

+digamma

+

digamma() -> Tensor

+

See ?torch_digamma

+
+
+

+digamma_

+

digamma_() -> Tensor

+

In-place version of $digamma

+
+
+

+dim

+

dim() -> int

+

Returns the number of dimensions of self tensor.

+
+
+

+dist

+

dist(other, p=2) -> Tensor

+

See ?torch_dist

+
+
+

+div

+

div(value) -> Tensor

+

See ?torch_div

+
+
+

+div_

+

div_(value) -> Tensor

+

In-place version of $div

+
+
+

+dot

+

dot(tensor2) -> Tensor

+

See ?torch_dot

+
+
+

+double

+

double(memory_format=torch_preserve_format) -> Tensor

+

self$double() is equivalent to self$to(torch_float64). See [to()].

+
+

+Arguments:

+
    +
  • +memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+eig

+

eig(eigenvectors=FALSE) -> (Tensor, Tensor)

+

See ?torch_eig

+
+
+

+element_size

+

element_size() -> int

+

Returns the size in bytes of an individual element.

+
+

+Examples:

+
+torch_tensor(c(1))$element_size()
+
+
+
+
+

+eq

+

eq(other) -> Tensor

+

See ?torch_eq

+
+
+

+eq_

+

eq_(other) -> Tensor

+

In-place version of $eq

+
+
+

+equal

+

equal(other) -> bool

+

See ?torch_equal

+
+
+

+erf

+

erf() -> Tensor

+

See ?torch_erf

+
+
+

+erf_

+

erf_() -> Tensor

+

In-place version of $erf

+
+
+

+erfc

+

erfc() -> Tensor

+

See ?torch_erfc

+
+
+

+erfc_

+

erfc_() -> Tensor

+

In-place version of $erfc

+
+
+

+erfinv

+

erfinv() -> Tensor

+

See ?torch_erfinv

+
+
+

+erfinv_

+

erfinv_() -> Tensor

+

In-place version of $erfinv

+
+
+

+exp

+

exp() -> Tensor

+

See ?torch_exp

+
+
+

+exp_

+

exp_() -> Tensor

+

In-place version of $exp

+
+
+

+expand

+

expand(*sizes) -> Tensor

+

Returns a new view of the self tensor with singleton dimensions expanded to a larger size.

+

Passing -1 as the size for a dimension means not changing the size of that dimension.

+

Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.

+

Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.

+
+

+Arguments:

+
    +
  • sizes (torch_Size or int…): the desired expanded size
  • +
+
+
+

+Warning:

+

More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

+
+
+

+Examples:

+
+x <- torch_tensor(matrix(c(1,2,3), ncol = 1))
+x$size()
+x$expand(c(3, 4))
+x$expand(c(-1, 4))  # -1 means not changing the size of that dimension
+
+
+
+
+

+expand_as

+

expand_as(other) -> Tensor

+

Expand this tensor to the same size as other. self$expand_as(other) is equivalent to self$expand(other$size()).

+

Please see $expand for more information about expand.

+
+

+Arguments:

+
    +
  • other (Tensor): The result tensor has the same size as other.
  • +
+
+
+
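As a quick illustration (a minimal sketch, assuming the torch R package is attached):

```r
library(torch)

x <- torch_tensor(matrix(c(1, 2, 3), ncol = 1))  # shape (3, 1)
y <- torch_randn(3, 4)

# Same result as x$expand(c(3, 4)): the singleton column is broadcast
z <- x$expand_as(y)
z$size()
```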
+

+expm1

+

expm1() -> Tensor

+

See ?torch_expm1

+
+
+

+expm1_

+

expm1_() -> Tensor

+

In-place version of $expm1

+
+
+

+exponential_

+

exponential_(lambd=1, generator=NULL) -> Tensor

+

Fills self tensor with elements drawn from the exponential distribution:

+

\[ +f(x) = \lambda e^{-\lambda x} +\]

+
+
+
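A minimal sketch, assuming the torch R package:

```r
library(torch)

# Fill in place with draws from an exponential distribution (rate lambda = 2)
x <- torch_empty(1000)
x$exponential_(lambd = 2)

# The sample mean should be near 1 / lambda = 0.5
mean(as_array(x))
```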

+fft

+

fft(signal_ndim, normalized=FALSE) -> Tensor

+

See ?torch_fft

+
+
+

+fill_

+

fill_(value) -> Tensor

+

Fills self tensor with the specified value.

+
+
+
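For example (a minimal sketch, assuming the torch R package):

```r
library(torch)

x <- torch_empty(2, 3)
x$fill_(7)  # modifies x in place and returns it; every element is now 7
```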

+fill_diagonal_

+

fill_diagonal_(fill_value, wrap=FALSE) -> Tensor

+

Fill the main diagonal of a tensor that has at least 2 dimensions. When the tensor has more than 2 dimensions, all of its dimensions must be of equal length. This function modifies the input tensor in place and returns it.

+
+

+Arguments:

+
    +
  • fill_value (Scalar): the fill value
  • +
  • wrap (bool): whether the diagonal is ‘wrapped’ after N columns for tall matrices.
  • +
+
+
+

+Examples:

+
+a <- torch_zeros(3, 3)
+a$fill_diagonal_(5)
+b <- torch_zeros(7, 3)
+b$fill_diagonal_(5)
+c <- torch_zeros(7, 3)
+c$fill_diagonal_(5, wrap=TRUE)
+
+
+
+
+

+flatten

+

flatten(input, start_dim=0, end_dim=-1) -> Tensor

+

See ?torch_flatten

+
+
+

+flip

+

flip(dims) -> Tensor

+

See ?torch_flip

+
+
+

+fliplr

+

fliplr() -> Tensor

+

See ?torch_fliplr

+
+
+

+flipud

+

flipud() -> Tensor

+

See ?torch_flipud

+
+
+

+float

+

float(memory_format=torch_preserve_format) -> Tensor

+

self$float() is equivalent to self$to(torch_float32()). See $to().

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+floor

+

floor() -> Tensor

+

See ?torch_floor

+
+
+

+floor_

+

floor_() -> Tensor

+

In-place version of $floor

+
+
+

+floor_divide

+

floor_divide(value) -> Tensor

+

See ?torch_floor_divide

+
+
+

+floor_divide_

+

floor_divide_(value) -> Tensor

+

In-place version of $floor_divide

+
+
+

+fmod

+

fmod(divisor) -> Tensor

+

See ?torch_fmod

+
+
+

+fmod_

+

fmod_(divisor) -> Tensor

+

In-place version of $fmod

+
+
+

+frac

+

frac() -> Tensor

+

See ?torch_frac

+
+
+

+frac_

+

frac_() -> Tensor

+

In-place version of $frac

+
+
+

+gather

+

gather(dim, index) -> Tensor

+

See ?torch_gather

+
+
+

+ge

+

ge(other) -> Tensor

+

See ?torch_ge

+
+
+

+ge_

+

ge_(other) -> Tensor

+

In-place version of $ge

+
+
+

+geometric_

+

geometric_(p, generator=NULL) -> Tensor

+

Fills self tensor with elements drawn from the geometric distribution:

+

\[ +f(X=k) = p^{k - 1} (1 - p) +\]

+
+
+
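A minimal sketch, assuming the torch R package:

```r
library(torch)

# Fill in place with draws from a geometric distribution
# (success probability p); values are positive integer counts
x <- torch_empty(5)
x$geometric_(p = 0.5)
x
```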

+geqrf

+

geqrf() -> (Tensor, Tensor)

+

See ?torch_geqrf

+
+
+

+ger

+

ger(vec2) -> Tensor

+

See ?torch_ger

+
+
+

+get_device

+

get_device() -> Device ordinal (Integer)

+

For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown.

+
+

+Examples:

+
+x <- torch_randn(3, 4, 5, device='cuda:0')
+x$get_device()
+x$cpu()$get_device()  # RuntimeError: get_device is not implemented for type torch_FloatTensor
+
+
+
+
+

+grad

+

This attribute is NULL by default and becomes a Tensor the first time a call to $backward() computes gradients for self. The attribute will then contain the gradients computed, and future calls to $backward() will accumulate (add) gradients into it.

+
+
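A small illustration of grad being populated by a backward pass (a sketch, assuming the torch R package):

```r
library(torch)

x <- torch_tensor(2, requires_grad = TRUE)
y <- x * x        # y = x^2
y$backward()      # computes dy/dx and stores it in x$grad
x$grad            # tensor holding 2 * x = 4
```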
+

+gt

+

gt(other) -> Tensor

+

See ?torch_gt

+
+
+

+gt_

+

gt_(other) -> Tensor

+

In-place version of $gt

+
+
+

+half

+

half(memory_format=torch_preserve_format) -> Tensor

+

self$half() is equivalent to self$to(torch_float16()). See $to().

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+hardshrink

+

hardshrink(lambd=0.5) -> Tensor

+

See ?nnf_hardshrink

+
+
+

+has_names

+

Is TRUE if any of this tensor’s dimensions are named. Otherwise, is FALSE.

+
+
+

+histc

+

histc(bins=100, min=0, max=0) -> Tensor

+

See ?torch_histc

+
+
+

+ifft

+

ifft(signal_ndim, normalized=FALSE) -> Tensor

+

See ?torch_ifft

+
+
+

+imag

+

Returns a new tensor containing imaginary values of the self tensor. The returned tensor and self share the same underlying storage.

+
+

+Warning:

+

$imag is only supported for tensors with complex dtypes.

+
+
+

+Examples:

+
+x <- torch_randn(4, dtype=torch_cfloat())
+x
+x$imag
+
+
+
+
+

+index_add

+

index_add(tensor1, dim, index, tensor2) -> Tensor

+

Out-of-place version of $index_add_. tensor1 corresponds to self in $index_add_.

+
+
+

+index_add_

+

index_add_(dim, index, tensor) -> Tensor

+

Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i th row of tensor is added to the j th row of self.

+

The dim th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.

+
+

+Note:

+

In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by enabling deterministic mode.

+
+
+

+Arguments:

+
    +
  • dim (int): dimension along which to index
  • +
  • index (LongTensor): indices of tensor to select from
  • +
  • tensor (Tensor): the tensor containing values to add
  • +
+
+
+

+Examples:

+
+x <- torch_ones(5, 3)
+t <- torch_tensor(matrix(1:9, ncol = 3), dtype=torch_float())
+index <- torch_tensor(c(1L, 4L, 3L))
+x$index_add_(1, index, t)
+
+
+
+
+

+index_copy

+

index_copy(tensor1, dim, index, tensor2) -> Tensor

+

Out-of-place version of $index_copy_. tensor1 corresponds to self in $index_copy_.

+
+
+

+index_copy_

+

index_copy_(dim, index, tensor) -> Tensor

+

Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i th row of tensor is copied to the j th row of self.

+

The dim th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.

+
+

+Arguments:

+
    +
  • dim (int): dimension along which to index
  • +
  • index (LongTensor): indices of tensor to select from
  • +
  • tensor (Tensor): the tensor containing values to copy
  • +
+
+
+

+Examples:

+
+x <- torch_zeros(5, 3)
+t <- torch_tensor(matrix(1:9, ncol = 3), dtype=torch_float())
+index <- torch_tensor(c(1, 5, 3))
+x$index_copy_(1, index, t)
+
+
+
+
+

+index_fill

+

index_fill(tensor1, dim, index, value) -> Tensor

+

Out-of-place version of $index_fill_. tensor1 corresponds to self in $index_fill_.

+
+
+

+index_fill_

+

index_fill_(dim, index, val) -> Tensor

+

Fills the elements of the self tensor with value val by selecting the indices in the order given in index.

+
+

+Arguments:

+
    +
  • dim (int): dimension along which to index
  • +
  • index (LongTensor): indices of self tensor to fill in
  • +
  • val (float): the value to fill with
  • +
+
+
+

+Examples:

+
+x <- torch_tensor(matrix(1:9, ncol = 3), dtype=torch_float())
+index <- torch_tensor(c(1, 3), dtype = torch_long())
+x$index_fill_(1, index, -1)
+
+
+
+
+

+index_put

+

index_put(tensor1, indices, value, accumulate=FALSE) -> Tensor

+

Out-of-place version of $index_put_. tensor1 corresponds to self in $index_put_.

+
+
+

+index_put_

+

index_put_(indices, value, accumulate=FALSE) -> Tensor

+

Puts values from the tensor value into the tensor self using the indices specified in indices (which is a list of Tensors). The expression tensor$index_put_(indices, value) is equivalent to tensor[indices] <- value. Returns self.

+

If accumulate is TRUE, the elements in value are added to self. If accumulate is FALSE, the behavior is undefined if indices contain duplicate elements.

+
+

+Arguments:

+
    +
  • indices (list of LongTensor): tensors used to index into self.
  • +
  • value (Tensor): tensor of same dtype as self.
  • +
  • accumulate (bool): whether to accumulate into self
  • +
+
+
+
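A minimal sketch (indices follow the R package's indexing conventions; treat the exact index base as an assumption here):

```r
library(torch)

x <- torch_zeros(5)
idx <- torch_tensor(c(1, 3), dtype = torch_long())
vals <- torch_tensor(c(10, 20))

# indices is a list with one index tensor per indexed dimension
x$index_put_(list(idx), vals)
x
```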
+

+index_select

+

index_select(dim, index) -> Tensor

+

See ?torch_index_select

+
+
+

+indices

+

indices() -> Tensor

+

If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns a view of the contained indices tensor. Otherwise, this throws an error.

+

See also $values.

+
+

+Note:

+

This method can only be called on a coalesced sparse tensor. See $coalesce for details.

+
+
+
+

+int

+

int(memory_format=torch_preserve_format) -> Tensor

+

self$int() is equivalent to self$to(torch_int32()). See $to().

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+int_repr

+

int_repr() -> Tensor

+

Given a quantized Tensor, self$int_repr() returns a CPU Tensor of dtype uint8 that stores the underlying uint8 values of the given Tensor.

+
+
+

+inverse

+

inverse() -> Tensor

+

See ?torch_inverse

+
+
+

+irfft

+

irfft(signal_ndim, normalized=FALSE, onesided=TRUE, signal_sizes=NULL) -> Tensor

+

See ?torch_irfft

+
+
+

+is_complex

+

is_complex() -> bool

+

Returns TRUE if the data type of self is a complex data type.

+
+
+

+is_contiguous

+

is_contiguous(memory_format=torch_contiguous_format) -> bool

+

Returns TRUE if self tensor is contiguous in memory in the order specified by memory_format.

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): Specifies memory allocation order. Default: torch_contiguous_format.
  • +
+
+
+
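For example (a minimal sketch, assuming the torch R package):

```r
library(torch)

x <- torch_randn(3, 4)
x$is_contiguous()     # TRUE: freshly allocated tensors are contiguous

xt <- x$t()           # transpose is a strided view, not a copy
xt$is_contiguous()    # FALSE

xt$contiguous()$is_contiguous()  # TRUE: $contiguous() makes a compact copy
```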
+

+is_cuda

+

Is TRUE if the Tensor is stored on the GPU, FALSE otherwise.

+
+
+

+is_floating_point

+

is_floating_point() -> bool

+

Returns TRUE if the data type of self is a floating point data type.

+
+
+

+is_leaf

+

By convention, all Tensors with requires_grad set to FALSE are leaf Tensors.

+

Tensors with requires_grad set to TRUE are leaf Tensors only if they were created by the user. This means that they are not the result of an operation, and so grad_fn is NULL.

+

Only leaf Tensors will have their grad populated during a call to $backward(). To get grad populated for non-leaf Tensors, you can use $retain_grad().

+
+

+Examples:

+
+a <- torch_rand(10, requires_grad=TRUE)
+a$is_leaf()
+
+# b <- torch_rand(10, requires_grad=TRUE)$cuda()
+# b$is_leaf()
+# FALSE
+# b was created by the operation that cast a cpu Tensor into a cuda Tensor
+
+c <- torch_rand(10, requires_grad=TRUE) + 2
+c$is_leaf()
+# c was created by the addition operation
+
+# d <- torch_rand(10)$cuda()
+# d$is_leaf()
+# TRUE
+# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
+
+# e <- torch_rand(10)$cuda()$requires_grad_()
+# e$is_leaf()
+# TRUE
+# e requires gradients and has no operations creating it
+
+# f <- torch_rand(10, requires_grad=TRUE, device="cuda")
+# f$is_leaf
+# TRUE
+# f requires grad, has no operation creating it
+
+
+
+
+

+is_meta

+

Is TRUE if the Tensor is a meta tensor, FALSE otherwise. Meta tensors are like normal tensors, but they carry no data.

+
+
+

+is_pinned

+

Returns TRUE if this tensor resides in pinned memory.

+
+
+

+is_quantized

+

Is TRUE if the Tensor is quantized, FALSE otherwise.

+
+
+

+is_set_to

+

is_set_to(tensor) -> bool

+

Returns TRUE if this object refers to the same THTensor object from the Torch C API as the given tensor.

+
+
+

+is_shared

+

Checks if tensor is in shared memory.

+

This is always TRUE for CUDA tensors.

+
+
+

+is_signed

+

is_signed() -> bool

+

Returns TRUE if the data type of self is a signed data type.

+
+
+

+isclose

+

isclose(other, rtol=1e-05, atol=1e-08, equal_nan=FALSE) -> Tensor

+

See ?torch_isclose

+
+
+

+isfinite

+

isfinite() -> Tensor

+

See ?torch_isfinite

+
+
+

+isinf

+

isinf() -> Tensor

+

See ?torch_isinf

+
+
+

+isnan

+

isnan() -> Tensor

+

See ?torch_isnan

+
+
+

+istft

+

See ?torch_istft

+
+
+

+item

+

item() -> number

+

Returns the value of this tensor as a standard R number. This only works for tensors with one element. For tensors with more elements, use as_array().

+

This operation is not differentiable.

+
+

+Examples:

+
+x <- torch_tensor(1.0)
+x$item()
+
+
+
+
+

+kthvalue

+

kthvalue(k, dim=NULL, keepdim=FALSE) -> (Tensor, LongTensor)

+

See ?torch_kthvalue

+
+
+

+le

+

le(other) -> Tensor

+

See ?torch_le

+
+
+

+le_

+

le_(other) -> Tensor

+

In-place version of $le

+
+
+

+lerp

+

lerp(end, weight) -> Tensor

+

See ?torch_lerp

+
+
+

+lerp_

+

lerp_(end, weight) -> Tensor

+

In-place version of $lerp

+
+
+

+lgamma

+

lgamma() -> Tensor

+

See ?torch_lgamma

+
+
+

+lgamma_

+

lgamma_() -> Tensor

+

In-place version of $lgamma

+
+
+

+log

+

log() -> Tensor

+

See ?torch_log

+
+
+

+log10

+

log10() -> Tensor

+

See ?torch_log10

+
+
+

+log10_

+

log10_() -> Tensor

+

In-place version of $log10

+
+
+

+log1p

+

log1p() -> Tensor

+

See ?torch_log1p

+
+
+

+log1p_

+

log1p_() -> Tensor

+

In-place version of $log1p

+
+
+

+log2

+

log2() -> Tensor

+

See ?torch_log2

+
+
+

+log2_

+

log2_() -> Tensor

+

In-place version of $log2

+
+
+

+log_

+

log_() -> Tensor

+

In-place version of $log

+
+
+

+log_normal_

+

log_normal_(mean=1, std=2, generator=NULL)

+

Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean \mu and standard deviation \sigma. Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:

+

\[ +f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}} +\]

+
+
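A minimal sketch, assuming the torch R package:

```r
library(torch)

# mean and std parameterize the underlying normal distribution
x <- torch_empty(4)
x$log_normal_(mean = 0, std = 0.25)
x  # all draws are positive, since exp() of a normal draw is positive
```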
+

+logaddexp

+

logaddexp(other) -> Tensor

+

See ?torch_logaddexp

+
+
+

+logaddexp2

+

logaddexp2(other) -> Tensor

+

See ?torch_logaddexp2

+
+
+

+logcumsumexp

+

logcumsumexp(dim) -> Tensor

+

See ?torch_logcumsumexp

+
+
+

+logdet

+

logdet() -> Tensor

+

See ?torch_logdet

+
+
+

+logical_and

+

logical_and() -> Tensor

+

See ?torch_logical_and

+
+
+

+logical_and_

+

logical_and_() -> Tensor

+

In-place version of $logical_and

+
+
+

+logical_not

+

logical_not() -> Tensor

+

See ?torch_logical_not

+
+
+

+logical_not_

+

logical_not_() -> Tensor

+

In-place version of $logical_not

+
+
+

+logical_or

+

logical_or() -> Tensor

+

See ?torch_logical_or

+
+
+

+logical_or_

+

logical_or_() -> Tensor

+

In-place version of $logical_or

+
+
+

+logical_xor

+

logical_xor() -> Tensor

+

See ?torch_logical_xor

+
+
+

+logical_xor_

+

logical_xor_() -> Tensor

+

In-place version of $logical_xor

+
+
+

+logsumexp

+

logsumexp(dim, keepdim=FALSE) -> Tensor

+

See ?torch_logsumexp

+
+
+

+long

+

long(memory_format=torch_preserve_format) -> Tensor

+

self$long() is equivalent to self$to(torch_int64()). See $to().

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+lstsq

+

lstsq(A) -> (Tensor, Tensor)

+

See ?torch_lstsq

+
+
+

+lt

+

lt(other) -> Tensor

+

See ?torch_lt

+
+
+

+lt_

+

lt_(other) -> Tensor

+

In-place version of $lt

+
+
+

+lu

+

See ?torch_lu

+
+
+

+lu_solve

+

lu_solve(LU_data, LU_pivots) -> Tensor

+

See ?torch_lu_solve

+
+
+

+map_

+

map_(tensor, callable)

+

Applies callable to each element in self tensor and the given tensor and stores the results in self tensor. self tensor and the given tensor must be broadcastable.

+

The callable should have the signature:

+

callable(a, b) -> number

+
+
+

+masked_fill

+

masked_fill(mask, value) -> Tensor

+

Out-of-place version of $masked_fill_

+
+
+

+masked_fill_

+

masked_fill_(mask, value)

+

Fills elements of self tensor with value where mask is TRUE. The shape of mask must be broadcastable with the shape of the underlying tensor.

+
+

+Arguments:

+
    +
  • mask (BoolTensor): the boolean mask
  • +
  • value (float): the value to fill in with
  • +
+
+
+
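For example (a minimal sketch, assuming the torch R package):

```r
library(torch)

x <- torch_tensor(c(1, 2, 3, 4))
mask <- torch_tensor(c(TRUE, FALSE, TRUE, FALSE))

x$masked_fill_(mask, -1)  # entries where mask is TRUE become -1
```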
+

+masked_scatter

+

masked_scatter(mask, tensor) -> Tensor

+

Out-of-place version of $masked_scatter_

+
+
+

+masked_scatter_

+

masked_scatter_(mask, source)

+

Copies elements from source into self tensor at positions where the mask is TRUE. The shape of mask must be broadcastable with the shape of the underlying tensor. source should have at least as many elements as the number of TRUE entries in mask.

+
+

+Arguments:

+
    +
  • mask (BoolTensor): the boolean mask
  • +
  • source (Tensor): the tensor to copy from
  • +
+
+
+

+Note:

+

The mask operates on the self tensor, not on the given source tensor.

+
+
+
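For example (a minimal sketch, assuming the torch R package):

```r
library(torch)

x <- torch_zeros(5)
mask <- torch_tensor(c(TRUE, FALSE, TRUE, FALSE, TRUE))
src <- torch_tensor(c(10, 20, 30))  # at least as many elements as TRUE entries

# src values are copied, in order, into the masked positions of x
x$masked_scatter_(mask, src)
```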
+

+masked_select

+

masked_select(mask) -> Tensor

+

See ?torch_masked_select

+
+
+

+matmul

+

matmul(tensor2) -> Tensor

+

See ?torch_matmul

+
+
+

+matrix_power

+

matrix_power(n) -> Tensor

+

See ?torch_matrix_power

+
+
+

+max

+

max(dim=NULL, keepdim=FALSE) -> Tensor or (Tensor, Tensor)

+

See ?torch_max

+
+
+

+mean

+

mean(dim=NULL, keepdim=FALSE) -> Tensor

+

See ?torch_mean

+
+
+

+median

+

median(dim=NULL, keepdim=FALSE) -> (Tensor, LongTensor)

+

See ?torch_median

+
+
+

+min

+

min(dim=NULL, keepdim=FALSE) -> Tensor or (Tensor, Tensor)

+

See ?torch_min

+
+
+

+mm

+

mm(mat2) -> Tensor

+

See ?torch_mm

+
+
+

+mode

+

mode(dim=NULL, keepdim=FALSE) -> (Tensor, LongTensor)

+

See ?torch_mode

+
+
+

+mul

+

mul(value) -> Tensor

+

See ?torch_mul

+
+
+

+mul_

+

mul_(value)

+

In-place version of $mul

+
+
+

+multinomial

+

multinomial(num_samples, replacement=FALSE, generator=NULL) -> Tensor

+

See ?torch_multinomial

+
+
+

+mv

+

mv(vec) -> Tensor

+

See ?torch_mv

+
+
+

+mvlgamma

+

mvlgamma(p) -> Tensor

+

See ?torch_mvlgamma

+
+
+

+mvlgamma_

+

mvlgamma_(p) -> Tensor

+

In-place version of $mvlgamma

+
+
+

+names

+

Stores names for each of this tensor’s dimensions.

+

names[idx] corresponds to the name of tensor dimension idx. Names are either a string if the dimension is named or NULL if the dimension is unnamed.

+

Dimension names may contain letters or underscores. Furthermore, a dimension name must be a valid R variable name and may not start with an underscore.

+

Tensors may not have two named dimensions with the same name.

+
+

+Warning:

+

The named tensor API is experimental and subject to change.

+
+
+
+

+narrow

+

narrow(dimension, start, length) -> Tensor

+

See ?torch_narrow

+
+

+Examples:

+
+x <- torch_tensor(matrix(1:9, ncol = 3))
+x$narrow(1, 1, 3)
+x$narrow(1, 1, 2)
+
+
+
+
+

+narrow_copy

+

narrow_copy(dimension, start, length) -> Tensor

+

Same as $narrow except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling narrow_copy with dimension > self$sparse_dim() will return a copy with the relevant dense dimension narrowed, and self$shape updated accordingly.

+
+
+

+ndim

+

Alias for $dim()

+
+
+

+ndimension

+

ndimension() -> int

+

Alias for $dim()

+
+
+

+ne

+

ne(other) -> Tensor

+

See ?torch_ne

+
+
+

+ne_

+

ne_(other) -> Tensor

+

In-place version of $ne

+
+
+

+neg

+

neg() -> Tensor

+

See ?torch_neg

+
+
+

+neg_

+

neg_() -> Tensor

+

In-place version of $neg

+
+
+

+nelement

+

nelement() -> int

+

Alias for $numel

+
+
+

+new_empty

+

new_empty(size, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

+

Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

+
+

+Arguments:

+
    +
  • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
  • +
  • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
  • +
  • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
  • +
+
+
+

+Examples:

+
+tensor <- torch_ones(5)
+tensor$new_empty(c(2, 3))
+
+
+
+
+

+new_full

+

new_full(size, fill_value, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

+

Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

+
+

+Arguments:

+
    +
  • fill_value (scalar): the number to fill the output tensor with.
  • +
  • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
  • +
  • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
  • +
  • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
  • +
+
+
+

+Examples:

+
+tensor <- torch_ones(c(2), dtype=torch_float64())
+tensor$new_full(c(3, 4), 3.141592)
+
+
+
+
+

+new_ones

+

new_ones(size, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

+

Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

+
+

+Arguments:

+
    +
  • size (int…): a list, tuple, or torch_Size of integers defining the shape of the output tensor.
  • +
  • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
  • +
  • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
  • +
  • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
  • +
+
+
+

+Examples:

+
+tensor <- torch_tensor(c(2), dtype=torch_int32())
+tensor$new_ones(c(2, 3))
+
+
+
+
+

+new_tensor

+

new_tensor(data, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

+

Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

+
+

+Warning:

+

new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use $requires_grad_() or $detach().

+

When data is a tensor x, new_tensor() reads out the data from whatever it is passed, and constructs a leaf variable. Therefore tensor$new_tensor(x) is equivalent to x$clone()$detach() and tensor$new_tensor(x, requires_grad=TRUE) is equivalent to x$clone()$detach()$requires_grad_(TRUE). The equivalents using $clone() and $detach() are recommended.

+
+
+

+Arguments:

+
    +
  • data (array_like): The returned Tensor copies data.
  • +
  • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
  • +
  • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
  • +
  • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
  • +
+
+
+

+Examples:

+
+tensor <- torch_ones(c(2), dtype=torch_int8())
+data <- matrix(1:4, ncol = 2)
+tensor$new_tensor(data)
+
+
+
+
+

+new_zeros

+

new_zeros(size, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

+

Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

+
+

+Arguments:

+
    +
  • size (int…): a list, tuple, or torch_Size of integers defining the shape of the output tensor.
  • +
  • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
  • +
  • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
  • +
  • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
  • +
+
+
+

+Examples:

+
+tensor <- torch_tensor(c(1), dtype=torch_float64())
+tensor$new_zeros(c(2, 3))
+
+
+
+
+

+nonzero

+

nonzero() -> LongTensor

+

See ?torch_nonzero

+
+
+

+norm

+

See ?torch_norm

+
+
+

+normal_

+

normal_(mean=0, std=1, *, generator=NULL) -> Tensor

+

Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.

+
+
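A minimal sketch, assuming the torch R package:

```r
library(torch)

x <- torch_empty(10000)
x$normal_(mean = 5, std = 2)

mean(as_array(x))  # close to 5
sd(as_array(x))    # close to 2
```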
+

+numel

+

numel() -> int

+

See ?torch_numel

+
+
+

+numpy

+

numpy() -> numpy.ndarray

+

Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa.

+
+
+

+orgqr

+

orgqr(input2) -> Tensor

+

See ?torch_orgqr

+
+
+

+ormqr

+

ormqr(input2, input3, left=TRUE, transpose=FALSE) -> Tensor

+

See ?torch_ormqr

+
+
+

+permute

+

permute(dims) -> Tensor

+

Returns a view of the original tensor with its dimensions permuted.

+
+

+Arguments:

+
    +
  • dims (int…): The desired ordering of dimensions
  • +
+
+
+

+Examples:

+
+x <- torch_randn(2, 3, 5)
+x$size()
+x$permute(c(3, 1, 2))$size()
+
+
+
+
+

+pin_memory

+

pin_memory() -> Tensor

+

Copies the tensor to pinned memory, if it’s not already pinned.

+
+
+

+pinverse

+

pinverse() -> Tensor

+

See ?torch_pinverse

+
+
+

+polygamma

+

polygamma(n) -> Tensor

+

See ?torch_polygamma

+
+
+

+polygamma_

+

polygamma_(n) -> Tensor

+

In-place version of $polygamma

+
+
+

+pow

+

pow(exponent) -> Tensor

+

See ?torch_pow

+
+
+

+pow_

+

pow_(exponent) -> Tensor

+

In-place version of $pow

+
+
+

+prod

+

prod(dim=NULL, keepdim=FALSE, dtype=NULL) -> Tensor

+

See ?torch_prod

+
+
+

+put_

+

put_(indices, tensor, accumulate=FALSE) -> Tensor

+

Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor.

+

If accumulate is TRUE, the elements in tensor are added to self. If accumulate is FALSE, the behavior is undefined if indices contain duplicate elements.

+
+

+Arguments:

+
    +
  • indices (LongTensor): the indices into self
  • +
  • tensor (Tensor): the tensor containing values to copy from
  • +
  • accumulate (bool): whether to accumulate into self
  • +
+
+
+

+Examples:

+
+src <- torch_tensor(matrix(3:8, ncol = 3))
+src$put_(torch_tensor(1:2), torch_tensor(9:10))
+
+
+
+
+

+q_per_channel_axis

+

q_per_channel_axis() -> int

+

Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.

+
+
+

+q_per_channel_scales

+

q_per_channel_scales() -> Tensor

+

Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.

+
+
+

+q_per_channel_zero_points

+

q_per_channel_zero_points() -> Tensor

+

Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.

+
+
+

+q_scale

+

q_scale() -> float

+

Given a Tensor quantized by linear (affine) quantization, returns the scale of the underlying quantizer.

+
+
+

+q_zero_point

+

q_zero_point() -> int

+

Given a Tensor quantized by linear (affine) quantization, returns the zero_point of the underlying quantizer.

+
+
+

+qr

+

qr(some=TRUE) -> (Tensor, Tensor)

+

See ?torch_qr

+
+
+

+qscheme

+

qscheme() -> torch_qscheme

+

Returns the quantization scheme of a given QTensor.

+
+
+

+rad2deg

+

rad2deg() -> Tensor

+

See ?torch_rad2deg

+
+
+

+rad2deg_

+

rad2deg_() -> Tensor

+

In-place version of $rad2deg

+
+
+

+random_

+

random_(from=0, to=NULL, generator=NULL) -> Tensor

+

Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor’s data type. However, for floating point types, if unspecified, the range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch_tensor(1, dtype=torch_double())$random_() will be uniform in [0, 2^53].

+
+
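A minimal sketch (the bounds follow the [from, to - 1] convention described above; treat the exact endpoints as an assumption):

```r
library(torch)

# Fill in place with discrete uniform draws
x <- torch_empty(100, dtype = torch_int64())
x$random_(0, 10)
x
```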
+

+real

+

Returns a new tensor containing real values of the self tensor. The returned tensor and self share the same underlying storage.

+
+

+Warning:

+

$real is only supported for tensors with complex dtypes.

+
+
+

+Examples:

+
+x <- torch_randn(4, dtype=torch_cfloat())
+x
+x$real
+
+
+
+
+

+reciprocal

+

reciprocal() -> Tensor

+

See ?torch_reciprocal

+
+
+

+reciprocal_

+

reciprocal_() -> Tensor

+

In-place version of $reciprocal

+
+
+

+record_stream

+

record_stream(stream)

+

Ensures that the tensor memory is not reused for another tensor until all work currently queued on stream is complete.

+
+

+Note:

+

The caching allocator is aware only of the stream on which a tensor was allocated, so it can correctly manage the life cycle of tensors on that stream alone. If a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor.

+
+
+
+

+refine_names

+

Refines the dimension names of self according to names.

+

Refining is a special case of renaming that “lifts” unnamed dimensions. A NULL dim can be refined to have any name; a named dim can only be refined to have the same name.

+

Because named tensors can coexist with unnamed tensors, refining names gives a nice way to write named-tensor-aware code that works with both named and unnamed tensors.

+

names may contain up to one Ellipsis (...). The Ellipsis is expanded greedily; it is expanded in-place to fill names to the same length as self$dim() using names from the corresponding indices of self$names.

+
+

+Arguments:

+
    +
  • names (iterable of str): The desired names of the output tensor. May contain up to one Ellipsis.
  • +
+
+
+

+Examples:

+
+imgs <- torch_randn(32, 3, 128, 128)
+named_imgs <- imgs$refine_names(c('N', 'C', 'H', 'W'))
+named_imgs$names
+
+
+
+
+

+register_hook

+

Registers a backward hook.

+

The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:

+

hook(grad) -> Tensor or NULL

+

The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.

+

This function returns a handle with a method handle$remove() that removes the hook.

+
+

+Example

+
+v <- torch_tensor(c(0., 0., 0.), requires_grad=TRUE)
+h <- v$register_hook(function(grad) grad * 2)  # double the gradient
+v$backward(torch_tensor(c(1., 2., 3.)))
+v$grad
+h$remove()
+
+
+
+
+

+remainder

+

remainder(divisor) -> Tensor

+

See ?torch_remainder

+
+
+

+remainder_

+

remainder_(divisor) -> Tensor

+

In-place version of $remainder

+
+
+

+rename

+

Renames dimension names of self.

+

There are two main usages:

+

self$rename(rename_map) returns a view on the tensor that has dims renamed as specified in the mapping rename_map.

+

self$rename(names) returns a view on the tensor, renaming all dimensions positionally using names. Use self$rename(NULL) to drop names on a tensor.

+

One cannot specify both the positional names and the rename_map mapping.

+
+

+Examples:

+
+imgs <- torch_rand(2, 3, 5, 7, names=c('N', 'C', 'H', 'W'))
+renamed_imgs <- imgs$rename(c("Batch", "Channels", "Height", "Width"))
+
+
+
+
+

+rename_

+

In-place version of $rename.

+
+
+

+renorm

+

renorm(p, dim, maxnorm) -> Tensor

+

See ?torch_renorm

+
+
+

+renorm_

+

renorm_(p, dim, maxnorm) -> Tensor

+

In-place version of $renorm

+
+
+

+repeat

+

repeat(*sizes) -> Tensor

+

Repeats this tensor along the specified dimensions.

+

Unlike $expand, this function copies the tensor’s data.

+
+

+Arguments:

+
    +
  • sizes (torch_Size or int…): The number of times to repeat this tensor along each dimension
  • +
+
+
+

+Examples:

+
+x <- torch_tensor(c(1, 2, 3))
+x$`repeat`(c(4, 2))
+x$`repeat`(c(4, 2, 1))$size()
+
+
+
+
+

+repeat_interleave

+

repeat_interleave(repeats, dim=NULL) -> Tensor

+

See [torch_repeat_interleave()].

+
+
+

+requires_grad

+

Is TRUE if gradients need to be computed for this Tensor, FALSE otherwise.

+
+

+Note:

+

The fact that gradients need to be computed for a Tensor does not mean that the grad attribute will be populated; see is_leaf for more details.

+
+
+
+

+requires_grad_

+

requires_grad_(requires_grad=TRUE) -> Tensor

+

Change if autograd should record operations on this tensor: sets this tensor’s requires_grad attribute in-place. Returns this tensor.

+

[requires_grad_()]’s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=FALSE (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor$requires_grad_() makes it so that autograd will begin to record operations on tensor.

+
+

+Arguments:

+
    +
  • requires_grad (bool): If autograd should record operations on this tensor. Default: TRUE.
  • +
+
+
+

+Examples:

+
+# Let's say we want to preprocess some saved weights and use
+# the result as new weights.
+saved_weights <- c(0.1, 0.2, 0.3, 0.25)
+loaded_weights <- torch_tensor(saved_weights)
+weights <- preprocess(loaded_weights)  # some function
+weights
+
+# Now, start to record operations done to weights
+weights$requires_grad_()
+out <- weights$pow(2)$sum()
+out$backward()
+weights$grad
+
+
+
+
+

+reshape

+

reshape(*shape) -> Tensor

+

Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See $view on when it is possible to return a view.

+

See ?torch_reshape

+
+

+Arguments:

+
    +
  • shape (tuple of ints or int…): the desired shape
  • +
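A small sketch of the view-vs-copy behaviour described above (shapes chosen for illustration):

```r
library(torch)

x <- torch_arange(1, 8)     # 8 elements
x$reshape(c(2, 4))          # same data, shape (2, 4)
x$reshape(c(-1, 2))         # -1 lets torch infer the remaining dimension
```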
+
+
+
+

+reshape_as

+

reshape_as(other) -> Tensor

+

Returns this tensor reshaped to the same shape as other. self$reshape_as(other) is equivalent to self$reshape(other$sizes()). This method returns a view if other$sizes() is compatible with the current shape. See $view on when it is possible to return a view.

+

Please see reshape for more information about reshape.

+
+

+Arguments:

+
    +
  • other (Tensor): The result tensor has the same shape as other.
  • +
+
+
+
+

+resize_

+

resize_(*sizes, memory_format=torch_contiguous_format) -> Tensor

+

Resizes self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.

+
+

+Warning:

+

This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use $view(), which checks for contiguity, or $reshape(), which copies data if needed. To change the size in-place with custom strides, see $set_().

+
+
+

+Arguments:

+
    +
  • sizes (torch_Size or int…): the desired size
  • +
  • memory_format (torch_memory_format, optional): the desired memory format of Tensor. Default: torch_contiguous_format. Note that memory format of self is going to be unaffected if self$size() matches sizes.
  • +
+
+
+

+Examples:

+
+x <- torch_tensor(matrix(1:6, ncol = 2))
+x$resize_(c(2, 2))
+
+
+
+
+

+resize_as_

+

resize_as_(tensor, memory_format=torch_contiguous_format) -> Tensor

+

Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self$resize_(tensor$size()).

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of Tensor. Default: torch_contiguous_format. Note that memory format of self is going to be unaffected if self$size() matches tensor$size().
  • +
+
+
+
+

+retain_grad

+

Enables $grad attribute for non-leaf Tensors.

+
+
+

+rfft

+

rfft(signal_ndim, normalized=FALSE, onesided=TRUE) -> Tensor

+

See ?torch_rfft

+
+
+

+roll

+

roll(shifts, dims) -> Tensor

+

See ?torch_roll

+
+
+

+rot90

+

rot90(k, dims) -> Tensor

+

See [torch_rot90()]

+
+
+

+round

+

round() -> Tensor

+

See ?torch_round

+
+
+

+round_

+

round_() -> Tensor

+

In-place version of $round

+
+
+

+rsqrt

+

rsqrt() -> Tensor

+

See ?torch_rsqrt

+
+
+

+rsqrt_

+

rsqrt_() -> Tensor

+

In-place version of $rsqrt

+
+
+

+scatter

+

scatter(dim, index, src) -> Tensor

+

Out-of-place version of $scatter_

+
+
+

+scatter_

+

scatter_(dim, index, src) -> Tensor

+

Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

+

For a 3-D tensor, self is updated as:

+
self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
+self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
+self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
+

This is the reverse operation of the manner described in $gather.

+

self, index and src (if it is a Tensor) should have the same number of dimensions. It is also required that index$size(d) <= src$size(d) for all dimensions d, and that index$size(d) <= self$size(d) for all dimensions d != dim.

+

Moreover, as for $gather, the values of index must be between 1 and self$size(dim) inclusive, and all values in a row along the specified dimension dim must be unique.

+
+

+Arguments:

+
    +
  • dim (int): the axis along which to index
  • +
  • index (LongTensor): the indices of elements to scatter; can be either empty or the same size as src. When empty, the operation returns self unchanged.
  • +
  • src (Tensor): the source element(s) to scatter, in case value is not specified
  • +
  • value (float): the source element(s) to scatter, in case src is not specified
  • +
+
+
+

+Examples:

+
+x <- torch_rand(2, 5)
+x
+torch_zeros(3, 5)$scatter_(
+        1, 
+        torch_tensor(rbind(c(2, 3, 3, 1, 1), c(3, 1, 1, 2, 3)), dtype = torch_int64()),
+        x
+)
+
+z <- torch_zeros(2, 4)$scatter_(
+        2, 
+        torch_tensor(matrix(3:4, ncol = 1)), 1.23
+)
+
+
+
+
+

+scatter_add

+

scatter_add(dim, index, src) -> Tensor

+

Out-of-place version of $scatter_add_

+
+
+

+scatter_add_

+

scatter_add_(dim, index, src) -> Tensor

+

Adds all values from the tensor src into self at the indices specified in the index tensor, in a similar fashion as $scatter_. For each value in src, it is added to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

+

For a 3-D tensor, self is updated as::

+
self[index[i][j][k]][j][k] += src[i][j][k]  # if dim == 0
+self[i][index[i][j][k]][k] += src[i][j][k]  # if dim == 1
+self[i][j][index[i][j][k]] += src[i][j][k]  # if dim == 2
+

self, index and src should have the same number of dimensions. It is also required that index$size(d) <= src$size(d) for all dimensions d, and that index$size(d) <= self$size(d) for all dimensions d != dim.

+
+

+Note:

+

In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by enabling deterministic CuDNN operation.

+
+
+

+Arguments:

+
    +
  • dim (int): the axis along which to index
  • +
  • index (LongTensor): the indices of elements to scatter and add; can be either empty or the same size as src. When empty, the operation returns self unchanged.
  • +
  • src (Tensor): the source elements to scatter and add
  • +
+
+
+

+Examples:

+
+x <- torch_rand(2, 5)
+x
+torch_ones(3, 5)$scatter_add_(1, torch_tensor(rbind(c(1, 2, 3, 1, 1), c(3, 1, 1, 2, 3)), dtype = torch_int64()), x)
+
+
+
+
+

+select

+

select(dim, index) -> Tensor

+

Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed.

+
+

+Arguments:

+
    +
  • dim (int): the dimension to slice
  • +
  • index (int): the index to select with
  • +
+
+
+

+Note:

+

select is equivalent to slicing. For example, tensor$select(1, index) is equivalent to tensor[index, , ] and tensor$select(3, index) is equivalent to tensor[ , , index].
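Assuming 1-based dimensions and indices, as elsewhere in the R interface, a minimal sketch:

```r
library(torch)

x <- torch_randn(3, 4, 5)
x$select(1, 2)$size()   # slice dim 1 at index 2; that dim is dropped
```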

+
+
+
+

+set_

+

set_(source=NULL, storage_offset=0, size=NULL, stride=NULL) -> Tensor

+

Sets the underlying storage, size, and strides. If source is a tensor, self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other.

+
+

+Arguments:

+
    +
  • source (Tensor or Storage): the tensor or storage to use
  • +
  • storage_offset (int, optional): the offset in the storage
  • +
  • size (torch_Size, optional): the desired size. Defaults to the size of the source.
  • +
  • stride (tuple, optional): the desired stride. Defaults to C-contiguous strides.
  • +
+
+
+
+

+share_memory_

+

Moves the underlying storage to shared memory.

+

This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.

+
+
+

+short

+

short(memory_format=torch_preserve_format) -> Tensor

+

self$short() is equivalent to self$to(torch_int16()). See [to()].

+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
  • +
+
+
+
+

+sigmoid

+

sigmoid() -> Tensor

+

See ?torch_sigmoid

+
+
+

+sigmoid_

+

sigmoid_() -> Tensor

+

In-place version of $sigmoid

+
+
+

+sign

+

sign() -> Tensor

+

See ?torch_sign

+
+
+

+sign_

+

sign_() -> Tensor

+

In-place version of $sign

+
+
+

+sin

+

sin() -> Tensor

+

See ?torch_sin

+
+
+

+sin_

+

sin_() -> Tensor

+

In-place version of $sin

+
+
+

+sinh

+

sinh() -> Tensor

+

See ?torch_sinh

+
+
+

+sinh_

+

sinh_() -> Tensor

+

In-place version of $sinh

+
+
+

+size

+

size() -> torch_Size

+

Returns the size of the self tensor as a vector of dimension lengths.

+
+

+Examples:

+
+torch_empty(3, 4, 5)$size()
+
+
+
+
+

+slogdet

+

slogdet() -> (Tensor, Tensor)

+

See ?torch_slogdet

+
+
+

+solve

+

solve(A) -> Tensor, Tensor

+

See ?torch_solve

+
+
+

+sort

+

sort(dim=-1, descending=FALSE) -> (Tensor, LongTensor)

+

See ?torch_sort

+
+
+

+sparse_dim

+

sparse_dim() -> int

+

If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns the number of sparse dimensions. Otherwise, this throws an error.

+

See also $dense_dim.

+
+
+

+sparse_mask

+

sparse_mask(input, mask) -> Tensor

+

Returns a new sparse tensor with values from the dense tensor input, filtered by the indices of mask; the values of mask themselves are ignored. input and mask must have the same shape.

+
+

+Arguments:

+
    +
  • input (Tensor): an input Tensor
  • +
  • mask (SparseTensor): a SparseTensor which we filter input based on its indices
  • +
+
+
+
+

+split

+

See ?torch_split

+
+
+

+sqrt

+

sqrt() -> Tensor

+

See ?torch_sqrt

+
+
+

+sqrt_

+

sqrt_() -> Tensor

+

In-place version of $sqrt

+
+
+

+square

+

square() -> Tensor

+

See ?torch_square

+
+
+

+square_

+

square_() -> Tensor

+

In-place version of $square

+
+
+

+squeeze

+

squeeze(dim=NULL) -> Tensor

+

See ?torch_squeeze

+
+
+

+squeeze_

+

squeeze_(dim=NULL) -> Tensor

+

In-place version of $squeeze

+
+
+

+std

+

std(dim=NULL, unbiased=TRUE, keepdim=FALSE) -> Tensor

+

See ?torch_std

+
+
+

+stft

+

See ?torch_stft

+
+
+

+storage

+

storage() -> torch_Storage

+

Returns the underlying storage.

+
+
+

+storage_offset

+

storage_offset() -> int

+

Returns self tensor’s offset in the underlying storage in terms of number of storage elements (not bytes).

+
+

+Examples:

+
+x <- torch_tensor(c(1, 2, 3, 4, 5))
+x$storage_offset()
+x[3:N]$storage_offset()
+
+
+
+
+

+storage_type

+

storage_type() -> type

+

Returns the type of the underlying storage.

+
+
+

+stride

+

stride(dim) -> tuple or int

+

Returns the stride of self tensor.

+

Stride is the jump necessary to go from one element to the next one in the specified dimension dim. The strides of all dimensions are returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim.

+
+

+Arguments:

+
    +
  • dim (int, optional): the desired dimension in which stride is required
  • +
+
+
+

+Examples:

+
+x <- torch_tensor(matrix(1:10, nrow = 2))
+x$stride()
+x$stride(1)
+x$stride(-1)
+
+
+
+
+

+sub

+

sub(other, *, alpha=1) -> Tensor

+

Subtracts a scalar or tensor from self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used.

+

When other is a tensor, the shape of other must be broadcastable <broadcasting-semantics> with the shape of the underlying tensor.
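A small sketch of the alpha scaling (values chosen for illustration):

```r
library(torch)

a <- torch_tensor(c(5, 6))
b <- torch_tensor(c(1, 2))
a$sub(b, alpha = 2)   # elementwise a - 2 * b
```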

+
+
+

+sub_

+

sub_(other, *, alpha=1) -> Tensor

+

In-place version of $sub

+
+
+

+sum

+

sum(dim=NULL, keepdim=FALSE, dtype=NULL) -> Tensor

+

See ?torch_sum

+
+
+

+sum_to_size

+

sum_to_size(*size) -> Tensor

+

Sum this tensor to size. size must be broadcastable to this tensor size.

+
+

+Arguments:

+
    +
  • size (int…): a sequence of integers defining the shape of the output tensor.
  • +
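For instance, summing a tensor down to a broadcast-compatible shape might look like this sketch:

```r
library(torch)

x <- torch_ones(2, 3)
x$sum_to_size(c(1, 3))   # sums over the first dim, giving shape (1, 3)
```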
+
+
+
+

+svd

+

svd(some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

+

See ?torch_svd

+
+
+

+symeig

+

symeig(eigenvectors=FALSE, upper=TRUE) -> (Tensor, Tensor)

+

See ?torch_symeig

+
+
+

+t

+

t() -> Tensor

+

See ?torch_t

+
+
+

+t_

+

t_() -> Tensor

+

In-place version of $t

+
+
+

+take

+

take(indices) -> Tensor

+

See ?torch_take

+
+
+

+tan

+

tan() -> Tensor

+

See ?torch_tan

+
+
+

+tan_

+

tan_() -> Tensor

+

In-place version of $tan

+
+
+

+tanh

+

tanh() -> Tensor

+

See ?torch_tanh

+
+
+

+tanh_

+

tanh_() -> Tensor

+

In-place version of $tanh

+
+
+

+to

+

to(*args, **kwargs) -> Tensor

+

Performs Tensor dtype and/or device conversion. A torch_dtype and torch_device are inferred from the arguments of self$to(...).

+
+

+Note:

+

If the self Tensor already has the correct torch_dtype and torch_device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch_dtype and torch_device.

+

Here are the ways to call to:

+

to(dtype, non_blocking=FALSE, copy=FALSE, memory_format=torch_preserve_format) -> Tensor

+

Returns a Tensor with the specified dtype

+
+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of returned Tensor. Default: torch_preserve_format.
  • +
+

to(device=NULL, dtype=NULL, non_blocking=FALSE, copy=FALSE, memory_format=torch_preserve_format) -> Tensor

+

Returns a Tensor with the specified device and (optional) dtype. If dtype is NULL it is inferred to be self$dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor.

+

When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

+
+
+

+Arguments:

+
    +
  • memory_format (torch_memory_format, optional): the desired memory format of returned Tensor. Default: torch_preserve_format.
  • +
+

to(other, non_blocking=FALSE, copy=FALSE) -> Tensor

+

Returns a Tensor with the same torch_dtype and torch_device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor.

+

When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

+
+
+

+Examples:

+
+tensor <- torch_randn(2, 2)  # Initially dtype=float32, device=cpu
+tensor$to(dtype = torch_float64())
+
+other <- torch_randn(1, dtype=torch_float64())
+tensor$to(other = other, non_blocking=TRUE)
+
+
+
+
+

+to_mkldnn

+

to_mkldnn() -> Tensor. Returns a copy of the tensor in torch_mkldnn layout.

+
+
+

+to_sparse

+

to_sparse(sparseDims) -> Tensor. Returns a sparse copy of the tensor. torch supports sparse tensors in coordinate (COO) format.

+
+

+Arguments:

+
    +
  • sparseDims (int, optional): the number of sparse dimensions to include in the new sparse tensor
  • +
+
+
+
+

+tolist

+

tolist() -> list or number

+

Returns the tensor as a (nested) R list. For scalars, a plain R number is returned, just like with $item. Tensors are automatically moved to the CPU first if necessary.

+

This operation is not differentiable.

+
+
+

+topk

+

topk(k, dim=NULL, largest=TRUE, sorted=TRUE) -> (Tensor, LongTensor)

+

See ?torch_topk

+
+
+

+trace

+

trace() -> Tensor

+

See ?torch_trace

+
+
+

+transpose

+

transpose(dim0, dim1) -> Tensor

+

See ?torch_transpose

+
+
+

+transpose_

+

transpose_(dim0, dim1) -> Tensor

+

In-place version of $transpose

+
+
+

+triangular_solve

+

triangular_solve(A, upper=TRUE, transpose=FALSE, unitriangular=FALSE) -> (Tensor, Tensor)

+

See [torch_triangular_solve()]

+
+
+

+tril

+

tril(k=0) -> Tensor

+

See ?torch_tril

+
+
+

+tril_

+

tril_(k=0) -> Tensor

+

In-place version of $tril

+
+
+

+triu

+

triu(k=0) -> Tensor

+

See ?torch_triu

+
+
+

+triu_

+

triu_(k=0) -> Tensor

+

In-place version of $triu

+
+
+

+true_divide

+

true_divide(value) -> Tensor

+

See [torch_true_divide()]

+
+
+

+true_divide_

+

true_divide_(value) -> Tensor

+

In-place version of $true_divide

+
+
+

+trunc

+

trunc() -> Tensor

+

See ?torch_trunc

+
+
+

+trunc_

+

trunc_() -> Tensor

+

In-place version of $trunc

+
+
+

+type

+

type(dtype=NULL, non_blocking=FALSE, **kwargs) -> str or Tensor. Returns the type if dtype is not provided, else casts this object to the specified type.

+

If this is already of the correct type, no copy is performed and the original object is returned.

+
+

+Arguments:

+
    +
  • dtype (type or string): The desired type
  • +
  • non_blocking (bool): If TRUE, and the source is in pinned memory and the destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
  • +
  • **kwargs: For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
  • +
+
+
+
+

+type_as

+

type_as(tensor) -> Tensor

+

Returns this tensor cast to the type of the given tensor.

+

This is a no-op if the tensor is already of the correct type. This is equivalent to self$type(tensor$type())

+
+

+Arguments:

+
    +
  • tensor (Tensor): the tensor which has the desired type
  • +
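A minimal sketch of casting one tensor to another's dtype:

```r
library(torch)

x <- torch_tensor(c(1L, 2L))      # integer tensor
y <- torch_tensor(c(0.5, 1.5))    # floating-point tensor
x$type_as(y)                       # x cast to y's dtype
```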
+
+
+
+

+unbind

+

unbind(dim=0) -> seq

+

See ?torch_unbind

+
+
+

+unflatten

+

Unflattens the named dimension dim, viewing it in the shape specified by namedshape.

+
+

+Arguments:

+
    +
  • namedshape: (iterable of (name, size) tuples).
  • +
+
+
+
+

+unfold

+

unfold(dimension, size, step) -> Tensor

+

Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.

+

Step between two slices is given by step.

+

If sizedim is the size of dimension dimension for self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1.

+

An additional dimension of size size is appended in the returned tensor.

+
+

+Arguments:

+
    +
  • dimension (int): dimension in which unfolding happens
  • +
  • size (int): the size of each slice that is unfolded
  • +
  • step (int): the step between each slice
  • +
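Applying the size formula above, a length-7 tensor unfolded with size 2 yields (7 - 2) / step + 1 slices; a sketch:

```r
library(torch)

x <- torch_arange(1, 7)   # 1 2 ... 7
x$unfold(1, 2, 1)$size()  # (7 - 2) / 1 + 1 = 6 slices of size 2
x$unfold(1, 2, 2)$size()  # (7 - 2) / 2 + 1 = 3 slices of size 2
```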
+
+
+
+

+uniform_

+

uniform_(from=0, to=1) -> Tensor

+

Fills self tensor with numbers sampled from the continuous uniform distribution:

+

\[ +P(x) = \dfrac{1}{\text{to} - \text{from}} +\]
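In-place filling with uniform draws, as a quick sketch:

```r
library(torch)

x <- torch_empty(2, 3)
x$uniform_(0, 1)   # fill x in place with draws from U(0, 1)
```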

+
+
+

+unique

+

Returns the unique elements of the input tensor.

+

See ?torch_unique

+
+
+

+unique_consecutive

+

Eliminates all but the first element from every consecutive group of equivalent elements.

+

See [torch_unique_consecutive()]

+
+
+

+unsqueeze

+

unsqueeze(dim) -> Tensor

+

See ?torch_unsqueeze

+
+
+

+unsqueeze_

+

unsqueeze_(dim) -> Tensor

+

In-place version of $unsqueeze

+
+
+

+values

+

values() -> Tensor

+

If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns a view of the contained values tensor. Otherwise, this throws an error.

+
+

+Note:

+

This method can only be called on a coalesced sparse tensor. See Tensor$coalesce for details.

+
+
+
+

+var

+

var(dim=NULL, unbiased=TRUE, keepdim=FALSE) -> Tensor

+

See ?torch_var

+
+
+

+view

+

view(*shape) -> Tensor

+

Returns a new tensor with the same data as the self tensor but of a different shape.

+

The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, \dots, d+k that satisfy the following contiguity-like condition that \forall i = d, \dots, d+k-1,

+

\[ +\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1] +\]

+

Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous). When it is unclear whether a view can be performed, it is advisable to use $reshape, which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous) otherwise.

+
+

+Arguments:

+
    +
  • shape (torch_Size or int…): the desired size
  • +
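A brief sketch of viewing, including the inferred dimension:

```r
library(torch)

x <- torch_randn(4, 4)
x$view(c(16))       # flatten to a vector view of the same data
x$view(c(-1, 8))    # -1 is inferred from the element count
```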
+
+
+
+

+view_as

+

view_as(other) -> Tensor

+

View this tensor as the same size as other. self$view_as(other) is equivalent to self$view(other$size()).

+

Please see $view for more information about view.

+
+

+Arguments:

+
    +
  • other (Tensor): The result tensor has the same size as other.
  • +
+
+
+
+

+where

+

where(condition, y) -> Tensor

+

self$where(condition, y) is equivalent to torch_where(condition, self, y). See ?torch_where
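A minimal sketch of the equivalence with torch_where:

```r
library(torch)

x <- torch_tensor(c(-1, 0, 2))
x$where(x > 0, torch_zeros(3))   # keep positive entries, zeros elsewhere
```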

+
+
+

+zero_

+

zero_() -> Tensor

+

Fills self tensor with zeros.

+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/tensor/index_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/tensor/index_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/tensor/index_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/articles/using-autograd.html b/static/docs/articles/using-autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..558d9ba1358720ebe679abfe054d5974fb59a1e4 --- /dev/null +++ b/static/docs/articles/using-autograd.html @@ -0,0 +1,373 @@ + + + + + + + +Using autograd • torch + + + + + + + + + + +
+
+ + + + +
+
+ + + + + +

So far, all we’ve been using from torch is tensors, but we’ve been performing all calculations ourselves – computing the predictions, the loss, the gradients (and thus the necessary updates to the weights), and the new weight values. In this chapter, we’ll make a significant change: namely, we spare ourselves the cumbersome calculation of gradients, and have torch do it for us.

+

Before we see that in action, let’s get some more background.

+
+

+Automatic differentiation with autograd

+

Torch uses a module called autograd to record operations performed on tensors, and store what has to be done to obtain the respective gradients. These actions are stored as functions, and those functions are applied in order when the gradient of the output (normally, the loss) with respect to those tensors is calculated: starting from the output node and propagating gradients back through the network. This is a form of reverse mode automatic differentiation.

+

As users, we can see a bit of this implementation. As a prerequisite for this “recording” to happen, tensors have to be created with requires_grad = TRUE. E.g.

+
+x <- torch_ones(2,2, requires_grad = TRUE)
+
+

To be clear, this is a tensor with respect to which gradients have to be calculated – normally, a tensor representing a weight or a bias, not the input data 1. If we now perform some operation on that tensor, assigning the result to y

+
+y <- x$mean()
+
+

we find that y now has a non-empty grad_fn that tells torch how to compute the gradient of y with respect to x:

+
+y$grad_fn
+#> MeanBackward0
+
+

Actual computation of gradients is triggered by calling backward() on the output tensor.

+
+y$backward()
+
+

That executed, x now has a non-empty field grad that stores the gradient of y with respect to x:

+
+x$grad
+#> torch_tensor 
+#>  0.2500  0.2500
+#>  0.2500  0.2500
+#> [ CPUFloatType{2,2} ]
+
+

With a longer chain of computations, we can peek at how torch builds up a graph of backward operations.

+

Here is a slightly more complex example. We call retain_grad() on y and z just for demonstration purposes; by default, intermediate gradients – while of course they have to be computed – aren’t stored, in order to save memory.

+
+x1 <- torch_ones(2,2, requires_grad = TRUE)
+x2 <- torch_tensor(1.1, requires_grad = TRUE)
+y <- x1 * (x2 + 2)
+y$retain_grad()
+z <- y$pow(2) * 3
+z$retain_grad()
+out <- z$mean()
+
+

Starting from out$grad_fn, we can follow the graph all back to the leaf nodes:

+
+# how to compute the gradient for mean, the last operation executed
+out$grad_fn
+#> MeanBackward0
+# how to compute the gradient for the multiplication by 3 in z = y$pow(2) * 3
+out$grad_fn$next_functions
+#> [[1]]
+#> MulBackward1
+# how to compute the gradient for pow in z = y$pow(2) * 3
+out$grad_fn$next_functions[[1]]$next_functions
+#> [[1]]
+#> PowBackward0
+# how to compute the gradient for the multiplication in y = x1 * (x2 + 2)
+out$grad_fn$next_functions[[1]]$next_functions[[1]]$next_functions
+#> [[1]]
+#> MulBackward0
+# how to compute the gradient for the two branches of y = x1 * (x2 + 2),
+# where the left branch is a leaf node (AccumulateGrad for x1)
+out$grad_fn$next_functions[[1]]$next_functions[[1]]$next_functions[[1]]$next_functions
+#> [[1]]
+#> torch::autograd::AccumulateGrad
+#> [[2]]
+#> AddBackward1
+# here we arrive at the other leaf node (AccumulateGrad for x2)
+out$grad_fn$next_functions[[1]]$next_functions[[1]]$next_functions[[1]]$next_functions[[2]]$next_functions
+#> [[1]]
+#> torch::autograd::AccumulateGrad
+
+

After calling out$backward(), all tensors in the graph will have their respective gradients created. Without our calls to retain_grad above, z$grad and y$grad would be empty:

+
+out$backward()
+z$grad
+#> torch_tensor 
+#>  0.2500  0.2500
+#>  0.2500  0.2500
+#> [ CPUFloatType{2,2} ]
+y$grad
+#> torch_tensor 
+#>  4.6500  4.6500
+#>  4.6500  4.6500
+#> [ CPUFloatType{2,2} ]
+x2$grad
+#> torch_tensor 
+#>  18.6000
+#> [ CPUFloatType{1} ]
+x1$grad
+#> torch_tensor 
+#>  14.4150  14.4150
+#>  14.4150  14.4150
+#> [ CPUFloatType{2,2} ]
+
+

Thus acquainted with autograd, we’re ready to modify our example.

+
+
+

+The simple network, now using autograd

+

For a single new line calling loss$backward(), now a number of lines (that did manual backprop) are gone:

+
+### generate training data -----------------------------------------------------
+# input dimensionality (number of input features)
+d_in <- 3
+# output dimensionality (number of predicted features)
+d_out <- 1
+# number of observations in training set
+n <- 100
+# create random data
+x <- torch_randn(n, d_in)
+y <- x[, 1]*0.2 - x[, 2]*1.3 - x[, 3]*0.5 + torch_randn(n)
+y <- y$unsqueeze(dim = 1)
+### initialize weights ---------------------------------------------------------
+# dimensionality of hidden layer
+d_hidden <- 32
+# weights connecting input to hidden layer
+w1 <- torch_randn(d_in, d_hidden, requires_grad = TRUE)
+# weights connecting hidden to output layer
+w2 <- torch_randn(d_hidden, d_out, requires_grad = TRUE)
+# hidden layer bias
+b1 <- torch_zeros(1, d_hidden, requires_grad = TRUE)
+# output layer bias
+b2 <- torch_zeros(1, d_out,requires_grad = TRUE)
+### network parameters ---------------------------------------------------------
+learning_rate <- 1e-4
+### training loop --------------------------------------------------------------
+for (t in 1:200) {
+
+    ### -------- Forward pass -------- 
+    y_pred <- x$mm(w1)$add(b1)$clamp(min = 0)$mm(w2)$add(b2)
+    ### -------- compute loss -------- 
+    loss <- (y_pred - y)$pow(2)$mean()
+    if (t %% 10 == 0) cat(t, as_array(loss), "\n")
+    ### -------- Backpropagation -------- 
+    # compute the gradient of loss with respect to all tensors with requires_grad = TRUE.
+    loss$backward()
+ 
+    ### -------- Update weights -------- 
+    
+    # Wrap in with_no_grad() because this is a part we DON'T want to record for automatic gradient computation
+    with_no_grad({
+      
+      w1$sub_(learning_rate * w1$grad)
+      w2$sub_(learning_rate * w2$grad)
+      b1$sub_(learning_rate * b1$grad)
+      b2$sub_(learning_rate * b2$grad)
+      
+      # Zero the gradients after every pass, because they'd accumulate otherwise
+      w1$grad$zero_()
+      w2$grad$zero_()
+      b1$grad$zero_()
+      b2$grad$zero_()
+    
+    })
+    
+}
+#> 10 27.60956 
+#> 20 25.39985 
+#> 30 23.42485 
+#> 40 21.65899 
+#> 50 20.07844 
+#> 60 18.66107 
+#> 70 17.38713 
+#> 80 16.24375 
+#> 90 15.21763 
+#> 100 14.29351 
+#> 110 13.45975 
+#> 120 12.70921 
+#> 130 12.03104 
+#> 140 11.41835 
+#> 150 10.86677 
+#> 160 10.36613 
+#> 170 9.911062 
+#> 180 9.496947 
+#> 190 9.121381 
+#> 200 8.778724
+
+

We still manually compute the forward pass, and we still manually update the weights. In the last two chapters of this section, we’ll see how these parts of the logic can be made more modular and reusable, as well.
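As a preview of that more modular style, the same model and update logic might be sketched with torch's higher-level API. This is only a sketch, assuming the data `x`, `y` and the dimensions defined above; the modules involved are covered in the following chapters:

```r
library(torch)

# the same two-layer network, expressed with modules instead of raw tensors
model <- nn_sequential(
  nn_linear(d_in, d_hidden),
  nn_relu(),
  nn_linear(d_hidden, d_out)
)

# the optimizer replaces the manual with_no_grad() weight updates
optimizer <- optim_sgd(model$parameters, lr = learning_rate)

for (t in 1:200) {
  optimizer$zero_grad()              # replaces the manual grad$zero_() calls
  loss <- nnf_mse_loss(model(x), y)  # forward pass plus loss computation
  loss$backward()
  optimizer$step()                   # replaces the manual sub_() updates
}
```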

+
+
+
+
    +
  1. Unless we want to change the data, as in adversarial example generation↩︎

  2. +
+
+
+ + + +
+ + + + +
+ + + + + + diff --git a/static/docs/articles/using-autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/articles/using-autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/articles/using-autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/authors.html b/static/docs/authors.html new file mode 100644 index 0000000000000000000000000000000000000000..95f59cb12d9ee5a765ef62aa2515437c19d8f00e --- /dev/null +++ b/static/docs/authors.html @@ -0,0 +1,238 @@ + + + + + + + + +Authors • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +
    +
  • +

    Daniel Falbel. Author, maintainer, copyright holder. +

    +
  • +
  • +

    Javier Luraschi. Author, copyright holder. +

    +
  • +
  • +

    Dmitriy Selivanov. Contributor. +

    +
  • +
  • +

    Athos Damiani. Contributor. +

    +
  • +
  • +

    RStudio. Copyright holder. +

    +
  • +
+ +
+ +
+ + + + +
+ + + + + + + + diff --git a/static/docs/bootstrap-toc.css b/static/docs/bootstrap-toc.css new file mode 100644 index 0000000000000000000000000000000000000000..5a859415c1f7eacfd94920968bc910e2f1f1427e --- /dev/null +++ b/static/docs/bootstrap-toc.css @@ -0,0 +1,60 @@ +/*! + * Bootstrap Table of Contents v0.4.1 (http://afeld.github.io/bootstrap-toc/) + * Copyright 2015 Aidan Feldman + * Licensed under MIT (https://github.com/afeld/bootstrap-toc/blob/gh-pages/LICENSE.md) */ + +/* modified from https://github.com/twbs/bootstrap/blob/94b4076dd2efba9af71f0b18d4ee4b163aa9e0dd/docs/assets/css/src/docs.css#L548-L601 */ + +/* All levels of nav */ +nav[data-toggle='toc'] .nav > li > a { + display: block; + padding: 4px 20px; + font-size: 13px; + font-weight: 500; + color: #767676; +} +nav[data-toggle='toc'] .nav > li > a:hover, +nav[data-toggle='toc'] .nav > li > a:focus { + padding-left: 19px; + color: #563d7c; + text-decoration: none; + background-color: transparent; + border-left: 1px solid #563d7c; +} +nav[data-toggle='toc'] .nav > .active > a, +nav[data-toggle='toc'] .nav > .active:hover > a, +nav[data-toggle='toc'] .nav > .active:focus > a { + padding-left: 18px; + font-weight: bold; + color: #563d7c; + background-color: transparent; + border-left: 2px solid #563d7c; +} + +/* Nav: second level (shown on .active) */ +nav[data-toggle='toc'] .nav .nav { + display: none; /* Hide by default, but at >768px, show it */ + padding-bottom: 10px; +} +nav[data-toggle='toc'] .nav .nav > li > a { + padding-top: 1px; + padding-bottom: 1px; + padding-left: 30px; + font-size: 12px; + font-weight: normal; +} +nav[data-toggle='toc'] .nav .nav > li > a:hover, +nav[data-toggle='toc'] .nav .nav > li > a:focus { + padding-left: 29px; +} +nav[data-toggle='toc'] .nav .nav > .active > a, +nav[data-toggle='toc'] .nav .nav > .active:hover > a, +nav[data-toggle='toc'] .nav .nav > .active:focus > a { + padding-left: 28px; + font-weight: 500; +} + +/* from 
https://github.com/twbs/bootstrap/blob/e38f066d8c203c3e032da0ff23cd2d6098ee2dd6/docs/assets/css/src/docs.css#L631-L634 */ +nav[data-toggle='toc'] .nav > .active > ul { + display: block; +} diff --git a/static/docs/bootstrap-toc.js b/static/docs/bootstrap-toc.js new file mode 100644 index 0000000000000000000000000000000000000000..1cdd573b20f53b3ebe31c021e154c4338ca456af --- /dev/null +++ b/static/docs/bootstrap-toc.js @@ -0,0 +1,159 @@ +/*! + * Bootstrap Table of Contents v0.4.1 (http://afeld.github.io/bootstrap-toc/) + * Copyright 2015 Aidan Feldman + * Licensed under MIT (https://github.com/afeld/bootstrap-toc/blob/gh-pages/LICENSE.md) */ +(function() { + 'use strict'; + + window.Toc = { + helpers: { + // return all matching elements in the set, or their descendants + findOrFilter: function($el, selector) { + // http://danielnouri.org/notes/2011/03/14/a-jquery-find-that-also-finds-the-root-element/ + // http://stackoverflow.com/a/12731439/358804 + var $descendants = $el.find(selector); + return $el.filter(selector).add($descendants).filter(':not([data-toc-skip])'); + }, + + generateUniqueIdBase: function(el) { + var text = $(el).text(); + var anchor = text.trim().toLowerCase().replace(/[^A-Za-z0-9]+/g, '-'); + return anchor || el.tagName.toLowerCase(); + }, + + generateUniqueId: function(el) { + var anchorBase = this.generateUniqueIdBase(el); + for (var i = 0; ; i++) { + var anchor = anchorBase; + if (i > 0) { + // add suffix + anchor += '-' + i; + } + // check if ID already exists + if (!document.getElementById(anchor)) { + return anchor; + } + } + }, + + generateAnchor: function(el) { + if (el.id) { + return el.id; + } else { + var anchor = this.generateUniqueId(el); + el.id = anchor; + return anchor; + } + }, + + createNavList: function() { + return $(''); + }, + + createChildNavList: function($parent) { + var $childList = this.createNavList(); + $parent.append($childList); + return $childList; + }, + + generateNavEl: function(anchor, text) { + var $a = $(''); 
+ $a.attr('href', '#' + anchor); + $a.text(text); + var $li = $('
  • '); + $li.append($a); + return $li; + }, + + generateNavItem: function(headingEl) { + var anchor = this.generateAnchor(headingEl); + var $heading = $(headingEl); + var text = $heading.data('toc-text') || $heading.text(); + return this.generateNavEl(anchor, text); + }, + + // Find the first heading level (`

    `, then `

    `, etc.) that has more than one element. Defaults to 1 (for `

    `). + getTopLevel: function($scope) { + for (var i = 1; i <= 6; i++) { + var $headings = this.findOrFilter($scope, 'h' + i); + if ($headings.length > 1) { + return i; + } + } + + return 1; + }, + + // returns the elements for the top level, and the next below it + getHeadings: function($scope, topLevel) { + var topSelector = 'h' + topLevel; + + var secondaryLevel = topLevel + 1; + var secondarySelector = 'h' + secondaryLevel; + + return this.findOrFilter($scope, topSelector + ',' + secondarySelector); + }, + + getNavLevel: function(el) { + return parseInt(el.tagName.charAt(1), 10); + }, + + populateNav: function($topContext, topLevel, $headings) { + var $context = $topContext; + var $prevNav; + + var helpers = this; + $headings.each(function(i, el) { + var $newNav = helpers.generateNavItem(el); + var navLevel = helpers.getNavLevel(el); + + // determine the proper $context + if (navLevel === topLevel) { + // use top level + $context = $topContext; + } else if ($prevNav && $context === $topContext) { + // create a new level of the tree and switch to it + $context = helpers.createChildNavList($prevNav); + } // else use the current $context + + $context.append($newNav); + + $prevNav = $newNav; + }); + }, + + parseOps: function(arg) { + var opts; + if (arg.jquery) { + opts = { + $nav: arg + }; + } else { + opts = arg; + } + opts.$scope = opts.$scope || $(document.body); + return opts; + } + }, + + // accepts a jQuery object, or an options object + init: function(opts) { + opts = this.helpers.parseOps(opts); + + // ensure that the data attribute is in place for styling + opts.$nav.attr('data-toggle', 'toc'); + + var $topContext = this.helpers.createChildNavList(opts.$nav); + var topLevel = this.helpers.getTopLevel(opts.$scope); + var $headings = this.helpers.getHeadings(opts.$scope, topLevel); + this.helpers.populateNav($topContext, topLevel, $headings); + } + }; + + $(function() { + $('nav[data-toggle="toc"]').each(function(i, el) { + var $nav = $(el); + 
Toc.init($nav); + }); + }); +})(); diff --git a/static/docs/dev/.nojekyll b/static/docs/dev/.nojekyll new file mode 100644 index 0000000000000000000000000000000000000000..8b137891791fe96927ad78e64b0aad7bded08bdc --- /dev/null +++ b/static/docs/dev/.nojekyll @@ -0,0 +1 @@ + diff --git a/static/docs/dev/404.html b/static/docs/dev/404.html new file mode 100644 index 0000000000000000000000000000000000000000..93c1a15c95edec0c4bf85f8b10c7f394e7651990 --- /dev/null +++ b/static/docs/dev/404.html @@ -0,0 +1,223 @@ + + + + + + + + +Page not found (404) • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +Content not found. Please use links in the navbar. + +
    + + + +
    + + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/CONTRIBUTING.html b/static/docs/dev/CONTRIBUTING.html new file mode 100644 index 0000000000000000000000000000000000000000..312e9538d5315e7c8ed46a07c6f30bed1a0d04ef --- /dev/null +++ b/static/docs/dev/CONTRIBUTING.html @@ -0,0 +1,260 @@ + + + + + + + + +Contributing to torch • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    + +

    This outlines how to propose a change to torch. For more detailed info about contributing to this, and other tidyverse packages, please see the development contributing guide.

    +
    +

    +Fixing typos

    +

    You can fix typos, spelling mistakes, or grammatical errors in the documentation directly using the GitHub web interface, as long as the changes are made in the source file. This generally means you’ll need to edit roxygen2 comments in an .R, not a .Rd file. You can find the .R file that generates the .Rd by reading the comment in the first line.

    +

    See also the [Documentation] section.

    +
    +
    +

    +Filing bugs

    +

    If you find a bug in torch, please open an issue here. Please provide detailed information on how to reproduce the bug. It would be great to also provide a reprex.

    +
    +
    +

    +Feature requests

    +

    Feel free to open issues here and add the feature-request tag. Search first to see whether there’s already an open issue for your feature request; in that case it’s better to comment on or upvote it instead of opening a new one.

    +
    +
    +

    +Examples

    +

    We welcome contributed examples. Feel free to open a PR with new examples. They should be placed in the vignettes/examples folder.

    +

    Each example should consist of an .R file and an .Rmd file with the same name that simply renders the code.

    +

    See mnist-mlp.R and mnist-mlp.Rmd

    +

    It must be possible to run the example without manually downloading any dataset or file. You should also add an entry to the _pkgdown.yaml file.

    +
    +
    +

    +Code contributions

    +

    We have many open issues in the GitHub repo. If there’s an item you want to work on, you can comment on it and ask for directions.

    +
    +
    +

    +Documentation

    +

    We use roxygen2 to generate the documentation. In order to update the docs, edit the corresponding file in the R directory. To regenerate and preview the docs, use the custom tools/document.R script, as we need to patch roxygen2 to avoid running the examples on CRAN.

    +
    +
    + +
    + + + +
    + + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/LICENSE-text.html b/static/docs/dev/LICENSE-text.html new file mode 100644 index 0000000000000000000000000000000000000000..dffe68c47e5146ab6ab7c74fdb91c0e68bb22b8d --- /dev/null +++ b/static/docs/dev/LICENSE-text.html @@ -0,0 +1,225 @@ + + + + + + + + +License • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    YEAR: 2020
    +COPYRIGHT HOLDER: Daniel Falbel
    +
    + +
    + + + +
    + + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/LICENSE.html b/static/docs/dev/LICENSE.html new file mode 100644 index 0000000000000000000000000000000000000000..1ae650b1ab930a0262cc69297146aab839397fcf --- /dev/null +++ b/static/docs/dev/LICENSE.html @@ -0,0 +1,229 @@ + + + + + + + + +MIT License • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    + +

    Copyright (c) 2020 Daniel Falbel

    +

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    +

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    +

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    +
    + +
    + + + +
    + + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/articles/examples/mnist-cnn.html b/static/docs/dev/articles/examples/mnist-cnn.html new file mode 100644 index 0000000000000000000000000000000000000000..5e29c8d52067534043dac9c518ea22e866e8dc79 --- /dev/null +++ b/static/docs/dev/articles/examples/mnist-cnn.html @@ -0,0 +1,260 @@ + + + + + + + +mnist-cnn • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    dir <- "~/Downloads/mnist"
    +
    +ds <- mnist_dataset(
    +  dir,
    +  download = TRUE,
    +  transform = function(x) {
    +    x <- x$to(dtype = torch_float())/256
    +    x[newaxis,..]
    +  }
    +)
    +dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
    +
    +net <- nn_module(
    +  "Net",
    +  initialize = function() {
    +    self$conv1 <- nn_conv2d(1, 32, 3, 1)
    +    self$conv2 <- nn_conv2d(32, 64, 3, 1)
    +    self$dropout1 <- nn_dropout2d(0.25)
    +    self$dropout2 <- nn_dropout2d(0.5)
    +    self$fc1 <- nn_linear(9216, 128)
    +    self$fc2 <- nn_linear(128, 10)
    +  },
    +  forward = function(x) {
    +    x <- self$conv1(x)
    +    x <- nnf_relu(x)
    +    x <- self$conv2(x)
    +    x <- nnf_relu(x)
    +    x <- nnf_max_pool2d(x, 2)
    +    x <- self$dropout1(x)
    +    x <- torch_flatten(x, start_dim = 2)
    +    x <- self$fc1(x)
    +    x <- nnf_relu(x)
    +    x <- self$dropout2(x)
    +    x <- self$fc2(x)
    +    output <- nnf_log_softmax(x, dim=1)
    +    output
    +  }
    +)
    +
    +model <- net()
    +optimizer <- optim_sgd(model$parameters, lr = 0.01)
    +
    +epochs <- 10
    +
    +for (epoch in 1:10) {
    +
    +  pb <- progress::progress_bar$new(
    +    total = length(dl),
    +    format = "[:bar] :eta Loss: :loss"
    +  )
    +  l <- c()
    +
    +  for (b in enumerate(dl)) {
    +    optimizer$zero_grad()
    +    output <- model(b[[1]])
    +    loss <- nnf_nll_loss(output, b[[2]])
    +    loss$backward()
    +    optimizer$step()
    +    l <- c(l, loss$item())
    +    pb$tick(tokens = list(loss = mean(l)))
    +  }
    +
    +  cat(sprintf("Loss at epoch %d: %3f\n", epoch, mean(l)))
    +}
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.5.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/examples/mnist-cnn_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/examples/mnist-cnn_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/examples/mnist-cnn_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/examples/mnist-dcgan.html b/static/docs/dev/articles/examples/mnist-dcgan.html new file mode 100644 index 0000000000000000000000000000000000000000..e5ef1e9329a4e37a83a746a96e767a93c3fb10f7 --- /dev/null +++ b/static/docs/dev/articles/examples/mnist-dcgan.html @@ -0,0 +1,341 @@ + + + + + + + +mnist-dcgan • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    library(torch)
    +
    +dir <- "~/Downloads/mnist"
    +
    +ds <- mnist_dataset(
    +  dir,
    +  download = TRUE,
    +  transform = function(x) {
    +    x <- x$to(dtype = torch_float())/256
    +    x <- 2*(x - 0.5)
    +    x[newaxis,..]
    +  }
    +)
    +dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
    +
    +generator <- nn_module(
    +  "generator",
    +  initialize = function(latent_dim, out_channels) {
    +    self$main <- nn_sequential(
    +      nn_conv_transpose2d(latent_dim, 512, kernel_size = 4,
    +                          stride = 1, padding = 0, bias = FALSE),
    +      nn_batch_norm2d(512),
    +      nn_relu(),
    +      nn_conv_transpose2d(512, 256, kernel_size = 4,
    +                          stride = 2, padding = 1, bias = FALSE),
    +      nn_batch_norm2d(256),
    +      nn_relu(),
    +      nn_conv_transpose2d(256, 128, kernel_size = 4,
    +                          stride = 2, padding = 1, bias = FALSE),
    +      nn_batch_norm2d(128),
    +      nn_relu(),
    +      nn_conv_transpose2d(128, out_channels, kernel_size = 4,
    +                          stride = 2, padding = 3, bias = FALSE),
    +      nn_tanh()
    +    )
    +  },
    +  forward = function(input) {
    +    self$main(input)
    +  }
    +)
    +
    +discriminator <- nn_module(
    +  "discriminator",
    +  initialize = function(in_channels) {
    +    self$main <- nn_sequential(
    +      nn_conv2d(in_channels, 16, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
    +      nn_leaky_relu(0.2, inplace = TRUE),
    +      nn_conv2d(16, 32, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
    +      nn_batch_norm2d(32),
    +      nn_leaky_relu(0.2, inplace = TRUE),
    +      nn_conv2d(32, 64, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
    +      nn_batch_norm2d(64),
    +      nn_leaky_relu(0.2, inplace = TRUE),
    +      nn_conv2d(64, 128, kernel_size = 4, stride = 2, padding = 1, bias = FALSE),
    +      nn_leaky_relu(0.2, inplace = TRUE)
    +    )
    +    self$linear <- nn_linear(128, 1)
    +    self$sigmoid <- nn_sigmoid()
    +  },
    +  forward = function(input) {
    +    x <- self$main(input)
    +    x <- torch_flatten(x, start_dim = 2)
    +    x <- self$linear(x)
    +    self$sigmoid(x)
    +  }
    +)
    +
    +plot_gen <- function(noise) {
    +  img <- G(noise)
    +  img <- img$cpu()
    +  img <- img[1,1,,,newaxis]/2 + 0.5
    +  img <- torch_stack(list(img, img, img), dim = 2)[..,1]
    +  img <- as.raster(as_array(img))
    +  plot(img)
    +}
    +
    +device <- torch_device(ifelse(cuda_is_available(),  "cuda", "cpu"))
    +
    +G <- generator(latent_dim = 100, out_channels = 1)
    +D <- discriminator(in_channels = 1)
    +
    +init_weights <- function(m) {
    +  if (grepl("conv", m$.classes[[1]])) {
    +    nn_init_normal_(m$weight$data(), 0.0, 0.02)
    +  } else if (grepl("batch_norm", m$.classes[[1]])) {
    +    nn_init_normal_(m$weight$data(), 1.0, 0.02)
    +    nn_init_constant_(m$bias$data(), 0)
    +  }
    +}
    +
    +G[[1]]$apply(init_weights)
    +D[[1]]$apply(init_weights)
    +
    +G$to(device = device)
    +D$to(device = device)
    +
    +G_optimizer <- optim_adam(G$parameters, lr = 2 * 1e-4, betas = c(0.5, 0.999))
    +D_optimizer <- optim_adam(D$parameters, lr = 2 * 1e-4, betas = c(0.5, 0.999))
    +
    +fixed_noise <- torch_randn(1, 100, 1, 1, device = device)
    +
    +loss <- nn_bce_loss()
    +
    +for (epoch in 1:10) {
    +
    +  pb <- progress::progress_bar$new(
    +    total = length(dl),
    +    format = "[:bar] :eta Loss D: :lossd Loss G: :lossg"
    +  )
    +  lossg <- c()
    +  lossd <- c()
    +
    +  for (b in enumerate(dl)) {
    +
    +    y_real <- torch_ones(32, device = device)
    +    y_fake <- torch_zeros(32, device = device)
    +
    +    noise <- torch_randn(32, 100, 1, 1, device = device)
    +    fake <- G(noise)
    +
    +    img <- b[[1]]$to(device = device)
    +
    +    # train the discriminator ---
    +    D_loss <- loss(D(img), y_real) + loss(D(fake$detach()), y_fake)
    +
    +    D_optimizer$zero_grad()
    +    D_loss$backward()
    +    D_optimizer$step()
    +
    +    # train the generator ---
    +
    +    G_loss <- loss(D(fake), y_real)
    +
    +    G_optimizer$zero_grad()
    +    G_loss$backward()
    +    G_optimizer$step()
    +
    +    lossd <- c(lossd, D_loss$item())
    +    lossg <- c(lossg, G_loss$item())
    +    pb$tick(tokens = list(lossd = mean(lossd), lossg = mean(lossg)))
    +  }
    +  plot_gen(fixed_noise)
    +
    +  cat(sprintf("Epoch %d - Loss D: %3f Loss G: %3f\n", epoch, mean(lossd), mean(lossg)))
    +}
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.5.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/examples/mnist-dcgan_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/examples/mnist-dcgan_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/examples/mnist-dcgan_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/examples/mnist-mlp.html b/static/docs/dev/articles/examples/mnist-mlp.html new file mode 100644 index 0000000000000000000000000000000000000000..b13de836a63d4d85d0c3c16a861d5ad3ac3f0807 --- /dev/null +++ b/static/docs/dev/articles/examples/mnist-mlp.html @@ -0,0 +1,248 @@ + + + + + + + +mnist-mlp • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    dir <- "~/Downloads/mnist"
    +
    +ds <- mnist_dataset(
    +  dir,
    +  download = TRUE,
    +  transform = function(x) {
    +    x$to(dtype = torch_float())/256
    +  }
    +)
    +dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
    +
    +net <- nn_module(
    +  "Net",
    +  initialize = function() {
    +    self$fc1 <- nn_linear(784, 128)
    +    self$fc2 <- nn_linear(128, 10)
    +  },
    +  forward = function(x) {
    +    x %>%
    +      torch_flatten(start_dim = 2) %>%
    +      self$fc1() %>%
    +      nnf_relu() %>%
    +      self$fc2() %>%
    +      nnf_log_softmax(dim = 1)
    +  }
    +)
    +
    +model <- net()
    +optimizer <- optim_sgd(model$parameters, lr = 0.01)
    +
    +epochs <- 10
    +
    +for (epoch in 1:10) {
    +
    +  pb <- progress::progress_bar$new(
    +    total = length(dl),
    +    format = "[:bar] :eta Loss: :loss"
    +  )
    +  l <- c()
    +
    +  for (b in enumerate(dl)) {
    +    optimizer$zero_grad()
    +    output <- model(b[[1]])
    +    loss <- nnf_nll_loss(output, b[[2]])
    +    loss$backward()
    +    optimizer$step()
    +    l <- c(l, loss$item())
    +    pb$tick(tokens = list(loss = mean(l)))
    +  }
    +
    +  cat(sprintf("Loss at epoch %d: %3f\n", epoch, mean(l)))
    +}
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.5.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/examples/mnist-mlp_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/examples/mnist-mlp_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/examples/mnist-mlp_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/extending-autograd.html b/static/docs/dev/articles/extending-autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..45a74441058bdd4803be770ddce3315b9c5ab708 --- /dev/null +++ b/static/docs/dev/articles/extending-autograd.html @@ -0,0 +1,266 @@ + + + + + + + +Extending Autograd • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + + +

    Adding operations to autograd requires implementing a new autograd_function for each operation. Recall that autograd_functions are what autograd uses to compute the results and gradients, and to encode the operation history. Every new function requires you to implement two methods:

    +
      +
    • forward() - the code that performs the operation. It can take as many arguments as you want, some of them optional if you specify default values. All kinds of R objects are accepted here. Tensor arguments that track history (i.e., with requires_grad = TRUE) will be converted to ones that don’t track history before the call, and their use will be registered in the graph. Note that this logic won’t traverse lists or any other data structures and will only consider Tensors that are direct arguments to the call. You can return either a single Tensor output, or a list of Tensors if there are multiple outputs. Also, please refer to the docs of autograd_function to find descriptions of useful methods that can be called only from forward().

    • +
    • backward() - the gradient formula. It will be given as many Tensor arguments as there were outputs, each of them representing the gradient w.r.t. that output. It should return as many Tensors as there were Tensors requiring gradients in forward(), each of them containing the gradient w.r.t. its corresponding input.

    • +
    +
    +

    +Note

    +

    It’s the user’s responsibility to use the special functions in the forward’s ctx properly in order to ensure that the new autograd_function works properly with the autograd engine.

    +
      +
    • save_for_backward() must be used when saving input or output of the forward to be used later in the backward.

    • +
    • mark_dirty() must be used to mark any input that is modified inplace by the forward function.

    • +
    • mark_non_differentiable() must be used to tell the engine if an output is not differentiable.

    • +
    +
    +
    +

    +Examples

    +

    Below you can find code for a linear function:

    +
    +linear <- autograd_function(
    +  forward = function(ctx, input, weight, bias = NULL) {
    +    ctx$save_for_backward(input = input, weight = weight, bias = bias)
    +    output <- input$mm(weight$t())
    +    if (!is.null(bias))
+      output <- output + bias$unsqueeze(1)$expand_as(output)
    +    
    +    output
    +  },
    +  backward = function(ctx, grad_output) {
    +    
    +    s <- ctx$saved_variables
    +    
    +    grads <- list(
    +      input = NULL,
    +      weight = NULL,
    +      bias = NULL
    +    )
    +    
    +    if (ctx$needs_input_grad$input)
    +      grads$input <- grad_output$mm(s$weight)
    +    
    +    if (ctx$needs_input_grad$weight)
    +      grads$weight <- grad_output$t()$mm(s$input)
    +    
    +    if (!is.null(s$bias) && ctx$needs_input_grad$bias)
+      grads$bias <- grad_output$sum(dim = 1)
    +    
    +    grads
    +  }
    +)
    +
    +

    Here, we give an additional example of a function that is parametrized by non-Tensor arguments:

    +
    +mul_constant <- autograd_function(
    +  forward = function(ctx, tensor, constant) {
    +    ctx$save_for_backward(constant = constant)
    +    tensor * constant
    +  },
    +  backward = function(ctx, grad_output) {
    +    v <- ctx$saved_variables
    +    list(
    +      tensor = grad_output * v$constant
    +    )
    +  }
    +)
    +
    +
    +x <- torch_tensor(1, requires_grad = TRUE)
    +o <- mul_constant(x, 2)
    +o$backward()
    +x$grad
    +#> torch_tensor 
    +#>  2
    +#> [ CPUFloatType{1} ]
    +
    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/extending-autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/extending-autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/extending-autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/assets/mnist.png b/static/docs/dev/articles/getting-started/assets/mnist.png new file mode 100644 index 0000000000000000000000000000000000000000..53c876a89d53ccb3ae4fb5167460e84248ad3672 Binary files /dev/null and b/static/docs/dev/articles/getting-started/assets/mnist.png differ diff --git a/static/docs/dev/articles/getting-started/autograd.html b/static/docs/dev/articles/getting-started/autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..0daf780f5c8326bf3fd6d9e8ed368628ace9f505 --- /dev/null +++ b/static/docs/dev/articles/getting-started/autograd.html @@ -0,0 +1,351 @@ + + + + + + + +Autograd: automatic differentiation • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Soumith Chintala.

    +
    + +

Central to all neural networks in torch is the autograd functionality. Let’s briefly visit it first; we will then move on to training our first neural network.

    +

    Autograd provides automatic differentiation for all operations on Tensors. It is a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different.

    +

Let us see this in simpler terms with some examples.

    +
    +

    +Tensor

    +

torch_tensor is the central class of the package. If you set its attribute $requires_grad to TRUE, it starts to track all operations on it. When you finish your computation you can call $backward() and have all the gradients computed automatically. The gradient for this tensor will be accumulated into the $grad attribute.

    +

    To stop a tensor from tracking history, you can call $detach() to detach it from the computation history, and to prevent future computation from being tracked.

    +

    To prevent tracking history (and using memory), you can also wrap the code block in with_no_grad({<code>}). This can be particularly helpful when evaluating a model because the model may have trainable parameters with requires_grad=TRUE, but for which we don’t need the gradients.

    +

There’s one more class that is very important for the autograd implementation: autograd_function.

    +

Tensor and Function are interconnected and build up an acyclic graph that encodes a complete history of computation. Each tensor has a $grad_fn attribute that references the autograd_function that created the Tensor (except for Tensors created by the user - their grad_fn is NULL).

    +

If you want to compute the derivatives, you can call $backward() on a Tensor. If the Tensor is a scalar (i.e. it holds a single element), you don’t need to specify any arguments to backward(); however, if it has more elements, you need to specify a gradient argument, a tensor of matching shape.

    +

    Create a tensor and set requires_grad=TRUE to track computation with it:

    +
    +x <- torch_ones(2, 2, requires_grad = TRUE)
    +x
    +#> torch_tensor 
    +#>  1  1
    +#>  1  1
    +#> [ CPUFloatType{2,2} ]
    +
    +

    Do a tensor operation:

    +
    +y <- x + 2
    +y
    +#> torch_tensor 
    +#>  3  3
    +#>  3  3
    +#> [ CPUFloatType{2,2} ]
    +
    +

    y was created as a result of an operation, so it has a grad_fn.

    +
    +y$grad_fn
    +#> AddBackward1
    +
    +

    Do more operations on y

    +
    +z <- y * y * 3
    +z
    +#> torch_tensor 
    +#>  27  27
    +#>  27  27
    +#> [ CPUFloatType{2,2} ]
    +out <- z$mean()
    +out
    +#> torch_tensor 
    +#> 27
    +#> [ CPUFloatType{} ]
    +
    +

    $requires_grad_( ... ) changes an existing Tensor’s requires_grad flag in-place. The input flag defaults to FALSE if not given.

    +
    +a <- torch_randn(2, 2)
    +a <- (a * 3) / (a - 1)
    +a$requires_grad
    +#> [1] FALSE
    +a$requires_grad_(TRUE)
    +#> torch_tensor 
    +#> -13.0655   2.0776
    +#>   0.0652  -0.4122
    +#> [ CPUFloatType{2,2} ]
    +a$requires_grad
    +#> [1] TRUE
    +b <- (a * a)$sum()
    +b$grad_fn
    +#> SumBackward0
    +
    +
    +
    +

    +Gradients

    +

Let’s backprop now. Because out contains a single scalar, out$backward() is equivalent to out$backward(torch_tensor(1)).

    +
    +out$backward()
    +
    +

    Print gradients d(out)/dx

    +
    +x$grad
    +#> torch_tensor 
    +#>  4.5000  4.5000
    +#>  4.5000  4.5000
    +#> [ CPUFloatType{2,2} ]
    +
    +

You should have gotten a matrix filled with 4.5. Let’s call the out Tensor \(o\).

    +

    We have that \(o = \frac{1}{4}\sum_i z_i\), \(z_i = 3(x_i+2)^2\) and \(z_i\bigr\rvert_{x_i=1} = 27\). Therefore, \(\frac{\partial o}{\partial x_i} = \frac{3}{2}(x_i+2)\), hence \(\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5\).

    +

Mathematically, if you have a vector-valued function \(\vec{y}=f(\vec{x})\), then the gradient of \(\vec{y}\) with respect to \(\vec{x}\) is a Jacobian matrix:

    +

    \[ + J=\left(\begin{array}{ccc} + \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\ + \vdots & \ddots & \vdots\\ + \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} + \end{array}\right) +\]

    +

Generally speaking, autograd is an engine for computing vector-Jacobian products. That is, given any vector \(v=\left(\begin{array}{cccc} v_{1} & v_{2} & \cdots & v_{m}\end{array}\right)^{T}\), it computes the product \(v^{T}\cdot J\). If \(v\) happens to be the gradient of a scalar function \(l=g\left(\vec{y}\right)\), that is, \(v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}\), then by the chain rule, the vector-Jacobian product would be the gradient of \(l\) with respect to \(\vec{x}\):

    +

    \[ + J^{T}\cdot v=\left(\begin{array}{ccc} + \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\ + \vdots & \ddots & \vdots\\ + \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} + \end{array}\right)\left(\begin{array}{c} + \frac{\partial l}{\partial y_{1}}\\ + \vdots\\ + \frac{\partial l}{\partial y_{m}} + \end{array}\right)=\left(\begin{array}{c} + \frac{\partial l}{\partial x_{1}}\\ + \vdots\\ + \frac{\partial l}{\partial x_{n}} + \end{array}\right) +\]

    +

    (Note that \(v^{T}\cdot J\) gives a row vector which can be treated as a column vector by taking \(J^{T}\cdot v\).)

    +

This characteristic of the vector-Jacobian product makes it very convenient to feed external gradients into a model that has a non-scalar output.

    +

    Now let’s take a look at an example of vector-Jacobian product:

    +
    +x <- torch_randn(3, requires_grad=TRUE)
    +y <- 100 * x
    +y
    +#> torch_tensor 
    +#>  81.4641
    +#> -29.4692
    +#>  15.0851
    +#> [ CPUFloatType{3} ]
    +
    +

Now in this case y is no longer a scalar. autograd cannot compute the full Jacobian directly, but if we just want the vector-Jacobian product, we simply pass the vector to backward() as an argument:

    +
    +v <- torch_tensor(c(0.1, 1.0, 0.0001))
    +y$backward(v)
    +
    +x$grad
    +#> torch_tensor 
    +#>  1.0000e+01
    +#>  1.0000e+02
    +#>  1.0000e-02
    +#> [ CPUFloatType{3} ]
    +
    +

You can also stop autograd from tracking history on Tensors with $requires_grad=TRUE, either by wrapping the code block in with_no_grad() or by using $detach():

    +
    +x$requires_grad
    +#> [1] TRUE
    +(x ** 2)$requires_grad
    +#> [1] TRUE
    +
    +with_no_grad({
    +  print((x ** 2)$requires_grad)
    +})
    +#> [1] FALSE
    +
    +
    +x$requires_grad
    +#> [1] TRUE
    +y <- x$detach()
    +y$requires_grad
    +#> [1] FALSE
    +x$eq(y)$all()
    +#> torch_tensor 
    +#> 1
    +#> [ CPUBoolType{} ]
    +
    +

    Read Later:

    +

See the documentation in help(autograd_function), vignette("using-autograd"), and vignette("extending-autograd").

    +
    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/control-flow-and-weight-sharing.html b/static/docs/dev/articles/getting-started/control-flow-and-weight-sharing.html new file mode 100644 index 0000000000000000000000000000000000000000..8602e7408b48c30c76a04c4f57ab3cb460a69135 --- /dev/null +++ b/static/docs/dev/articles/getting-started/control-flow-and-weight-sharing.html @@ -0,0 +1,293 @@ + + + + + + + +Control flow & Weight sharing • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

    As an example of dynamic graphs and weight sharing, we implement a very strange model: a fully-connected ReLU network that on each forward pass chooses a random number between 1 and 4 and uses that many hidden layers, reusing the same weights multiple times to compute the innermost hidden layers.

    +

    For this model we can use normal R flow control to implement the loop, and we can implement weight sharing among the innermost layers by simply reusing the same Module multiple times when defining the forward pass.

    +

    We can easily implement this model using nn_module:

    +
    +dynamic_net <- nn_module(
    +   "dynamic_net",
    +   # In the constructor we construct three nn_linear instances that we will use
    +   # in the forward pass.
    +   initialize = function(D_in, H, D_out) {
    +      self$input_linear <- nn_linear(D_in, H)
    +      self$middle_linear <- nn_linear(H, H)
    +      self$output_linear <- nn_linear(H, D_out)
    +   },
   +   # For the forward pass of the model, we randomly choose 1, 2, 3, or 4
    +   # and reuse the middle_linear Module that many times to compute hidden layer
    +   # representations.
    +   # 
    +   # Since each forward pass builds a dynamic computation graph, we can use normal
    +   # R control-flow operators like loops or conditional statements when
    +   # defining the forward pass of the model.
    +   # 
    +   # Here we also see that it is perfectly safe to reuse the same Module many
    +   # times when defining a computational graph. This is a big improvement from Lua
    +   # Torch, where each Module could be used only once.
    +   forward = function(x) {
    +      h_relu <- self$input_linear(x)$clamp(min = 0)
    +      for (i in seq_len(sample.int(4, size = 1))) {
    +         h_relu <- self$middle_linear(h_relu)$clamp(min=0)
    +      }
    +      y_pred <- self$output_linear(h_relu)
    +      y_pred
    +   }
    +)
    +
    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +# Setting requires_grad=FALSE (the default) indicates that we do not need to 
    +# compute gradients with respect to these Tensors during the backward pass.
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Construct our model by instantiating the class defined above
    +model <- dynamic_net(D_in, H, D_out)
    +
    +# The nn package also contains definitions of popular loss functions; in this
    +# case we will use Mean Squared Error (MSE) as our loss function.
    +loss_fn <- nnf_mse_loss
    +
    +# Use the optim package to define an Optimizer that will update the weights of
    +# the model for us. Here we will use SGD with momentum; the optim package
    +# contains many other optimization algorithms. The first argument to the
    +# optimizer constructor tells it which Tensors it should update.
    +learning_rate <- 1e-4
    +optimizer <- optim_sgd(model$parameters, lr=learning_rate, momentum = 0.9)
    +
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y by passing x to the model. Module objects
    +   # can be called like functions. When doing so you pass a Tensor of input
    +   # data to the Module and it produces a Tensor of output data.
    +   y_pred <- model(x)
    +   
    +   # Compute and print loss. We pass Tensors containing the predicted and true
    +   # values of y, and the loss function returns a Tensor containing the
    +   # loss.
    +   loss <- loss_fn(y_pred, y)
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", as.numeric(loss), "\n")
    +   
    +   # Before the backward pass, use the optimizer object to zero all of the
    +   # gradients for the variables it will update (which are the learnable
    +   # weights of the model). This is because by default, gradients are
    +   # accumulated in buffers (i.e., not overwritten) whenever $backward()
    +   # is called. Check out the docs of `autograd_backward` for more details.
    +   optimizer$zero_grad()
    +
    +   # Backward pass: compute gradient of the loss with respect to model
    +   # parameters
    +   loss$backward()
    +
    +   # Calling the step function on an Optimizer makes an update to its
    +   # parameters
    +   optimizer$step()
    +}
    +#> Step: 1 : 0.9501783 
    +#> Step: 100 : 0.948805 
    +#> Step: 200 : 0.9428347 
    +#> Step: 300 : 0.9459909 
    +#> Step: 400 : 0.9362943 
    +#> Step: 500 : 0.9435167
    +
    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/control-flow-and-weight-sharing_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/control-flow-and-weight-sharing_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/control-flow-and-weight-sharing_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/custom-nn.html b/static/docs/dev/articles/getting-started/custom-nn.html new file mode 100644 index 0000000000000000000000000000000000000000..3fd821fdcb6a121b4adba375af2d1caf348237e8 --- /dev/null +++ b/static/docs/dev/articles/getting-started/custom-nn.html @@ -0,0 +1,277 @@ + + + + + + + +Custom nn modules • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

Sometimes you will want to specify models that are more complex than a sequence of existing Modules; for these cases you can define your own Modules using the nn_module function, defining a forward method that receives input Tensors and produces output Tensors using other modules or other autograd operations on Tensors.

    +

    In this example we implement our two-layer network as a custom Module subclass:

    +
    +two_layer_net <- nn_module(
    +   "two_layer_net",
    +   initialize = function(D_in, H, D_out) {
    +      self$linear1 <- nn_linear(D_in, H)
    +      self$linear2 <- nn_linear(H, D_out)
    +   },
    +   forward = function(x) {
    +      x %>% 
    +         self$linear1() %>% 
    +         nnf_relu() %>% 
    +         self$linear2()
    +   }
    +)
    +
    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +# Setting requires_grad=FALSE (the default) indicates that we do not need to 
    +# compute gradients with respect to these Tensors during the backward pass.
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Construct our model by instantiating the class defined above
    +model <- two_layer_net(D_in, H, D_out)
    +
    +# The nn package also contains definitions of popular loss functions; in this
    +# case we will use Mean Squared Error (MSE) as our loss function.
    +loss_fn <- nnf_mse_loss
    +
    +# Use the optim package to define an Optimizer that will update the weights of
    +# the model for us. Here we will use SGD; the optim package contains many
    +# other optimization algorithms. The first argument to the optimizer
    +# constructor tells it which Tensors it should update.
    +learning_rate <- 1e-4
    +optimizer <- optim_sgd(model$parameters, lr=learning_rate)
    +
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y by passing x to the model. Module objects
    +   # can be called like functions. When doing so you pass a Tensor of input
    +   # data to the Module and it produces a Tensor of output data.
    +   y_pred <- model(x)
    +   
    +   # Compute and print loss. We pass Tensors containing the predicted and true
    +   # values of y, and the loss function returns a Tensor containing the
    +   # loss.
    +   loss <- loss_fn(y_pred, y)
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", as.numeric(loss), "\n")
    +   
    +   # Before the backward pass, use the optimizer object to zero all of the
    +   # gradients for the variables it will update (which are the learnable
    +   # weights of the model). This is because by default, gradients are
    +   # accumulated in buffers (i.e., not overwritten) whenever $backward()
    +   # is called. Check out the docs of `autograd_backward` for more details.
    +   optimizer$zero_grad()
    +
    +   # Backward pass: compute gradient of the loss with respect to model
    +   # parameters
    +   loss$backward()
    +
    +   # Calling the step function on an Optimizer makes an update to its
    +   # parameters
    +   optimizer$step()
    +}
    +#> Step: 1 : 1.066273 
    +#> Step: 100 : 1.052879 
    +#> Step: 200 : 1.039649 
    +#> Step: 300 : 1.02675 
    +#> Step: 400 : 1.014158 
    +#> Step: 500 : 1.001865
    +
    +

In the next example we will learn about dynamic graphs in torch.

    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/custom-nn_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/custom-nn_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/custom-nn_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/neural-networks.html b/static/docs/dev/articles/getting-started/neural-networks.html new file mode 100644 index 0000000000000000000000000000000000000000..e4d919f3ff174bc2feb6c0e8ff129a3e1f1c6f08 --- /dev/null +++ b/static/docs/dev/articles/getting-started/neural-networks.html @@ -0,0 +1,410 @@ + + + + + + + +Neural networks • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Soumith Chintala.

    +
    + +

    Neural networks can be constructed using the nn functionality.

    +

Now that you have had a glimpse of autograd: nn depends on autograd to define models and differentiate them. An nn_module contains layers, and a method forward(input) that returns the output.

    +

    For example, look at this network that classifies digit images:

    +
    +

    Convnet for mnist classification

    +
    +

    It is a simple feed-forward network. It takes the input, feeds it through several layers one after the other, and then finally gives the output.

    +

    A typical training procedure for a neural network is as follows:

    +
      +
    • Define the neural network that has some learnable parameters (or weights)
    • +
    • Iterate over a dataset of inputs
    • +
    • Process input through the network
    • +
    • Compute the loss (how far is the output from being correct)
    • +
    • Propagate gradients back into the network’s parameters
    • +
    • Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient.
    • +
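The steps above can be sketched as a minimal training iteration, using a stand-in linear model and illustrative dimensions (the rest of this article builds the real network):

```r
library(torch)

# A stand-in model and random data, just to illustrate the loop structure
net <- nn_linear(10, 1)
opt <- optim_sgd(net$parameters, lr = 0.01)
x <- torch_randn(8, 10)  # a mini-batch of 8 inputs
y <- torch_randn(8, 1)   # matching targets

for (step in 1:3) {
  opt$zero_grad()                  # gradients accumulate, so clear them first
  loss <- nnf_mse_loss(net(x), y)  # process input through the network, compute loss
  loss$backward()                  # propagate gradients back into the parameters
  opt$step()                       # weight <- weight - learning_rate * gradient
}
```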
    +
    +

    +Define the network

    +

    Let’s define this network:

    +
    +Net <- nn_module(
    +  initialize = function() {
    +    self$conv1 = nn_conv2d(1, 6, 3)
    +    self$conv2 = nn_conv2d(6, 16, 3)
    +    # an affine operation: y = Wx + b
    +    self$fc1 = nn_linear(16 * 6 * 6, 120)  # 6*6 from image dimension
    +    self$fc2 = nn_linear(120, 84)
    +    self$fc3 = nn_linear(84, 10)
    +  },
    +  forward = function(x) {
    +    x %>% 
    +      
    +      self$conv1() %>% 
    +      nnf_relu() %>% 
    +      nnf_max_pool2d(c(2,2)) %>% 
    +      
    +      self$conv2() %>% 
    +      nnf_relu() %>% 
    +      nnf_max_pool2d(c(2,2)) %>% 
    +      
    +      torch_flatten(start_dim = 2) %>% 
    +      
    +      self$fc1() %>% 
    +      nnf_relu() %>% 
    +      
    +      self$fc2() %>% 
    +      nnf_relu() %>% 
    +      
    +      self$fc3()
    +  }
    +)
    +
    +net <- Net()
    +
    +

    You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. You can use any of the Tensor operations in the forward function.

    +

    The learnable parameters of a model are returned by net$parameters.

    +
    +str(net$parameters)
    +#> List of 10
    +#>  $ conv1.weight:Float [1:6, 1:1, 1:3, 1:3]
    +#>  $ conv1.bias  :Float [1:6]
    +#>  $ conv2.weight:Float [1:16, 1:6, 1:3, 1:3]
    +#>  $ conv2.bias  :Float [1:16]
    +#>  $ fc1.weight  :Float [1:120, 1:576]
    +#>  $ fc1.bias    :Float [1:120]
    +#>  $ fc2.weight  :Float [1:84, 1:120]
    +#>  $ fc2.bias    :Float [1:84]
    +#>  $ fc3.weight  :Float [1:10, 1:84]
    +#>  $ fc3.bias    :Float [1:10]
    +
    +

    Let’s try a random 32x32 input. Note: expected input size of this net (LeNet) is 32x32. To use this net on the MNIST dataset, please resize the images from the dataset to 32x32.

    +
    +input <- torch_randn(1, 1, 32, 32)
    +out <- net(input)
    +out
    +#> torch_tensor 
    +#> -0.0352  0.1035  0.0462 -0.0992 -0.0517 -0.0676  0.1250 -0.0572  0.0229  0.0145
    +#> [ CPUFloatType{1,10} ]
    +
    +

    Zero the gradient buffers of all parameters and backprops with random gradients:

    +
    +net$zero_grad()
    +out$backward(torch_randn(1, 10))
    +
    +
    +

Note: nn only supports mini-batches. The entire nn functionality only supports inputs that are a mini-batch of samples, not a single sample. For example, nn_conv2d takes a 4D Tensor of nSamples x nChannels x Height x Width. If you have a single sample, just use input$unsqueeze(1) to add a fake batch dimension.

    +
    +

    Before proceeding further, let’s recap all the classes you’ve seen so far.

    +
    +

    +Recap

    +
      +
    • torch_tensor - A multi-dimensional array with support for autograd operations like backward(). Also holds the gradient w.r.t. the tensor.

    • +
    • nn_module - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.

    • +
    • nn_parameter - A kind of Tensor, that is automatically registered as a parameter when assigned as an attribute to a Module.

    • +
    • autograd_function - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node that connects to functions that created a Tensor and encodes its history.

    • +
    +
    +
    +

    +At this point, we covered

    +
      +
    • Defining a neural network
    • +
    • Processing inputs and calling backward
    • +
    +
    +
    +

    +Still left

    +
      +
    • Computing the loss
    • +
    • Updating the weights of the network
    • +
    +
    +
    +
    +

    +Loss function

    +

    A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.

    +

There are several different loss functions under the nn package. A simple loss is nnf_mse_loss, which computes the mean-squared error between the input and the target.

    +

    For example:

    +
    +output <- net(input)
    +target <- torch_randn(10)  # a dummy target, for example
    +target <- target$view(c(1, -1))  # make it the same shape as output
    +
    +loss <- nnf_mse_loss(output, target)
    +loss
    +#> torch_tensor 
    +#> 0.799604
    +#> [ CPUFloatType{} ]
    +
    +

    Now, if you follow loss in the backward direction, using its $grad_fn attribute, you will see a graph of computations that looks like this:

    +
    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
    +      -> view -> linear -> relu -> linear -> relu -> linear
    +      -> MSELoss
    +      -> loss
    +

So, when we call loss$backward(), the whole graph is differentiated w.r.t. the loss, and all Tensors in the graph that have requires_grad=TRUE will have their $grad Tensor accumulated with the gradient.

    +

    For illustration, let us follow a few steps backward:

    +
    +loss$grad_fn
    +#> MseLossBackward
    +loss$grad_fn$next_functions[[1]]
    +#> AddmmBackward
    +loss$grad_fn$next_functions[[1]]$next_functions[[1]]
    +#> torch::autograd::AccumulateGrad
    +
    +
    +
    +

    +Backprop

    +

To backpropagate the error, all we have to do is call loss$backward(). You need to clear the existing gradients first, though; otherwise new gradients will be accumulated into the existing ones.

    +
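A minimal sketch of that accumulation behaviour, with a throwaway tensor rather than the network above:

```r
library(torch)

w <- torch_ones(3, requires_grad = TRUE)

w$sum()$backward()
w$grad              # 1 1 1

w$sum()$backward()  # backward again, without clearing ...
w$grad              # 2 2 2 -- the new gradient was added to the old one

w$grad$zero_()      # clearing by hand; optimizers do this via $zero_grad()
```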

    Now we shall call loss$backward(), and have a look at conv1’s bias gradients before and after the backward.

    +
    +net$zero_grad()     # zeroes the gradient buffers of all parameters
    +
    +# conv1.bias.grad before backward
    +net$conv1$bias$grad
    +#> torch_tensor 
    +#>  0
    +#>  0
    +#>  0
    +#>  0
    +#>  0
    +#>  0
    +#> [ CPUFloatType{6} ]
    +
    +loss$backward()
    +
    +# conv1.bias.grad after backward
    +net$conv1$bias$grad
    +#> torch_tensor 
    +#> 0.001 *
    +#> -3.9505
    +#> -0.0917
    +#> -4.1386
    +#> -7.6969
    +#>  1.5123
    +#> -5.0254
    +#> [ CPUFloatType{6} ]
    +
    +

    Now, we have seen how to use loss functions.

    +
    +
    +

    +Update the weights

    +

    The simplest update rule used in practice is the Stochastic Gradient Descent (SGD):

    +

\[\text{weight} = \text{weight} - \text{learning\_rate} \times \text{gradient}\]

    +

    We can implement this using simple R code:

    +
    +learning_rate <- 0.01
    +for (f in net$parameters) {
    +  with_no_grad({
    +    f$sub_(f$grad * learning_rate)
    +  })
    +}
    +
    +
    +

Note: The weight updates here are wrapped in with_no_grad() because we don’t want the updates to be tracked by the autograd engine.

    +
    +

However, as you use neural networks, you will want to use various update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. The optim package implements these:

    +
    +# create your optimizer
    +optimizer <- optim_sgd(net$parameters, lr = 0.01)
    +
    +# in your training loop:
    +optimizer$zero_grad()   # zero the gradient buffers
    +output <- net(input)
    +loss <- nnf_mse_loss(output, target)
    +loss$backward()
    +optimizer$step()    # Does the update
    +#> NULL
    +
    +
    +

    Note: Observe how gradient buffers had to be manually set to zero using optimizer$zero_grad(). This is because gradients are accumulated as explained in the Backprop section.

    +
    +
    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/neural-networks_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/neural-networks_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/neural-networks_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/new-autograd-functions.html b/static/docs/dev/articles/getting-started/new-autograd-functions.html new file mode 100644 index 0000000000000000000000000000000000000000..44ccda5b02593adeac17d6adeabc81f2b91ac3ff --- /dev/null +++ b/static/docs/dev/articles/getting-started/new-autograd-functions.html @@ -0,0 +1,285 @@ + + + + + + + +Defining new autograd functions • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

    Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.

    +

    In torch we can easily define our own autograd operator by defining a subclass of autograd_function and implementing the forward and backward functions. We can then use our new autograd operator by constructing an instance and calling it like a function, passing Tensors containing input data.

    +

    In this example we define our own custom autograd function for performing the ReLU nonlinearity, and use it to implement our two-layer network:

    +
    +# We can implement our own custom autograd Functions by subclassing
+# autograd_function and implementing the forward and backward passes
    +# which operate on Tensors.
    +my_relu <- autograd_function(
    +   # In the forward pass we receive a Tensor containing the input and return
    +   # a Tensor containing the output. ctx is a context object that can be used
    +   # to stash information for backward computation. You can cache arbitrary
    +   # objects for use in the backward pass using the ctx$save_for_backward method.
    +   forward = function(ctx, input) {
    +      ctx$save_for_backward(input = input)
    +      input$clamp(min = 0)
    +   },
    +   # In the backward pass we receive a Tensor containing the gradient of the loss
    +   # with respect to the output, and we need to compute the gradient of the loss
    +   # with respect to the input.
    +   backward = function(ctx, grad_output) {
    +      v <- ctx$saved_variables
    +      grad_input <- grad_output$clone()
    +      grad_input[v$input < 0] <- 0
    +      list(input = grad_input)
    +   }
    +)
    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +# Setting requires_grad=FALSE (the default) indicates that we do not need to 
    +# compute gradients with respect to these Tensors during the backward pass.
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Randomly initialize weights
    +# Setting requires_grad=TRUE indicates that we want to compute gradients with
    +# respect to these Tensors during the backward pass.
    +w1 <- torch_randn(D_in, H, device=device, requires_grad = TRUE)
    +w2 <- torch_randn(H, D_out, device=device, requires_grad = TRUE)
    +
    +learning_rate <- 1e-6
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y using operations on Tensors; these
    +   # are exactly the same operations we used to compute the forward pass using
    +   # Tensors, but we do not need to keep references to intermediate values since
    +   # we are not implementing the backward pass by hand.
    +   y_pred <- my_relu(x$mm(w1))$mm(w2)
    +   
    +   # Compute and print loss using operations on Tensors.
    +   # Now loss is a Tensor of shape (1,)
    +   loss <- (y_pred - y)$pow(2)$sum()
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", as.numeric(loss), "\n")
    +   
    +   # Use autograd to compute the backward pass. This call will compute the
+   # gradient of loss with respect to all Tensors with requires_grad=TRUE.
    +   # After this call w1$grad and w2$grad will be Tensors holding the gradient
    +   # of the loss with respect to w1 and w2 respectively.
    +   loss$backward()
    +   
    +   # Manually update weights using gradient descent. Wrap in `with_no_grad`
    +   # because weights have requires_grad=TRUE, but we don't need to track this
    +   # in autograd.
    +   # You can also use optim_sgd to achieve this.
    +   with_no_grad({
    +      
+      # operations suffixed with `_` operate in-place on the tensor.
    +      w1$sub_(learning_rate * w1$grad)
    +      w2$sub_(learning_rate * w2$grad)
    +      
    +      # Manually zero the gradients after updating weights
    +      w1$grad$zero_()
    +      w2$grad$zero_()
    +   })
    +}
    +#> Step: 1 : 39805352 
    +#> Step: 100 : 334.5283 
    +#> Step: 200 : 0.9720595 
    +#> Step: 300 : 0.005285224 
    +#> Step: 400 : 0.0001543093 
    +#> Step: 500 : 3.083485e-05
    +
    +

    In the next example we will learn how to use the neural networks abstractions in torch.

    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/new-autograd-functions_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/new-autograd-functions_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/new-autograd-functions_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/nn.html b/static/docs/dev/articles/getting-started/nn.html new file mode 100644 index 0000000000000000000000000000000000000000..f3e97ad426f9939dfb9567122bd5bb6ba4bf4eaf --- /dev/null +++ b/static/docs/dev/articles/getting-started/nn.html @@ -0,0 +1,268 @@ + + + + + + + +nn: neural networks with torch • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

    Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however for large neural networks raw autograd can be a bit too low-level.

    +

    When building neural networks we frequently think of arranging the computation into layers, some of which have learnable parameters which will be optimized during learning.

    +

    In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.

    +

    In torch, the nn functionality serves this same purpose. The nn feature defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but may also hold internal state such as Tensors containing learnable parameters. The nn collection also defines a set of useful loss functions that are commonly used when training neural networks.

    +
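As a minimal sketch of the Module-call pattern described above (assuming the torch package is loaded), a single `nn_linear` module can be constructed and then applied to a batch of inputs by calling it like a function:

```r
library(torch)

# A single Linear module: it holds its weight and bias tensors internally.
lin <- nn_linear(in_features = 10, out_features = 1)

# Calling the module like a function runs its forward pass.
input  <- torch_randn(2, 10)  # batch of 2 samples, 10 features each
output <- lin(input)          # output tensor of shape (2, 1)

# The learnable tensors are exposed through $parameters.
names(lin$parameters)         # "weight" and "bias"
```

The same call pattern applies to container modules such as `nn_sequential` used in the example below.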

    In this example we use nn to implement our two-layer network:

    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +# Setting requires_grad=FALSE (the default) indicates that we do not need to 
    +# compute gradients with respect to these Tensors during the backward pass.
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Use the nn package to define our model as a sequence of layers. nn_sequential
    +# is a Module which contains other Modules, and applies them in sequence to
    +# produce its output. Each Linear Module computes output from input using a
    +# linear function, and holds internal Tensors for its weight and bias.
    +model <- nn_sequential(
    +    nn_linear(D_in, H),
    +    nn_relu(),
    +    nn_linear(H, D_out)
    +)
    +
    +# The nn package also contains definitions of popular loss functions; in this
    +# case we will use Mean Squared Error (MSE) as our loss function.
    +loss_fn <- nnf_mse_loss
    +
    +learning_rate <- 1e-6
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y by passing x to the model. Module objects
    +   # can be called like functions. When doing so you pass a Tensor of input
    +   # data to the Module and it produces a Tensor of output data.
    +   y_pred <- model(x)
    +   
    +   # Compute and print loss. We pass Tensors containing the predicted and true
    +   # values of y, and the loss function returns a Tensor containing the
    +   # loss.
    +   loss <- loss_fn(y_pred, y)
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", as.numeric(loss), "\n")
    +   
    +   # Zero the gradients before running the backward pass.
    +   model$zero_grad()
    +
    +   # Backward pass: compute gradient of the loss with respect to all the learnable
    +   # parameters of the model. Internally, the parameters of each Module are stored
    +   # in Tensors with requires_grad=TRUE, so this call will compute gradients for
    +   # all learnable parameters in the model.
    +   loss$backward()
    +   
    +   # Update the weights using gradient descent. Each parameter is a Tensor, so
    +   # we can access its gradients like we did before.
    +   with_no_grad({
    +      for (param in model$parameters) {
    +         param$sub_(learning_rate * param$grad)
    +      }
    +   })
    +}
    +#> Step: 1 : 1.083556 
    +#> Step: 100 : 1.083417 
    +#> Step: 200 : 1.083277 
    +#> Step: 300 : 1.083137 
    +#> Step: 400 : 1.082997 
    +#> Step: 500 : 1.082856
    +
    +

    In the next example we will learn how to use optimizers implemented in torch.

    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/nn_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/nn_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/nn_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/optim.html b/static/docs/dev/articles/getting-started/optim.html new file mode 100644 index 0000000000000000000000000000000000000000..8e5e85962ff1fab7cbb1680418d3dd49614e4cfe --- /dev/null +++ b/static/docs/dev/articles/getting-started/optim.html @@ -0,0 +1,270 @@ + + + + + + + +optim: optimizers in torch • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

    Up to this point we have updated the weights of our models by manually mutating the Tensors holding learnable parameters (with with_no_grad to avoid tracking history in autograd). This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers like AdaGrad, RMSProp, Adam, etc.

    +

    The optim package in torch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.

    +

    In this example we will use the nn package to define our model as before, but we will optimize the model using the Adam algorithm provided by optim:

    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +# Setting requires_grad=FALSE (the default) indicates that we do not need to 
    +# compute gradients with respect to these Tensors during the backward pass.
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Use the nn package to define our model as a sequence of layers. nn_sequential
    +# is a Module which contains other Modules, and applies them in sequence to
    +# produce its output. Each Linear Module computes output from input using a
    +# linear function, and holds internal Tensors for its weight and bias.
    +model <- nn_sequential(
    +    nn_linear(D_in, H),
    +    nn_relu(),
    +    nn_linear(H, D_out)
    +)
    +
    +# The nn package also contains definitions of popular loss functions; in this
    +# case we will use Mean Squared Error (MSE) as our loss function.
    +loss_fn <- nnf_mse_loss
    +
    +# Use the optim package to define an Optimizer that will update the weights of
    +# the model for us. Here we will use Adam; the optim package contains many other
    +# optimization algorithms. The first argument to the Adam constructor tells the
    +# optimizer which Tensors it should update.
    +learning_rate <- 1e-4
    +optimizer <- optim_adam(model$parameters, lr=learning_rate)
    +
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y by passing x to the model. Module objects
    +   # can be called like functions. When doing so you pass a Tensor of input
    +   # data to the Module and it produces a Tensor of output data.
    +   y_pred <- model(x)
    +   
    +   # Compute and print loss. We pass Tensors containing the predicted and true
    +   # values of y, and the loss function returns a Tensor containing the
    +   # loss.
    +   loss <- loss_fn(y_pred, y)
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", as.numeric(loss), "\n")
    +   
    +   # Before the backward pass, use the optimizer object to zero all of the
    +   # gradients for the variables it will update (which are the learnable
    +   # weights of the model). This is because by default, gradients are
+   # accumulated in buffers (i.e., not overwritten) whenever $backward()
+   # is called. Check out the docs of `autograd_backward` for more details.
    +   optimizer$zero_grad()
    +
    +   # Backward pass: compute gradient of the loss with respect to model
    +   # parameters
    +   loss$backward()
    +
    +   # Calling the step function on an Optimizer makes an update to its
    +   # parameters
    +   optimizer$step()
    +}
    +#> Step: 1 : 1.080712 
    +#> Step: 100 : 0.07540431 
    +#> Step: 200 : 0.001296245 
    +#> Step: 300 : 1.112598e-05 
    +#> Step: 400 : 7.013001e-08 
    +#> Step: 500 : 1.750831e-10
    +
    +

    In the next example we will learn how to create custom nn_modules.

    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/optim_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/optim_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/optim_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/tensors-and-autograd.html b/static/docs/dev/articles/getting-started/tensors-and-autograd.html new file mode 100644 index 0000000000000000000000000000000000000000..0615d9aee3472ee45a697ed0df4f27cc04164db9 --- /dev/null +++ b/static/docs/dev/articles/getting-started/tensors-and-autograd.html @@ -0,0 +1,263 @@ + + + + + + + +Tensors and autograd • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

    In the previous examples, we had to manually implement both the forward and backward passes of our neural network. Manually implementing the backward pass is not a big deal for a small two-layer network, but can quickly get very hairy for large complex networks.

    +

    Thankfully, we can use automatic differentiation to automate the computation of backward passes in neural networks. The autograd feature in torch provides exactly this functionality. When using autograd, the forward pass of your network will define a computational graph; nodes in the graph will be Tensors, and edges will be functions that produce output Tensors from input Tensors. Backpropagating through this graph then allows you to easily compute gradients.

    +

This sounds complicated, but it’s pretty simple to use in practice. Each Tensor represents a node in a computational graph. If x is a Tensor with x$requires_grad=TRUE, then x$grad is another Tensor holding the gradient of x with respect to some scalar value.

    +
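As a minimal illustration of the x$requires_grad / x$grad relationship described above (a sketch assuming the torch package is loaded):

```r
library(torch)

# A tensor that tracks gradients.
x <- torch_tensor(c(2, 3), requires_grad = TRUE)

# y is a scalar computed from x, so backpropagation is well-defined.
y <- (x * x)$sum()

# Populate x$grad with dy/dx = 2 * x.
y$backward()

x$grad  # a tensor holding 4 and 6
```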

    Here we use torch Tensors and autograd to implement our two-layer network; now we no longer need to manually implement the backward pass through the network:

    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +# Setting requires_grad=FALSE (the default) indicates that we do not need to 
    +# compute gradients with respect to these Tensors during the backward pass.
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Randomly initialize weights
    +# Setting requires_grad=TRUE indicates that we want to compute gradients with
    +# respect to these Tensors during the backward pass.
    +w1 <- torch_randn(D_in, H, device=device, requires_grad = TRUE)
    +w2 <- torch_randn(H, D_out, device=device, requires_grad = TRUE)
    +
    +learning_rate <- 1e-6
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y using operations on Tensors; these
    +   # are exactly the same operations we used to compute the forward pass using
    +   # Tensors, but we do not need to keep references to intermediate values since
    +   # we are not implementing the backward pass by hand.
    +   y_pred <- x$mm(w1)$clamp(min=0)$mm(w2)
    +   
    +   # Compute and print loss using operations on Tensors.
    +   # Now loss is a Tensor of shape (1,)
    +   loss <- (y_pred - y)$pow(2)$sum()
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", as.numeric(loss), "\n")
    +   
    +   # Use autograd to compute the backward pass. This call will compute the
+   # gradient of loss with respect to all Tensors with requires_grad=TRUE.
    +   # After this call w1$grad and w2$grad will be Tensors holding the gradient
    +   # of the loss with respect to w1 and w2 respectively.
    +   loss$backward()
    +   
    +   # Manually update weights using gradient descent. Wrap in `with_no_grad`
    +   # because weights have requires_grad=TRUE, but we don't need to track this
    +   # in autograd.
    +   # You can also use optim_sgd to achieve this.
    +   with_no_grad({
    +      
+      # operations suffixed with `_` operate in-place on the tensor.
    +      w1$sub_(learning_rate * w1$grad)
    +      w2$sub_(learning_rate * w2$grad)
    +      
    +      # Manually zero the gradients after updating weights
    +      w1$grad$zero_()
    +      w2$grad$zero_()
    +   })
    +}
    +#> Step: 1 : 34836468 
    +#> Step: 100 : 390.6113 
    +#> Step: 200 : 1.657834 
    +#> Step: 300 : 0.0120327 
    +#> Step: 400 : 0.0002646378 
    +#> Step: 500 : 4.226738e-05
    +
    +

    In the next example we will learn how to create new autograd functions.

    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/tensors-and-autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/tensors-and-autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/tensors-and-autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/tensors.html b/static/docs/dev/articles/getting-started/tensors.html new file mode 100644 index 0000000000000000000000000000000000000000..1834ffa8fd15c569144489be54e64f5cfa5003ff --- /dev/null +++ b/static/docs/dev/articles/getting-started/tensors.html @@ -0,0 +1,248 @@ + + + + + + + +torch Tensors • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

R arrays are great, but they cannot utilize GPUs to accelerate their numerical computations. For modern deep neural networks, GPUs often provide speedups of 50x or greater, so unfortunately pure R won’t be enough for modern deep learning.

    +

    Here we introduce the most fundamental torch concept: the Tensor. A torch Tensor is conceptually similar to an R array: a Tensor is an n-dimensional array, and torch provides many functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they’re also useful as a generic tool for scientific computing.

    +

    Also unlike R, torch Tensors can utilize GPUs to accelerate their numeric computations. To run a torch Tensor on GPU, you simply need to cast it to a new datatype.

    +
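A short sketch of placing tensors on a device (the cuda branch only runs when a GPU-enabled build is available):

```r
library(torch)

x <- torch_ones(2, 2)  # created on the CPU by default
x$device               # reports the tensor's current device

if (cuda_is_available()) {
  gpu   <- torch_device("cuda")
  x_gpu <- x$to(device = gpu)                       # copy the tensor onto the GPU
  x_cpu <- x_gpu$to(device = torch_device("cpu"))   # and back to the CPU
}
```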

Here we use torch Tensors to fit a two-layer network to random data. Like the pure R example before, we need to manually implement the forward and backward passes through the network:

    +
    +if (cuda_is_available()) {
    +   device <- torch_device("cuda")
    +} else {
    +   device <- torch_device("cpu")
    +}
    +   
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +x <- torch_randn(N, D_in, device=device)
    +y <- torch_randn(N, D_out, device=device)
    +
    +# Randomly initialize weights
    +w1 <- torch_randn(D_in, H, device=device)
    +w2 <- torch_randn(H, D_out, device=device)
    +
    +learning_rate <- 1e-6
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y
    +   h <- x$mm(w1)
    +   h_relu <- h$clamp(min=0)
    +   y_pred <- h_relu$mm(w2)
    +   
    +   # Compute and print loss
    +   loss <- as.numeric((y_pred - y)$pow(2)$sum())
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", loss, "\n")
    +   
    +   # Backprop to compute gradients of w1 and w2 with respect to loss
    +   grad_y_pred <- 2.0 * (y_pred - y)
    +   grad_w2 <- h_relu$t()$mm(grad_y_pred)
    +   grad_h_relu <- grad_y_pred$mm(w2$t())
    +   grad_h <- grad_h_relu$clone()
    +   grad_h[h < 0] <- 0
    +   grad_w1 <- x$t()$mm(grad_h)
    +   
    +   # Update weights using gradient descent
    +   w1 <- w1 - learning_rate * grad_w1
    +   w2 <- w2 - learning_rate * grad_w2
    +}
    +#> Step: 1 : 39707928 
    +#> Step: 100 : 498.2322 
    +#> Step: 200 : 2.06651 
    +#> Step: 300 : 0.01554536 
    +#> Step: 400 : 0.0003302942 
    +#> Step: 500 : 5.156323e-05
    +
    +

    In the next example we will use autograd instead of computing the gradients manually.

    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/tensors_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/tensors_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/tensors_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/warmup.html b/static/docs/dev/articles/getting-started/warmup.html new file mode 100644 index 0000000000000000000000000000000000000000..7e94f3de426b1ab84d6f5a429b0214181755ba9d --- /dev/null +++ b/static/docs/dev/articles/getting-started/warmup.html @@ -0,0 +1,241 @@ + + + + + + + +Warm-up • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Justin Johnson.

    +
    + +

    A fully-connected ReLU network with one hidden layer and no biases, trained to predict y from x using Euclidean error.

    +

    This implementation uses pure R to manually compute the forward pass, loss, and backward pass.

    +

    An R array is a generic n-dimensional array; it does not know anything about deep learning or gradients or computational graphs, and is just a way to perform generic numeric computations.

    +
    +# N is batch size; D_in is input dimension;
    +# H is hidden dimension; D_out is output dimension.
    +N <- 64
    +D_in <- 1000
    +H <- 100
    +D_out <- 10
    +
    +# Create random input and output data
    +x <- array(rnorm(N*D_in), dim = c(N, D_in))
    +y <- array(rnorm(N*D_out), dim = c(N, D_out))
    +
    +# Randomly initialize weights
    +w1 <- array(rnorm(D_in*H), dim = c(D_in, H))
    +w2 <- array(rnorm(H*D_out), dim = c(H, D_out))
    +
    +learning_rate <- 1e-6
    +for (t in seq_len(500)) {
    +   # Forward pass: compute predicted y
    +   h <- x %*% w1
    +   h_relu <- ifelse(h < 0, 0, h)
    +   y_pred <- h_relu %*% w2
    +   
    +   # Compute and print loss
    +   loss <- sum((y_pred - y)^2)
    +   if (t %% 100 == 0 || t == 1)
    +      cat("Step:", t, ":", loss, "\n")
    +   
    +   # Backprop to compute gradients of w1 and w2 with respect to loss
    +   grad_y_pred <- 2 * (y_pred - y)
    +   grad_w2 <- t(h_relu) %*% grad_y_pred
    +   grad_h_relu <- grad_y_pred %*% t(w2)
    +   grad_h <- grad_h_relu
    +   grad_h[h < 0] <- 0
    +   grad_w1 <- t(x) %*% grad_h
    +   
    +   # Update weights
    +   w1 <- w1 - learning_rate * grad_w1
    +   w2 <- w2 - learning_rate * grad_w2
    +}
    +#> Step: 1 : 45068030 
    +#> Step: 100 : 1143.562 
    +#> Step: 200 : 10.35666 
    +#> Step: 300 : 0.1330576 
    +#> Step: 400 : 0.001955063 
    +#> Step: 500 : 3.09119e-05
    +
    +

In the next example we will replace the R array with a torch Tensor.

    +
    + + + +
    + + + +
    + +
    +


    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/warmup_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/warmup_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/warmup_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/getting-started/what-is-torch.html b/static/docs/dev/articles/getting-started/what-is-torch.html new file mode 100644 index 0000000000000000000000000000000000000000..842c8ae28ca86110a7b1af6c5fdd9c98b8a096a8 --- /dev/null +++ b/static/docs/dev/articles/getting-started/what-is-torch.html @@ -0,0 +1,413 @@ + + + + + + + +What is torch? • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + +
    +

Note: This is an R port of the official tutorial available here. All credit goes to Soumith Chintala.

    +
    + +

    It’s a scientific computing package targeted at two sets of audiences:

    +
      +
    • An array library to use the power of GPUs
    • +
• A deep learning research platform that provides maximum flexibility and speed
    • +
    +
    +

    +Getting started

    +
    +

    +Tensors

    +

    Tensors are similar to R arrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.

    +
    +

    Note: An uninitialized matrix is declared, but does not contain definite known values before it is used. When an uninitialized matrix is created, whatever values were in the allocated memory at the time will appear as the initial values.

    +
    +

    Construct a 5x3 matrix, uninitialized:

    +
    +x <- torch_empty(5, 3)
    +x
    +#> torch_tensor 
    +#>  0  0  0
    +#>  0  0  0
    +#>  0  0  0
    +#>  0  0  0
    +#>  0  0  0
    +#> [ CPUFloatType{5,3} ]
    +
    +

    Construct a randomly initialized matrix:

    +
    +x <- torch_rand(5, 3)
    +x
    +#> torch_tensor 
    +#>  0.9781  0.5050  0.6961
    +#>  0.2296  0.9370  0.3338
    +#>  0.0232  0.7163  0.3911
    +#>  0.2576  0.0083  0.4875
    +#>  0.1455  0.5404  0.8133
    +#> [ CPUFloatType{5,3} ]
    +
    +

Construct a matrix filled with zeros, of dtype long:

    +
    +x <- torch_zeros(5, 3, dtype = torch_long())
    +x
    +#> torch_tensor 
    +#>  0  0  0
    +#>  0  0  0
    +#>  0  0  0
    +#>  0  0  0
    +#>  0  0  0
    +#> [ CPULongType{5,3} ]
    +
    +

    Construct a tensor directly from data:

    +
    +x <- torch_tensor(c(5.5, 3))
    +x
    +#> torch_tensor 
    +#>  5.5000
    +#>  3.0000
    +#> [ CPUFloatType{2} ]
    +
    +

or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. dtype, unless new values are provided by the user.

    +
    +x <- torch_randn_like(x, dtype = torch_float()) # override dtype!
    +x                                               # result has the same size
    +#> torch_tensor 
    +#> 0.01 *
    +#> -5.4223
    +#> -20.1941
    +#> [ CPUFloatType{2} ]
    +
    +

    Get its size:

    +
    +x$size()
    +#> [1] 2
    +
    +
    +
    +

    +Operations

    +

    There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.

    +

    Addition: syntax 1

    +
    +x <- torch_rand(5, 3)
    +y <- torch_rand(5, 3)
    +x + y
    +#> torch_tensor 
    +#>  0.7164  0.8170  1.3400
    +#>  0.2271  0.6043  1.1158
    +#>  1.3897  1.6707  0.6946
    +#>  1.7146  0.9900  0.8561
    +#>  0.5293  0.8795  1.0980
    +#> [ CPUFloatType{5,3} ]
    +
    +

    Addition: syntax 2

    +
    +torch_add(x, y)
    +#> torch_tensor 
    +#>  0.7164  0.8170  1.3400
    +#>  0.2271  0.6043  1.1158
    +#>  1.3897  1.6707  0.6946
    +#>  1.7146  0.9900  0.8561
    +#>  0.5293  0.8795  1.0980
    +#> [ CPUFloatType{5,3} ]
    +
    +

    Addition: in-place

    +
    +y$add_(x)
    +#> torch_tensor 
    +#>  0.7164  0.8170  1.3400
    +#>  0.2271  0.6043  1.1158
    +#>  1.3897  1.6707  0.6946
    +#>  1.7146  0.9900  0.8561
    +#>  0.5293  0.8795  1.0980
    +#> [ CPUFloatType{5,3} ]
    +y
    +#> torch_tensor 
    +#>  0.7164  0.8170  1.3400
    +#>  0.2271  0.6043  1.1158
    +#>  1.3897  1.6707  0.6946
    +#>  1.7146  0.9900  0.8561
    +#>  0.5293  0.8795  1.0980
    +#> [ CPUFloatType{5,3} ]
    +
    +
    +

Note: Any operation that mutates a tensor in-place is post-fixed with an _. For example, x$copy_(y) and x$t_() will change x.

    +
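
A quick sketch of the in-place convention (assuming torch is attached):

+
+x <- torch_ones(2, 2)
+x$add_(1)   # mutates x in place; every entry is now 2
+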
    +

    You can use standard R-like indexing with all bells and whistles! See more about indexing with vignette("indexing").

    +
    +x[, 1]
    +#> torch_tensor 
    +#>  0.0192
    +#>  0.0535
    +#>  0.7355
    +#>  0.8610
    +#>  0.2385
    +#> [ CPUFloatType{5} ]
    +
    +

Resizing: If you want to resize/reshape a tensor, you can use the $view method:

    +
    +x <- torch_randn(4, 4)
    +y <- x$view(16)
    +z <- x$view(size = c(-1, 8))  # the size -1 is inferred from other dimensions
    +x$size()
    +#> [1] 4 4
    +y$size()
    +#> [1] 16
    +z$size()
    +#> [1] 2 8
    +
    +

    If you have a one element tensor, use $item() to get the value as an R number

    +
    +x <- torch_randn(1)
    +x
    +#> torch_tensor 
    +#>  0.2725
    +#> [ CPUFloatType{1} ]
    +x$item()
    +#> [1] 0.2725014
    +
    +

    You can find a complete list of operations in the reference page.

    +
    +
    +
    +

    +R bridge

    +

    Converting a Torch Tensor to an R array and vice versa is a breeze.

    +
    +

    +Converting a torch tensor into an R array

    +
    +a <- torch_ones(5)
    +a
    +#> torch_tensor 
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#> [ CPUFloatType{5} ]
    +
    +
    +b <- as_array(a)
    +b
    +#> [1] 1 1 1 1 1
    +
    +
    +
    +

    +Converting R arrays to torch tensors

    +
    +a <- rep(1, 5)
    +a
    +#> [1] 1 1 1 1 1
    +b <- torch_tensor(a)
    +b
    +#> torch_tensor 
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#> [ CPUFloatType{5} ]
    +
    +

Currently, numeric and boolean types are supported.

    +
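
For instance — a minimal sketch, assuming torch is attached — logical vectors round-trip as boolean tensors:

+
+b <- torch_tensor(c(TRUE, FALSE))
+as_array(b)
+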
    +
    +
    +

    +CUDA tensors

    +

    Tensors can be moved onto any device using the $to method.

    +
    +if (cuda_is_available()) {
    +  device <- torch_device("cuda")
    +  y <- torch_ones_like(x, device = device)  # directly create a tensor on GPU
+  x <- x$to(device)                       # or just pass the device string: x$to("cuda")
    +  z <- x + y
    +  print(z)
    +  print(z$to(device = "cpu", torch_double())) # `$to` can also change dtype together!
    +}
    +
    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/getting-started/what-is-torch_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/getting-started/what-is-torch_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/getting-started/what-is-torch_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/index.html b/static/docs/dev/articles/index.html new file mode 100644 index 0000000000000000000000000000000000000000..3df87feab10679ea721d42452b40007f813d3f56 --- /dev/null +++ b/static/docs/dev/articles/index.html @@ -0,0 +1,254 @@ + + + + + + + + +Articles • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + + + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/articles/indexing.html b/static/docs/dev/articles/indexing.html new file mode 100644 index 0000000000000000000000000000000000000000..fc27e9a0b94589abb4e5c44c985637d9761257f9 --- /dev/null +++ b/static/docs/dev/articles/indexing.html @@ -0,0 +1,380 @@ + + + + + + + +Indexing tensors • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + + +

    In this article we describe the indexing operator for torch tensors and how it compares to the R indexing operator for arrays.

    +

    Torch’s indexing semantics are closer to numpy’s semantics than R’s. You will find a lot of similarities between this article and the numpy indexing article available here.

    +
    +

    +Single element indexing

    +

Single element indexing for a 1-D tensor works mostly as expected. Like R, it is 1-based. Unlike R, though, it accepts negative indices for indexing from the end of the array. (In R, negative indices are used to remove elements.)

    +
    +x <- torch_tensor(1:10)
    +x[1]
    +#> torch_tensor 
    +#> 1
    +#> [ CPULongType{} ]
    +x[-1]
    +#> torch_tensor 
    +#> 10
    +#> [ CPULongType{} ]
    +
    +

You can also subset matrices and higher-dimensional arrays using the same syntax:

    +
    +x <- x$reshape(shape = c(2,5))
    +x
    +#> torch_tensor 
    +#>   1   2   3   4   5
    +#>   6   7   8   9  10
    +#> [ CPULongType{2,5} ]
    +x[1,3]
    +#> torch_tensor 
    +#> 3
    +#> [ CPULongType{} ]
    +x[1,-1]
    +#> torch_tensor 
    +#> 5
    +#> [ CPULongType{} ]
    +
    +

Note that if one indexes a multidimensional tensor with fewer indices than dimensions, the result is the sub-tensor along the leading dimension, unlike in R, where the array would be treated as flattened and a single element returned. For example:

    +
    +x[1]
    +#> torch_tensor 
    +#>  1
    +#>  2
    +#>  3
    +#>  4
    +#>  5
    +#> [ CPULongType{5} ]
    +
    +
    +
    +

    +Slicing and striding

    +

It is possible to slice and stride tensors to extract sub-tensors with the same number of dimensions, but of different sizes than the original. This is best illustrated by a few examples:

    +
    +x <- torch_tensor(1:10)
    +x
    +#> torch_tensor 
    +#>   1
    +#>   2
    +#>   3
    +#>   4
    +#>   5
    +#>   6
    +#>   7
    +#>   8
    +#>   9
    +#>  10
    +#> [ CPULongType{10} ]
    +x[2:5]
    +#> torch_tensor 
    +#>  2
    +#>  3
    +#>  4
    +#>  5
    +#> [ CPULongType{4} ]
    +x[1:(-7)]
    +#> torch_tensor 
    +#>  1
    +#>  2
    +#>  3
    +#>  4
    +#> [ CPULongType{4} ]
    +
    +

You can also use the 1:10:2 syntax, which means: in the range from 1 to 10, take every second item. For example:

    +
    +x[1:5:2]
    +#> torch_tensor 
    +#>  1
    +#>  3
    +#>  5
    +#> [ CPULongType{3} ]
    +
    +

Another special syntax is N, meaning the size of the dimension being indexed.

    +
    +x[5:N]
    +#> torch_tensor 
    +#>   5
    +#>   6
    +#>   7
    +#>   8
    +#>   9
    +#>  10
    +#> [ CPULongType{6} ]
    +
    +
    +
    +

    +Getting the complete dimension

    +

    Like in R, you can take all elements in a dimension by leaving an index empty.

    +

    Consider a matrix:

    +
    +x <- torch_randn(2, 3)
    +x
    +#> torch_tensor 
    +#> -0.8220  1.2598  0.9492
    +#> -1.2370  1.2724 -0.1865
    +#> [ CPUFloatType{2,3} ]
    +
    +

    The following syntax will give you the first row:

    +
    +x[1,]
    +#> torch_tensor 
    +#> -0.8220
    +#>  1.2598
    +#>  0.9492
    +#> [ CPUFloatType{3} ]
    +
    +

    And this would give you the first 2 columns:

    +
    +x[,1:2]
    +#> torch_tensor 
    +#> -0.8220  1.2598
    +#> -1.2370  1.2724
    +#> [ CPUFloatType{2,2} ]
    +
    +
    +
    +

    +Dropping dimensions

    +

By default, when indexing by a single integer, that dimension is dropped, to avoid a singleton dimension:

    +
    +x <- torch_randn(2, 3)
    +x[1,]$shape
    +#> [1] 3
    +
    +

    You can optionally use the drop = FALSE argument to avoid dropping the dimension.

    +
    +x[1,,drop = FALSE]$shape
    +#> [1] 1 3
    +
    +
    +
    +

    +Adding a new dimension

    +

    It’s possible to add a new dimension to a tensor using index-like syntax:

    +
    +x <- torch_tensor(c(10))
    +x$shape
    +#> [1] 1
    +x[, newaxis]$shape
    +#> [1] 1 1
    +x[, newaxis, newaxis]$shape
    +#> [1] 1 1 1
    +
    +

    You can also use NULL instead of newaxis:

    +
    +x[,NULL]$shape
    +#> [1] 1 1
    +
    +
    +
    +

    +Dealing with variable number of indices

    +

    Sometimes we don’t know how many dimensions a tensor has, but we do know what to do with the last available dimension, or the first one. To subsume all others, we can use ..:

    +
    +z <- torch_tensor(1:125)$reshape(c(5,5,5))
    +z[1,..]
    +#> torch_tensor 
    +#>   1   2   3   4   5
    +#>   6   7   8   9  10
    +#>  11  12  13  14  15
    +#>  16  17  18  19  20
    +#>  21  22  23  24  25
    +#> [ CPULongType{5,5} ]
    +z[..,1]
    +#> torch_tensor 
    +#>    1    6   11   16   21
    +#>   26   31   36   41   46
    +#>   51   56   61   66   71
    +#>   76   81   86   91   96
    +#>  101  106  111  116  121
    +#> [ CPULongType{5,5} ]
    +
    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/indexing_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/indexing_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/indexing_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/loading-data.html b/static/docs/dev/articles/loading-data.html new file mode 100644 index 0000000000000000000000000000000000000000..56e4b36f718d23680ef81031de8177cd3b6c271b --- /dev/null +++ b/static/docs/dev/articles/loading-data.html @@ -0,0 +1,391 @@ + + + + + + + +Loading data • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + + +
    +

    +Datasets and data loaders

    +

    Central to data ingestion and preprocessing are datasets and data loaders.

    +

    torch comes equipped with a bag of datasets related to, mostly, image recognition and natural language processing (e.g., mnist_dataset()), which can be iterated over by means of dataloaders:

    +
    # ...
    +ds <- mnist_dataset(
    +  dir, 
    +  download = TRUE, 
    +  transform = function(x) {
    +    x <- x$to(dtype = torch_float())/256
    +    x[newaxis,..]
    +  }
    +)
    +
    +dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)
    +
    +for (b in enumerate(dl)) {
    +  # ...
    +

    Cf. vignettes/examples/mnist-cnn.R for a complete example.

    +

What if you want to train on a different dataset? In these cases, you subclass Dataset, an abstract container that needs to know how to iterate over the given data. To that end, your subclass needs to implement .getitem(), and say what should be returned when the data loader asks for the next batch.

    +

    In .getitem(), you can implement whatever preprocessing you require. Additionally, you should implement .length(), so users can find out how many items there are in the dataset.

    +

    While this may sound complicated, it is not at all. The base logic is straightforward – complexity will, naturally, correlate with how involved your preprocessing is. To provide you with a simple but functional prototype, here we show how to create your own dataset to train on Allison Horst's penguins.

    +
    +
    +

    +A custom dataset

    +
    +library(palmerpenguins)
    +library(magrittr)
    +
    +penguins
    +#> # A tibble: 344 x 8
    +#>    species island bill_length_mm bill_depth_mm flipper_length_… body_mass_g
    +#>    <fct>   <fct>           <dbl>         <dbl>            <int>       <int>
    +#>  1 Adelie  Torge…           39.1          18.7              181        3750
    +#>  2 Adelie  Torge…           39.5          17.4              186        3800
    +#>  3 Adelie  Torge…           40.3          18                195        3250
    +#>  4 Adelie  Torge…           NA            NA                 NA          NA
    +#>  5 Adelie  Torge…           36.7          19.3              193        3450
    +#>  6 Adelie  Torge…           39.3          20.6              190        3650
    +#>  7 Adelie  Torge…           38.9          17.8              181        3625
    +#>  8 Adelie  Torge…           39.2          19.6              195        4675
    +#>  9 Adelie  Torge…           34.1          18.1              193        3475
    +#> 10 Adelie  Torge…           42            20.2              190        4250
    +#> # … with 334 more rows, and 2 more variables: sex <fct>, year <int>
    +
    +

    Datasets are R6 classes created using the dataset() constructor. You can pass a name and various member functions. Among those should be initialize(), to create instance variables, .getitem(), to indicate how the data should be returned, and .length(), to say how many items we have.

    +

    In addition, any number of helper functions can be defined.

    +

Here, we assume the penguins have already been loaded, and all preprocessing consists of removing rows with NA values, transforming factors to numeric codes, and converting from R data types to torch tensors.

    +

    In .getitem, we essentially decide how this data is going to be used: All variables besides species go into x, the predictor, and species will constitute y, the target. Predictor and target are returned in a list, to be accessed as batch[[1]] and batch[[2]] during training.

    +
    +penguins_dataset <- dataset(
    +  
    +  name = "penguins_dataset",
    +  
    +  initialize = function() {
    +    self$data <- self$prepare_penguin_data()
    +  },
    +  
    +  .getitem = function(index) {
    +    
    +    x <- self$data[index, 2:-1]
    +    y <- self$data[index, 1]$to(torch_long())
    +    
    +    list(x, y)
    +  },
    +  
    +  .length = function() {
    +    self$data$size()[[1]]
    +  },
    +  
    +  prepare_penguin_data = function() {
    +    
    +    input <- na.omit(penguins) 
    +    # conveniently, the categorical data are already factors
    +    input$species <- as.numeric(input$species)
    +    input$island <- as.numeric(input$island)
    +    input$sex <- as.numeric(input$sex)
    +    
    +    input <- as.matrix(input)
    +    torch_tensor(input)
    +  }
    +)
    +
    +

Let’s create the dataset, query its length, and look at its first item:

    +
    +tuxes <- penguins_dataset()
    +tuxes$.length()
    +#> [1] 333
    +tuxes$.getitem(1)
    +#> [[1]]
    +#> torch_tensor 
    +#>     3.0000
    +#>    39.1000
    +#>    18.7000
    +#>   181.0000
    +#>  3750.0000
    +#>     2.0000
    +#>  2007.0000
    +#> [ CPUFloatType{7} ]
    +#> 
    +#> [[2]]
    +#> torch_tensor 
    +#> 1
    +#> [ CPULongType{} ]
    +
    +

    To be able to iterate over tuxes, we need a data loader (we override the default batch size of 1):

    +
+dl <- tuxes %>% dataloader(batch_size = 8)
    +
    +

    Calling .length() on a data loader (as opposed to a dataset) will return the number of batches we have:

    +
    +dl$.length()
    +#> [1] 42
    +
    +

    And we can create an iterator to inspect the first batch:

    +
    +iter <- dl$.iter()
    +b <- iter$.next()
    +b
    +#> [[1]]
    +#> torch_tensor 
    +#>     3.0000    39.1000    18.7000   181.0000  3750.0000     2.0000  2007.0000
    +#>     3.0000    39.5000    17.4000   186.0000  3800.0000     1.0000  2007.0000
    +#>     3.0000    40.3000    18.0000   195.0000  3250.0000     1.0000  2007.0000
    +#>     3.0000    36.7000    19.3000   193.0000  3450.0000     1.0000  2007.0000
    +#>     3.0000    39.3000    20.6000   190.0000  3650.0000     2.0000  2007.0000
    +#>     3.0000    38.9000    17.8000   181.0000  3625.0000     1.0000  2007.0000
    +#>     3.0000    39.2000    19.6000   195.0000  4675.0000     2.0000  2007.0000
    +#>     3.0000    41.1000    17.6000   182.0000  3200.0000     1.0000  2007.0000
    +#> [ CPUFloatType{8,7} ]
    +#> 
    +#> [[2]]
    +#> torch_tensor 
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#>  1
    +#> [ CPULongType{8} ]
    +
    +

    To train a network, we can use enumerate to iterate over batches.

    +
    +
    +

    +Training with data loaders

    +

    Our example network is very simple. (In reality, we would want to treat island as the categorical variable it is, and either one-hot-encode or embed it.)

    +
    +net <- nn_module(
    +  "PenguinNet",
    +  initialize = function() {
    +    self$fc1 <- nn_linear(7, 32)
    +    self$fc2 <- nn_linear(32, 3)
    +  },
    +  forward = function(x) {
    +    x %>% 
    +      self$fc1() %>% 
    +      nnf_relu() %>% 
    +      self$fc2() %>% 
    +      nnf_log_softmax(dim = 1)
    +  }
    +)
    +
    +model <- net()
    +
    +

    We still need an optimizer:

    +
    +optimizer <- optim_sgd(model$parameters, lr = 0.01)
    +
    +

    And we’re ready to train:

    +
    +for (epoch in 1:10) {
    +  
    +  l <- c()
    +  
    +  for (b in enumerate(dl)) {
    +    optimizer$zero_grad()
    +    output <- model(b[[1]])
    +    loss <- nnf_nll_loss(output, b[[2]])
    +    loss$backward()
    +    optimizer$step()
    +    l <- c(l, loss$item())
    +  }
    +  
    +  cat(sprintf("Loss at epoch %d: %3f\n", epoch, mean(l)))
    +}
    +#> Loss at epoch 1: 51.747068
    +#> Loss at epoch 2: 2.068251
    +#> Loss at epoch 3: 2.068251
    +#> Loss at epoch 4: 2.068251
    +#> Loss at epoch 5: 2.068251
    +#> Loss at epoch 6: 2.068251
    +#> Loss at epoch 7: 2.068251
    +#> Loss at epoch 8: 2.068251
    +#> Loss at epoch 9: 2.068251
    +#> Loss at epoch 10: 2.068251
    +
    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/loading-data_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/loading-data_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/loading-data_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/tensor-creation.html b/static/docs/dev/articles/tensor-creation.html new file mode 100644 index 0000000000000000000000000000000000000000..d9a311096f64251a2fbde6eb299d6917cc92da53 --- /dev/null +++ b/static/docs/dev/articles/tensor-creation.html @@ -0,0 +1,317 @@ + + + + + + + +Creating tensors • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + + +

    In this article we describe various ways of creating torch tensors in R.

    +
    +

    +From R objects

    +

    You can create tensors from R objects using the torch_tensor function. The torch_tensor function takes an R vector, matrix or array and creates an equivalent torch_tensor.

    +

    You can see a few examples below:

    +
    +torch_tensor(c(1,2,3))
    +#> torch_tensor 
    +#>  1
    +#>  2
    +#>  3
    +#> [ CPUFloatType{3} ]
    +
    +# conform to row-major indexing used in torch
    +torch_tensor(matrix(1:10, ncol = 5, nrow = 2, byrow = TRUE))
    +#> torch_tensor 
    +#>   1   2   3   4   5
    +#>   6   7   8   9  10
    +#> [ CPULongType{2,5} ]
    +torch_tensor(array(runif(12), dim = c(2, 2, 3)))
    +#> torch_tensor 
    +#> (1,.,.) = 
    +#>   0.2270  0.0942  0.5878
    +#>   0.1103  0.5365  0.2416
    +#> 
    +#> (2,.,.) = 
    +#>   0.9161  0.4202  0.3500
    +#>   0.3538  0.9176  0.5131
    +#> [ CPUFloatType{2,2,3} ]
    +
    +

By default, we will create tensors on the CPU device, converting their R data type to the corresponding torch dtype.

    +
    +

Note that, currently, only numeric and boolean types are supported.

    +
    +
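
For instance — a minimal sketch, assuming torch is attached — integer vectors map to the long dtype, and logical vectors to the boolean dtype:

+
+torch_tensor(1:3)             # integer vector -> long dtype
+torch_tensor(c(TRUE, FALSE))  # logical vector -> boolean dtype
+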

    You can always modify dtype and device when converting an R object to a torch tensor. For example:

    +
    +torch_tensor(1, dtype = torch_long())
    +#> torch_tensor 
    +#>  1
    +#> [ CPULongType{1} ]
    +torch_tensor(1, device = "cpu", dtype = torch_float64())
    +#> torch_tensor 
    +#>  1
    +#> [ CPUDoubleType{1} ]
    +
    +

    Other options available when creating a tensor are:

    +
      +
+requires_grad: boolean indicating whether you want autograd to record operations on the tensor for automatic differentiation.
    • +
+pin_memory: if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors.
    • +
    +

    These options are available for all functions that can be used to create new tensors, including the factory functions listed in the next section.

    +
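
For example — a short sketch, assuming torch is attached — a tensor that records operations for autograd:

+
+x <- torch_tensor(c(1, 2), requires_grad = TRUE)
+x$requires_grad
+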
    +
    +

    +Using creation functions

    +

You can also use the torch_* functions listed below to create torch tensors whose values are generated by some algorithm.

    +

    For example, the torch_randn function will create tensors using the normal distribution with mean 0 and standard deviation 1. You can use the ... argument to pass the size of the dimensions. For example, the code below will create a normally distributed tensor with shape 5x3.

    +
    +x <- torch_randn(5, 3)
    +x
    +#> torch_tensor 
    +#> -0.0891 -1.7818 -0.3516
    +#>  0.4988 -1.3215  1.0235
    +#> -0.2183  1.8560 -0.4986
    +#>  1.1953 -0.5551 -1.1393
    +#> -0.4159  0.4346  1.6532
    +#> [ CPUFloatType{5,3} ]
    +
    +

    Another example is torch_ones, which creates a tensor filled with ones.

    +
    +x <- torch_ones(2, 4, dtype = torch_int64(), device = "cpu")
    +x
    +#> torch_tensor 
    +#>  1  1  1  1
    +#>  1  1  1  1
    +#> [ CPULongType{2,4} ]
    +
    +

    Here is the full list of functions that can be used to bulk-create tensors in torch:

    +
      +
    • +torch_arange: Returns a tensor with a sequence of integers,
    • +
    • +torch_empty: Returns a tensor with uninitialized values,
    • +
    • +torch_eye: Returns an identity matrix,
    • +
    • +torch_full: Returns a tensor filled with a single value,
    • +
    • +torch_linspace: Returns a tensor with values linearly spaced in some interval,
    • +
    • +torch_logspace: Returns a tensor with values logarithmically spaced in some interval,
    • +
    • +torch_ones: Returns a tensor filled with all ones,
    • +
    • +torch_rand: Returns a tensor filled with values drawn from a uniform distribution on [0, 1).
    • +
    • +torch_randint: Returns a tensor with integers randomly drawn from an interval,
    • +
    • +torch_randn: Returns a tensor filled with values drawn from a unit normal distribution,
    • +
    • +torch_randperm: Returns a tensor filled with a random permutation of integers in some interval,
    • +
    • +torch_zeros: Returns a tensor filled with all zeros.
    • +
    +
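
Two of these, sketched (assuming torch is attached):

+
+torch_eye(3)             # 3x3 identity matrix
+torch_full(c(2, 3), 7)   # 2x3 tensor filled with the value 7
+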
    +
    +

    +Conversion

    +

Once a tensor exists, you can convert between dtypes and move it to a different device with the $to method. For example:

    +
    +x <- torch_tensor(1)
    +y <- x$to(dtype = torch_int32())
    +x
    +#> torch_tensor 
    +#>  1
    +#> [ CPUFloatType{1} ]
    +y
    +#> torch_tensor 
    +#>  1
    +#> [ CPUIntType{1} ]
    +
    +

You can also copy a tensor to the GPU (if CUDA is available) using:

    +
    x <- torch_tensor(1)
+y <- x$cuda()
    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/tensor-creation_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/tensor-creation_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/tensor-creation_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/articles/tensor/index.html b/static/docs/dev/articles/tensor/index.html new file mode 100644 index 0000000000000000000000000000000000000000..e76a4ef789d7545944895bdce3b9a230c7d048bd --- /dev/null +++ b/static/docs/dev/articles/tensor/index.html @@ -0,0 +1,3613 @@ + + + + + + + +Tensor objects • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + + +

Central to torch are torch_tensor objects. torch_tensors are R objects very similar to R6 instances. Tensors have a large number of methods that can be called using the $ operator.

    +

Following is a list of all methods that can be called on tensor objects, with their documentation. You can also look at PyTorch’s documentation for additional details.

    +
    +

    +T

    +

This Tensor with its dimensions reversed.

    +

If n is the number of dimensions in x, x$T is equivalent to x$permute(n, n-1, ..., 1) (indices are 1-based in R).

    +
    +
    +

    +abs

    +

    abs() -> Tensor

    +

    See ?torch_abs

    +
    +
    +

    +abs_

    +

    abs_() -> Tensor

    +

    In-place version of $abs

    +
    +
    +

    +absolute

    +

    absolute() -> Tensor

    +

    Alias for [$abs()]

    +
    +
    +

    +absolute_

    +

    absolute_() -> Tensor

    +

    In-place version of $absolute Alias for [$abs_()]

    +
    +
    +

    +acos

    +

    acos() -> Tensor

    +

    See ?torch_acos

    +
    +
    +

    +acos_

    +

    acos_() -> Tensor

    +

    In-place version of $acos

    +
    +
    +

    +acosh

    +

    acosh() -> Tensor

    +

    See ?torch_acosh

    +
    +
    +

    +acosh_

    +

    acosh_() -> Tensor

    +

    In-place version of $acosh

    +
    +
    +

    +add

    +

    add(other, *, alpha=1) -> Tensor

    +

    Add a scalar or tensor to self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used.

    +

When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.

    +

    See ?torch_add

    +
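
A short sketch (assuming torch is attached) of how alpha scales other before the addition:

+
+x <- torch_tensor(c(1, 2))
+x$add(torch_tensor(c(10, 20)), alpha = 0.5)  # equivalent to x + 0.5 * c(10, 20)
+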
    +
    +

    +add_

    +

    add_(other, *, alpha=1) -> Tensor

    +

    In-place version of $add

    +
    +
    +

    +addbmm

    +

    addbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor

    +

    See ?torch_addbmm

    +
    +
    +

    +addbmm_

    +

    addbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor

    +

    In-place version of $addbmm

    +
    +
    +

    +addcdiv

    +

    addcdiv(tensor1, tensor2, *, value=1) -> Tensor

    +

    See ?torch_addcdiv

    +
    +
    +

    +addcdiv_

    +

    addcdiv_(tensor1, tensor2, *, value=1) -> Tensor

    +

    In-place version of $addcdiv

    +
    +
    +

    +addcmul

    +

    addcmul(tensor1, tensor2, *, value=1) -> Tensor

    +

    See ?torch_addcmul

    +
    +
    +

    +addcmul_

    +

    addcmul_(tensor1, tensor2, *, value=1) -> Tensor

    +

    In-place version of $addcmul

    +
    +
    +

    +addmm

    +

    addmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor

    +

    See ?torch_addmm

    +
    +
    +

    +addmm_

    +

    addmm_(mat1, mat2, *, beta=1, alpha=1) -> Tensor

    +

    In-place version of $addmm

    +
    +
    +

    +addmv

    +

    addmv(mat, vec, *, beta=1, alpha=1) -> Tensor

    +

    See ?torch_addmv

    +
    +
    +

    +addmv_

    +

    addmv_(mat, vec, *, beta=1, alpha=1) -> Tensor

    +

    In-place version of $addmv

    +
    +
    +

    +addr

    +

    addr(vec1, vec2, *, beta=1, alpha=1) -> Tensor

    +

    See ?torch_addr

    +
    +
    +

    +addr_

    +

    addr_(vec1, vec2, *, beta=1, alpha=1) -> Tensor

    +

    In-place version of $addr

    +
    +
    +

    +align_as

    +

    align_as(other) -> Tensor

    +

    Permutes the dimensions of the self tensor to match the dimension order in the other tensor, adding size-one dims for any new names.

    +

    This operation is useful for explicit broadcasting by names (see examples).

    +

    All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor.

    +

    All dimension names of self must be present in other$names. other may contain named dimensions that are not in self$names; the output tensor has a size-one dimension for each of those new names.

    +

    To align a tensor to a specific order, use $align_to.

    +
    +

    +Examples:

    +
    +# Example 1: Applying a mask
    +mask <- torch_randint(low = 0, high = 2, size = c(127, 128), dtype=torch_bool())$refine_names(c('W', 'H'))
    +imgs <- torch_randn(32, 128, 127, 3, names=c('N', 'H', 'W', 'C'))
    +imgs$masked_fill_(mask$align_as(imgs), 0)
    +
    +# Example 2: Applying a per-channel-scale
    +scale_channels <- function(input, scale) {
    +  scale <- scale$refine_names("C")
    +  input * scale$align_as(input)
    +}
    +
    +num_channels <- 3
    +scale <- torch_randn(num_channels, names='C')
    +imgs <- torch_rand(32, 128, 128, num_channels, names=c('N', 'H', 'W', 'C'))
    +more_imgs <- torch_rand(32, num_channels, 128, 128, names=c('N', 'C', 'H', 'W'))
    +videos <- torch_randn(3, num_channels, 128, 128, 128, names=c('N', 'C', 'H', 'W', 'D'))
    +
    +# scale_channels is agnostic to the dimension order of the input
    +scale_channels(imgs, scale)
    +scale_channels(more_imgs, scale)
    +scale_channels(videos, scale)
    +
    +
    +
    +

    +Warning:

    +

    The named tensor API is experimental and subject to change.

    +
    +
    +
    +

    +align_to

    +

    align_to(names) -> Tensor

    +

    Permutes the dimensions of the self tensor to match the order specified in names, adding size-one dims for any new names.

    +

    All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor.

    +

    All dimension names of self must be present in names. names may contain additional names that are not in self$names; the output tensor has a size-one dimension for each of those new names.

    +
    +

    +Arguments:

    +
      +
    • names (character vector): The desired dimension ordering of the output tensor. May contain up to one Ellipsis that is expanded to all unmentioned dim names of self.
    • +
    +
    +
    +

    +Examples:

    +
    +
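A minimal sketch of reordering named dimensions (assuming align_to accepts a character vector of names, analogous to align_as above):

```r
library(torch)

x <- torch_randn(2, 3)$refine_names(c('N', 'C'))
# permute to the requested name order; 'C' now comes first
y <- x$align_to(c('C', 'N'))
y$names  # "C" "N"
```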

    +Warning:

    +

    The named tensor API is experimental and subject to change.

    +
    +
    +
    +
    +

    +all

    +

    all() -> bool

    +

    Returns TRUE if all elements in the tensor are TRUE, FALSE otherwise.

    +
    +

    +Examples:

    +
    +a <- torch_rand(1, 2)$to(dtype = torch_bool())
    +a
    +a$all()
    +
    +

    all(dim, keepdim=FALSE, out=NULL) -> Tensor

    +

    Returns TRUE if all elements in each row of the tensor in the given dimension dim are TRUE, FALSE otherwise.

    +

    If keepdim is TRUE, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ?torch_squeeze()), resulting in the output tensor having 1 fewer dimension than input.

    +
    +
    +

    +Arguments:

    +
      +
    • dim (int): the dimension to reduce
    • +
    • keepdim (bool): whether the output tensor has dim retained or not
    • +
    • out (Tensor, optional): the output tensor
    • +
    +
    +
    +

    +Examples:

    +
    +a <- torch_rand(4, 2)$to(dtype = torch_bool())
    +a
    +a$all(dim=2)
    +a$all(dim=1)
    +
    +
    +
    +
    +

    +allclose

    +

    allclose(other, rtol=1e-05, atol=1e-08, equal_nan=FALSE) -> bool

    +

    See ?torch_allclose

    +
    +
    +

    +angle

    +

    angle() -> Tensor

    +

    See ?torch_angle

    +
    +
    +

    +any

    +

    any() -> bool

    +

    Returns TRUE if any elements in the tensor are TRUE, FALSE otherwise.

    +
    +

    +Examples:

    +
    +a <- torch_rand(1, 2)$to(dtype = torch_bool())
    +a
    +a$any()
    +
    +

    any(dim, keepdim=FALSE, out=NULL) -> Tensor

    +

    Returns TRUE if any elements in each row of the tensor in the given dimension dim are TRUE, FALSE otherwise.

    +

    If keepdim is TRUE, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ?torch_squeeze()), resulting in the output tensor having 1 fewer dimension than input.

    +
    +
    +

    +Arguments:

    +
      +
    • dim (int): the dimension to reduce
    • +
    • keepdim (bool): whether the output tensor has dim retained or not
    • +
    • out (Tensor, optional): the output tensor
    • +
    +
    +
    +

    +Examples:

    +
    +a <- torch_randn(4, 2) < 0
    +a
    +a$any(2)
    +a$any(1)
    +
    +
    +
    +
    +

    +apply_

    +

    apply_(callable) -> Tensor

    +

    Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.

    +
    +

    +Note:

    +

    This function only works with CPU tensors and should not be used in code sections that require high performance.

    +
    +
    +
    +

    +argmax

    +

    argmax(dim=NULL, keepdim=FALSE) -> LongTensor

    +

    See ?torch_argmax

    +
    +
    +

    +argmin

    +

    argmin(dim=NULL, keepdim=FALSE) -> LongTensor

    +

    See ?torch_argmin

    +
    +
    +

    +argsort

    +

    argsort(dim=-1, descending=FALSE) -> LongTensor

    +

    See ?torch_argsort

    +
    +
    +

    +as_strided

    +

    as_strided(size, stride, storage_offset=0) -> Tensor

    +

    See [torch_as_strided()]

    +
    +
    +

    +as_subclass

    +

    as_subclass(cls) -> Tensor

    +

    Makes a cls instance with the same data pointer as self. Changes in the output mirror changes in self, and the output stays attached to the autograd graph. cls must be a subclass of Tensor.

    +
    +
    +

    +asin

    +

    asin() -> Tensor

    +

    See ?torch_asin

    +
    +
    +

    +asin_

    +

    asin_() -> Tensor

    +

    In-place version of $asin

    +
    +
    +

    +asinh

    +

    asinh() -> Tensor

    +

    See ?torch_asinh

    +
    +
    +

    +asinh_

    +

    asinh_() -> Tensor

    +

    In-place version of $asinh

    +
    +
    +

    +atan

    +

    atan() -> Tensor

    +

    See ?torch_atan

    +
    +
    +

    +atan2

    +

    atan2(other) -> Tensor

    +

    See [torch_atan2()]

    +
    +
    +

    +atan2_

    +

    atan2_(other) -> Tensor

    +

    In-place version of $atan2

    +
    +
    +

    +atan_

    +

    atan_() -> Tensor

    +

    In-place version of $atan

    +
    +
    +

    +atanh

    +

    atanh() -> Tensor

    +

    See ?torch_atanh

    +
    +
    +

    +atanh_

    +

    atanh_() -> Tensor

    +

    In-place version of $atanh

    +
    +
    +

    +backward

    +

    Computes the gradient of current tensor w.r.t. graph leaves.

    +

    The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self.

    +

    This function accumulates gradients in the leaves - you might need to zero $grad attributes or set them to NULL before calling it. See the autograd documentation for details on the memory layout of accumulated gradients.

    +
    +

    +Arguments:

    +
      +
    • gradient (Tensor or NULL): Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is TRUE. NULL values can be specified for scalar Tensors or ones that don’t require grad. If a NULL value would be acceptable then this argument is optional.
    • +
    • retain_graph (bool, optional): If FALSE, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to TRUE is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
    • +
    • create_graph (bool, optional): If TRUE, the graph of the derivative will be constructed, allowing higher order derivative products to be computed. Defaults to FALSE.
    • +
    +
    +
    +
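A minimal scalar example of the accumulation behavior described above:

```r
library(torch)

x <- torch_tensor(2, requires_grad = TRUE)
y <- x^2
y$backward()
x$grad           # d(x^2)/dx = 2 * x = 4
(x^2)$backward()
x$grad           # gradients accumulate into $grad: now 8
```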
    +

    +baddbmm

    +

    baddbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor

    +

    See ?torch_baddbmm

    +
    +
    +

    +baddbmm_

    +

    baddbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor

    +

    In-place version of $baddbmm

    +
    +
    +

    +bernoulli

    +

    bernoulli(*, generator=NULL) -> Tensor

    +

    Returns a result tensor where each \(\texttt{result[i]}\) is independently sampled from \(\text{Bernoulli}(\texttt{self[i]})\). self must have floating point dtype, and the result will have the same dtype.

    +

    See ?torch_bernoulli

    +
    +
    +

    +bernoulli_

    +

    bernoulli_(p=0.5, *, generator=NULL) -> Tensor

    +

    Fills each location of self with an independent sample from \(\text{Bernoulli}(\texttt{p})\). self can have integral dtype.

    +

    bernoulli_(p_tensor, *, generator=NULL) -> Tensor

    +

    p_tensor should be a tensor containing probabilities to be used for drawing the binary random number.

    +

    The \(\text{i}^{th}\) element of self tensor will be set to a value sampled from \(\text{Bernoulli}(\texttt{p\_tensor[i]})\).

    +

    self can have integral dtype, but p_tensor must have floating point dtype.

    +

    See also $bernoulli and ?torch_bernoulli
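For instance (a small sketch):

```r
library(torch)

p <- torch_full(c(2, 3), 0.5)
# each entry sampled independently from Bernoulli(0.5)
s <- p$bernoulli()

# in-place variant with a tensor of per-element probabilities
x <- torch_empty(2, 3)
x$bernoulli_(p)
```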

    +
    +
    +

    +bfloat16

    +

    bfloat16(memory_format=torch_preserve_format) -> Tensor

    +

    self$bfloat16() is equivalent to self$to(torch_bfloat16). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +bincount

    +

    bincount(weights=NULL, minlength=0) -> Tensor

    +

    See ?torch_bincount

    +
    +
    +

    +bitwise_and

    +

    bitwise_and() -> Tensor

    +

    See [torch_bitwise_and()]

    +
    +
    +

    +bitwise_and_

    +

    bitwise_and_() -> Tensor

    +

    In-place version of $bitwise_and

    +
    +
    +

    +bitwise_not

    +

    bitwise_not() -> Tensor

    +

    See [torch_bitwise_not()]

    +
    +
    +

    +bitwise_not_

    +

    bitwise_not_() -> Tensor

    +

    In-place version of $bitwise_not

    +
    +
    +

    +bitwise_or

    +

    bitwise_or() -> Tensor

    +

    See [torch_bitwise_or()]

    +
    +
    +

    +bitwise_or_

    +

    bitwise_or_() -> Tensor

    +

    In-place version of $bitwise_or

    +
    +
    +

    +bitwise_xor

    +

    bitwise_xor() -> Tensor

    +

    See [torch_bitwise_xor()]

    +
    +
    +

    +bitwise_xor_

    +

    bitwise_xor_() -> Tensor

    +

    In-place version of $bitwise_xor

    +
    +
    +

    +bmm

    +

    bmm(batch2) -> Tensor

    +

    See ?torch_bmm

    +
    +
    +

    +bool

    +

    bool(memory_format=torch_preserve_format) -> Tensor

    +

    self$bool() is equivalent to self$to(torch_bool). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +byte

    +

    byte(memory_format=torch_preserve_format) -> Tensor

    +

    self$byte() is equivalent to self$to(torch_uint8). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +cauchy_

    +

    cauchy_(median=0, sigma=1, *, generator=NULL) -> Tensor

    +

    Fills the tensor with numbers drawn from the Cauchy distribution:

    +

    \[ f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2} \]
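A small sketch (assuming the R method keeps the median and sigma argument names shown above):

```r
library(torch)

x <- torch_empty(2, 3)
x$cauchy_(median = 0, sigma = 1)  # x now holds Cauchy(0, 1) draws
```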

    +
    +
    +

    +ceil

    +

    ceil() -> Tensor

    +

    See ?torch_ceil

    +
    +
    +

    +ceil_

    +

    ceil_() -> Tensor

    +

    In-place version of $ceil

    +
    +
    +

    +char

    +

    char(memory_format=torch_preserve_format) -> Tensor

    +

    self$char() is equivalent to self$to(torch_int8). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +cholesky

    +

    cholesky(upper=FALSE) -> Tensor

    +

    See ?torch_cholesky

    +
    +
    +

    +cholesky_inverse

    +

    cholesky_inverse(upper=FALSE) -> Tensor

    +

    See [torch_cholesky_inverse()]

    +
    +
    +

    +cholesky_solve

    +

    cholesky_solve(input2, upper=FALSE) -> Tensor

    +

    See [torch_cholesky_solve()]

    +
    +
    +

    +chunk

    +

    chunk(chunks, dim=0) -> List of Tensors

    +

    See ?torch_chunk

    +
    +
    +

    +clamp

    +

    clamp(min, max) -> Tensor

    +

    See ?torch_clamp

    +
    +
    +

    +clamp_

    +

    clamp_(min, max) -> Tensor

    +

    In-place version of $clamp

    +
    +
    +

    +clone

    +

    clone(memory_format=torch_preserve_format) -> Tensor

    +

    Returns a copy of the self tensor. The copy has the same size and data type as self.

    +
    +

    +Note:

    +

    Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.

    +
    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +conj

    +

    conj() -> Tensor

    +

    See ?torch_conj

    +
    +
    +

    +contiguous

    +

    contiguous(memory_format=torch_contiguous_format) -> Tensor

    +

    Returns a contiguous in memory tensor containing the same data as self tensor. If self tensor is already in the specified memory format, this function returns the self tensor.

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_contiguous_format.
    • +
    +
    +
    +
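A transposed view is a common source of non-contiguous tensors; for example:

```r
library(torch)

x <- torch_randn(3, 4)$t()  # a transposed view
x$is_contiguous()           # FALSE
y <- x$contiguous()         # copies into contiguous memory
y$is_contiguous()           # TRUE
```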
    +

    +copy_

    +

    copy_(src, non_blocking=FALSE) -> Tensor

    +

    Copies the elements from src into self tensor and returns self.

    +

    The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.

    +
    +

    +Arguments:

    +
      +
    • src (Tensor): the source tensor to copy from
    • +
    • non_blocking (bool): if TRUE and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
    • +
    +
    +
    +
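For example, copying across dtypes:

```r
library(torch)

x <- torch_zeros(2, 3)                      # float tensor
src <- torch_tensor(matrix(1:6, nrow = 2))  # integer tensor
x$copy_(src)  # values are converted to x's dtype during the copy
```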
    +

    +cos

    +

    cos() -> Tensor

    +

    See ?torch_cos

    +
    +
    +

    +cos_

    +

    cos_() -> Tensor

    +

    In-place version of $cos

    +
    +
    +

    +cosh

    +

    cosh() -> Tensor

    +

    See ?torch_cosh

    +
    +
    +

    +cosh_

    +

    cosh_() -> Tensor

    +

    In-place version of $cosh

    +
    +
    +

    +cpu

    +

    cpu(memory_format=torch_preserve_format) -> Tensor

    +

    Returns a copy of this object in CPU memory.

    +

    If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +cross

    +

    cross(other, dim=-1) -> Tensor

    +

    See ?torch_cross

    +
    +
    +

    +cuda

    +

    cuda(device=NULL, non_blocking=FALSE, memory_format=torch_preserve_format) -> Tensor

    +

    Returns a copy of this object in CUDA memory.

    +

    If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

    +
    +

    +Arguments:

    +
      +
    • device (torch_device): The destination GPU device. Defaults to the current CUDA device.
    • +
    • non_blocking (bool): If TRUE and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: FALSE.
    • +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +cummax

    +

    cummax(dim) -> (Tensor, Tensor)

    +

    See ?torch_cummax

    +
    +
    +

    +cummin

    +

    cummin(dim) -> (Tensor, Tensor)

    +

    See ?torch_cummin

    +
    +
    +

    +cumprod

    +

    cumprod(dim, dtype=NULL) -> Tensor

    +

    See ?torch_cumprod

    +
    +
    +

    +cumsum

    +

    cumsum(dim, dtype=NULL) -> Tensor

    +

    See ?torch_cumsum

    +
    +
    +

    +data_ptr

    +

    data_ptr() -> int

    +

    Returns the address of the first element of self tensor.

    +
    +
    +

    +deg2rad

    +

    deg2rad() -> Tensor

    +

    See [torch_deg2rad()]

    +
    +
    +

    +deg2rad_

    +

    deg2rad_() -> Tensor

    +

    In-place version of $deg2rad

    +
    +
    +

    +dense_dim

    +

    dense_dim() -> int

    +

    If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns the number of dense dimensions. Otherwise, this throws an error.

    +

    See also $sparse_dim.

    +
    +
    +

    +dequantize

    +

    dequantize() -> Tensor

    +

    Given a quantized Tensor, dequantize it and return the dequantized float Tensor.

    +
    +
    +

    +det

    +

    det() -> Tensor

    +

    See ?torch_det

    +
    +
    +

    +detach

    +

    Returns a new Tensor, detached from the current graph.

    +

    The result will never require gradient.

    +
    +

    +Note:

    +

    The returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. Important: previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also updated the original tensor; now these in-place changes trigger an error instead. For sparse tensors: in-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor no longer update the original tensor and instead trigger an error.

    +
    +
    +
    +

    +detach_

    +

    Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.

    +
    +
    +

    +device

    +

    The torch_device where this Tensor resides.

    +
    +
    +

    +diag

    +

    diag(diagonal=0) -> Tensor

    +

    See ?torch_diag

    +
    +
    +

    +diag_embed

    +

    diag_embed(offset=0, dim1=-2, dim2=-1) -> Tensor

    +

    See [torch_diag_embed()]

    +
    +
    +

    +diagflat

    +

    diagflat(offset=0) -> Tensor

    +

    See ?torch_diagflat

    +
    +
    +

    +diagonal

    +

    diagonal(offset=0, dim1=0, dim2=1) -> Tensor

    +

    See ?torch_diagonal

    +
    +
    +

    +digamma

    +

    digamma() -> Tensor

    +

    See ?torch_digamma

    +
    +
    +

    +digamma_

    +

    digamma_() -> Tensor

    +

    In-place version of $digamma

    +
    +
    +

    +dim

    +

    dim() -> int

    +

    Returns the number of dimensions of self tensor.

    +
    +
    +

    +dist

    +

    dist(other, p=2) -> Tensor

    +

    See ?torch_dist

    +
    +
    +

    +div

    +

    div(value) -> Tensor

    +

    See ?torch_div

    +
    +
    +

    +div_

    +

    div_(value) -> Tensor

    +

    In-place version of $div

    +
    +
    +

    +dot

    +

    dot(tensor2) -> Tensor

    +

    See ?torch_dot

    +
    +
    +

    +double

    +

    double(memory_format=torch_preserve_format) -> Tensor

    +

    self$double() is equivalent to self$to(torch_float64). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +eig

    +

    eig(eigenvectors=FALSE) -> (Tensor, Tensor)

    +

    See ?torch_eig

    +
    +
    +

    +element_size

    +

    element_size() -> int

    +

    Returns the size in bytes of an individual element.

    +
    +

    +Examples:

    +
    +torch_tensor(c(1))$element_size()
    +
    +
    +
    +
    +

    +eq

    +

    eq(other) -> Tensor

    +

    See ?torch_eq

    +
    +
    +

    +eq_

    +

    eq_(other) -> Tensor

    +

    In-place version of $eq

    +
    +
    +

    +equal

    +

    equal(other) -> bool

    +

    See ?torch_equal

    +
    +
    +

    +erf

    +

    erf() -> Tensor

    +

    See ?torch_erf

    +
    +
    +

    +erf_

    +

    erf_() -> Tensor

    +

    In-place version of $erf

    +
    +
    +

    +erfc

    +

    erfc() -> Tensor

    +

    See ?torch_erfc

    +
    +
    +

    +erfc_

    +

    erfc_() -> Tensor

    +

    In-place version of $erfc

    +
    +
    +

    +erfinv

    +

    erfinv() -> Tensor

    +

    See ?torch_erfinv

    +
    +
    +

    +erfinv_

    +

    erfinv_() -> Tensor

    +

    In-place version of $erfinv

    +
    +
    +

    +exp

    +

    exp() -> Tensor

    +

    See ?torch_exp

    +
    +
    +

    +exp_

    +

    exp_() -> Tensor

    +

    In-place version of $exp

    +
    +
    +

    +expand

    +

    expand(*sizes) -> Tensor

    +

    Returns a new view of the self tensor with singleton dimensions expanded to a larger size.

    +

    Passing -1 as the size for a dimension means not changing the size of that dimension.

    +

    Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.

    +

    Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.

    +
    +

    +Arguments:

    +
      +
    • sizes (torch_Size or int…): the desired expanded size
    • +
    +
    +
    +

    +Warning:

    +

    More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

    +
    +
    +

    +Examples:

    +
    +x <- torch_tensor(matrix(c(1,2,3), ncol = 1))
    +x$size()
    +x$expand(c(3, 4))
    +x$expand(c(-1, 4))  # -1 means not changing the size of that dimension
    +
    +
    +
    +
    +

    +expand_as

    +

    expand_as(other) -> Tensor

    +

    Expand this tensor to the same size as other. self$expand_as(other) is equivalent to self$expand(other$size()).

    +

    Please see $expand for more information about expand.

    +
    +

    +Arguments:

    +
      +
    • other (Tensor): The result tensor has the same size as other.
    • +
    +
    +
    +
    +

    +expm1

    +

    expm1() -> Tensor

    +

    See [torch_expm1()]

    +
    +
    +

    +expm1_

    +

    expm1_() -> Tensor

    +

    In-place version of $expm1

    +
    +
    +

    +exponential_

    +

    exponential_(lambd=1, *, generator=NULL) -> Tensor

    +

    Fills self tensor with elements drawn from the exponential distribution:

    +

    \[ f(x) = \lambda e^{-\lambda x} \]
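A small sketch (assuming the lambd argument name shown above):

```r
library(torch)

x <- torch_empty(2, 3)
x$exponential_(lambd = 2)  # draws from Exponential(rate = 2)
```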

    +
    +
    +

    +fft

    +

    fft(signal_ndim, normalized=FALSE) -> Tensor

    +

    See ?torch_fft

    +
    +
    +

    +fill_

    +

    fill_(value) -> Tensor

    +

    Fills self tensor with the specified value.
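For example:

```r
library(torch)

x <- torch_empty(2, 2)
x$fill_(3)  # every element is now 3
```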

    +
    +
    +

    +fill_diagonal_

    +

    fill_diagonal_(fill_value, wrap=FALSE) -> Tensor

    +

    Fill the main diagonal of a tensor that has at least 2-dimensions. When dims>2, all dimensions of input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor.

    +
    +

    +Arguments:

    +
      +
    • fill_value (Scalar): the fill value
    • +
    • wrap (bool): whether the diagonal is ‘wrapped’ after N columns for tall matrices.
    • +
    +
    +
    +

    +Examples:

    +
    +a <- torch_zeros(3, 3)
    +a$fill_diagonal_(5)
    +b <- torch_zeros(7, 3)
    +b$fill_diagonal_(5)
    +c <- torch_zeros(7, 3)
    +c$fill_diagonal_(5, wrap=TRUE)
    +
    +
    +
    +
    +

    +flatten

    +

    flatten(input, start_dim=0, end_dim=-1) -> Tensor

    +

    see ?torch_flatten

    +
    +
    +

    +flip

    +

    flip(dims) -> Tensor

    +

    See ?torch_flip

    +
    +
    +

    +fliplr

    +

    fliplr() -> Tensor

    +

    See ?torch_fliplr

    +
    +
    +

    +flipud

    +

    flipud() -> Tensor

    +

    See ?torch_flipud

    +
    +
    +

    +float

    +

    float(memory_format=torch_preserve_format) -> Tensor

    +

    self$float() is equivalent to self$to(torch_float32). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +floor

    +

    floor() -> Tensor

    +

    See ?torch_floor

    +
    +
    +

    +floor_

    +

    floor_() -> Tensor

    +

    In-place version of $floor

    +
    +
    +

    +floor_divide

    +

    floor_divide(value) -> Tensor

    +

    See [torch_floor_divide()]

    +
    +
    +

    +floor_divide_

    +

    floor_divide_(value) -> Tensor

    +

    In-place version of $floor_divide

    +
    +
    +

    +fmod

    +

    fmod(divisor) -> Tensor

    +

    See ?torch_fmod

    +
    +
    +

    +fmod_

    +

    fmod_(divisor) -> Tensor

    +

    In-place version of $fmod

    +
    +
    +

    +frac

    +

    frac() -> Tensor

    +

    See ?torch_frac

    +
    +
    +

    +frac_

    +

    frac_() -> Tensor

    +

    In-place version of $frac

    +
    +
    +

    +gather

    +

    gather(dim, index) -> Tensor

    +

    See ?torch_gather

    +
    +
    +

    +ge

    +

    ge(other) -> Tensor

    +

    See ?torch_ge

    +
    +
    +

    +ge_

    +

    ge_(other) -> Tensor

    +

    In-place version of $ge

    +
    +
    +

    +geometric_

    +

    geometric_(p, *, generator=NULL) -> Tensor

    +

    Fills self tensor with elements drawn from the geometric distribution:

    +

    \[ f(X=k) = p^{k - 1} (1 - p) \]

    +
    +
    +

    +geqrf

    +

    geqrf() -> (Tensor, Tensor)

    +

    See ?torch_geqrf

    +
    +
    +

    +ger

    +

    ger(vec2) -> Tensor

    +

    See ?torch_ger

    +
    +
    +

    +get_device

    +

    get_device() -> Device ordinal (Integer)

    +

    For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown.

    +
    +

    +Examples:

    +
    +x <- torch_randn(3, 4, 5, device='cuda:0')
    +x$get_device()
    +x$cpu()$get_device()  # RuntimeError: get_device is not implemented for type torch_FloatTensor
    +
    +
    +
    +
    +

    +grad

    +

    This attribute is NULL by default and becomes a Tensor the first time a call to backward computes gradients for self. The attribute will then contain the gradients computed and future calls to [backward()] will accumulate (add) gradients into it.
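A minimal illustration of this lifecycle:

```r
library(torch)

x <- torch_tensor(1, requires_grad = TRUE)
x$grad             # NULL before any call to backward()
(2 * x)$backward()
x$grad             # now a tensor holding the gradient, 2
```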

    +
    +
    +

    +gt

    +

    gt(other) -> Tensor

    +

    See ?torch_gt

    +
    +
    +

    +gt_

    +

    gt_(other) -> Tensor

    +

    In-place version of $gt

    +
    +
    +

    +half

    +

    half(memory_format=torch_preserve_format) -> Tensor

    +

    self$half() is equivalent to self$to(torch_float16). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +hardshrink

    +

    hardshrink(lambd=0.5) -> Tensor

    +

    See [nnf_hardshrink()]

    +
    +
    +

    +has_names

    +

    Is TRUE if any of this tensor’s dimensions are named. Otherwise, is FALSE.

    +
    +
    +

    +histc

    +

    histc(bins=100, min=0, max=0) -> Tensor

    +

    See ?torch_histc

    +
    +
    +

    +ifft

    +

    ifft(signal_ndim, normalized=FALSE) -> Tensor

    +

    See ?torch_ifft

    +
    +
    +

    +imag

    +

    Returns a new tensor containing imaginary values of the self tensor. The returned tensor and self share the same underlying storage.

    +
    +

    +Warning:

    +

    [imag()] is only supported for tensors with complex dtypes.

    +
    +
    +

    +Examples:

    +
    +x <- torch_randn(4, dtype=torch_cfloat())
    +x
    +x$imag
    +
    +
    +
    +
    +

    +index_add

    +

    index_add(tensor1, dim, index, tensor2) -> Tensor

    +

    Out-of-place version of $index_add_. tensor1 corresponds to self in $index_add_.

    +
    +
    +

    +index_add_

    +

    index_add_(dim, index, tensor) -> Tensor

    +

    Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i th row of tensor is added to the j th row of self.

    +

    The dim th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.

    +
    +

    +Note:

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch_backends.cudnn.deterministic = TRUE.

    +
    +
    +

    +Arguments:

    +
      +
    • dim (int): dimension along which to index
    • +
    • index (LongTensor): indices of tensor to select from
    • +
    • tensor (Tensor): the tensor containing values to add
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_ones(5, 3)
    +t <- torch_tensor(matrix(1:9, ncol = 3), dtype=torch_float())
    +index <- torch_tensor(c(1L, 4L, 3L))
    +x$index_add_(1, index, t)
    +
    +
    +
    +
    +

    +index_copy

    +

    index_copy(tensor1, dim, index, tensor2) -> Tensor

    +

    Out-of-place version of $index_copy_. tensor1 corresponds to self in $index_copy_.

    +
    +
    +

    +index_copy_

    +

    index_copy_(dim, index, tensor) -> Tensor

    +

    Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i th row of tensor is copied to the j th row of self.

    +

    The dim th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.

    +
    +

    +Arguments:

    +
      +
    • dim (int): dimension along which to index
    • +
    • index (LongTensor): indices of tensor to select from
    • +
    • tensor (Tensor): the tensor containing values to copy
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_zeros(5, 3)
    +t <- torch_tensor(matrix(1:9, ncol = 3), dtype=torch_float())
    +index <- torch_tensor(c(1, 5, 3))
    +x$index_copy_(1, index, t)
    +
    +
    +
    +
    +

    +index_fill

    +

    index_fill(tensor1, dim, index, value) -> Tensor

    +

    Out-of-place version of $index_fill_. tensor1 corresponds to self in $index_fill_.

    +
    +
    +

    +index_fill_

    +

    index_fill_(dim, index, val) -> Tensor

    +

    Fills the elements of the self tensor with value val by selecting the indices in the order given in index.

    +
    +

    +Arguments:

    +
      +
    • dim (int): dimension along which to index
    • +
    • index (LongTensor): indices of self tensor to fill in
    • +
    • val (float): the value to fill with
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_tensor(matrix(1:9, ncol = 3), dtype=torch_float())
    +index <- torch_tensor(c(1, 3), dtype = torch_long())
    +x$index_fill_(1, index, -1)
    +
    +
    +
    +
    +

    +index_put

    +

    index_put(tensor1, indices, value, accumulate=FALSE) -> Tensor

    +

    Out-of-place version of $index_put_. tensor1 corresponds to self in $index_put_.

    +
    +
    +

    +index_put_

    +

    index_put_(indices, value, accumulate=FALSE) -> Tensor

    +

    Puts values from the tensor value into the tensor self using the indices specified in indices (which is a tuple of Tensors). The expression tensor$index_put_(indices, value) is equivalent to tensor[indices] <- value. Returns self.

    +

    If accumulate is TRUE, the elements in value are added to self. If accumulate is FALSE, the behavior is undefined if indices contain duplicate elements.

    +
    +

    +Arguments:

    +
      +
    • indices (tuple of LongTensor): tensors used to index into self.
    • +
    • value (Tensor): tensor of same dtype as self.
    • +
    • accumulate (bool): whether to accumulate into self
    • +
    +
    +
    +
    +
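A minimal sketch, assuming indices is supplied as an R list of long tensors (one per dimension) and that indices are 1-based as elsewhere in the R API:

```r
library(torch)

x <- torch_zeros(3, 3)
i <- torch_tensor(c(1, 2), dtype = torch_long())  # row indices
j <- torch_tensor(c(1, 3), dtype = torch_long())  # column indices
# writes 5 at position (1, 1) and 7 at position (2, 3), in place
x$index_put_(list(i, j), torch_tensor(c(5, 7)))
```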

    +index_select

    +

    index_select(dim, index) -> Tensor

    +

    See [torch_index_select()]

    +
    +
    +

    +indices

    +

    indices() -> Tensor

    +

    If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns a view of the contained indices tensor. Otherwise, this throws an error.

    +

    See also Tensor.values.

    +
    +

    +Note:

    +

    This method can only be called on a coalesced sparse tensor. See Tensor.coalesce for details.

    +
    +
    +
    +
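A short sketch of reading back the components of a sparse COO tensor; torch_sparse_coo_tensor and $coalesce() are assumed to behave as in the main API, with 1-based indices as elsewhere in the R package:

```r
library(torch)

# a 2 x 3 sparse tensor with entries at (1, 2) and (2, 3)
i <- torch_tensor(rbind(c(1, 2), c(2, 3)), dtype = torch_int64())
v <- torch_tensor(c(3, 4))
s <- torch_sparse_coo_tensor(i, v, c(2, 3))$coalesce()
s$indices()  # view of the stored index tensor
s$values()   # view of the stored value tensor
```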

    +int

    +

    int(memory_format=torch_preserve_format) -> Tensor

    +

    self$int() is equivalent to self$to(torch_int32). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    +
    +
    +
    +

    +int_repr

    +

    int_repr() -> Tensor

    +

    Given a quantized Tensor, self$int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.

    +
    +
    +

    +inverse

    +

    inverse() -> Tensor

    +

    See ?torch_inverse

    +
    +
    +

    +irfft

    +

    irfft(signal_ndim, normalized=FALSE, onesided=TRUE, signal_sizes=NULL) -> Tensor

    +

    See ?torch_irfft

    +
    +
    +

    +is_complex

    +

    is_complex() -> bool

    +

    Returns TRUE if the data type of self is a complex data type.

    +
    +
    +

    +is_contiguous

    +

    is_contiguous(memory_format=torch_contiguous_format) -> bool

    +

    Returns TRUE if self tensor is contiguous in memory in the order specified by memory format.

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): Specifies memory allocation order. Default: torch_contiguous_format.
    +
    +
    +
    +

    +is_cuda

    +

    Is TRUE if the Tensor is stored on the GPU, FALSE otherwise.

    +
    +
    +

    +is_floating_point

    +

    is_floating_point() -> bool

    +

    Returns TRUE if the data type of self is a floating point data type.

    +
    +
    +

    +is_leaf

    +

    All Tensors that have requires_grad which is FALSE will be leaf Tensors by convention.

    +

    For Tensors that have requires_grad which is TRUE, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is NULL.

    +

    Only leaf Tensors will have their grad populated during a call to [backward()]. To get grad populated for non-leaf Tensors, you can use [retain_grad()].

    +
    +

    +Examples:

    +
    +a <- torch_rand(10, requires_grad=TRUE)
    +a$is_leaf()
    +
    +# b <- torch_rand(10, requires_grad=TRUE)$cuda()
    +# b$is_leaf()
    +# FALSE
    +# b was created by the operation that cast a cpu Tensor into a cuda Tensor
    +
    +c <- torch_rand(10, requires_grad=TRUE) + 2
    +c$is_leaf()
    +# c was created by the addition operation
    +
    +# d <- torch_rand(10)$cuda()
    +# d$is_leaf()
    +# TRUE
    +# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
    +
    +# e <- torch_rand(10)$cuda()$requires_grad_()
    +# e$is_leaf()
    +# TRUE
    +# e requires gradients and has no operations creating it
    +
    +# f <- torch_rand(10, requires_grad=TRUE, device="cuda")
    +# f$is_leaf
    +# TRUE
    +# f requires grad, has no operation creating it
    +
    +
    +
    +
    +

    +is_meta

    +

    Is TRUE if the Tensor is a meta tensor, FALSE otherwise. Meta tensors are like normal tensors, but they carry no data.

    +
    +
    +

    +is_pinned

    +

    Returns true if this tensor resides in pinned memory.

    +
    +
    +

    +is_quantized

    +

    Is TRUE if the Tensor is quantized, FALSE otherwise.

    +
    +
    +

    +is_set_to

    +

    is_set_to(tensor) -> bool

    +

    Returns TRUE if this object refers to the same THTensor object from the Torch C API as the given tensor.

    +
    +
    +

    +is_shared

    +

    Checks if tensor is in shared memory.

    +

    This is always TRUE for CUDA tensors.

    +
    +
    +

    +is_signed

    +

    is_signed() -> bool

    +

    Returns TRUE if the data type of self is a signed data type.

    +
    +
    +

    +isclose

    +

    isclose(other, rtol=1e-05, atol=1e-08, equal_nan=FALSE) -> Tensor

    +

    See ?torch_isclose

    +
    +
    +

    +isfinite

    +

    isfinite() -> Tensor

    +

    See ?torch_isfinite

    +
    +
    +

    +isinf

    +

    isinf() -> Tensor

    +

    See ?torch_isinf

    +
    +
    +

    +isnan

    +

    isnan() -> Tensor

    +

    See ?torch_isnan

    +
    +
    +

    +istft

    +

    See ?torch_istft

    +
    +
    +

    +item

    +

    item() -> number

    +

    Returns the value of this tensor as a standard R number. This only works for tensors with one element. For other cases, see $tolist.

    +

    This operation is not differentiable.

    +
    +

    +Examples:

    +
    +x <- torch_tensor(1.0)
    +x$item()
    +
    +
    +
    +
    +

    +kthvalue

    +

    kthvalue(k, dim=NULL, keepdim=FALSE) -> (Tensor, LongTensor)

    +

    See ?torch_kthvalue

    +
    +
    +

    +le

    +

    le(other) -> Tensor

    +

    See ?torch_le

    +
    +
    +

    +le_

    +

    le_(other) -> Tensor

    +

    In-place version of $le

    +
    +
    +

    +lerp

    +

    lerp(end, weight) -> Tensor

    +

    See ?torch_lerp

    +
    +
    +

    +lerp_

    +

    lerp_(end, weight) -> Tensor

    +

    In-place version of $lerp

    +
    +
    +

    +lgamma

    +

    lgamma() -> Tensor

    +

    See ?torch_lgamma

    +
    +
    +

    +lgamma_

    +

    lgamma_() -> Tensor

    +

    In-place version of $lgamma

    +
    +
    +

    +log

    +

    log() -> Tensor

    +

    See ?torch_log

    +
    +
    +

    +log10

    +

    log10() -> Tensor

    +

    See [torch_log10()]

    +
    +
    +

    +log10_

    +

    log10_() -> Tensor

    +

    In-place version of $log10

    +
    +
    +

    +log1p

    +

    log1p() -> Tensor

    +

    See [torch_log1p()]

    +
    +
    +

    +log1p_

    +

    log1p_() -> Tensor

    +

    In-place version of $log1p

    +
    +
    +

    +log2

    +

    log2() -> Tensor

    +

    See [torch_log2()]

    +
    +
    +

    +log2_

    +

    log2_() -> Tensor

    +

    In-place version of $log2

    +
    +
    +

    +log_

    +

    log_() -> Tensor

    +

    In-place version of $log

    +
    +
    +

    +log_normal_

    +

    log_normal_(mean=1, std=2, *, generator=NULL)

    +

    Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean \mu and standard deviation \sigma. Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:

    +

    \[ +f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}} +\]

    +
    +
    +
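The relationship between the parameters and the samples can be checked with a quick sketch (the parameter values are illustrative):

```r
library(torch)

x <- torch_empty(10000)
x$log_normal_(mean = 0, std = 0.5)  # underlying normal: mu = 0, sigma = 0.5
# the log of the samples should be approximately N(0, 0.5^2)
x$log()$mean()  # close to 0
x$log()$std()   # close to 0.5
```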

    +logaddexp

    +

    logaddexp(other) -> Tensor

    +

    See ?torch_logaddexp

    +
    +
    +

    +logaddexp2

    +

    logaddexp2(other) -> Tensor

    +

    See [torch_logaddexp2()]

    +
    +
    +

    +logcumsumexp

    +

    logcumsumexp(dim) -> Tensor

    +

    See ?torch_logcumsumexp

    +
    +
    +

    +logdet

    +

    logdet() -> Tensor

    +

    See ?torch_logdet

    +
    +
    +

    +logical_and

    +

    logical_and() -> Tensor

    +

    See [torch_logical_and()]

    +
    +
    +

    +logical_and_

    +

    logical_and_() -> Tensor

    +

    In-place version of $logical_and

    +
    +
    +

    +logical_not

    +

    logical_not() -> Tensor

    +

    See [torch_logical_not()]

    +
    +
    +

    +logical_not_

    +

    logical_not_() -> Tensor

    +

    In-place version of $logical_not

    +
    +
    +

    +logical_or

    +

    logical_or() -> Tensor

    +

    See [torch_logical_or()]

    +
    +
    +

    +logical_or_

    +

    logical_or_() -> Tensor

    +

    In-place version of $logical_or

    +
    +
    +

    +logical_xor

    +

    logical_xor() -> Tensor

    +

    See [torch_logical_xor()]

    +
    +
    +

    +logical_xor_

    +

    logical_xor_() -> Tensor

    +

    In-place version of $logical_xor

    +
    +
    +

    +logsumexp

    +

    logsumexp(dim, keepdim=FALSE) -> Tensor

    +

    See ?torch_logsumexp

    +
    +
    +

    +long

    +

    long(memory_format=torch_preserve_format) -> Tensor

    +

    self$long() is equivalent to self$to(torch_int64). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of the returned Tensor. Default: torch_preserve_format.
    +
    +
    +
    +

    +lstsq

    +

    lstsq(A) -> (Tensor, Tensor)

    +

    See ?torch_lstsq

    +
    +
    +

    +lt

    +

    lt(other) -> Tensor

    +

    See ?torch_lt

    +
    +
    +

    +lt_

    +

    lt_(other) -> Tensor

    +

    In-place version of $lt

    +
    +
    +

    +lu

    +

    See ?torch_lu

    +
    +
    +

    +lu_solve

    +

    lu_solve(LU_data, LU_pivots) -> Tensor

    +

    See [torch_lu_solve()]

    +
    +
    +

    +map_

    +

    map_(tensor, callable)

    +

    Applies callable for each element in self tensor and the given tensor and stores the results in self tensor. self tensor and the given tensor must be broadcastable.

    +

    The callable should have the signature:

    +

    callable(a, b) -> number

    +
    +
    +

    +masked_fill

    +

    masked_fill(mask, value) -> Tensor

    +

    Out-of-place version of $masked_fill_

    +
    +
    +

    +masked_fill_

    +

    masked_fill_(mask, value)

    +

    Fills elements of self tensor with value where mask is TRUE. The shape of mask must be broadcastable with the shape of the underlying tensor.

    +
    +

    +Arguments:

    +
      +
    • mask (BoolTensor): the boolean mask
    • +
    • value (float): the value to fill in with
    • +
    +
    +
    +
    +
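A minimal usage sketch, with the mask derived from a comparison:

```r
library(torch)

x <- torch_tensor(c(1, 2, 3, 4))
mask <- x > 2               # boolean mask: TRUE where the value exceeds 2
x$masked_fill_(mask, 0)     # in place; the masked positions become 0
```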

    +masked_scatter

    +

    masked_scatter(mask, tensor) -> Tensor

    +

    Out-of-place version of $masked_scatter_

    +
    +
    +

    +masked_scatter_

    +

    masked_scatter_(mask, source)

    +

    Copies elements from source into self tensor at positions where the mask is TRUE. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.

    +
    +

    +Arguments:

    +
      +
    • mask (BoolTensor): the boolean mask
    • +
    • source (Tensor): the tensor to copy from
    • +
    +
    +
    +

    +Note:

    +

    The mask operates on the self tensor, not on the given source tensor.

    +
    +
    +
    +
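A short sketch; note that source supplies values in order, one per TRUE entry of the mask:

```r
library(torch)

x <- torch_zeros(2, 3)
mask <- torch_tensor(rbind(c(TRUE, FALSE, TRUE),
                           c(FALSE, TRUE, FALSE)))
src <- torch_tensor(c(1, 2, 3))  # at least as many elements as TRUEs in mask
x$masked_scatter_(mask, src)     # fills the three masked positions in order
```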

    +masked_select

    +

    masked_select(mask) -> Tensor

    +

    See [torch_masked_select()]

    +
    +
    +

    +matmul

    +

    matmul(tensor2) -> Tensor

    +

    See ?torch_matmul

    +
    +
    +

    +matrix_power

    +

    matrix_power(n) -> Tensor

    +

    See [torch_matrix_power()]

    +
    +
    +

    +max

    +

    max(dim=NULL, keepdim=FALSE) -> Tensor or (Tensor, Tensor)

    +

    See ?torch_max

    +
    +
    +

    +mean

    +

    mean(dim=NULL, keepdim=FALSE) -> Tensor or (Tensor, Tensor)

    +

    See ?torch_mean

    +
    +
    +

    +median

    +

    median(dim=NULL, keepdim=FALSE) -> (Tensor, LongTensor)

    +

    See ?torch_median

    +
    +
    +

    +min

    +

    min(dim=NULL, keepdim=FALSE) -> Tensor or (Tensor, Tensor)

    +

    See ?torch_min

    +
    +
    +

    +mm

    +

    mm(mat2) -> Tensor

    +

    See ?torch_mm

    +
    +
    +

    +mode

    +

    mode(dim=NULL, keepdim=FALSE) -> (Tensor, LongTensor)

    +

    See ?torch_mode

    +
    +
    +

    +mul

    +

    mul(value) -> Tensor

    +

    See ?torch_mul

    +
    +
    +

    +mul_

    +

    mul_(value)

    +

    In-place version of $mul

    +
    +
    +

    +multinomial
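Transposing gives a common example of a non-contiguous view:

```r
library(torch)

x <- torch_randn(3, 4)
x$is_contiguous()               # TRUE: freshly allocated storage
y <- x$t()                      # transpose is a strided view, no copy
y$is_contiguous()               # FALSE
y$contiguous()$is_contiguous()  # TRUE: $contiguous() makes a compact copy
```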

    +

    multinomial(num_samples, replacement=FALSE, *, generator=NULL) -> Tensor

    +

    See ?torch_multinomial

    +
    +
    +

    +mv

    +

    mv(vec) -> Tensor

    +

    See ?torch_mv

    +
    +
    +

    +mvlgamma

    +

    mvlgamma(p) -> Tensor

    +

    See ?torch_mvlgamma

    +
    +
    +

    +mvlgamma_

    +

    mvlgamma_(p) -> Tensor

    +

    In-place version of $mvlgamma

    +
    +
    +

    +names

    +

    Stores names for each of this tensor’s dimensions.

    +

    names[idx] corresponds to the name of tensor dimension idx. Names are either a string if the dimension is named or NULL if the dimension is unnamed.

    +

    Dimension names may contain letters or underscores. Furthermore, a dimension name must be a valid variable name (i.e., it does not start with an underscore).

    +

    Tensors may not have two named dimensions with the same name.

    +
    +

    +Warning:

    +

    The named tensor API is experimental and subject to change.

    +
    +
    +
    +

    +narrow

    +

    narrow(dimension, start, length) -> Tensor

    +

    See ?torch_narrow

    +
    +

    +Examples:

    +
    +x <- torch_tensor(matrix(1:9, ncol = 3))
    +x$narrow(1, 1, 3)
    +x$narrow(1, 1, 2)
    +
    +
    +
    +
    +

    +narrow_copy

    +

    narrow_copy(dimension, start, length) -> Tensor

    +

    Same as Tensor.narrow except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling narrow_copy with dimension > self$sparse_dim() will return a copy with the relevant dense dimension narrowed, and self$shape updated accordingly.

    +
    +
    +

    +ndim

    +

    Alias for $dim()

    +
    +
    +

    +ndimension

    +

    ndimension() -> int

    +

    Alias for $dim()

    +
    +
    +

    +ne

    +

    ne(other) -> Tensor

    +

    See ?torch_ne

    +
    +
    +

    +ne_

    +

    ne_(other) -> Tensor

    +

    In-place version of $ne

    +
    +
    +

    +neg

    +

    neg() -> Tensor

    +

    See ?torch_neg

    +
    +
    +

    +neg_

    +

    neg_() -> Tensor

    +

    In-place version of $neg

    +
    +
    +

    +nelement

    +

    nelement() -> int

    +

    Alias for $numel

    +
    +
    +

    +new_empty

    +

    new_empty(size, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

    +

    Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

    +
    +

    +Arguments:

    +
      +
    • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
    • +
    • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
    • +
    • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
    +
    +
    +

    +Examples:

    +
    +tensor <- torch_ones(5)
    +tensor$new_empty(c(2, 3))
    +
    +
    +
    +
    +

    +new_full

    +

    new_full(size, fill_value, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

    +

    Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

    +
    +

    +Arguments:

    +
      +
    • fill_value (scalar): the number to fill the output tensor with.
    • +
    • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
    • +
    • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
    • +
    • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
    +
    +
    +

    +Examples:

    +
    +tensor <- torch_ones(c(2), dtype=torch_float64())
    +tensor$new_full(c(3, 4), 3.141592)
    +
    +
    +
    +
    +

    +new_ones

    +

    new_ones(size, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

    +

    Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

    +
    +

    +Arguments:

    +
      +
    • size (int…): a list, tuple, or torch_Size of integers defining the shape of the output tensor.
    • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
    • +
    • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
    • +
    • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
    +
    +
    +

    +Examples:

    +
    +tensor <- torch_tensor(c(2), dtype=torch_int32())
    +tensor$new_ones(c(2, 3))
    +
    +
    +
    +
    +

    +new_tensor

    +

    new_tensor(data, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

    +

    Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

    +
    +

    +Warning:

    +

    new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use [$requires_grad_()] or [$detach()]. If you have a numpy array and want to avoid a copy, use [torch_from_numpy()].

    +

    When data is a tensor x, [new_tensor()] reads out the data from whatever it is passed, and constructs a leaf variable. Therefore tensor$new_tensor(x) is equivalent to x$clone()$detach() and tensor$new_tensor(x, requires_grad=TRUE) is equivalent to x$clone()$detach()$requires_grad_(TRUE). The equivalents using clone() and detach() are recommended.

    +
    +
    +

    +Arguments:

    +
      +
    • data (array_like): The returned Tensor copies data.
    • +
    • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
    • +
    • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
    • +
    • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
    +
    +
    +

    +Examples:

    +
    +tensor <- torch_ones(c(2), dtype=torch_int8())
    +data <- matrix(1:4, ncol = 2)
    +tensor$new_tensor(data)
    +
    +
    +
    +
    +

    +new_zeros

    +

    new_zeros(size, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

    +

    Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch_dtype and torch_device as this tensor.

    +
    +

    +Arguments:

    +
      +
    • size (int…): a list, tuple, or torch_Size of integers defining the shape of the output tensor.
    • dtype (torch_dtype, optional): the desired type of returned tensor. Default: if NULL, same torch_dtype as this tensor.
    • +
    • device (torch_device, optional): the desired device of returned tensor. Default: if NULL, same torch_device as this tensor.
    • +
    • requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: FALSE.
    +
    +
    +

    +Examples:

    +
    +tensor <- torch_tensor(c(1), dtype=torch_float64())
    +tensor$new_zeros(c(2, 3))
    +
    +
    +
    +
    +

    +nonzero

    +

    nonzero() -> LongTensor

    +

    See ?torch_nonzero

    +
    +
    +

    +norm

    +

    See ?torch_norm

    +
    +
    +

    +normal_

    +

    normal_(mean=0, std=1, *, generator=NULL) -> Tensor

    +

    Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.

    +
    +
    +
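A quick sketch (the parameter values are illustrative):

```r
library(torch)

x <- torch_empty(10000)
x$normal_(mean = 10, std = 0.1)  # fills x in place with N(10, 0.1^2) samples
x$mean()                         # close to 10
x$std()                          # close to 0.1
```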

    +numel

    +

    numel() -> int

    +

    See ?torch_numel

    +
    +
    +

    +numpy

    +

    numpy() -> numpy.ndarray

    +

    Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa.

    +
    +
    +

    +orgqr

    +

    orgqr(input2) -> Tensor

    +

    See ?torch_orgqr

    +
    +
    +

    +ormqr

    +

    ormqr(input2, input3, left=TRUE, transpose=FALSE) -> Tensor

    +

    See ?torch_ormqr

    +
    +
    +

    +permute

    +

    permute(*dims) -> Tensor

    +

    Returns a view of the original tensor with its dimensions permuted.

    +
    +

    +Arguments:

    +
      +
    • dims (int…): The desired ordering of dimensions
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_randn(2, 3, 5)
    +x$size()
    +x$permute(c(3, 1, 2))$size()
    +
    +
    +
    +
    +

    +pin_memory

    +

    pin_memory() -> Tensor

    +

    Copies the tensor to pinned memory, if it’s not already pinned.

    +
    +
    +

    +pinverse

    +

    pinverse() -> Tensor

    +

    See ?torch_pinverse

    +
    +
    +

    +polygamma

    +

    polygamma(n) -> Tensor

    +

    See ?torch_polygamma

    +
    +
    +

    +polygamma_

    +

    polygamma_(n) -> Tensor

    +

    In-place version of $polygamma

    +
    +
    +

    +pow

    +

    pow(exponent) -> Tensor

    +

    See ?torch_pow

    +
    +
    +

    +pow_

    +

    pow_(exponent) -> Tensor

    +

    In-place version of $pow

    +
    +
    +

    +prod

    +

    prod(dim=NULL, keepdim=FALSE, dtype=NULL) -> Tensor

    +

    See ?torch_prod

    +
    +
    +

    +put_

    +

    put_(indices, tensor, accumulate=FALSE) -> Tensor

    +

    Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor.

    +

    If accumulate is TRUE, the elements in tensor are added to self. If accumulate is FALSE, the behavior is undefined if indices contain duplicate elements.

    +
    +

    +Arguments:

    +
      +
    • indices (LongTensor): the indices into self
    • +
    • tensor (Tensor): the tensor containing values to copy from
    • +
    • accumulate (bool): whether to accumulate into self
    • +
    +
    +
    +

    +Examples:

    +
    +src <- torch_tensor(matrix(3:8, ncol = 3))
    +src$put_(torch_tensor(1:2), torch_tensor(9:10))
    +
    +
    +
    +
    +

    +q_per_channel_axis

    +

    q_per_channel_axis() -> int

    +

    Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.

    +
    +
    +

    +q_per_channel_scales

    +

    q_per_channel_scales() -> Tensor

    +

    Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.

    +
    +
    +

    +q_per_channel_zero_points

    +

    q_per_channel_zero_points() -> Tensor

    +

    Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.

    +
    +
    +

    +q_scale

    +

    q_scale() -> float

    +

    Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer().

    +
    +
    +

    +q_zero_point

    +

    q_zero_point() -> int

    +

    Given a Tensor quantized by linear(affine) quantization, returns the zero_point of the underlying quantizer().

    +
    +
    +

    +qr

    +

    qr(some=TRUE) -> (Tensor, Tensor)

    +

    See ?torch_qr

    +
    +
    +

    +qscheme

    +

    qscheme() -> torch_qscheme

    +

    Returns the quantization scheme of a given QTensor.

    +
    +
    +

    +rad2deg

    +

    rad2deg() -> Tensor

    +

    See [torch_rad2deg()]

    +
    +
    +

    +rad2deg_

    +

    rad2deg_() -> Tensor

    +

    In-place version of $rad2deg

    +
    +
    +

    +random_

    +

    random_(from=0, to=NULL, *, generator=NULL) -> Tensor

    +

    Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor’s data type. However, for floating point types, if unspecified, the range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch_tensor(1, dtype=torch_double())$random_() will be uniform in [0, 2^53].

    +
    +
    +
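A minimal sketch for an integer tensor; per the description above, to is exclusive:

```r
library(torch)

x <- torch_empty(5, dtype = torch_long())
x$random_(0, 10)  # integers drawn uniformly from [0, 9]
```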

    +real

    +

    Returns a new tensor containing real values of the self tensor. The returned tensor and self share the same underlying storage.

    +
    +

    +Warning:

    +

    [real()] is only supported for tensors with complex dtypes.

    +
    +
    +

    +Examples:

    +
    +x <- torch_randn(4, dtype=torch_cfloat())
    +x
    +x$real
    +
    +
    +
    +
    +

    +reciprocal

    +

    reciprocal() -> Tensor

    +

    See ?torch_reciprocal

    +
    +
    +

    +reciprocal_

    +

    reciprocal_() -> Tensor

    +

    In-place version of $reciprocal

    +
    +
    +

    +record_stream

    +

    record_stream(stream)

    +

    Ensures that the tensor memory is not reused for another tensor until all current work queued on stream is complete.

    +
    +

    +Note:

    +

    The caching allocator is aware only of the stream where a tensor was allocated. Because of this, it correctly manages the life cycle of tensors on only one stream. But if a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor.

    +
    +
    +
    +

    +refine_names

    +

    Refines the dimension names of self according to names.

    +

    Refining is a special case of renaming that “lifts” unnamed dimensions. A NULL dim can be refined to have any name; a named dim can only be refined to have the same name.

    +

    Because named tensors can coexist with unnamed tensors, refining names gives a nice way to write named-tensor-aware code that works with both named and unnamed tensors.

    +

    names may contain up to one Ellipsis (...). The Ellipsis is expanded greedily; it is expanded in-place to fill names to the same length as self$dim() using names from the corresponding indices of self$names.

    +
    +

    +Arguments:

    +
      +
    • names (iterable of str): The desired names of the output tensor. May contain up to one Ellipsis.
    • +
    +
    +
    +

    +Examples:

    +
    +imgs <- torch_randn(32, 3, 128, 128)
    +named_imgs <- imgs$refine_names(c('N', 'C', 'H', 'W'))
    +named_imgs$names
    +
    +
    +
    +
    +

    +register_hook

    +

    Registers a backward hook.

    +

    The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:

    +

    hook(grad) -> Tensor or NULL

    +

    The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.

    +

    This function returns a handle with a method handle$remove() that removes the hook from the module.

    +
    +

    +Example

    +
    +v <- torch_tensor(c(0., 0., 0.), requires_grad=TRUE)
    +h <- v$register_hook(function(grad) grad * 2)  # double the gradient
    +v$backward(torch_tensor(c(1., 2., 3.)))
    +v$grad
    +h$remove()
    +
    +
    +
    +
    +

    +remainder

    +

    remainder(divisor) -> Tensor

    +

    See ?torch_remainder

    +
    +
    +

    +remainder_

    +

    remainder_(divisor) -> Tensor

    +

    In-place version of $remainder

    +
    +
    +

    +rename

    +

    Renames dimension names of self.

    +

    There are two main usages:

    +

    self$rename(rename_map) returns a view on the tensor that has dims renamed as specified in the mapping rename_map.

    +

    self$rename(names) returns a view on the tensor, renaming all dimensions positionally using names. Use self$rename(NULL) to drop names on a tensor.

    +

    One cannot specify both positional names and a rename_map mapping.

    +
    +

    +Examples:

    +
    +imgs <- torch_rand(2, 3, 5, 7, names=c('N', 'C', 'H', 'W'))
    +renamed_imgs <- imgs$rename(c("Batch", "Channels", "Height", "Width"))
    +
    +
    +
    +
    +

    +rename_

    +

    In-place version of $rename.

    +
    +
    +

    +renorm

    +

    renorm(p, dim, maxnorm) -> Tensor

    +

    See ?torch_renorm

    +
    +
    +

    +renorm_

    +

    renorm_(p, dim, maxnorm) -> Tensor

    +

    In-place version of $renorm

    +
    +
    +

    +repeat

    +

    repeat(*sizes) -> Tensor

    +

    Repeats this tensor along the specified dimensions.

    +

    Unlike $expand, this function copies the tensor’s data.

    +
    +

    +Arguments:

    +
      +
    • sizes (torch_Size or int…): The number of times to repeat this tensor along each
    • +
    • dimension
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_tensor(c(1, 2, 3))
    +x$`repeat`(c(4, 2))
    +x$`repeat`(c(4, 2, 1))$size()
    +
    +
    +
    +
    +

    +repeat_interleave

    +

    repeat_interleave(repeats, dim=NULL) -> Tensor

    +

    See [torch_repeat_interleave()].

    +
    +
    +

    +requires_grad

    +

    Is TRUE if gradients need to be computed for this Tensor, FALSE otherwise.

    +
    +

    +Note:

    +

    The fact that gradients need to be computed for a Tensor does not mean that the grad attribute will be populated; see is_leaf for more details.

    +
    +
    +
    +

    +requires_grad_

    +

    requires_grad_(requires_grad=TRUE) -> Tensor

    +

    Change if autograd should record operations on this tensor: sets this tensor’s requires_grad attribute in-place. Returns this tensor.

    +

    [requires_grad_()]’s main use case is to tell autograd to begin recording operations on a Tensor. If a tensor has requires_grad=FALSE (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor$requires_grad_() makes it so that autograd will begin to record operations on it.

    +
    +

    +Arguments:

    +
      +
    • requires_grad (bool): If autograd should record operations on this tensor. Default: TRUE.
    • +
    +
    +
    +

    +Examples:

    +
    +# Let's say we want to preprocess some saved weights and use
    +# the result as new weights.
    +saved_weights <- c(0.1, 0.2, 0.3, 0.25)
    +loaded_weights <- torch_tensor(saved_weights)
    +weights <- preprocess(loaded_weights)  # some function
    +weights
    +
    +# Now, start to record operations done to weights
    +weights$requires_grad_()
    +out <- weights$pow(2)$sum()
    +out$backward()
    +weights$grad
    +
    +
    +
    +
    +

    +reshape

    +

    reshape(*shape) -> Tensor

    +

    Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See $view on when it is possible to return a view.

    +

    See ?torch_reshape

    +
    +

    +Arguments:

    +
      +
    • shape (tuple of ints or int…): the desired shape
    • +
    +
    +
    +
    +
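A short sketch (assuming the torch R package is loaded): one dimension may be given as -1, in which case it is inferred from the remaining dimensions and the number of elements.

```r
library(torch)

x <- torch_arange(1, 6)   # 6 elements: 1 2 3 4 5 6
x$reshape(c(2, 3))        # 2 x 3 view of the same data
x$reshape(c(-1, 2))       # -1 is inferred as 3, giving a 3 x 2 tensor
```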

    +reshape_as

    +

    reshape_as(other) -> Tensor

    +

    Returns this tensor reshaped to the same shape as other. self$reshape_as(other) is equivalent to self$reshape(other$size()). This method returns a view if other$size() is compatible with the current shape. See $view on when it is possible to return a view.

    +

    Please see reshape for more information about reshape.

    +
    +

    +Arguments:

    +
      +
    • other (Tensor): the result tensor has the same shape as other.
    • +
    +
    +
    +
    +

    +resize_

    +

    resize_(*sizes, memory_format=torch_contiguous_format) -> Tensor

    +

    Resizes self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.

    +
    +

    +Warning:

    +

    This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use $view(), which checks for contiguity, or $reshape(), which copies data if needed. To change the size in-place with custom strides, see $set_().

    +
    +
    +

    +Arguments:

    +
      +
    • sizes (torch_Size or int…): the desired size
    • +
    • memory_format (torch_memory_format, optional): the desired memory format of Tensor. Default: torch_contiguous_format. Note that memory format of self is going to be unaffected if self$size() matches sizes.
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_tensor(matrix(1:6, ncol = 2))
    +x$resize_(c(2, 2))
    +
    +
    +
    +
    +

    +resize_as_

    +

    resize_as_(tensor, memory_format=torch_contiguous_format) -> Tensor

    +

    Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self$resize_(tensor$size()).

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of Tensor. Default: torch_contiguous_format. Note that the memory format of self is going to be unaffected if self$size() matches tensor$size().
    • +
    +
    +
    +
    +

    +retain_grad

    +

    Enables $grad attribute for non-leaf Tensors.

    +
    +
    +

    +rfft

    +

    rfft(signal_ndim, normalized=FALSE, onesided=TRUE) -> Tensor

    +

    See ?torch_rfft

    +
    +
    +

    +roll

    +

    roll(shifts, dims) -> Tensor

    +

    See ?torch_roll

    +
    +
    +

    +rot90

    +

    rot90(k, dims) -> Tensor

    +

    See [torch_rot90()]

    +
    +
    +

    +round

    +

    round() -> Tensor

    +

    See ?torch_round

    +
    +
    +

    +round_

    +

    round_() -> Tensor

    +

    In-place version of $round

    +
    +
    +

    +rsqrt

    +

    rsqrt() -> Tensor

    +

    See ?torch_rsqrt

    +
    +
    +

    +rsqrt_

    +

    rsqrt_() -> Tensor

    +

    In-place version of $rsqrt

    +
    +
    +

    +scatter

    +

    scatter(dim, index, src) -> Tensor

    +

    Out-of-place version of $scatter_

    +
    +
    +

    +scatter_

    +

    scatter_(dim, index, src) -> Tensor

    +

    Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

    +

    For a 3-D tensor, self is updated as:

    +
    self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
    +self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
    +self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
    +

    This is the reverse operation of the manner described in $gather.

    +

    self, index and src (if it is a Tensor) should have the same number of dimensions. It is also required that index$size(d) <= src$size(d) for all dimensions d, and that index$size(d) <= self$size(d) for all dimensions d != dim.

    +

    Moreover, as for $gather, the values of index must be between 1 and self$size(dim) inclusive (indexing in the R package is 1-based), and all values in a row along the specified dimension dim must be unique.

    +
    +

    +Arguments:

    +
      +
    • dim (int): the axis along which to index
    • +
    • index (LongTensor): the indices of elements to scatter; can be either empty or the same size as src. When empty, the operation returns self unchanged.
    • +
    • src (Tensor): the source element(s) to scatter, in case value is not specified
    • +
    • value (float): the source element(s) to scatter, in case src is not specified
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_rand(2, 5)
    +x
    +torch_zeros(3, 5)$scatter_(
    +        1, 
    +        torch_tensor(rbind(c(2, 3, 3, 1, 1), c(3, 1, 1, 2, 3)), dtype = torch_int64()),
    +        x
    +)
    +
    +z <- torch_zeros(2, 4)$scatter_(
    +        2, 
    +        torch_tensor(matrix(3:4, ncol = 1)), 1.23
    +)
    +
    +
    +
    +
    +

    +scatter_add

    +

    scatter_add(dim, index, src) -> Tensor

    +

    Out-of-place version of $scatter_add_

    +
    +
    +

    +scatter_add_

    +

    scatter_add_(dim, index, src) -> Tensor

    +

    Adds all values from the tensor src into self at the indices specified in the index tensor, in a similar fashion as $scatter_. For each value in src, it is added to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

    +

    For a 3-D tensor, self is updated as::

    +
    self[index[i][j][k]][j][k] += src[i][j][k]  # if dim == 0
    +self[i][index[i][j][k]][k] += src[i][j][k]  # if dim == 1
    +self[i][j][index[i][j][k]] += src[i][j][k]  # if dim == 2
    +

    self, index and src should have the same number of dimensions. It is also required that index$size(d) <= src$size(d) for all dimensions d, and that index$size(d) <= self$size(d) for all dimensions d != dim.

    +
    +

    +Note:

    +

    In some circumstances, when using the CUDA backend with cuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by putting the cuDNN backend into deterministic mode.

    +
    +
    +

    +Arguments:

    +
      +
    • dim (int): the axis along which to index
    • +
    • index (LongTensor): the indices of elements to scatter and add; can be either empty or the same size as src. When empty, the operation returns self unchanged.
    • +
    • src (Tensor): the source elements to scatter and add
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_rand(2, 5)
    +x
    +torch_ones(3, 5)$scatter_add_(1, torch_tensor(rbind(c(1, 2, 3, 1, 1), c(3, 1, 1, 2, 3)), dtype = torch_int64()), x)
    +
    +
    +
    +
    +

    +select

    +

    select(dim, index) -> Tensor

    +

    Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed.

    +
    +

    +Arguments:

    +
      +
    • dim (int): the dimension to slice
    • +
    • index (int): the index to select with
    • +
    +
    +
    +

    +Note:

    +

    select is equivalent to slicing. For example, tensor$select(1, index) is equivalent to tensor[index, ..] and tensor$select(3, index) is equivalent to tensor[ , , index].

    +
    +
    +
    +
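To make the slicing equivalence concrete, here is a small sketch (assuming 1-based dimensions, as used by the R package):

```r
library(torch)

x <- torch_randn(3, 4, 5)
# select the 2nd slice along the first dimension; same as x[2, , ]
y <- x$select(1, 2)
y$size()  # the selected dimension is removed, leaving a 4 x 5 tensor
```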

    +set_

    +

    set_(source=NULL, storage_offset=0, size=NULL, stride=NULL) -> Tensor

    +

    Sets the underlying storage, size, and strides. If source is a tensor, self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other.

    +
    +

    +Arguments:

    +
      +
    • source (Tensor or Storage): the tensor or storage to use
    • +
    • storage_offset (int, optional): the offset in the storage
    • +
    • size (torch_Size, optional): the desired size. Defaults to the size of the source.
    • +
    • stride (tuple, optional): the desired stride. Defaults to C-contiguous strides.
    • +
    +
    +
    +
    +

    +share_memory_

    +

    Moves the underlying storage to shared memory.

    +

    This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.

    +
    +
    +

    +short

    +

    short(memory_format=torch_preserve_format) -> Tensor

    +

    self$short() is equivalent to self$to(torch_int16()). See [to()].

    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of
    • +
    • returned Tensor. Default: torch_preserve_format.
    • +
    +
    +
    +
    +

    +sigmoid

    +

    sigmoid() -> Tensor

    +

    See ?torch_sigmoid

    +
    +
    +

    +sigmoid_

    +

    sigmoid_() -> Tensor

    +

    In-place version of $sigmoid

    +
    +
    +

    +sign

    +

    sign() -> Tensor

    +

    See ?torch_sign

    +
    +
    +

    +sign_

    +

    sign_() -> Tensor

    +

    In-place version of $sign

    +
    +
    +

    +sin

    +

    sin() -> Tensor

    +

    See ?torch_sin

    +
    +
    +

    +sin_

    +

    sin_() -> Tensor

    +

    In-place version of $sin

    +
    +
    +

    +sinh

    +

    sinh() -> Tensor

    +

    See ?torch_sinh

    +
    +
    +

    +sinh_

    +

    sinh_() -> Tensor

    +

    In-place version of $sinh

    +
    +
    +

    +size

    +

    size() -> torch_Size

    +

    Returns the size of the self tensor as a vector of integers, one per dimension.

    +
    +

    +Examples:

    +
    +torch_empty(3, 4, 5)$size()
    +
    +
    +
    +
    +

    +slogdet

    +

    slogdet() -> (Tensor, Tensor)

    +

    See ?torch_slogdet

    +
    +
    +

    +solve

    +

    solve(A) -> Tensor, Tensor

    +

    See ?torch_solve

    +
    +
    +

    +sort

    +

    sort(dim=-1, descending=FALSE) -> (Tensor, LongTensor)

    +

    See ?torch_sort

    +
    +
    +

    +sparse_dim

    +

    sparse_dim() -> int

    +

    If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns the number of sparse dimensions. Otherwise, this throws an error.

    +

    See also $dense_dim.

    +
    +
    +

    +sparse_mask

    +

    sparse_mask(input, mask) -> Tensor

    +

    Returns a new SparseTensor with values taken from the Tensor input, filtered by the indices of mask; the values of mask are ignored. input and mask must have the same shape.

    +
    +

    +Arguments:

    +
      +
    • input (Tensor): an input Tensor
    • +
    • mask (SparseTensor): a SparseTensor which we filter input based on its indices
    • +
    +
    +
    +
    +

    +split

    +

    See ?torch_split

    +
    +
    +

    +sqrt

    +

    sqrt() -> Tensor

    +

    See ?torch_sqrt

    +
    +
    +

    +sqrt_

    +

    sqrt_() -> Tensor

    +

    In-place version of $sqrt

    +
    +
    +

    +square

    +

    square() -> Tensor

    +

    See ?torch_square

    +
    +
    +

    +square_

    +

    square_() -> Tensor

    +

    In-place version of $square

    +
    +
    +

    +squeeze

    +

    squeeze(dim=NULL) -> Tensor

    +

    See ?torch_squeeze

    +
    +
    +

    +squeeze_

    +

    squeeze_(dim=NULL) -> Tensor

    +

    In-place version of $squeeze

    +
    +
    +

    +std

    +

    std(dim=NULL, unbiased=TRUE, keepdim=FALSE) -> Tensor

    +

    See ?torch_std

    +
    +
    +

    +stft

    +

    See ?torch_stft

    +
    +
    +

    +storage

    +

    storage() -> torch_Storage

    +

    Returns the underlying storage.

    +
    +
    +

    +storage_offset

    +

    storage_offset() -> int

    +

    Returns self tensor’s offset in the underlying storage in terms of number of storage elements (not bytes).

    +
    +

    +Examples:

    +
    +x <- torch_tensor(c(1, 2, 3, 4, 5))
    +x$storage_offset()
    +x[3:N]$storage_offset()
    +
    +
    +
    +
    +

    +storage_type

    +

    storage_type() -> type

    +

    Returns the type of the underlying storage.

    +
    +
    +

    +stride

    +

    stride(dim) -> tuple or int

    +

    Returns the stride of self tensor.

    +

    Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim.

    +
    +

    +Arguments:

    +
      +
    • dim (int, optional): the desired dimension in which stride is required
    • +
    +
    +
    +

    +Examples:

    +
    +x <- torch_tensor(matrix(1:10, nrow = 2))
    +x$stride()
    +x$stride(1)
    +x$stride(-1)
    +
    +
    +
    +
    +

    +sub

    +

    sub(other, alpha=1) -> Tensor

    +

    Subtracts a scalar or tensor from self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used.

    +

    When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.

    +
    +
    +
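A minimal sketch of the alpha scaling (assuming sub accepts an alpha argument, as in the signature above):

```r
library(torch)

a <- torch_tensor(c(10, 20, 30))
b <- torch_tensor(c(1, 2, 3))
# each element of b is scaled by alpha before subtraction:
# 10 - 2*1, 20 - 2*2, 30 - 2*3
a$sub(b, alpha = 2)
```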

    +sub_

    +

    sub_(other, alpha=1) -> Tensor

    +

    In-place version of $sub

    +
    +
    +

    +sum

    +

    sum(dim=NULL, keepdim=FALSE, dtype=NULL) -> Tensor

    +

    See ?torch_sum

    +
    +
    +

    +sum_to_size

    +

    sum_to_size(*size) -> Tensor

    +

    Sums this tensor down to size. size must be broadcastable to this tensor's size.

    +
    +

    +Arguments:

    +
      +
    • size (int…): a sequence of integers defining the shape of the output tensor.
    • +
    +
    +
    +
    +
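For instance (a sketch: sum_to_size reduces exactly the dimensions that broadcasting would have expanded):

```r
library(torch)

x <- torch_randn(2, 3)
# summing down to shapes that x broadcasts from:
x$sum_to_size(c(1, 3))$size()  # first dimension summed away, leaving 1 3
x$sum_to_size(c(2, 1))$size()  # second dimension summed away, leaving 2 1
```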

    +svd

    +

    svd(some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

    +

    See ?torch_svd

    +
    +
    +

    +symeig

    +

    symeig(eigenvectors=FALSE, upper=TRUE) -> (Tensor, Tensor)

    +

    See ?torch_symeig

    +
    +
    +

    +t

    +

    t() -> Tensor

    +

    See ?torch_t

    +
    +
    +

    +t_

    +

    t_() -> Tensor

    +

    In-place version of $t

    +
    +
    +

    +take

    +

    take(indices) -> Tensor

    +

    See ?torch_take

    +
    +
    +

    +tan

    +

    tan() -> Tensor

    +

    See ?torch_tan

    +
    +
    +

    +tan_

    +

    tan_() -> Tensor

    +

    In-place version of $tan

    +
    +
    +

    +tanh

    +

    tanh() -> Tensor

    +

    See ?torch_tanh

    +
    +
    +

    +tanh_

    +

    tanh_() -> Tensor

    +

    In-place version of $tanh

    +
    +
    +

    +to

    +

    to(...) -> Tensor

    +

    Performs Tensor dtype and/or device conversion. A torch_dtype and torch_device are inferred from the arguments of self$to(...).

    +
    +

    +Note:

    +

    If the self Tensor already has the correct torch_dtype and torch_device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch_dtype and torch_device.

    +

    Here are the ways to call to:

    +

    to(dtype, non_blocking=FALSE, copy=FALSE, memory_format=torch_preserve_format) -> Tensor

    +

    Returns a Tensor with the specified dtype

    +
    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of returned Tensor. Default: torch_preserve_format.
    • +
    +

    to(device=NULL, dtype=NULL, non_blocking=FALSE, copy=FALSE, memory_format=torch_preserve_format) -> Tensor

    +

    Returns a Tensor with the specified device and (optional) dtype. If dtype is NULL it is inferred to be self$dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor.

    +

    When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

    +
    +
    +

    +Arguments:

    +
      +
    • memory_format (torch_memory_format, optional): the desired memory format of returned Tensor. Default: torch_preserve_format.
    • +
    +

    to(other, non_blocking=FALSE, copy=FALSE) -> Tensor

    +

    Returns a Tensor with the same torch_dtype and torch_device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor.

    +

    When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

    +
    +
    +

    +Examples:

    +
    +tensor <- torch_randn(2, 2)  # Initially dtype=float32, device=cpu
    +tensor$to(dtype = torch_float64())
    +
    +other <- torch_randn(1, dtype=torch_float64())
    +tensor$to(other = other, non_blocking=TRUE)
    +
    +
    +
    +
    +

    +to_mkldnn

    +

    to_mkldnn() -> Tensor

    +

    Returns a copy of the tensor in torch_mkldnn layout.

    +
    +
    +

    +to_sparse

    +

    to_sparse(sparseDims) -> Tensor

    +

    Returns a sparse copy of the tensor. torch supports sparse tensors in coordinate (COO) format.

    +
    +

    +Arguments:

    +
      +
    • sparseDims (int, optional): the number of sparse dimensions to include in the new sparse tensor
    • +
    +
    +
    +
    +
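A short sketch tying to_sparse to sparse_dim and values (method names as listed in this reference):

```r
library(torch)

d <- torch_tensor(matrix(c(0, 0, 1, 2), nrow = 2))
s <- d$to_sparse()
s$sparse_dim()  # both dimensions are sparse here
s$values()      # the non-zero entries of d
```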

    +tolist

    +

    tolist() -> list or number

    +

    Returns the tensor as a (nested) list. For scalars, a plain R number is returned, just like with $item. Tensors are automatically moved to the CPU first if necessary.

    +

    This operation is not differentiable.

    +
    +
    +

    +topk

    +

    topk(k, dim=NULL, largest=TRUE, sorted=TRUE) -> (Tensor, LongTensor)

    +

    See ?torch_topk

    +
    +
    +

    +trace

    +

    trace() -> Tensor

    +

    See ?torch_trace

    +
    +
    +

    +transpose

    +

    transpose(dim0, dim1) -> Tensor

    +

    See ?torch_transpose

    +
    +
    +

    +transpose_

    +

    transpose_(dim0, dim1) -> Tensor

    +

    In-place version of $transpose

    +
    +
    +

    +triangular_solve

    +

    triangular_solve(A, upper=TRUE, transpose=FALSE, unitriangular=FALSE) -> (Tensor, Tensor)

    +

    See [torch_triangular_solve()]

    +
    +
    +

    +tril

    +

    tril(k=0) -> Tensor

    +

    See ?torch_tril

    +
    +
    +

    +tril_

    +

    tril_(k=0) -> Tensor

    +

    In-place version of $tril

    +
    +
    +

    +triu

    +

    triu(k=0) -> Tensor

    +

    See ?torch_triu

    +
    +
    +

    +triu_

    +

    triu_(k=0) -> Tensor

    +

    In-place version of $triu

    +
    +
    +

    +true_divide

    +

    true_divide(value) -> Tensor

    +

    See [torch_true_divide()]

    +
    +
    +
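A hedged sketch: unlike truncating integer division, true_divide always produces a floating-point result, even for integer inputs.

```r
library(torch)

a <- torch_tensor(c(1L, 2L, 3L))  # integer tensor
res <- a$true_divide(2)
res$dtype  # a floating-point dtype, even though a is integer
res        # 0.5, 1.0, 1.5
```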

    +true_divide_

    +

    true_divide_(value) -> Tensor

    +

    In-place version of $true_divide

    +
    +
    +

    +trunc

    +

    trunc() -> Tensor

    +

    See ?torch_trunc

    +
    +
    +

    +trunc_

    +

    trunc_() -> Tensor

    +

    In-place version of $trunc

    +
    +
    +

    +type

    +

    type(dtype=NULL, non_blocking=FALSE, **kwargs) -> str or Tensor

    +

    Returns the type if dtype is not provided, else casts this object to the specified type.

    +

    If this is already of the correct type, no copy is performed and the original object is returned.

    +
    +

    +Arguments:

    +
      +
    • dtype (type or string): the desired type
    • +
    • non_blocking (bool): if TRUE, and the source is in pinned memory and the destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
    • +
    • **kwargs: for compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
    • +
    +
    +
    +
    +

    +type_as

    +

    type_as(tensor) -> Tensor

    +

    Returns this tensor cast to the type of the given tensor.

    +

    This is a no-op if the tensor is already of the correct type. This is equivalent to self$type(tensor$type())

    +
    +

    +Arguments:

    +
      +
    • tensor (Tensor): the tensor which has the desired type
    • +
    +
    +
    +
    +
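A small sketch of the cast (assuming the usual dtype accessor on tensors):

```r
library(torch)

x <- torch_tensor(c(1L, 2L, 3L))   # integer tensor
y <- torch_randn(3)                # floating-point tensor
x$type_as(y)$dtype                 # now matches y's dtype
```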

    +unbind

    +

    unbind(dim=0) -> seq

    +

    See ?torch_unbind

    +
    +
    +

    +unflatten

    +

    Unflattens the named dimension dim, viewing it in the shape specified by namedshape.

    +
    +

    +Arguments:

    +
      +
    • namedshape: (iterable of (name, size) tuples).
    • +
    +
    +
    +
    +

    +unfold

    +

    unfold(dimension, size, step) -> Tensor

    +

    Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.

    +

    The step between two slices is given by step.

    +

    If sizedim is the size of dimension dimension for self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1.

    +

    An additional dimension of size size is appended in the returned tensor.

    +
    +

    +Arguments:

    +
      +
    • dimension (int): dimension in which unfolding happens
    • +
    • size (int): the size of each slice that is unfolded
    • +
    • step (int): the step between each slice
    • +
    +
    +
    +
    +
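The size formula above can be checked with a small example (a sketch; dimensions are 1-based in the R package, and the division in the formula is integer division):

```r
library(torch)

x <- torch_arange(1, 7)   # 1 2 3 4 5 6 7, so sizedim = 7
# slices of size 2, step 1: (7 - 2) / 1 + 1 = 6 slices
x$unfold(1, 2, 1)$size()  # 6 2
# slices of size 2, step 2: floor((7 - 2) / 2) + 1 = 3 slices
x$unfold(1, 2, 2)$size()  # 3 2
```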

    +uniform_

    +

    uniform_(from=0, to=1) -> Tensor

    +

    Fills self tensor with numbers sampled from the continuous uniform distribution:

    +

    \[ +P(x) = \dfrac{1}{\text{to} - \text{from}} +\]

    +
    +
    +
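For example (a sketch), filling an uninitialized tensor in place:

```r
library(torch)

x <- torch_empty(2, 3)
x$uniform_(0, 10)  # every element is now drawn from U(0, 10)
```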

    +unique

    +

    Returns the unique elements of the input tensor.

    +

    See ?torch_unique

    +
    +
    +

    +unique_consecutive

    +

    Eliminates all but the first element from every consecutive group of equivalent elements.

    +

    See [torch_unique_consecutive()]

    +
    +
    +

    +unsqueeze

    +

    unsqueeze(dim) -> Tensor

    +

    See ?torch_unsqueeze

    +
    +
    +

    +unsqueeze_

    +

    unsqueeze_(dim) -> Tensor

    +

    In-place version of $unsqueeze

    +
    +
    +

    +values

    +

    values() -> Tensor

    +

    If self is a sparse COO tensor (i.e., with torch_sparse_coo layout), this returns a view of the contained values tensor. Otherwise, this throws an error.

    +
    +

    +Note:

    +

    This method can only be called on a coalesced sparse tensor. See Tensor$coalesce for details.

    +
    +
    +
    +

    +var

    +

    var(dim=NULL, unbiased=TRUE, keepdim=FALSE) -> Tensor

    +

    See ?torch_var

    +
    +
    +

    +view

    +

    view(*shape) -> Tensor

    +

    Returns a new tensor with the same data as the self tensor but of a different shape.

    +

    The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, ..., d+k that satisfy the following contiguity-like condition for all i = d, ..., d+k-1:

    +

    \[ +\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1] +\]

    +

    Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous). When it is unclear whether a view can be performed, it is advisable to use $reshape, which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous) otherwise.

    +
    +

    +Arguments:

    +
      +
    • shape (torch_Size or int…): the desired size
    • +
    +
    +
    +
    +
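A short sketch of compatible views (the data is shared, so writing through one view is visible in the other):

```r
library(torch)

x <- torch_randn(4, 4)
y <- x$view(c(16))      # flattened view of the same 16 elements
z <- x$view(c(-1, 8))   # -1 is inferred as 2
z$size()                # 2 8

y[1] <- 0               # modifies x as well, since the data is shared
x[1, 1]                 # now 0
```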

    +view_as

    +

    view_as(other) -> Tensor

    +

    View this tensor as the same size as other. self$view_as(other) is equivalent to self$view(other$size()).

    +

    Please see $view for more information about view.

    +
    +

    +Arguments:

    +
      +
    • other (Tensor): the result tensor has the same size as other.
    • +
    +
    +
    +
    +

    +where

    +

    where(condition, y) -> Tensor

    +

    self$where(condition, y) is equivalent to torch_where(condition, self, y). See ?torch_where

    +
    +
    +
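A hedged sketch of the element-wise selection (condition must be a boolean tensor of a broadcast-compatible shape):

```r
library(torch)

x <- torch_tensor(c(-1, 0, 2))
y <- torch_zeros(3)
# keep elements of x where x > 0, otherwise take from y:
x$where(x > 0, y)
```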

    +zero_

    +

    zero_() -> Tensor

    +

    Fills self tensor with zeros.

    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
diff --git a/static/docs/dev/articles/tensor/index_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/tensor/index_files/accessible-code-block-0.0.1/empty-anchor.js
new file mode 100644
index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042
--- /dev/null
+++ b/static/docs/dev/articles/tensor/index_files/accessible-code-block-0.0.1/empty-anchor.js
@@ -0,0 +1,15 @@
+// Hide empty <a> tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786)
+// v0.0.1
+// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020.
+
+document.addEventListener('DOMContentLoaded', function() {
+  const codeList = document.getElementsByClassName("sourceCode");
+  for (var i = 0; i < codeList.length; i++) {
+    var linkList = codeList[i].getElementsByTagName('a');
+    for (var j = 0; j < linkList.length; j++) {
+      if (linkList[j].innerHTML === "") {
+        linkList[j].setAttribute('aria-hidden', 'true');
+      }
+    }
+  }
+});
diff --git a/static/docs/dev/articles/using-autograd.html b/static/docs/dev/articles/using-autograd.html
new file mode 100644
index 0000000000000000000000000000000000000000..7b2e4fe8d61dbab74a0298b0944a6a41c866b03a
--- /dev/null
+++ b/static/docs/dev/articles/using-autograd.html

+Using autograd • torch
    +
    + + + + +
    +
    + + + + + +

    So far, all we’ve been using from torch is tensors, but we’ve been performing all calculations ourselves – computing the predictions, the loss, the gradients (and thus, the necessary updates to the weights), and the new weight values. In this chapter, we’ll make a significant change: namely, we spare ourselves the cumbersome calculation of gradients and have torch do it for us.

    +

    Before we see that in action, let’s get some more background.

    +
    +

    +Automatic differentiation with autograd

    +

    Torch uses a module called autograd to record operations performed on tensors, and store what has to be done to obtain the respective gradients. These actions are stored as functions, and those functions are applied in order when the gradient of the output (normally, the loss) with respect to those tensors is calculated: starting from the output node and propagating gradients back through the network. This is a form of reverse mode automatic differentiation.

    +

    As users, we can see a bit of this implementation. As a prerequisite for this “recording” to happen, tensors have to be created with requires_grad = TRUE. E.g.

    +
    +x <- torch_ones(2,2, requires_grad = TRUE)
    +
    +

    To be clear, this is a tensor with respect to which gradients have to be calculated – normally, a tensor representing a weight or a bias, not the input data 1. If we now perform some operation on that tensor, assigning the result to y

    +
    +y <- x$mean()
    +
    +

    we find that y now has a non-empty grad_fn that tells torch how to compute the gradient of y with respect to x:

    +
    +y$grad_fn
    +#> MeanBackward0
    +
    +

    Actual computation of gradients is triggered by calling backward() on the output tensor.

    +
    +y$backward()
    +
    +

    Once that has executed, x has a non-empty field grad that stores the gradient of y with respect to x:

    +
    +x$grad
    +#> torch_tensor 
    +#>  0.2500  0.2500
    +#>  0.2500  0.2500
    +#> [ CPUFloatType{2,2} ]
    +
    +
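
As a quick sanity check (a sketch, assuming the torch package is installed and loaded), these values match what we’d compute by hand: the mean of n elements sends a gradient of 1/n to each of them, so with four elements every entry is 0.25.

```r
library(torch)

x <- torch_ones(2, 2, requires_grad = TRUE)
y <- x$mean()
y$backward()

# the mean of n elements contributes a gradient of 1/n to each,
# so with x$numel() == 4, every cell of the gradient is 0.25
analytic <- torch_ones_like(x) / x$numel()
x$grad  # matches `analytic`: 0.25 in every cell
```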

    With a longer chain of computations, we can peek at how torch builds up a graph of backward operations.

    +

    Here is a slightly more complex example. We call retain_grad() on y and z just for demonstration purposes; by default, intermediate gradients – while of course they have to be computed – aren’t stored, in order to save memory.

    +
    +x1 <- torch_ones(2,2, requires_grad = TRUE)
    +x2 <- torch_tensor(1.1, requires_grad = TRUE)
    +y <- x1 * (x2 + 2)
    +y$retain_grad()
    +z <- y$pow(2) * 3
    +z$retain_grad()
    +out <- z$mean()
    +
    +

    Starting from out$grad_fn, we can follow the graph all the way back to the leaf nodes:

    +
    +# how to compute the gradient for mean, the last operation executed
    +out$grad_fn
    +#> MeanBackward0
    +# how to compute the gradient for the multiplication by 3 in z = y$pow(2) * 3
    +out$grad_fn$next_functions
    +#> [[1]]
    +#> MulBackward1
    +# how to compute the gradient for pow in z = y$pow(2) * 3
    +out$grad_fn$next_functions[[1]]$next_functions
    +#> [[1]]
    +#> PowBackward0
    +# how to compute the gradient for the multiplication in y = x1 * (x2 + 2)
    +out$grad_fn$next_functions[[1]]$next_functions[[1]]$next_functions
    +#> [[1]]
    +#> MulBackward0
    +# how to compute the gradient for the two branches of y = x * (x + 2),
    +# where the left branch is a leaf node (AccumulateGrad for x1)
    +out$grad_fn$next_functions[[1]]$next_functions[[1]]$next_functions[[1]]$next_functions
    +#> [[1]]
    +#> torch::autograd::AccumulateGrad
    +#> [[2]]
    +#> AddBackward1
    +# here we arrive at the other leaf node (AccumulateGrad for x2)
    +out$grad_fn$next_functions[[1]]$next_functions[[1]]$next_functions[[1]]$next_functions[[2]]$next_functions
    +#> [[1]]
    +#> torch::autograd::AccumulateGrad
    +
    +

    After calling out$backward(), all tensors in the graph will have their respective gradients created. Without our calls to retain_grad above, z$grad and y$grad would be empty:

    +
    +out$backward()
    +z$grad
    +#> torch_tensor 
    +#>  0.2500  0.2500
    +#>  0.2500  0.2500
    +#> [ CPUFloatType{2,2} ]
    +y$grad
    +#> torch_tensor 
    +#>  4.6500  4.6500
    +#>  4.6500  4.6500
    +#> [ CPUFloatType{2,2} ]
    +x2$grad
    +#> torch_tensor 
    +#>  18.6000
    +#> [ CPUFloatType{1} ]
    +x1$grad
    +#> torch_tensor 
    +#>  14.4150  14.4150
    +#>  14.4150  14.4150
    +#> [ CPUFloatType{2,2} ]
    +
    +
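
One more detail worth knowing before we put autograd into a training loop: gradients accumulate across calls to backward(). A minimal sketch (again assuming torch is loaded) of why the loop below has to call grad$zero_() after every update:

```r
library(torch)

x <- torch_ones(2, requires_grad = TRUE)

x$sum()$backward()
x$grad            # 1 1: the gradient of sum() is 1 per element

x$sum()$backward()
x$grad            # 2 2: the second backward() added onto the stored gradient

x$grad$zero_()    # reset in place before the next pass
```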

    Thus acquainted with autograd, we’re ready to modify our example.

    +
    +
    +

    +The simple network, now using autograd

    +

    Thanks to a single new line calling loss$backward(), the lines that did manual backpropagation are now gone:

    +
    +### generate training data -----------------------------------------------------
    +# input dimensionality (number of input features)
    +d_in <- 3
    +# output dimensionality (number of predicted features)
    +d_out <- 1
    +# number of observations in training set
    +n <- 100
    +# create random data
    +x <- torch_randn(n, d_in)
    +y <- x[, 1]*0.2 - x[, 2]*1.3 - x[, 3]*0.5 + torch_randn(n)
    +y <- y$unsqueeze(dim = 2)
    +### initialize weights ---------------------------------------------------------
    +# dimensionality of hidden layer
    +d_hidden <- 32
    +# weights connecting input to hidden layer
    +w1 <- torch_randn(d_in, d_hidden, requires_grad = TRUE)
    +# weights connecting hidden to output layer
    +w2 <- torch_randn(d_hidden, d_out, requires_grad = TRUE)
    +# hidden layer bias
    +b1 <- torch_zeros(1, d_hidden, requires_grad = TRUE)
    +# output layer bias
    +b2 <- torch_zeros(1, d_out,requires_grad = TRUE)
    +### network parameters ---------------------------------------------------------
    +learning_rate <- 1e-4
    +### training loop --------------------------------------------------------------
    +for (t in 1:200) {
    +
    +    ### -------- Forward pass -------- 
    +    y_pred <- x$mm(w1)$add(b1)$clamp(min = 0)$mm(w2)$add(b2)
    +    ### -------- compute loss -------- 
    +    loss <- (y_pred - y)$pow(2)$mean()
    +    if (t %% 10 == 0) cat(t, as_array(loss), "\n")
    +    ### -------- Backpropagation -------- 
    +    # compute the gradient of loss with respect to all tensors with requires_grad = TRUE
    +    loss$backward()
    + 
    +    ### -------- Update weights -------- 
    +    
    +    # Wrap in with_no_grad() because this is a part we DON'T want to record for automatic gradient computation
    +    with_no_grad({
    +      
    +      w1$sub_(learning_rate * w1$grad)
    +      w2$sub_(learning_rate * w2$grad)
    +      b1$sub_(learning_rate * b1$grad)
    +      b2$sub_(learning_rate * b2$grad)
    +      
    +      # Zero the gradients after every pass, because they'd accumulate otherwise
    +      w1$grad$zero_()
    +      w2$grad$zero_()
    +      b1$grad$zero_()
    +      b2$grad$zero_()
    +    
    +    })
    +    
    +}
    +#> 10 28.23399 
    +#> 20 25.57913 
    +#> 30 23.29141 
    +#> 40 21.32386 
    +#> 50 19.62816 
    +#> 60 18.16152 
    +#> 70 16.88838 
    +#> 80 15.77924 
    +#> 90 14.80951 
    +#> 100 13.95841 
    +#> 110 13.20845 
    +#> 120 12.54546 
    +#> 130 11.95689 
    +#> 140 11.43233 
    +#> 150 10.96239 
    +#> 160 10.54092 
    +#> 170 10.16278 
    +#> 180 9.821123 
    +#> 190 9.51112 
    +#> 200 9.228704
    +
    +

    We still compute the forward pass manually, and we still update the weights manually. In the last two chapters of this section, we’ll see how these parts of the logic can be made more modular and reusable, as well.
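
As a small taste of that modularity (a sketch using a hypothetical sgd_step helper – this is not torch’s actual optimizer API), the per-parameter update can already be factored out:

```r
library(torch)

# hypothetical helper: one SGD step plus gradient reset, for any parameter list
sgd_step <- function(params, lr) {
  with_no_grad({
    for (p in params) {
      p$sub_(lr * p$grad)  # in-place update
      p$grad$zero_()       # reset so gradients don't accumulate
    }
  })
}

# after loss$backward(), one call would replace the eight update lines above:
# sgd_step(list(w1, w2, b1, b2), learning_rate)
```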

    +
    +
    +
    +
      +
    1. Unless we want to change the data, as in adversarial example generation↩︎

    2. +
    +
    +
    + + + +
    + + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/articles/using-autograd_files/accessible-code-block-0.0.1/empty-anchor.js b/static/docs/dev/articles/using-autograd_files/accessible-code-block-0.0.1/empty-anchor.js new file mode 100644 index 0000000000000000000000000000000000000000..ca349fd6a570108bde9d7daace534cd651c5f042 --- /dev/null +++ b/static/docs/dev/articles/using-autograd_files/accessible-code-block-0.0.1/empty-anchor.js @@ -0,0 +1,15 @@ +// Hide empty tag within highlighted CodeBlock for screen reader accessibility (see https://github.com/jgm/pandoc/issues/6352#issuecomment-626106786) --> +// v0.0.1 +// Written by JooYoung Seo (jooyoung@psu.edu) and Atsushi Yasumoto on June 1st, 2020. + +document.addEventListener('DOMContentLoaded', function() { + const codeList = document.getElementsByClassName("sourceCode"); + for (var i = 0; i < codeList.length; i++) { + var linkList = codeList[i].getElementsByTagName('a'); + for (var j = 0; j < linkList.length; j++) { + if (linkList[j].innerHTML === "") { + linkList[j].setAttribute('aria-hidden', 'true'); + } + } + } +}); diff --git a/static/docs/dev/authors.html b/static/docs/dev/authors.html new file mode 100644 index 0000000000000000000000000000000000000000..db7f307b67e0db9e8b7eda9a99ff5a60fb3089b7 --- /dev/null +++ b/static/docs/dev/authors.html @@ -0,0 +1,238 @@ + + + + + + + + +Authors • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
      +
    • +

      Daniel Falbel. Author, maintainer, copyright holder. +

      +
    • +
    • +

      Javier Luraschi. Author, copyright holder. +

      +
    • +
    • +

      Dmitriy Selivanov. Contributor. +

      +
    • +
    • +

      Athos Damiani. Contributor. +

      +
    • +
    • +

      RStudio. Copyright holder. +

      +
    • +
    + +
    + +
    + + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/bootstrap-toc.css b/static/docs/dev/bootstrap-toc.css new file mode 100644 index 0000000000000000000000000000000000000000..5a859415c1f7eacfd94920968bc910e2f1f1427e --- /dev/null +++ b/static/docs/dev/bootstrap-toc.css @@ -0,0 +1,60 @@ +/*! + * Bootstrap Table of Contents v0.4.1 (http://afeld.github.io/bootstrap-toc/) + * Copyright 2015 Aidan Feldman + * Licensed under MIT (https://github.com/afeld/bootstrap-toc/blob/gh-pages/LICENSE.md) */ + +/* modified from https://github.com/twbs/bootstrap/blob/94b4076dd2efba9af71f0b18d4ee4b163aa9e0dd/docs/assets/css/src/docs.css#L548-L601 */ + +/* All levels of nav */ +nav[data-toggle='toc'] .nav > li > a { + display: block; + padding: 4px 20px; + font-size: 13px; + font-weight: 500; + color: #767676; +} +nav[data-toggle='toc'] .nav > li > a:hover, +nav[data-toggle='toc'] .nav > li > a:focus { + padding-left: 19px; + color: #563d7c; + text-decoration: none; + background-color: transparent; + border-left: 1px solid #563d7c; +} +nav[data-toggle='toc'] .nav > .active > a, +nav[data-toggle='toc'] .nav > .active:hover > a, +nav[data-toggle='toc'] .nav > .active:focus > a { + padding-left: 18px; + font-weight: bold; + color: #563d7c; + background-color: transparent; + border-left: 2px solid #563d7c; +} + +/* Nav: second level (shown on .active) */ +nav[data-toggle='toc'] .nav .nav { + display: none; /* Hide by default, but at >768px, show it */ + padding-bottom: 10px; +} +nav[data-toggle='toc'] .nav .nav > li > a { + padding-top: 1px; + padding-bottom: 1px; + padding-left: 30px; + font-size: 12px; + font-weight: normal; +} +nav[data-toggle='toc'] .nav .nav > li > a:hover, +nav[data-toggle='toc'] .nav .nav > li > a:focus { + padding-left: 29px; +} +nav[data-toggle='toc'] .nav .nav > .active > a, +nav[data-toggle='toc'] .nav .nav > .active:hover > a, +nav[data-toggle='toc'] .nav .nav > .active:focus > a { + padding-left: 28px; + font-weight: 500; +} + +/* from 
https://github.com/twbs/bootstrap/blob/e38f066d8c203c3e032da0ff23cd2d6098ee2dd6/docs/assets/css/src/docs.css#L631-L634 */ +nav[data-toggle='toc'] .nav > .active > ul { + display: block; +} diff --git a/static/docs/dev/bootstrap-toc.js b/static/docs/dev/bootstrap-toc.js new file mode 100644 index 0000000000000000000000000000000000000000..1cdd573b20f53b3ebe31c021e154c4338ca456af --- /dev/null +++ b/static/docs/dev/bootstrap-toc.js @@ -0,0 +1,159 @@ +/*! + * Bootstrap Table of Contents v0.4.1 (http://afeld.github.io/bootstrap-toc/) + * Copyright 2015 Aidan Feldman + * Licensed under MIT (https://github.com/afeld/bootstrap-toc/blob/gh-pages/LICENSE.md) */ +(function() { + 'use strict'; + + window.Toc = { + helpers: { + // return all matching elements in the set, or their descendants + findOrFilter: function($el, selector) { + // http://danielnouri.org/notes/2011/03/14/a-jquery-find-that-also-finds-the-root-element/ + // http://stackoverflow.com/a/12731439/358804 + var $descendants = $el.find(selector); + return $el.filter(selector).add($descendants).filter(':not([data-toc-skip])'); + }, + + generateUniqueIdBase: function(el) { + var text = $(el).text(); + var anchor = text.trim().toLowerCase().replace(/[^A-Za-z0-9]+/g, '-'); + return anchor || el.tagName.toLowerCase(); + }, + + generateUniqueId: function(el) { + var anchorBase = this.generateUniqueIdBase(el); + for (var i = 0; ; i++) { + var anchor = anchorBase; + if (i > 0) { + // add suffix + anchor += '-' + i; + } + // check if ID already exists + if (!document.getElementById(anchor)) { + return anchor; + } + } + }, + + generateAnchor: function(el) { + if (el.id) { + return el.id; + } else { + var anchor = this.generateUniqueId(el); + el.id = anchor; + return anchor; + } + }, + + createNavList: function() { + return $(''); + }, + + createChildNavList: function($parent) { + var $childList = this.createNavList(); + $parent.append($childList); + return $childList; + }, + + generateNavEl: function(anchor, text) { + var 
$a = $(''); + $a.attr('href', '#' + anchor); + $a.text(text); + var $li = $('
  • '); + $li.append($a); + return $li; + }, + + generateNavItem: function(headingEl) { + var anchor = this.generateAnchor(headingEl); + var $heading = $(headingEl); + var text = $heading.data('toc-text') || $heading.text(); + return this.generateNavEl(anchor, text); + }, + + // Find the first heading level (`

    `, then `

    `, etc.) that has more than one element. Defaults to 1 (for `

    `). + getTopLevel: function($scope) { + for (var i = 1; i <= 6; i++) { + var $headings = this.findOrFilter($scope, 'h' + i); + if ($headings.length > 1) { + return i; + } + } + + return 1; + }, + + // returns the elements for the top level, and the next below it + getHeadings: function($scope, topLevel) { + var topSelector = 'h' + topLevel; + + var secondaryLevel = topLevel + 1; + var secondarySelector = 'h' + secondaryLevel; + + return this.findOrFilter($scope, topSelector + ',' + secondarySelector); + }, + + getNavLevel: function(el) { + return parseInt(el.tagName.charAt(1), 10); + }, + + populateNav: function($topContext, topLevel, $headings) { + var $context = $topContext; + var $prevNav; + + var helpers = this; + $headings.each(function(i, el) { + var $newNav = helpers.generateNavItem(el); + var navLevel = helpers.getNavLevel(el); + + // determine the proper $context + if (navLevel === topLevel) { + // use top level + $context = $topContext; + } else if ($prevNav && $context === $topContext) { + // create a new level of the tree and switch to it + $context = helpers.createChildNavList($prevNav); + } // else use the current $context + + $context.append($newNav); + + $prevNav = $newNav; + }); + }, + + parseOps: function(arg) { + var opts; + if (arg.jquery) { + opts = { + $nav: arg + }; + } else { + opts = arg; + } + opts.$scope = opts.$scope || $(document.body); + return opts; + } + }, + + // accepts a jQuery object, or an options object + init: function(opts) { + opts = this.helpers.parseOps(opts); + + // ensure that the data attribute is in place for styling + opts.$nav.attr('data-toggle', 'toc'); + + var $topContext = this.helpers.createChildNavList(opts.$nav); + var topLevel = this.helpers.getTopLevel(opts.$scope); + var $headings = this.helpers.getHeadings(opts.$scope, topLevel); + this.helpers.populateNav($topContext, topLevel, $headings); + } + }; + + $(function() { + $('nav[data-toggle="toc"]').each(function(i, el) { + var $nav = $(el); + 
Toc.init($nav); + }); + }); +})(); diff --git a/static/docs/dev/docsearch.css b/static/docs/dev/docsearch.css new file mode 100644 index 0000000000000000000000000000000000000000..e5f1fe1dfa2c34c51fe941829b511acd8c763301 --- /dev/null +++ b/static/docs/dev/docsearch.css @@ -0,0 +1,148 @@ +/* Docsearch -------------------------------------------------------------- */ +/* + Source: https://github.com/algolia/docsearch/ + License: MIT +*/ + +.algolia-autocomplete { + display: block; + -webkit-box-flex: 1; + -ms-flex: 1; + flex: 1 +} + +.algolia-autocomplete .ds-dropdown-menu { + width: 100%; + min-width: none; + max-width: none; + padding: .75rem 0; + background-color: #fff; + background-clip: padding-box; + border: 1px solid rgba(0, 0, 0, .1); + box-shadow: 0 .5rem 1rem rgba(0, 0, 0, .175); +} + +@media (min-width:768px) { + .algolia-autocomplete .ds-dropdown-menu { + width: 175% + } +} + +.algolia-autocomplete .ds-dropdown-menu::before { + display: none +} + +.algolia-autocomplete .ds-dropdown-menu [class^=ds-dataset-] { + padding: 0; + background-color: rgb(255,255,255); + border: 0; + max-height: 80vh; +} + +.algolia-autocomplete .ds-dropdown-menu .ds-suggestions { + margin-top: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion { + padding: 0; + overflow: visible +} + +.algolia-autocomplete .algolia-docsearch-suggestion--category-header { + padding: .125rem 1rem; + margin-top: 0; + font-size: 1.3em; + font-weight: 500; + color: #00008B; + border-bottom: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--wrapper { + float: none; + padding-top: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--subcategory-column { + float: none; + width: auto; + padding: 0; + text-align: left +} + +.algolia-autocomplete .algolia-docsearch-suggestion--content { + float: none; + width: auto; + padding: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--content::before { + display: none +} + +.algolia-autocomplete .ds-suggestion:not(:first-child) 
.algolia-docsearch-suggestion--category-header { + padding-top: .75rem; + margin-top: .75rem; + border-top: 1px solid rgba(0, 0, 0, .1) +} + +.algolia-autocomplete .ds-suggestion .algolia-docsearch-suggestion--subcategory-column { + display: block; + padding: .1rem 1rem; + margin-bottom: 0.1; + font-size: 1.0em; + font-weight: 400 + /* display: none */ +} + +.algolia-autocomplete .algolia-docsearch-suggestion--title { + display: block; + padding: .25rem 1rem; + margin-bottom: 0; + font-size: 0.9em; + font-weight: 400 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--text { + padding: 0 1rem .5rem; + margin-top: -.25rem; + font-size: 0.8em; + font-weight: 400; + line-height: 1.25 +} + +.algolia-autocomplete .algolia-docsearch-footer { + width: 110px; + height: 20px; + z-index: 3; + margin-top: 10.66667px; + float: right; + font-size: 0; + line-height: 0; +} + +.algolia-autocomplete .algolia-docsearch-footer--logo { + background-image: url("data:image/svg+xml;utf8,"); + background-repeat: no-repeat; + background-position: 50%; + background-size: 100%; + overflow: hidden; + text-indent: -9000px; + width: 100%; + height: 100%; + display: block; + transform: translate(-8px); +} + +.algolia-autocomplete .algolia-docsearch-suggestion--highlight { + color: #FF8C00; + background: rgba(232, 189, 54, 0.1) +} + + +.algolia-autocomplete .algolia-docsearch-suggestion--text .algolia-docsearch-suggestion--highlight { + box-shadow: inset 0 -2px 0 0 rgba(105, 105, 105, .5) +} + +.algolia-autocomplete .ds-suggestion.ds-cursor .algolia-docsearch-suggestion--content { + background-color: rgba(192, 192, 192, .15) +} diff --git a/static/docs/dev/docsearch.js b/static/docs/dev/docsearch.js new file mode 100644 index 0000000000000000000000000000000000000000..b35504cd3a282816130a16881f3ebeead9c1bcb4 --- /dev/null +++ b/static/docs/dev/docsearch.js @@ -0,0 +1,85 @@ +$(function() { + + // register a handler to move the focus to the search bar + // upon pressing shift + "/" (i.e. 
"?") + $(document).on('keydown', function(e) { + if (e.shiftKey && e.keyCode == 191) { + e.preventDefault(); + $("#search-input").focus(); + } + }); + + $(document).ready(function() { + // do keyword highlighting + /* modified from https://jsfiddle.net/julmot/bL6bb5oo/ */ + var mark = function() { + + var referrer = document.URL ; + var paramKey = "q" ; + + if (referrer.indexOf("?") !== -1) { + var qs = referrer.substr(referrer.indexOf('?') + 1); + var qs_noanchor = qs.split('#')[0]; + var qsa = qs_noanchor.split('&'); + var keyword = ""; + + for (var i = 0; i < qsa.length; i++) { + var currentParam = qsa[i].split('='); + + if (currentParam.length !== 2) { + continue; + } + + if (currentParam[0] == paramKey) { + keyword = decodeURIComponent(currentParam[1].replace(/\+/g, "%20")); + } + } + + if (keyword !== "") { + $(".contents").unmark({ + done: function() { + $(".contents").mark(keyword); + } + }); + } + } + }; + + mark(); + }); +}); + +/* Search term highlighting ------------------------------*/ + +function matchedWords(hit) { + var words = []; + + var hierarchy = hit._highlightResult.hierarchy; + // loop to fetch from lvl0, lvl1, etc. 
+ for (var idx in hierarchy) { + words = words.concat(hierarchy[idx].matchedWords); + } + + var content = hit._highlightResult.content; + if (content) { + words = words.concat(content.matchedWords); + } + + // return unique words + var words_uniq = [...new Set(words)]; + return words_uniq; +} + +function updateHitURL(hit) { + + var words = matchedWords(hit); + var url = ""; + + if (hit.anchor) { + url = hit.url_without_anchor + '?q=' + escape(words.join(" ")) + '#' + hit.anchor; + } else { + url = hit.url + '?q=' + escape(words.join(" ")); + } + + return url; +} diff --git a/static/docs/dev/index.html b/static/docs/dev/index.html new file mode 100644 index 0000000000000000000000000000000000000000..a1489760088924363ad4cc6bb1af873fc5dfeb8f --- /dev/null +++ b/static/docs/dev/index.html @@ -0,0 +1,312 @@ + + + + + + + +Tensors and Neural Networks with GPU Acceleration • torch + + + + + + + + + + + +
    +
    + + + + +
    +
    +
    + + +
    +

    +Installation

    +

    Run:

    +
    remotes::install_github("mlverse/torch")
    +

    The first time the package is loaded, additional software will be installed.

    +
    +
    +

    +Example

    +

    Currently, this package is only a proof of concept: you can create a torch tensor from an R object, and convert it back from a torch tensor to an R object.

    +
    library(torch)
    +x <- array(runif(8), dim = c(2, 2, 2))
    +y <- torch_tensor(x, dtype = torch_float64())
    +y
    +#> torch_tensor 
    +#> (1,.,.) = 
    +#>   0.5406  0.8648
    +#>   0.3097  0.9715
    +#> 
    +#> (2,.,.) = 
    +#>   0.1309  0.8992
    +#>   0.4849  0.1902
    +#> [ CPUDoubleType{2,2,2} ]
    +identical(x, as_array(y))
    +#> [1] TRUE
    +
    +

    +Simple Autograd Example

    +

    In the following snippet, we let torch, using its autograd feature, calculate the derivatives:

    +
    x <- torch_tensor(1, requires_grad = TRUE)
    +w <- torch_tensor(2, requires_grad = TRUE)
    +b <- torch_tensor(3, requires_grad = TRUE)
    +y <- w * x + b
    +y$backward()
    +x$grad
    +#> torch_tensor 
    +#>  2
    +#> [ CPUFloatType{1} ]
    +w$grad
    +#> torch_tensor 
    +#>  1
    +#> [ CPUFloatType{1} ]
    +b$grad
    +#> torch_tensor 
    +#>  1
    +#> [ CPUFloatType{1} ]
    +
    +
    +

    +Linear Regression

    +

    In the following example, we are going to fit a linear regression from scratch, using torch’s autograd.

    +

    Note that all methods ending in _ (e.g. sub_) modify their tensors in place.

    +
    x <- torch_randn(100, 2)
    +y <- 0.1 + 0.5*x[,1] - 0.7*x[,2]
    +
    +w <- torch_randn(2, 1, requires_grad = TRUE)
    +b <- torch_zeros(1, requires_grad = TRUE)
    +
    +lr <- 0.5
    +for (i in 1:100) {
    +  y_hat <- torch_mm(x, w) + b
    +  loss <- torch_mean((y - y_hat$squeeze(2))^2)
    +  
    +  loss$backward()
    +  
    +  with_no_grad({
    +    w$sub_(w$grad*lr)
    +    b$sub_(b$grad*lr)   
    +    
    +    w$grad$zero_()
    +    b$grad$zero_()
    +  })
    +}
    +print(w)
    +#> torch_tensor 
    +#>  0.5000
    +#> -0.7000
    +#> [ CPUFloatType{2,1} ]
    +print(b) 
    +#> torch_tensor 
    +#>  0.1000
    +#> [ CPUFloatType{1} ]
    +
    +
    +
    +

    +Contributing

    +

    No matter your current skill level, it’s possible to contribute to torch development. See the contributing guide for more information.

    +
    +
    +
    + + +
    + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/dev/link.svg b/static/docs/dev/link.svg new file mode 100644 index 0000000000000000000000000000000000000000..88ad82769b87f10725c57dca6fcf41b4bffe462c --- /dev/null +++ b/static/docs/dev/link.svg @@ -0,0 +1,12 @@ + + + + + + diff --git a/static/docs/dev/news/index.html b/static/docs/dev/news/index.html new file mode 100644 index 0000000000000000000000000000000000000000..22252a32b9f2ef66ee5f6315de90fbbb5dc88b6f --- /dev/null +++ b/static/docs/dev/news/index.html @@ -0,0 +1,235 @@ + + + + + + + + +Changelog • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    +torch (development version) Unreleased +

    +
    +
    +

    +torch 0.0.2 2020-08-31 +

    +
      +
    • Added a NEWS.md file to track changes to the package.
    • +
    • Auto install when loading the package for the first time.
    • +
    +
    +
    + + + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/pkgdown.css b/static/docs/dev/pkgdown.css new file mode 100644 index 0000000000000000000000000000000000000000..1273238dd9541cec3612b557db37835c800aa5de --- /dev/null +++ b/static/docs/dev/pkgdown.css @@ -0,0 +1,367 @@ +/* Sticky footer */ + +/** + * Basic idea: https://philipwalton.github.io/solved-by-flexbox/demos/sticky-footer/ + * Details: https://github.com/philipwalton/solved-by-flexbox/blob/master/assets/css/components/site.css + * + * .Site -> body > .container + * .Site-content -> body > .container .row + * .footer -> footer + * + * Key idea seems to be to ensure that .container and __all its parents__ + * have height set to 100% + * + */ + +html, body { + height: 100%; +} + +body { + position: relative; +} + +body > .container { + display: flex; + height: 100%; + flex-direction: column; +} + +body > .container .row { + flex: 1 0 auto; +} + +footer { + margin-top: 45px; + padding: 35px 0 36px; + border-top: 1px solid #e5e5e5; + color: #666; + display: flex; + flex-shrink: 0; +} +footer p { + margin-bottom: 0; +} +footer div { + flex: 1; +} +footer .pkgdown { + text-align: right; +} +footer p { + margin-bottom: 0; +} + +img.icon { + float: right; +} + +img { + max-width: 100%; +} + +/* Fix bug in bootstrap (only seen in firefox) */ +summary { + display: list-item; +} + +/* Typographic tweaking ---------------------------------*/ + +.contents .page-header { + margin-top: calc(-60px + 1em); +} + +dd { + margin-left: 3em; +} + +/* Section anchors ---------------------------------*/ + +a.anchor { + margin-left: -30px; + display:inline-block; + width: 30px; + height: 30px; + visibility: hidden; + + background-image: url(./link.svg); + background-repeat: no-repeat; + background-size: 20px 20px; + background-position: center center; +} + +.hasAnchor:hover a.anchor { + visibility: visible; +} + +@media (max-width: 767px) { + .hasAnchor:hover a.anchor { + visibility: hidden; + } +} + + +/* Fixes for fixed navbar 
--------------------------*/ + +.contents h1, .contents h2, .contents h3, .contents h4 { + padding-top: 60px; + margin-top: -40px; +} + +/* Navbar submenu --------------------------*/ + +.dropdown-submenu { + position: relative; +} + +.dropdown-submenu>.dropdown-menu { + top: 0; + left: 100%; + margin-top: -6px; + margin-left: -1px; + border-radius: 0 6px 6px 6px; +} + +.dropdown-submenu:hover>.dropdown-menu { + display: block; +} + +.dropdown-submenu>a:after { + display: block; + content: " "; + float: right; + width: 0; + height: 0; + border-color: transparent; + border-style: solid; + border-width: 5px 0 5px 5px; + border-left-color: #cccccc; + margin-top: 5px; + margin-right: -10px; +} + +.dropdown-submenu:hover>a:after { + border-left-color: #ffffff; +} + +.dropdown-submenu.pull-left { + float: none; +} + +.dropdown-submenu.pull-left>.dropdown-menu { + left: -100%; + margin-left: 10px; + border-radius: 6px 0 6px 6px; +} + +/* Sidebar --------------------------*/ + +#pkgdown-sidebar { + margin-top: 30px; + position: -webkit-sticky; + position: sticky; + top: 70px; +} + +#pkgdown-sidebar h2 { + font-size: 1.5em; + margin-top: 1em; +} + +#pkgdown-sidebar h2:first-child { + margin-top: 0; +} + +#pkgdown-sidebar .list-unstyled li { + margin-bottom: 0.5em; +} + +/* bootstrap-toc tweaks ------------------------------------------------------*/ + +/* All levels of nav */ + +nav[data-toggle='toc'] .nav > li > a { + padding: 4px 20px 4px 6px; + font-size: 1.5rem; + font-weight: 400; + color: inherit; +} + +nav[data-toggle='toc'] .nav > li > a:hover, +nav[data-toggle='toc'] .nav > li > a:focus { + padding-left: 5px; + color: inherit; + border-left: 1px solid #878787; +} + +nav[data-toggle='toc'] .nav > .active > a, +nav[data-toggle='toc'] .nav > .active:hover > a, +nav[data-toggle='toc'] .nav > .active:focus > a { + padding-left: 5px; + font-size: 1.5rem; + font-weight: 400; + color: inherit; + border-left: 2px solid #878787; +} + +/* Nav: second level (shown on .active) 
*/ + +nav[data-toggle='toc'] .nav .nav { + display: none; /* Hide by default, but at >768px, show it */ + padding-bottom: 10px; +} + +nav[data-toggle='toc'] .nav .nav > li > a { + padding-left: 16px; + font-size: 1.35rem; +} + +nav[data-toggle='toc'] .nav .nav > li > a:hover, +nav[data-toggle='toc'] .nav .nav > li > a:focus { + padding-left: 15px; +} + +nav[data-toggle='toc'] .nav .nav > .active > a, +nav[data-toggle='toc'] .nav .nav > .active:hover > a, +nav[data-toggle='toc'] .nav .nav > .active:focus > a { + padding-left: 15px; + font-weight: 500; + font-size: 1.35rem; +} + +/* orcid ------------------------------------------------------------------- */ + +.orcid { + font-size: 16px; + color: #A6CE39; + /* margins are required by official ORCID trademark and display guidelines */ + margin-left:4px; + margin-right:4px; + vertical-align: middle; +} + +/* Reference index & topics ----------------------------------------------- */ + +.ref-index th {font-weight: normal;} + +.ref-index td {vertical-align: top; min-width: 100px} +.ref-index .icon {width: 40px;} +.ref-index .alias {width: 40%;} +.ref-index-icons .alias {width: calc(40% - 40px);} +.ref-index .title {width: 60%;} + +.ref-arguments th {text-align: right; padding-right: 10px;} +.ref-arguments th, .ref-arguments td {vertical-align: top; min-width: 100px} +.ref-arguments .name {width: 20%;} +.ref-arguments .desc {width: 80%;} + +/* Nice scrolling for wide elements --------------------------------------- */ + +table { + display: block; + overflow: auto; +} + +/* Syntax highlighting ---------------------------------------------------- */ + +pre { + word-wrap: normal; + word-break: normal; + border: 1px solid #eee; +} + +pre, code { + background-color: #f8f8f8; + color: #333; +} + +pre code { + overflow: auto; + word-wrap: normal; + white-space: pre; +} + +pre .img { + margin: 5px 0; +} + +pre .img img { + background-color: #fff; + display: block; + height: auto; +} + +code a, pre a { + color: #375f84; +} + 
+a.sourceLine:hover { + text-decoration: none; +} + +.fl {color: #1514b5;} +.fu {color: #000000;} /* function */ +.ch,.st {color: #036a07;} /* string */ +.kw {color: #264D66;} /* keyword */ +.co {color: #888888;} /* comment */ + +.message { color: black; font-weight: bolder;} +.error { color: orange; font-weight: bolder;} +.warning { color: #6A0366; font-weight: bolder;} + +/* Clipboard --------------------------*/ + +.hasCopyButton { + position: relative; +} + +.btn-copy-ex { + position: absolute; + right: 0; + top: 0; + visibility: hidden; +} + +.hasCopyButton:hover button.btn-copy-ex { + visibility: visible; +} + +/* headroom.js ------------------------ */ + +.headroom { + will-change: transform; + transition: transform 200ms linear; +} +.headroom--pinned { + transform: translateY(0%); +} +.headroom--unpinned { + transform: translateY(-100%); +} + +/* mark.js ----------------------------*/ + +mark { + background-color: rgba(255, 255, 51, 0.5); + border-bottom: 2px solid rgba(255, 153, 51, 0.3); + padding: 1px; +} + +/* vertical spacing after htmlwidgets */ +.html-widget { + margin-bottom: 10px; +} + +/* fontawesome ------------------------ */ + +.fab { + font-family: "Font Awesome 5 Brands" !important; +} + +/* don't display links in code chunks when printing */ +/* source: https://stackoverflow.com/a/10781533 */ +@media print { + code a:link:after, code a:visited:after { + content: ""; + } +} diff --git a/static/docs/dev/pkgdown.js b/static/docs/dev/pkgdown.js new file mode 100644 index 0000000000000000000000000000000000000000..7e7048faebb92b85ed06afddd1a8a4581241d6a4 --- /dev/null +++ b/static/docs/dev/pkgdown.js @@ -0,0 +1,108 @@ +/* http://gregfranko.com/blog/jquery-best-practices/ */ +(function($) { + $(function() { + + $('.navbar-fixed-top').headroom(); + + $('body').css('padding-top', $('.navbar').height() + 10); + $(window).resize(function(){ + $('body').css('padding-top', $('.navbar').height() + 10); + }); + + $('[data-toggle="tooltip"]').tooltip(); + + 
var cur_path = paths(location.pathname); + var links = $("#navbar ul li a"); + var max_length = -1; + var pos = -1; + for (var i = 0; i < links.length; i++) { + if (links[i].getAttribute("href") === "#") + continue; + // Ignore external links + if (links[i].host !== location.host) + continue; + + var nav_path = paths(links[i].pathname); + + var length = prefix_length(nav_path, cur_path); + if (length > max_length) { + max_length = length; + pos = i; + } + } + + // Add class to parent
  <li>, and enclosing
  <li> if in dropdown + if (pos >= 0) { + var menu_anchor = $(links[pos]); + menu_anchor.parent().addClass("active"); + menu_anchor.closest("li.dropdown").addClass("active"); + } + }); + + function paths(pathname) { + var pieces = pathname.split("/"); + pieces.shift(); // always starts with / + + var end = pieces[pieces.length - 1]; + if (end === "index.html" || end === "") + pieces.pop(); + return(pieces); + } + + // Returns -1 if not found + function prefix_length(needle, haystack) { + if (needle.length > haystack.length) + return(-1); + + // Special case for length-0 haystack, since for loop won't run + if (haystack.length === 0) { + return(needle.length === 0 ? 0 : -1); + } + + for (var i = 0; i < haystack.length; i++) { + if (needle[i] != haystack[i]) + return(i); + } + + return(haystack.length); + } + + /* Clipboard --------------------------*/ + + function changeTooltipMessage(element, msg) { + var tooltipOriginalTitle=element.getAttribute('data-original-title'); + element.setAttribute('data-original-title', msg); + $(element).tooltip('show'); + element.setAttribute('data-original-title', tooltipOriginalTitle); + } + + if(ClipboardJS.isSupported()) { + $(document).ready(function() { + var copyButton = ""; + + $(".examples, div.sourceCode").addClass("hasCopyButton"); + + // Insert copy buttons: + $(copyButton).prependTo(".hasCopyButton"); + + // Initialize tooltips: + $('.btn-copy-ex').tooltip({container: 'body'}); + + // Initialize clipboard: + var clipboardBtnCopies = new ClipboardJS('[data-clipboard-copy]', { + text: function(trigger) { + return trigger.parentNode.textContent; + } + }); + + clipboardBtnCopies.on('success', function(e) { + changeTooltipMessage(e.trigger, 'Copied!'); + e.clearSelection(); + }); + + clipboardBtnCopies.on('error', function(e) { + changeTooltipMessage(e.trigger,'Press Ctrl+C or Command+C to copy'); + }); + }); + } +})(window.jQuery || window.$) diff --git a/static/docs/dev/pkgdown.yml b/static/docs/dev/pkgdown.yml new file mode 
100644 index 0000000000000000000000000000000000000000..5f7fa0b3a94cb4bdba9cdb2f38d785cddb8b70ce --- /dev/null +++ b/static/docs/dev/pkgdown.yml @@ -0,0 +1,23 @@ +pandoc: 2.7.3 +pkgdown: 1.6.1 +pkgdown_sha: ~ +articles: + extending-autograd: extending-autograd.html + getting-started/autograd: autograd.html + getting-started/control-flow-and-weight-sharing: control-flow-and-weight-sharing.html + getting-started/custom-nn: custom-nn.html + getting-started/neural-networks: neural-networks.html + getting-started/new-autograd-functions: new-autograd-functions.html + getting-started/nn: nn.html + getting-started/optim: optim.html + getting-started/tensors-and-autograd: tensors-and-autograd.html + getting-started/tensors: tensors.html + getting-started/warmup: warmup.html + getting-started/what-is-torch: what-is-torch.html + indexing: indexing.html + loading-data: loading-data.html + tensor/index: index.html + tensor-creation: tensor-creation.html + using-autograd: using-autograd.html +last_built: 2020-09-21T17:45Z + diff --git a/static/docs/dev/reference/AutogradContext.html b/static/docs/dev/reference/AutogradContext.html new file mode 100644 index 0000000000000000000000000000000000000000..c1d76a6f9474ee2e5556eba7e2f9ad99a8155472 --- /dev/null +++ b/static/docs/dev/reference/AutogradContext.html @@ -0,0 +1,338 @@ + + + + + + + + +Class representing the context. — AutogradContext • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Class representing the context.

    +

    Class representing the context.

    +
    + + + +

    Public fields

    + +

    +
    ptr

    (Dev related) pointer to the context c++ object.

    + +

    +

    Active bindings

    + +

    +
    needs_input_grad

a named list of booleans indicating, for each argument of forward, whether it requires grad.

    + +
    saved_variables

    list of objects that were saved for backward via save_for_backward.

    + +

    +

    Methods

    + + +

    Public methods

    + + +


    +

    Method new()

    +

    (Dev related) Initializes the context. Not user related.

    Usage

    +

    AutogradContext$new(
    +  ptr,
    +  env,
    +  argument_names = NULL,
    +  argument_needs_grad = NULL
    +)

    + +

    Arguments

    +

    +
    ptr

    pointer to the c++ object

    + +
    env

    environment that encloses both forward and backward

    + +
    argument_names

    names of forward arguments

    + +
    argument_needs_grad

    whether each argument in forward needs grad.

    + +

    +


    +

    Method save_for_backward()

    +

    Saves given objects for a future call to backward().

    +

    This should be called at most once, and only from inside the forward() +method.

    +

    Later, saved objects can be accessed through the saved_variables attribute. +Before returning them to the user, a check is made to ensure they weren’t used +in any in-place operation that modified their content.

    +

    Arguments can also be any kind of R object.
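As an illustration, a hypothetical forward/backward pair built with autograd_function() might save an input with save_for_backward() and retrieve it later through saved_variables (names square and x are invented for this sketch):

```r
library(torch)

square <- autograd_function(
  forward = function(ctx, x) {
    # save the input so the backward pass can use it
    ctx$save_for_backward(x = x)
    x^2
  },
  backward = function(ctx, grad_output) {
    # retrieve the saved tensor and apply the chain rule: d(x^2)/dx = 2x
    x <- ctx$saved_variables$x
    list(x = 2 * x * grad_output)
  }
)
```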

    Usage

    +

    AutogradContext$save_for_backward(...)

    + +

    Arguments

    +

    +
    ...

    any kind of R object that will be saved for the backward pass. +It's common to pass named arguments.

    + +

    +


    +

    Method mark_non_differentiable()

    +

    Marks outputs as non-differentiable.

    +

    This should be called at most once, only from inside the forward() method, +and all arguments should be outputs.

    +

This will mark outputs as not requiring gradients, increasing the efficiency +of the backward computation. You still need to accept a gradient for each output +in backward(), but it’s always going to be a zero tensor with the same +shape as the corresponding output.

    +

    This is used e.g. for indices returned from a max Function.

    Usage

    +

    AutogradContext$mark_non_differentiable(...)

    + +

    Arguments

    +

    +
    ...

    non-differentiable outputs.

    + +

    +


    +

    Method mark_dirty()

    +

    Marks given tensors as modified in an in-place operation.

    +

    This should be called at most once, only from inside the forward() method, +and all arguments should be inputs.

    +

    Every tensor that’s been modified in-place in a call to forward() should +be given to this function, to ensure correctness of our checks. It doesn’t +matter whether the function is called before or after modification.

    Usage

    +

    AutogradContext$mark_dirty(...)

    + +

    Arguments

    +

    +
    ...

    tensors that are modified in-place.

    + +

    +


    +

    Method clone()

    +

    The objects of this class are cloneable with this method.

    Usage

    +

    AutogradContext$clone(deep = FALSE)

    + +

    Arguments

    +

    +
    deep

    Whether to make a deep clone.

    + +

    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/Rplot001.png b/static/docs/dev/reference/Rplot001.png new file mode 100644 index 0000000000000000000000000000000000000000..17a358060aed2a86950757bbd25c6f92c08c458f Binary files /dev/null and b/static/docs/dev/reference/Rplot001.png differ diff --git a/static/docs/dev/reference/as_array.html b/static/docs/dev/reference/as_array.html new file mode 100644 index 0000000000000000000000000000000000000000..e1f2a39cef79c3bfb53daaaf6c112e1f1f8faab1 --- /dev/null +++ b/static/docs/dev/reference/as_array.html @@ -0,0 +1,237 @@ + + + + + + + + +Converts to array — as_array • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Converts to array

    +
    + +
    as_array(x)
    + +

    Arguments

    + + + + + + +
    x

    object to be converted into an array
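A minimal sketch of the round trip between base R objects and tensors (the printed classes are what one would expect, not verified output):

```r
library(torch)

t <- torch_tensor(matrix(1:4, ncol = 2))  # build a tensor from an R matrix
m <- as_array(t)                          # convert back to a base R array/matrix
class(m)
```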

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/autograd_backward.html b/static/docs/dev/reference/autograd_backward.html new file mode 100644 index 0000000000000000000000000000000000000000..486309752d48bc6dad0760d324186e37f658a560 --- /dev/null +++ b/static/docs/dev/reference/autograd_backward.html @@ -0,0 +1,291 @@ + + + + + + + + +Computes the sum of gradients of given tensors w.r.t. graph leaves. — autograd_backward • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

The graph is differentiated using the chain rule. If any of the tensors are +non-scalar (i.e. their data has more than one element) and require gradient, +then the Jacobian-vector product will be computed; in this case the function +additionally requires specifying grad_tensors. It should be a sequence of +matching length that contains the “vector” in the Jacobian-vector product, +usually the gradient of the differentiated function w.r.t. the corresponding +tensors (NULL is an acceptable value for all tensors that don’t need gradient +tensors).

    +
    + +
    autograd_backward(
    +  tensors,
    +  grad_tensors = NULL,
    +  retain_graph = create_graph,
    +  create_graph = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    tensors

    (list of Tensor) – Tensors of which the derivative will +be computed.

    grad_tensors

(list of (Tensor or NULL)) – The “vector” in the Jacobian-vector product, usually gradients w.r.t. each element of the corresponding tensors. NULL values can be specified for scalar Tensors or ones that don’t require grad. If a NULL value would be acceptable for all +grad_tensors, then this argument is optional.

    retain_graph

    (bool, optional) – If FALSE, the graph used to compute +the grad will be freed. Note that in nearly all cases setting this option to +TRUE is not needed and often can be worked around in a much more efficient +way. Defaults to the value of create_graph.

    create_graph

    (bool, optional) – If TRUE, graph of the derivative will +be constructed, allowing to compute higher order derivative products. +Defaults to FALSE.

    + +

    Details

    + +

    This function accumulates gradients in the leaves - you might need to zero +them before calling it.

    + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_tensor(1, requires_grad = TRUE) +y <- 2 * x + +a <- torch_tensor(1, requires_grad = TRUE) +b <- 3 * a + +autograd_backward(list(y, b)) + +} +
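Following the example above, the accumulated gradients can then be inspected; since y = 2 * x and b = 3 * a, the expected values follow directly from the chain rule (a hedged sketch, not verified output):

```r
x$grad  # dy/dx = 2
a$grad  # db/da = 3
```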
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/autograd_function.html b/static/docs/dev/reference/autograd_function.html new file mode 100644 index 0000000000000000000000000000000000000000..3f2b700ad701a5a551da789c1b93bb53afd3ffec --- /dev/null +++ b/static/docs/dev/reference/autograd_function.html @@ -0,0 +1,279 @@ + + + + + + + + +Records operation history and defines formulas for differentiating ops. — autograd_function • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Every operation performed on Tensors creates a new function object that +performs the computation and records that it happened. The history is +retained in the form of a DAG of functions, with edges denoting data +dependencies (input <- output). Then, when backward is called, the graph is +processed in topological order, by calling the backward() methods of each +Function object and passing returned gradients on to the next Functions.

    +
    + +
    autograd_function(forward, backward)
    + +

    Arguments

    + + + + + + + + + + +
    forward

    Performs the operation. It must accept a context ctx as the first argument, +followed by any number of arguments (tensors or other types). The context can be +used to store tensors that can be then retrieved during the backward pass. +See AutogradContext for more information about context methods.

    backward

Defines a formula for differentiating the operation. It must accept +a context ctx as the first argument, followed by as many outputs as forward() +returned, and it should return a named list. Each argument is the gradient w.r.t +the given output, and each element in the returned list should be the gradient +w.r.t. the corresponding input. The context can be used to retrieve tensors saved +during the forward pass. It also has an attribute ctx$needs_input_grad as a +named list of booleans representing whether each input needs gradient. +E.g., backward() will have ctx$needs_input_grad$input = TRUE if the input +argument to forward() needs gradient computed w.r.t. the output. +See AutogradContext for more information about context methods.

    + + +

    Examples

    +
if (torch_is_installed()) { + +exp2 <- autograd_function( + forward = function(ctx, i) { + result <- i$exp() + ctx$save_for_backward(result = result) + result + }, + backward = function(ctx, grad_output) { + list(i = grad_output * ctx$saved_variables$result) + } +) + +} +
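The exp2 function defined in the example above can then be called like an ordinary function, and its custom backward formula is used during differentiation. A hedged usage sketch (the gradient of exp at x is exp(x), so x$grad should equal exp(2)):

```r
x <- torch_tensor(2, requires_grad = TRUE)
y <- exp2(x)   # forward pass, records the custom Function in the graph
y$backward()   # backward pass, calls the user-supplied backward()
x$grad         # expected to equal exp(2)
```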
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/autograd_grad.html b/static/docs/dev/reference/autograd_grad.html new file mode 100644 index 0000000000000000000000000000000000000000..cd675f9dd68ed34e65f8dd1a41174f99e001adad --- /dev/null +++ b/static/docs/dev/reference/autograd_grad.html @@ -0,0 +1,305 @@ + + + + + + + + +Computes and returns the sum of gradients of outputs w.r.t. the inputs. — autograd_grad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

grad_outputs should be a list of length matching outputs containing the “vector” +in the Jacobian-vector product, usually the pre-computed gradients w.r.t. each of +the outputs. If an output doesn’t require_grad, then the gradient can be NULL.

    +
    + +
    autograd_grad(
    +  outputs,
    +  inputs,
    +  grad_outputs = NULL,
    +  retain_graph = create_graph,
    +  create_graph = FALSE,
    +  allow_unused = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    outputs

    (sequence of Tensor) – outputs of the differentiated function.

    inputs

    (sequence of Tensor) – Inputs w.r.t. which the gradient will be +returned (and not accumulated into .grad).

    grad_outputs

(sequence of Tensor) – The “vector” in the Jacobian-vector +product. Usually gradients w.r.t. each output. NULL values can be specified for +scalar Tensors or ones that don’t require grad. If a NULL value would be acceptable +for all grad_outputs, then this argument is optional. Default: NULL.

    retain_graph

    (bool, optional) – If FALSE, the graph used to compute the +grad will be freed. Note that in nearly all cases setting this option to TRUE is +not needed and often can be worked around in a much more efficient way. +Defaults to the value of create_graph.

    create_graph

(bool, optional) – If TRUE, the graph of the derivative will be constructed, allowing higher order derivative products to be computed. Default: FALSE.

    allow_unused

    (bool, optional) – If FALSE, specifying inputs that were +not used when computing outputs (and therefore their grad is always zero) is an +error. Defaults to FALSE

    + +

    Details

    + +

    If only_inputs is TRUE, the function will only return a list of gradients w.r.t +the specified inputs. If it’s FALSE, then gradient w.r.t. all remaining leaves +will still be computed, and will be accumulated into their .grad attribute.

    + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_tensor(0.5, requires_grad = TRUE) +b <- torch_tensor(0.9, requires_grad = TRUE) +x <- torch_tensor(runif(100)) +y <- 2 * x + 1 +loss <- (y - (w*x + b))^2 +loss <- loss$mean() + +o <- autograd_grad(loss, list(w, b)) +o + +} +
    #> [[1]] +#> torch_tensor +#> -0.9935 +#> [ CPUFloatType{1} ] +#> +#> [[2]] +#> torch_tensor +#> -1.6206 +#> [ CPUFloatType{1} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/autograd_set_grad_mode.html b/static/docs/dev/reference/autograd_set_grad_mode.html new file mode 100644 index 0000000000000000000000000000000000000000..bc0b22c53c75699740279441c5237042e075e469 --- /dev/null +++ b/static/docs/dev/reference/autograd_set_grad_mode.html @@ -0,0 +1,237 @@ + + + + + + + + +Set grad mode — autograd_set_grad_mode • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sets or disables gradient history.

    +
    + +
    autograd_set_grad_mode(enabled)
    + +

    Arguments

    + + + + + + +
    enabled

bool, whether to enable or disable gradient recording.
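A short sketch of toggling gradient recording (behaviour as one would expect from the description above; not verified output):

```r
library(torch)

x <- torch_tensor(1, requires_grad = TRUE)

autograd_set_grad_mode(enabled = FALSE)
y <- x * 2        # no graph is recorded while grad mode is disabled
y$requires_grad   # expected FALSE

autograd_set_grad_mode(enabled = TRUE)  # re-enable recording
```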

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/cuda_current_device.html b/static/docs/dev/reference/cuda_current_device.html new file mode 100644 index 0000000000000000000000000000000000000000..802763353d054107bf5a1d94b702e3d1f59a2e7a --- /dev/null +++ b/static/docs/dev/reference/cuda_current_device.html @@ -0,0 +1,229 @@ + + + + + + + + +Returns the index of a currently selected device. — cuda_current_device • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns the index of a currently selected device.

    +
    + +
    cuda_current_device()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/cuda_device_count.html b/static/docs/dev/reference/cuda_device_count.html new file mode 100644 index 0000000000000000000000000000000000000000..3ef1d8c35f13aa0623a074ac14249cf1fbba000d --- /dev/null +++ b/static/docs/dev/reference/cuda_device_count.html @@ -0,0 +1,229 @@ + + + + + + + + +Returns the number of GPUs available. — cuda_device_count • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns the number of GPUs available.

    +
    + +
    cuda_device_count()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/cuda_is_available.html b/static/docs/dev/reference/cuda_is_available.html new file mode 100644 index 0000000000000000000000000000000000000000..09dbc69dfefd037845f7a438b770d50d5c35e5e4 --- /dev/null +++ b/static/docs/dev/reference/cuda_is_available.html @@ -0,0 +1,229 @@ + + + + + + + + +Returns a bool indicating if CUDA is currently available. — cuda_is_available • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns a bool indicating if CUDA is currently available.
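A common hedged pattern is to pick the device at runtime based on this check:

```r
library(torch)

device <- if (cuda_is_available()) "cuda" else "cpu"
x <- torch_tensor(1, device = device)  # tensor lands on GPU when available
```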

    +
    + +
    cuda_is_available()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/dataloader.html b/static/docs/dev/reference/dataloader.html new file mode 100644 index 0000000000000000000000000000000000000000..79ec876771cb5ad37f5e1a728e5205c3f64989c5 --- /dev/null +++ b/static/docs/dev/reference/dataloader.html @@ -0,0 +1,310 @@ + + + + + + + + +Data loader. Combines a dataset and a sampler, and provides +single- or multi-process iterators over the dataset. — dataloader • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Data loader. Combines a dataset and a sampler, and provides +single- or multi-process iterators over the dataset.

    +
    + +
    dataloader(
    +  dataset,
    +  batch_size = 1,
    +  shuffle = FALSE,
    +  sampler = NULL,
    +  batch_sampler = NULL,
    +  num_workers = 0,
    +  collate_fn = NULL,
    +  pin_memory = FALSE,
    +  drop_last = FALSE,
    +  timeout = 0,
    +  worker_init_fn = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    dataset

    (Dataset): dataset from which to load the data.

    batch_size

    (int, optional): how many samples per batch to load +(default: 1).

    shuffle

    (bool, optional): set to TRUE to have the data reshuffled +at every epoch (default: FALSE).

    sampler

(Sampler, optional): defines the strategy to draw samples from +the dataset. If specified, shuffle must be FALSE.

    batch_sampler

    (Sampler, optional): like sampler, but returns a batch of +indices at a time. Mutually exclusive with batch_size, +shuffle, sampler, and drop_last.

    num_workers

    (int, optional): how many subprocesses to use for data +loading. 0 means that the data will be loaded in the main process. +(default: 0)

    collate_fn

    (callable, optional): merges a list of samples to form a mini-batch.

    pin_memory

(bool, optional): If TRUE, the data loader will copy tensors +into CUDA pinned memory before returning them. If your data elements +are a custom type, or your collate_fn returns a batch that is a custom type, +see the example below.

    drop_last

    (bool, optional): set to TRUE to drop the last incomplete batch, +if the dataset size is not divisible by the batch size. If FALSE and +the size of dataset is not divisible by the batch size, then the last batch +will be smaller. (default: FALSE)

    timeout

    (numeric, optional): if positive, the timeout value for collecting a batch +from workers. Should always be non-negative. (default: 0)

    worker_init_fn

    (callable, optional): If not NULL, this will be called on each +worker subprocess with the worker id (an int in [0, num_workers - 1]) as +input, after seeding and before data loading. (default: NULL)
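A minimal sketch of building a dataloader from an in-memory dataset and iterating over its batches (assumes tensor_dataset() for constructing the dataset; names are illustrative):

```r
library(torch)

ds <- tensor_dataset(
  x = torch_randn(100, 10),  # 100 observations, 10 features
  y = torch_randn(100, 1)
)

dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)

for (batch in enumerate(dl)) {
  # batch$x is a (32, 10) tensor, except possibly for the last batch
}
```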

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/dataloader_make_iter.html b/static/docs/dev/reference/dataloader_make_iter.html new file mode 100644 index 0000000000000000000000000000000000000000..c8d69b1edf1ff1a0b19d67206f22b535e228bbd0 --- /dev/null +++ b/static/docs/dev/reference/dataloader_make_iter.html @@ -0,0 +1,237 @@ + + + + + + + + +Creates an iterator from a DataLoader — dataloader_make_iter • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates an iterator from a DataLoader

    +
    + +
    dataloader_make_iter(dataloader)
    + +

    Arguments

    + + + + + + +
    dataloader

    a dataloader object.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/dataloader_next.html b/static/docs/dev/reference/dataloader_next.html new file mode 100644 index 0000000000000000000000000000000000000000..b7df18d830ffb3d1e6c64a7d4d5b892cab47cc78 --- /dev/null +++ b/static/docs/dev/reference/dataloader_next.html @@ -0,0 +1,237 @@ + + + + + + + + +Get the next element of a dataloader iterator — dataloader_next • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Get the next element of a dataloader iterator

    +
    + +
    dataloader_next(iter)
    + +

    Arguments

    + + + + + + +
    iter

    a DataLoader iter created with dataloader_make_iter.
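Together, dataloader_make_iter() and dataloader_next() allow stepping through a dataloader manually rather than in a loop. A hedged sketch (assumes tensor_dataset() for the toy dataset):

```r
library(torch)

ds <- tensor_dataset(x = torch_randn(10, 2))
dl <- dataloader(ds, batch_size = 5)

it <- dataloader_make_iter(dl)   # create the iterator
batch <- dataloader_next(it)     # pull the first batch of 5 rows
```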

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/dataset.html b/static/docs/dev/reference/dataset.html new file mode 100644 index 0000000000000000000000000000000000000000..75806acaede9e344c6b915ed5063f51f84ee9df9 --- /dev/null +++ b/static/docs/dev/reference/dataset.html @@ -0,0 +1,267 @@ + + + + + + + + +An abstract class representing a Dataset. — dataset • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

All datasets that represent a map from keys to data samples should subclass +it. All subclasses should overwrite get_item, supporting fetching a +data sample for a given key. Subclasses could also optionally overwrite +length, which is expected to return the size of the dataset by many +~torch.utils.data.Sampler implementations and the default options +of ~torch.utils.data.DataLoader.

    +
    + +
    dataset(name = NULL, inherit = Dataset, ..., parent_env = parent.frame())
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    name

a name for the dataset. It's also used as the class name +for it.

    inherit

    you can optionally inherit from a dataset when creating a +new dataset.

    ...

    public methods for the dataset class

    parent_env

    An environment to use as the parent of newly-created +objects.

    + +

    Note

    + +

~torch.utils.data.DataLoader by default constructs an index +sampler that yields integral indices. To make it work with a map-style +dataset with non-integral indices/keys, a custom sampler must be provided.
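As an illustration, a custom map-style dataset might look like the following hypothetical sketch. The exact method names the generator expects (here assumed to be .getitem and .length) may differ across package versions:

```r
library(torch)

squares_ds <- dataset(
  name = "squares_dataset",
  initialize = function(n) {
    self$n <- n
  },
  .getitem = function(i) {
    torch_tensor(i^2)  # sample i is simply i squared
  },
  .length = function() {
    self$n
  }
)

ds <- squares_ds(n = 10)  # instantiate the dataset with 10 samples
```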

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/default_dtype.html b/static/docs/dev/reference/default_dtype.html new file mode 100644 index 0000000000000000000000000000000000000000..871745eb8b26a67d7546c70ee64f8116800d8574 --- /dev/null +++ b/static/docs/dev/reference/default_dtype.html @@ -0,0 +1,240 @@ + + + + + + + + +Gets and sets the default floating point dtype. — torch_set_default_dtype • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gets and sets the default floating point dtype.

    +
    + +
    torch_set_default_dtype(d)
    +
    +torch_get_default_dtype()
    + +

    Arguments

    + + + + + + +
    d

    The default floating point dtype to set. Initially set to +torch_float().
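A short sketch of changing and restoring the default dtype (the dtype names shown are what one would expect, not verified output):

```r
library(torch)

torch_get_default_dtype()                 # initially torch_float32
torch_set_default_dtype(torch_double())
torch_tensor(1)$dtype                     # new tensors are now torch_float64
torch_set_default_dtype(torch_float())    # restore the default
```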

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/enumerate.dataloader.html b/static/docs/dev/reference/enumerate.dataloader.html new file mode 100644 index 0000000000000000000000000000000000000000..bdcb452aa6abb4fc339bf02f53fb11d1c024d376 --- /dev/null +++ b/static/docs/dev/reference/enumerate.dataloader.html @@ -0,0 +1,246 @@ + + + + + + + + +Enumerate an iterator — enumerate.dataloader • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Enumerate an iterator

    +
    + +
    # S3 method for dataloader
    +enumerate(x, max_len = 1e+06, ...)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    x

    the generator to enumerate.

    max_len

    maximum number of iterations.

    ...

    passed to specific methods.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/enumerate.html b/static/docs/dev/reference/enumerate.html new file mode 100644 index 0000000000000000000000000000000000000000..c19326bec417bde08917f27a8433a28464f2a653 --- /dev/null +++ b/static/docs/dev/reference/enumerate.html @@ -0,0 +1,241 @@ + + + + + + + + +Enumerate an iterator — enumerate • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Enumerate an iterator

    +
    + +
    enumerate(x, ...)
    + +

    Arguments

    + + + + + + + + + + +
    x

    the generator to enumerate.

    ...

    passed to specific methods.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/figures/torch.png b/static/docs/dev/reference/figures/torch.png new file mode 100644 index 0000000000000000000000000000000000000000..61d24b86074b110f4cf3298f417c4148938c8f05 Binary files /dev/null and b/static/docs/dev/reference/figures/torch.png differ diff --git a/static/docs/dev/reference/index.html b/static/docs/dev/reference/index.html new file mode 100644 index 0000000000000000000000000000000000000000..398bb3b6ca3c729b0b30510a8ac7be2abd3999e2 --- /dev/null +++ b/static/docs/dev/reference/index.html @@ -0,0 +1,3157 @@ + + + + + + + + +Function reference • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    +

    Tensor creation utilities

    +

    +
    +

    torch_empty()

    +

    Empty

    +

    torch_arange()

    +

    Arange

    +

    torch_eye()

    +

    Eye

    +

    torch_full()

    +

    Full

    +

    torch_linspace()

    +

    Linspace

    +

    torch_logspace()

    +

    Logspace

    +

    torch_ones()

    +

    Ones

    +

    torch_rand()

    +

    Rand

    +

    torch_randint()

    +

    Randint

    +

    torch_randn()

    +

    Randn

    +

    torch_randperm()

    +

    Randperm

    +

    torch_zeros()

    +

    Zeros

    +

    torch_empty_like()

    +

    Empty_like

    +

    torch_full_like()

    +

    Full_like

    +

    torch_ones_like()

    +

    Ones_like

    +

    torch_rand_like()

    +

    Rand_like

    +

    torch_randint_like()

    +

    Randint_like

    +

    torch_randn_like()

    +

    Randn_like

    +

    torch_zeros_like()

    +

    Zeros_like

    +

    as_array()

    +

    Converts to array

    +

    Tensor attributes

    +

    +
    +

    torch_set_default_dtype() torch_get_default_dtype()

    +

    Gets and sets the default floating point dtype.

    +

    is_torch_device()

    +

    Checks if object is a device

    +

    is_torch_dtype()

    +

    Check if object is a torch data type

    +

    torch_float32() torch_float() torch_float64() torch_double() torch_float16() torch_half() torch_uint8() torch_int8() torch_int16() torch_short() torch_int32() torch_int() torch_int64() torch_long() torch_bool() torch_quint8() torch_qint8() torch_qint32()

    +

    Torch data types

    +

    torch_finfo()

    +

    Floating point type info

    +

    torch_iinfo()

    +

    Integer type info

    +

    torch_per_channel_affine() torch_per_tensor_affine() torch_per_channel_symmetric() torch_per_tensor_symmetric()

    +

    Creates the corresponding Scheme object

    +

    torch_reduction_sum() torch_reduction_mean() torch_reduction_none()

    +

    Creates the reduction object

    +

    is_torch_layout()

    +

    Check if an object is a torch layout.

    +

    is_torch_memory_format()

    +

    Check if an object is a memory format

    +

    is_torch_qscheme()

    +

    Checks if an object is a QScheme

    +

    is_undefined_tensor()

    +

    Checks if a tensor is undefined

    +

    Serialization

    +

    +
    +

    load_state_dict()

    +

    Load a state dict file

    +

    torch_load()

    +

    Loads a saved object

    +

    torch_save()

    +

    Saves an object to a disk file.

    +

    Mathematical operations on tensors

    +

    +
    +

    torch_abs()

    +

    Abs

    +

    torch_acos()

    +

    Acos

    +

    torch_adaptive_avg_pool1d()

    +

    Adaptive_avg_pool1d

    +

    torch_add()

    +

    Add

    +

    torch_addbmm()

    +

    Addbmm

    +

    torch_addcdiv()

    +

    Addcdiv

    +

    torch_addcmul()

    +

    Addcmul

    +

    torch_addmm()

    +

    Addmm

    +

    torch_addmv()

    +

    Addmv

    +

    torch_addr()

    +

    Addr

    +

    torch_allclose()

    +

    Allclose

    +

    torch_angle()

    +

    Angle

    +

    torch_argmax()

    +

    Argmax

    +

    torch_argmin()

    +

    Argmin

    +

    torch_argsort()

    +

    Argsort

    +

    torch_as_strided()

    +

    As_strided

    +

    torch_asin()

    +

    Asin

    +

    torch_atan()

    +

    Atan

    +

    torch_atan2()

    +

    Atan2

    +

    torch_avg_pool1d()

    +

    Avg_pool1d

    +

    torch_baddbmm()

    +

    Baddbmm

    +

    torch_bartlett_window()

    +

    Bartlett_window

    +

    torch_bernoulli()

    +

    Bernoulli

    +

    torch_bincount()

    +

    Bincount

    +

    torch_bitwise_and()

    +

    Bitwise_and

    +

    torch_bitwise_not()

    +

    Bitwise_not

    +

    torch_bitwise_or()

    +

    Bitwise_or

    +

    torch_bitwise_xor()

    +

    Bitwise_xor

    +

    torch_blackman_window()

    +

    Blackman_window

    +

    torch_bmm()

    +

    Bmm

    +

    torch_broadcast_tensors()

    +

    Broadcast_tensors

    +

    torch_can_cast()

    +

    Can_cast

    +

    torch_cartesian_prod()

    +

    Cartesian_prod

    +

    torch_cat()

    +

    Cat

    +

    torch_cdist()

    +

    Cdist

    +

    torch_ceil()

    +

    Ceil

    +

    torch_celu()

    +

    Celu

    +

    torch_celu_()

    +

    Celu_

    +

    torch_chain_matmul()

    +

    Chain_matmul

    +

    torch_cholesky()

    +

    Cholesky

    +

    torch_cholesky_inverse()

    +

    Cholesky_inverse

    +

    torch_cholesky_solve()

    +

    Cholesky_solve

    +

    torch_chunk()

    +

    Chunk

    +

    torch_clamp()

    +

    Clamp

    +

    torch_combinations()

    +

    Combinations

    +

    torch_conj()

    +

    Conj

    +

    torch_conv1d()

    +

    Conv1d

    +

    torch_conv2d()

    +

    Conv2d

    +

    torch_conv3d()

    +

    Conv3d

    +

    torch_conv_tbc()

    +

    Conv_tbc

    +

    torch_conv_transpose1d()

    +

    Conv_transpose1d

    +

    torch_conv_transpose2d()

    +

    Conv_transpose2d

    +

    torch_conv_transpose3d()

    +

    Conv_transpose3d

    +

    torch_cos()

    +

    Cos

    +

    torch_cosh()

    +

    Cosh

    +

    torch_cosine_similarity()

    +

    Cosine_similarity

    +

    torch_cross()

    +

    Cross

    +

    torch_cummax()

    +

    Cummax

    +

    torch_cummin()

    +

    Cummin

    +

    torch_cumprod()

    +

    Cumprod

    +

    torch_cumsum()

    +

    Cumsum

    +

    torch_det()

    +

    Det

    +

    torch_device()

    +

    Create a Device object

    +

    torch_diag()

    +

    Diag

    +

    torch_diag_embed()

    +

    Diag_embed

    +

    torch_diagflat()

    +

    Diagflat

    +

    torch_diagonal()

    +

    Diagonal

    +

    torch_digamma()

    +

    Digamma

    +

    torch_dist()

    +

    Dist

    +

    torch_div()

    +

    Div

    +

    torch_dot()

    +

    Dot

    +

    torch_eig()

    +

    Eig

    +

    torch_einsum()

    +

    Einsum

    +

    torch_empty_strided()

    +

    Empty_strided

    +

    torch_eq()

    +

    Eq

    +

    torch_equal()

    +

    Equal

    +

    torch_erf()

    +

    Erf

    +

    torch_erfc()

    +

    Erfc

    +

    torch_erfinv()

    +

    Erfinv

    +

    torch_exp()

    +

    Exp

    +

    torch_expm1()

    +

    Expm1

    +

    torch_fft()

    +

    Fft

    +

    torch_flatten()

    +

    Flatten

    +

    torch_flip()

    +

    Flip

    +

    torch_floor()

    +

    Floor

    +

    torch_floor_divide()

    +

    Floor_divide

    +

    torch_fmod()

    +

    Fmod

    +

    torch_frac()

    +

    Frac

    +

    torch_gather()

    +

    Gather

    +

    torch_ge()

    +

    Ge

    +

    torch_generator()

    +

    Create a Generator object

    +

    torch_geqrf()

    +

    Geqrf

    +

    torch_ger()

    +

    Ger

    +

    torch_gt()

    +

    Gt

    +

    torch_hamming_window()

    +

    Hamming_window

    +

    torch_hann_window()

    +

    Hann_window

    +

    torch_histc()

    +

    Histc

    +

    torch_ifft()

    +

    Ifft

    +

    torch_imag()

    +

    Imag

    +

    torch_index_select()

    +

    Index_select

    +

    torch_inverse()

    +

    Inverse

    +

    torch_irfft()

    +

    Irfft

    +

    torch_is_complex()

    +

    Is_complex

    +

    torch_is_floating_point()

    +

    Is_floating_point

    +

    torch_is_installed()

    +

    Verifies if torch is installed

    +

    torch_isfinite()

    +

    Isfinite

    +

    torch_isinf()

    +

    Isinf

    +

    torch_isnan()

    +

    Isnan

    +

    torch_kthvalue()

    +

    Kthvalue

    +

    torch_strided() torch_sparse_coo()

    +

    Creates the corresponding layout

    +

    torch_le()

    +

    Le

    +

    torch_lerp()

    +

    Lerp

    +

    torch_lgamma()

    +

    Lgamma

    +

    torch_log()

    +

    Log

    +

    torch_log10()

    +

    Log10

    +

    torch_log1p()

    +

    Log1p

    +

    torch_log2()

    +

    Log2

    +

    torch_logdet()

    +

    Logdet

    +

    torch_logical_and()

    +

    Logical_and

    +

    torch_logical_not

    +

    Logical_not

    +

    torch_logical_or()

    +

    Logical_or

    +

    torch_logical_xor()

    +

    Logical_xor

    +

    torch_logsumexp()

    +

    Logsumexp

    +

    torch_lstsq()

    +

    Lstsq

    +

    torch_lt()

    +

    Lt

    +

    torch_lu()

    +

    LU

    +

    torch_lu_solve()

    +

    Lu_solve

    +

    torch_manual_seed()

    +

    Sets the seed for generating random numbers.

    +

    torch_masked_select()

    +

    Masked_select

    +

    torch_matmul()

    +

    Matmul

    +

    torch_matrix_power()

    +

    Matrix_power

    +

    torch_matrix_rank()

    +

    Matrix_rank

    +

    torch_max

    +

    Max

    +

    torch_mean()

    +

    Mean

    +

    torch_median()

    +

    Median

    +

    torch_contiguous_format() torch_preserve_format() torch_channels_last_format()

    +

    Memory format

    +

    torch_meshgrid()

    +

    Meshgrid

    +

    torch_min

    +

    Min

    +

    torch_mm()

    +

    Mm

    +

    torch_mode()

    +

    Mode

    +

    torch_mul()

    +

    Mul

    +

    torch_multinomial()

    +

    Multinomial

    +

    torch_mv()

    +

    Mv

    +

    torch_mvlgamma()

    +

    Mvlgamma

    +

    torch_narrow()

    +

    Narrow

    +

    torch_ne()

    +

    Ne

    +

    torch_neg()

    +

    Neg

    +

    torch_nonzero()

    +

    Nonzero

    +

    torch_norm()

    +

    Norm

    +

    torch_normal()

    +

    Normal

    +

    torch_orgqr()

    +

    Orgqr

    +

    torch_ormqr()

    +

    Ormqr

    +

    torch_pdist()

    +

    Pdist

    +

    torch_pinverse()

    +

    Pinverse

    +

    torch_pixel_shuffle()

    +

    Pixel_shuffle

    +

    torch_poisson()

    +

    Poisson

    +

    torch_polygamma()

    +

    Polygamma

    +

    torch_pow()

    +

    Pow

    +

    torch_prod()

    +

    Prod

    +

    torch_promote_types()

    +

    Promote_types

    +

    torch_qr()

    +

    Qr

    +

    torch_quantize_per_channel()

    +

    Quantize_per_channel

    +

    torch_quantize_per_tensor()

    +

    Quantize_per_tensor

    +

    torch_range()

    +

    Range

    +

    torch_real()

    +

    Real

    +

    torch_reciprocal()

    +

    Reciprocal

    +

    torch_relu()

    +

    Relu

    +

    torch_relu_()

    +

    Relu_

    +

    torch_remainder()

    +

    Remainder

    +

    torch_renorm()

    +

    Renorm

    +

    torch_repeat_interleave()

    +

    Repeat_interleave

    +

    torch_reshape()

    +

    Reshape

    +

    torch_result_type()

    +

    Result_type

    +

    torch_rfft()

    +

    Rfft

    +

    torch_roll()

    +

    Roll

    +

    torch_rot90()

    +

    Rot90

    +

    torch_round()

    +

    Round

    +

    torch_rrelu_()

    +

    Rrelu_

    +

    torch_rsqrt()

    +

    Rsqrt

    +

    torch_selu()

    +

    Selu

    +

    torch_selu_()

    +

    Selu_

    +

    torch_sigmoid()

    +

    Sigmoid

    +

    torch_sign()

    +

    Sign

    +

    torch_sin()

    +

    Sin

    +

    torch_sinh()

    +

    Sinh

    +

    torch_slogdet()

    +

    Slogdet

    +

    torch_solve()

    +

    Solve

    +

    torch_sort()

    +

    Sort

    +

    torch_sparse_coo_tensor()

    +

    Sparse_coo_tensor

    +

    torch_split()

    +

    Split

    +

    torch_sqrt()

    +

    Sqrt

    +

    torch_square()

    +

    Square

    +

    torch_squeeze()

    +

    Squeeze

    +

    torch_stack()

    +

    Stack

    +

    torch_std()

    +

    Std

    +

    torch_std_mean()

    +

    Std_mean

    +

    torch_stft()

    +

    Stft

    +

    torch_sum()

    +

    Sum

    +

    torch_svd()

    +

    Svd

    +

    torch_symeig()

    +

    Symeig

    +

    torch_t()

    +

    T

    +

    torch_take()

    +

    Take

    +

    torch_tan()

    +

    Tan

    +

    torch_tanh()

    +

    Tanh

    +

    torch_tensor()

    +

    Converts R objects to a torch tensor

    +

    torch_tensordot()

    +

    Tensordot

    +

    torch_threshold_()

    +

    Threshold_

    +

    torch_topk()

    +

    Topk

    +

    torch_trace()

    +

    Trace

    +

    torch_transpose()

    +

    Transpose

    +

    torch_trapz()

    +

    Trapz

    +

    torch_triangular_solve()

    +

    Triangular_solve

    +

    torch_tril()

    +

    Tril

    +

    torch_tril_indices()

    +

    Tril_indices

    +

    torch_triu()

    +

    Triu

    +

    torch_triu_indices()

    +

    Triu_indices

    +

    torch_true_divide()

    +

    TRUE_divide

    +

    torch_trunc()

    +

    Trunc

    +

    torch_unbind()

    +

    Unbind

    +

    torch_unique_consecutive()

    +

    Unique_consecutive

    +

    torch_unsqueeze()

    +

    Unsqueeze

    +

    torch_var()

    +

    Var

    +

    torch_var_mean()

    +

    Var_mean

    +

    torch_where()

    +

    Where

    +

    Neural network modules

    +

    +
    +

    nn_adaptive_avg_pool1d()

    +

    Applies a 1D adaptive average pooling over an input signal composed of several input planes.

    +

    nn_adaptive_avg_pool2d()

    +

    Applies a 2D adaptive average pooling over an input signal composed of several input planes.

    +

    nn_adaptive_avg_pool3d()

    +

    Applies a 3D adaptive average pooling over an input signal composed of several input planes.

    +

    nn_adaptive_log_softmax_with_loss()

    +

    AdaptiveLogSoftmaxWithLoss module

    +

    nn_adaptive_max_pool1d()

    +

    Applies a 1D adaptive max pooling over an input signal composed of several input planes.

    +

    nn_adaptive_max_pool2d()

    +

    Applies a 2D adaptive max pooling over an input signal composed of several input planes.

    +

    nn_adaptive_max_pool3d()

    +

    Applies a 3D adaptive max pooling over an input signal composed of several input planes.

    +

    nn_avg_pool1d()

    +

    Applies a 1D average pooling over an input signal composed of several input planes.

    +

    nn_avg_pool2d()

    +

    Applies a 2D average pooling over an input signal composed of several input planes.

    +

    nn_avg_pool3d()

    +

    Applies a 3D average pooling over an input signal composed of several input planes.

    +

    nn_batch_norm1d()

    +

    BatchNorm1D module

    +

    nn_batch_norm2d()

    +

    BatchNorm2D

    +

    nn_bce_loss()

    +

    Binary cross entropy loss

    +

    nn_bilinear()

    +

    Bilinear module

    +

    nn_celu()

    +

    CELU module

    +

    nn_conv1d()

    +

    Conv1D module

    +

    nn_conv2d()

    +

    Conv2D module

    +

    nn_conv3d()

    +

    Conv3D module

    +

    nn_conv_transpose1d()

    +

    ConvTranspose1D

    +

    nn_conv_transpose2d()

    +

    ConvTranspose2D module

    +

    nn_conv_transpose3d()

    +

    ConvTranspose3D module

    +

    nn_cross_entropy_loss()

    +

    CrossEntropyLoss module

    +

    nn_dropout()

    +

    Dropout module

    +

    nn_dropout2d()

    +

    Dropout2D module

    +

    nn_dropout3d()

    +

    Dropout3D module

    +

    nn_elu()

    +

    ELU module

    +

    nn_embedding()

    +

    Embedding module

    +

    nn_fractional_max_pool2d()

    +

    Applies a 2D fractional max pooling over an input signal composed of several input planes.

    +

    nn_fractional_max_pool3d()

    +

    Applies a 3D fractional max pooling over an input signal composed of several input planes.

    +

    nn_gelu()

    +

    GELU module

    +

    nn_glu()

    +

    GLU module

    +

    nn_hardshrink()

    +

    Hardshrink module

    +

    nn_hardsigmoid()

    +

    Hardsigmoid module

    +

    nn_hardswish()

    +

    Hardswish module

    +

    nn_hardtanh()

    +

    Hardtanh module

    +

    nn_identity()

    +

    Identity module

    +

    nn_init_calculate_gain()

    +

    Calculate gain

    +

    nn_init_constant_()

    +

    Constant initialization

    +

    nn_init_dirac_()

    +

    Dirac initialization

    +

    nn_init_eye_()

    +

    Eye initialization

    +

    nn_init_kaiming_normal_()

    +

    Kaiming normal initialization

    +

    nn_init_kaiming_uniform_()

    +

    Kaiming uniform initialization

    +

    nn_init_normal_()

    +

    Normal initialization

    +

    nn_init_ones_()

    +

    Ones initialization

    +

    nn_init_orthogonal_()

    +

    Orthogonal initialization

    +

    nn_init_sparse_()

    +

    Sparse initialization

    +

    nn_init_trunc_normal_()

    +

    Truncated normal initialization

    +

    nn_init_uniform_()

    +

    Uniform initialization

    +

    nn_init_xavier_normal_()

    +

    Xavier normal initialization

    +

    nn_init_xavier_uniform_()

    +

    Xavier uniform initialization

    +

    nn_init_zeros_()

    +

    Zeros initialization

    +

    nn_leaky_relu()

    +

    LeakyReLU module

    +

    nn_linear()

    +

    Linear module

    +

    nn_log_sigmoid()

    +

    LogSigmoid module

    +

    nn_log_softmax()

    +

    LogSoftmax module

    +

    nn_lp_pool1d()

    +

    Applies a 1D power-average pooling over an input signal composed of several input planes.

    +

    nn_lp_pool2d()

    +

    Applies a 2D power-average pooling over an input signal composed of several input planes.

    +

    nn_max_pool1d()

    +

    MaxPool1D module

    +

    nn_max_pool2d()

    +

    MaxPool2D module

    +

    nn_max_pool3d()

    +

    Applies a 3D max pooling over an input signal composed of several input planes.

    +

    nn_max_unpool1d()

    +

    Computes a partial inverse of MaxPool1d.

    +

    nn_max_unpool2d()

    +

    Computes a partial inverse of MaxPool2d.

    +

    nn_max_unpool3d()

    +

    Computes a partial inverse of MaxPool3d.

    +

    nn_module()

    +

    Base class for all neural network modules.

    +

    nn_module_list()

    +

    Holds submodules in a list.

    +

    nn_multihead_attention()

    +

    MultiHead attention

    +

    nn_prelu()

    +

    PReLU module

    +

    nn_relu()

    +

    ReLU module

    +

    nn_relu6()

    +

    ReLU6 module

    +

    nn_rnn()

    +

    RNN module

    +

    nn_rrelu()

    +

    RReLU module

    +

    nn_selu()

    +

    SELU module

    +

    nn_sequential()

    +

    A sequential container

    +

    nn_sigmoid()

    +

    Sigmoid module

    +

    nn_softmax()

    +

    Softmax module

    +

    nn_softmax2d()

    +

    Softmax2d module

    +

    nn_softmin()

    +

    Softmin

    +

    nn_softplus()

    +

    Softplus module

    +

    nn_softshrink()

    +

    Softshrink module

    +

    nn_softsign()

    +

    Softsign module

    +

    nn_tanh()

    +

    Tanh module

    +

    nn_tanhshrink()

    +

    Tanhshrink module

    +

    nn_threshold()

    +

    Threshold module

    +

    nn_utils_rnn_pack_padded_sequence()

    +

    Packs a Tensor containing padded sequences of variable length.

    +

    nn_utils_rnn_pack_sequence()

    +

    Packs a list of variable length Tensors

    +

    nn_utils_rnn_pad_packed_sequence()

    +

    Pads a packed batch of variable length sequences.

    +

    nn_utils_rnn_pad_sequence()

    +

    Pad a list of variable length Tensors with padding_value

    +

    Neural networks functional module

    +

    +
    +

    nnf_adaptive_avg_pool1d()

    +

    Adaptive_avg_pool1d

    +

    nnf_adaptive_avg_pool2d()

    +

    Adaptive_avg_pool2d

    +

    nnf_adaptive_avg_pool3d()

    +

    Adaptive_avg_pool3d

    +

    nnf_adaptive_max_pool1d()

    +

    Adaptive_max_pool1d

    +

    nnf_adaptive_max_pool2d()

    +

    Adaptive_max_pool2d

    +

    nnf_adaptive_max_pool3d()

    +

    Adaptive_max_pool3d

    +

    nnf_affine_grid()

    +

    Affine_grid

    +

    nnf_alpha_dropout()

    +

    Alpha_dropout

    +

    nnf_avg_pool1d()

    +

    Avg_pool1d

    +

    nnf_avg_pool2d()

    +

    Avg_pool2d

    +

    nnf_avg_pool3d()

    +

    Avg_pool3d

    +

    nnf_batch_norm()

    +

    Batch_norm

    +

    nnf_bilinear()

    +

    Bilinear

    +

    nnf_binary_cross_entropy()

    +

    Binary_cross_entropy

    +

    nnf_binary_cross_entropy_with_logits()

    +

    Binary_cross_entropy_with_logits

    +

    nnf_celu() nnf_celu_()

    +

    Celu

    +

    nnf_conv1d()

    +

    Conv1d

    +

    nnf_conv2d()

    +

    Conv2d

    +

    nnf_conv3d()

    +

    Conv3d

    +

    nnf_conv_tbc()

    +

    Conv_tbc

    +

    nnf_conv_transpose1d()

    +

    Conv_transpose1d

    +

    nnf_conv_transpose2d()

    +

    Conv_transpose2d

    +

    nnf_conv_transpose3d()

    +

    Conv_transpose3d

    +

    nnf_cosine_embedding_loss()

    +

    Cosine_embedding_loss

    +

    nnf_cosine_similarity()

    +

    Cosine_similarity

    +

    nnf_cross_entropy()

    +

    Cross_entropy

    +

    nnf_ctc_loss()

    +

    Ctc_loss

    +

    nnf_dropout()

    +

    Dropout

    +

    nnf_dropout2d()

    +

    Dropout2d

    +

    nnf_dropout3d()

    +

    Dropout3d

    +

    nnf_elu() nnf_elu_()

    +

    Elu

    +

    nnf_embedding()

    +

    Embedding

    +

    nnf_embedding_bag()

    +

    Embedding_bag

    +

    nnf_fold()

    +

    Fold

    +

    nnf_fractional_max_pool2d()

    +

    Fractional_max_pool2d

    +

    nnf_fractional_max_pool3d()

    +

    Fractional_max_pool3d

    +

    nnf_gelu()

    +

    Gelu

    +

    nnf_glu()

    +

    Glu

    +

    nnf_grid_sample()

    +

    Grid_sample

    +

    nnf_group_norm()

    +

    Group_norm

    +

    nnf_gumbel_softmax()

    +

    Gumbel_softmax

    +

    nnf_hardshrink()

    +

    Hardshrink

    +

    nnf_hardsigmoid()

    +

    Hardsigmoid

    +

    nnf_hardswish()

    +

    Hardswish

    +

    nnf_hardtanh() nnf_hardtanh_()

    +

    Hardtanh

    +

    nnf_hinge_embedding_loss()

    +

    Hinge_embedding_loss

    +

    nnf_instance_norm()

    +

    Instance_norm

    +

    nnf_interpolate()

    +

    Interpolate

    +

    nnf_kl_div()

    +

    Kl_div

    +

    nnf_l1_loss()

    +

    L1_loss

    +

    nnf_layer_norm()

    +

    Layer_norm

    +

    nnf_leaky_relu()

    +

    Leaky_relu

    +

    nnf_linear()

    +

    Linear

    +

    nnf_local_response_norm()

    +

    Local_response_norm

    +

    nnf_log_softmax()

    +

    Log_softmax

    +

    nnf_logsigmoid()

    +

    Logsigmoid

    +

    nnf_lp_pool1d()

    +

    Lp_pool1d

    +

    nnf_lp_pool2d()

    +

    Lp_pool2d

    +

    nnf_margin_ranking_loss()

    +

    Margin_ranking_loss

    +

    nnf_max_pool1d()

    +

    Max_pool1d

    +

    nnf_max_pool2d()

    +

    Max_pool2d

    +

    nnf_max_pool3d()

    +

    Max_pool3d

    +

    nnf_max_unpool1d()

    +

    Max_unpool1d

    +

    nnf_max_unpool2d()

    +

    Max_unpool2d

    +

    nnf_max_unpool3d()

    +

    Max_unpool3d

    +

    nnf_mse_loss()

    +

    Mse_loss

    +

    nnf_multi_head_attention_forward()

    +

    Multi head attention forward

    +

    nnf_multi_margin_loss()

    +

    Multi_margin_loss

    +

    nnf_multilabel_margin_loss()

    +

    Multilabel_margin_loss

    +

    nnf_multilabel_soft_margin_loss()

    +

    Multilabel_soft_margin_loss

    +

    nnf_nll_loss()

    +

    Nll_loss

    +

    nnf_normalize()

    +

    Normalize

    +

    nnf_one_hot()

    +

    One_hot

    +

    nnf_pad()

    +

    Pad

    +

    nnf_pairwise_distance()

    +

    Pairwise_distance

    +

    nnf_pdist()

    +

    Pdist

    +

    nnf_pixel_shuffle()

    +

    Pixel_shuffle

    +

    nnf_poisson_nll_loss()

    +

    Poisson_nll_loss

    +

    nnf_prelu()

    +

    Prelu

    +

    nnf_relu() nnf_relu_()

    +

    Relu

    +

    nnf_relu6()

    +

    Relu6

    +

    nnf_rrelu() nnf_rrelu_()

    +

    Rrelu

    +

    nnf_selu() nnf_selu_()

    +

    Selu

    +

    nnf_sigmoid()

    +

    Sigmoid

    +

    nnf_smooth_l1_loss()

    +

    Smooth_l1_loss

    +

    nnf_soft_margin_loss()

    +

    Soft_margin_loss

    +

    nnf_softmax()

    +

    Softmax

    +

    nnf_softmin()

    +

    Softmin

    +

    nnf_softplus()

    +

    Softplus

    +

    nnf_softshrink()

    +

    Softshrink

    +

    nnf_softsign()

    +

    Softsign

    +

    nnf_tanhshrink()

    +

    Tanhshrink

    +

    nnf_threshold() nnf_threshold_()

    +

    Threshold

    +

    nnf_triplet_margin_loss()

    +

    Triplet_margin_loss

    +

    nnf_unfold()

    +

    Unfold

    +

    Optimizers

    +

    +
    +

    optim_adam()

    +

    Implements Adam algorithm.

    +

    optim_required()

    +

    Dummy value indicating a required value.

    +

    optim_sgd()

    +

    SGD optimizer

    +

    Datasets

    +

    +
    +

    dataset()

    +

    An abstract class representing a Dataset.

    +

    dataloader()

    +

    Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset.

    +

    dataloader_make_iter()

    +

    Creates an iterator from a DataLoader

    +

    dataloader_next()

    +

    Get the next element of a dataloader iterator

    +

    enumerate()

    +

    Enumerate an iterator

    +

    enumerate(<dataloader>)

    +

    Enumerate an iterator

    +

    tensor_dataset()

    +

    Dataset wrapping tensors.

    +

    is_dataloader()

    +

    Checks if the object is a dataloader

    +

    Autograd

    +

    +
    +

    autograd_backward()

    +

    Computes the sum of gradients of given tensors w.r.t. graph leaves.

    +

    autograd_function()

    +

    Records operation history and defines formulas for differentiating ops.

    +

    autograd_grad()

    +

    Computes and returns the sum of gradients of outputs w.r.t. the inputs.

    +

    autograd_set_grad_mode()

    +

    Set grad mode

    +

    with_no_grad()

    +

    Temporarily modify gradient recording.

    +

    with_enable_grad()

    +

    Enable grad

    +

    AutogradContext

    +

    Class representing the context.

    +

    Cuda utilities

    +

    +
    +

    cuda_current_device()

    +

    Returns the index of a currently selected device.

    +

    cuda_device_count()

    +

    Returns the number of GPUs available.

    +

    cuda_is_available()

    +

    Returns a bool indicating if CUDA is currently available.

    +

    Installation

    +

    +
    +

    install_torch()

    +

    Install Torch

    +
    + + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/install_torch.html b/static/docs/dev/reference/install_torch.html new file mode 100644 index 0000000000000000000000000000000000000000..330b26ab13b20dca4c158ab82b2b9aa1a677c556 --- /dev/null +++ b/static/docs/dev/reference/install_torch.html @@ -0,0 +1,266 @@ + + + + + + + + +Install Torch — install_torch • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Installs Torch and its dependencies.

    +
    + +
    install_torch(
    +  version = "1.5.0",
    +  type = install_type(version = version),
    +  reinstall = FALSE,
    +  path = install_path(),
    +  ...
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    version

    The Torch version to install.

    type

    The installation type for Torch. Valid values are "cpu" or the CUDA version.

    reinstall

    Re-install Torch even if it's already installed?

    path

    Optional path to install or check for an already existing installation.

    ...

    Other optional arguments (like load for manual installation).

    + +

    Details

    + +

    When using path to install in a specific location, make sure the TORCH_HOME environment variable is set to this same path to reuse this installation. The TORCH_INSTALL environment variable can be set to 0 to prevent auto-installing torch, and TORCH_LOAD can be set to 0 to avoid loading dependencies automatically. These environment variables are meant for advanced use cases and troubleshooting only.

    + +
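    The environment-variable behaviour described in the details above can be sketched as a shell snippet. This is a hypothetical setup, not taken from the package itself: the install directory is a placeholder, and only the three variable names (TORCH_HOME, TORCH_INSTALL, TORCH_LOAD) come from this documentation.

```shell
# Hypothetical setup for reusing a custom torch installation.
# The directory below is a placeholder; substitute your own install location.
export TORCH_HOME="$HOME/.torch"   # point torch at this (existing) installation
export TORCH_INSTALL=0             # prevent auto-installing torch
export TORCH_LOAD=0                # avoid loading dependencies automatically
echo "TORCH_HOME=$TORCH_HOME TORCH_INSTALL=$TORCH_INSTALL TORCH_LOAD=$TORCH_LOAD"
```

    With these set, an R session that calls install_torch(path = Sys.getenv("TORCH_HOME")) would install into (or detect) the same directory, which is the reuse scenario the details paragraph describes.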
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_dataloader.html b/static/docs/dev/reference/is_dataloader.html new file mode 100644 index 0000000000000000000000000000000000000000..5aca7b8c210f20352d705e46a27bed0389d697b8 --- /dev/null +++ b/static/docs/dev/reference/is_dataloader.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if the object is a dataloader — is_dataloader • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if the object is a dataloader

    +
    + +
    is_dataloader(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_torch_device.html b/static/docs/dev/reference/is_torch_device.html new file mode 100644 index 0000000000000000000000000000000000000000..8057fab116132805737f555252474fd5821ce05e --- /dev/null +++ b/static/docs/dev/reference/is_torch_device.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if object is a device — is_torch_device • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if object is a device

    +
    + +
    is_torch_device(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_torch_dtype.html b/static/docs/dev/reference/is_torch_dtype.html new file mode 100644 index 0000000000000000000000000000000000000000..dc77421b3a4681b644c206c178499c8c4746d738 --- /dev/null +++ b/static/docs/dev/reference/is_torch_dtype.html @@ -0,0 +1,237 @@ + + + + + + + + +Check if object is a torch data type — is_torch_dtype • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Check if object is a torch data type

    +
    + +
    is_torch_dtype(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check.

    + + +
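For example (a sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
is_torch_dtype(torch_float32()) # TRUE
is_torch_dtype("float32")       # FALSE: a string, not a dtype object
}
```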
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_torch_layout.html b/static/docs/dev/reference/is_torch_layout.html new file mode 100644 index 0000000000000000000000000000000000000000..853afaacc906d08f2284df57e7b804166d0cd9aa --- /dev/null +++ b/static/docs/dev/reference/is_torch_layout.html @@ -0,0 +1,237 @@ + + + + + + + + +Check if an object is a torch layout. — is_torch_layout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Check if an object is a torch layout.

    +
    + +
    is_torch_layout(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_torch_memory_format.html b/static/docs/dev/reference/is_torch_memory_format.html new file mode 100644 index 0000000000000000000000000000000000000000..fe7f264e8edaf435539e22f4c1cd83ffa8e384c9 --- /dev/null +++ b/static/docs/dev/reference/is_torch_memory_format.html @@ -0,0 +1,237 @@ + + + + + + + + +Check if an object is a memory format — is_torch_memory_format • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Check if an object is a memory format

    +
    + +
    is_torch_memory_format(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_torch_qscheme.html b/static/docs/dev/reference/is_torch_qscheme.html new file mode 100644 index 0000000000000000000000000000000000000000..02062fd8d7073cdac159cef2e7c1f7d79b830fce --- /dev/null +++ b/static/docs/dev/reference/is_torch_qscheme.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if an object is a QScheme — is_torch_qscheme • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if an object is a QScheme

    +
    + +
    is_torch_qscheme(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/is_undefined_tensor.html b/static/docs/dev/reference/is_undefined_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..7ca1794573d1626c3dd9aacf5196a40757274fac --- /dev/null +++ b/static/docs/dev/reference/is_undefined_tensor.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if a tensor is undefined — is_undefined_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if a tensor is undefined

    +
    + +
    is_undefined_tensor(x)
    + +

    Arguments

    + + + + + + +
    x

    tensor to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/load_state_dict.html b/static/docs/dev/reference/load_state_dict.html new file mode 100644 index 0000000000000000000000000000000000000000..9dce4a9d005ca5038c192530e7ed4798ab47708b --- /dev/null +++ b/static/docs/dev/reference/load_state_dict.html @@ -0,0 +1,250 @@ + + + + + + + + +Load a state dict file — load_state_dict • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

This function should only be used to load models saved in Python. +For it to work correctly, you need to use torch.save with the flag +_use_new_zipfile_serialization=True and also remove all nn.Parameter +classes from the tensors in the dict.

    +
    + +
    load_state_dict(path)
    + +

    Arguments

    + + + + + + +
    path

path to the state dict file

    + +

    Value

    + +

    a named list of tensors.

    +

    Details

    + +

The above might change as this functionality develops +in PyTorch's C++ API.
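A hedged sketch of the round trip (the file name is illustrative; the Python side is shown as comments only):

```r
# Python side (comments; detach tensors so no nn.Parameter remains):
#   sd = {k: v.detach().cpu() for k, v in model.state_dict().items()}
#   torch.save(sd, "model.pt", _use_new_zipfile_serialization=True)

# R side: read the tensors back as a named list
state_dict <- load_state_dict("model.pt")
names(state_dict) # parameter names from the Python model
```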

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_avg_pool1d.html b/static/docs/dev/reference/nn_adaptive_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..55445a2f98e7bf1e6c0d748e490b64417d088f3b --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_avg_pool1d.html @@ -0,0 +1,248 @@ + + + + + + + + +Applies a 1D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output size is H, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_avg_pool1d(output_size)
    + +

    Arguments

    + + + + + + +
    output_size

    the target output size H

    + + +

    Examples

    +
if (torch_is_installed()) { +# target output size of 5 +m <- nn_adaptive_avg_pool1d(5) +input <- torch_randn(1, 64, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_avg_pool2d.html b/static/docs/dev/reference/nn_adaptive_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..6284101909d76bc4dfab42bdd45def96d5638ac3 --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_avg_pool2d.html @@ -0,0 +1,255 @@ + + + + + + + + +Applies a 2D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_avg_pool2d(output_size)
    + +

    Arguments

    + + + + + + +
    output_size

the target output size of the image of the form H x W. +Can be a tuple (H, W) or a single H for a square image H x H. +H and W can each be an int, or NULL, which means the size will +be the same as that of the input.

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7 +m <- nn_adaptive_avg_pool2d(c(5,7)) +input <- torch_randn(1, 64, 8, 9) +output <- m(input) +# target output size of 7x7 (square) +m <- nn_adaptive_avg_pool2d(7) +input <- torch_randn(1, 64, 10, 9) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_avg_pool3d.html b/static/docs/dev/reference/nn_adaptive_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..433f892f63337fd514400316ab3339047cd419e2 --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_avg_pool3d.html @@ -0,0 +1,255 @@ + + + + + + + + +Applies a 3D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size D x H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_avg_pool3d(output_size)
    + +

    Arguments

    + + + + + + +
    output_size

the target output size of the form D x H x W. +Can be a tuple (D, H, W) or a single number D for a cube D x D x D. +D, H and W can each be an int, or NULL, which means the size will +be the same as that of the input.

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7x9 +m <- nn_adaptive_avg_pool3d(c(5,7,9)) +input <- torch_randn(1, 64, 8, 9, 10) +output <- m(input) +# target output size of 7x7x7 (cube) +m <- nn_adaptive_avg_pool3d(7) +input <- torch_randn(1, 64, 10, 9, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_log_softmax_with_loss.html b/static/docs/dev/reference/nn_adaptive_log_softmax_with_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..546ea9b8c1575fee3cd210ad33e9dc3bc24bbfc2 --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_log_softmax_with_loss.html @@ -0,0 +1,335 @@ + + + + + + + + +AdaptiveLogSoftmaxWithLoss module — nn_adaptive_log_softmax_with_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + + + +
    nn_adaptive_log_softmax_with_loss(
    +  in_features,
    +  n_classes,
    +  cutoffs,
    +  div_value = 4,
    +  head_bias = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    in_features

    (int): Number of features in the input tensor

    n_classes

    (int): Number of classes in the dataset

    cutoffs

    (Sequence): Cutoffs used to assign targets to their buckets

    div_value

    (float, optional): value used as an exponent to compute sizes +of the clusters. Default: 4.0

    head_bias

(bool, optional): If TRUE, adds a bias term to the 'head' of the +adaptive softmax. Default: FALSE

    + +

    Value

    + +

    NamedTuple with output and loss fields:

      +
    • output is a Tensor of size N containing computed target +log probabilities for each example

    • +
    • loss is a Scalar representing the computed negative +log likelihood loss

    • +
    + +

    Details

    + +

Adaptive softmax is an approximate strategy for training models with large +output spaces. It is most effective when the label distribution is highly +imbalanced, for example in natural language modelling, where the word +frequency distribution approximately follows Zipf's law.

    +

Adaptive softmax partitions the labels into several clusters, according to +their frequency. These clusters may each contain a different number of +targets.

    +

    Additionally, clusters containing less frequent labels assign lower +dimensional embeddings to those labels, which speeds up the computation. +For each minibatch, only clusters for which at least one target is +present are evaluated.

    +

    The idea is that the clusters which are accessed frequently +(like the first one, containing most frequent labels), should also be cheap +to compute -- that is, contain a small number of assigned labels. +We highly recommend taking a look at the original paper for more details.

      +
• cutoffs should be an ordered sequence of integers sorted +in increasing order. +It controls the number of clusters and the partitioning of targets into +clusters. For example, setting cutoffs = c(10, 100, 1000) +means that the first 10 targets will be assigned +to the 'head' of the adaptive softmax, targets 11, 12, ..., 100 will be +assigned to the first cluster, and targets 101, 102, ..., 1000 will be +assigned to the second cluster, while targets +1001, 1002, ..., n_classes - 1 will be assigned +to the last, third cluster.

    • +
    • div_value is used to compute the size of each additional cluster, +which is given as +\(\left\lfloor\frac{\mbox{in\_features}}{\mbox{div\_value}^{idx}}\right\rfloor\), +where \(idx\) is the cluster index (with clusters +for less frequent words having larger indices, +and indices starting from \(1\)).

    • +
• head_bias if set to TRUE, adds a bias term to the 'head' of the +adaptive softmax. See the paper for details. Set to FALSE in the official +implementation.

    • +
    + +

    Note

    + +

    This module returns a NamedTuple with output +and loss fields. See further documentation for details.

    +

    To compute log-probabilities for all classes, the log_prob +method can be used.

    +

    Warning

    + + + +

    Labels passed as inputs to this module should be sorted according to +their frequency. This means that the most frequent label should be +represented by the index 0, and the least frequent +label should be represented by the index n_classes - 1.

    +

    Shape

    + + + +
      +
    • input: \((N, \mbox{in\_features})\)

    • +
    • target: \((N)\) where each value satisfies \(0 <= \mbox{target[i]} <= \mbox{n\_classes}\)

    • +
    • output1: \((N)\)

    • +
    • output2: Scalar

    • +
    + + +
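The shapes above can be exercised with a short sketch (hedged: the sizes and cutoffs are illustrative, and targets are assumed 1-based as usual for R torch):

```r
if (torch_is_installed()) {
m <- nn_adaptive_log_softmax_with_loss(
  in_features = 16, n_classes = 100, cutoffs = c(10, 50)
)
input <- torch_randn(8, 16)  # (N, in_features)
target <- torch_randint(1, 100, size = 8, dtype = torch_long())  # (N)
out <- m(input, target)
out$output # (N): per-example target log-probabilities
out$loss   # scalar: negative log likelihood loss
}
```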
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_max_pool1d.html b/static/docs/dev/reference/nn_adaptive_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..4bc5e5a01a677908db020e7b7a500acafd5da3c6 --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_max_pool1d.html @@ -0,0 +1,253 @@ + + + + + + + + +Applies a 1D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output size is H, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_max_pool1d(output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    output_size

    the target output size H

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool1d(). Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5 +m <- nn_adaptive_max_pool1d(5) +input <- torch_randn(1, 64, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_max_pool2d.html b/static/docs/dev/reference/nn_adaptive_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..84424bc3afa160e44787e8e76551f92d762ebf53 --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_max_pool2d.html @@ -0,0 +1,260 @@ + + + + + + + + +Applies a 2D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_max_pool2d(output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    output_size

the target output size of the image of the form H x W. +Can be a tuple (H, W) or a single H for a square image H x H. +H and W can each be an int, or NULL, which means the size will +be the same as that of the input.

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool2d(). Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7 +m <- nn_adaptive_max_pool2d(c(5,7)) +input <- torch_randn(1, 64, 8, 9) +output <- m(input) +# target output size of 7x7 (square) +m <- nn_adaptive_max_pool2d(7) +input <- torch_randn(1, 64, 10, 9) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_adaptive_max_pool3d.html b/static/docs/dev/reference/nn_adaptive_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..7e3786e6a4f8e17c6ef069e008dfd30a3235fb04 --- /dev/null +++ b/static/docs/dev/reference/nn_adaptive_max_pool3d.html @@ -0,0 +1,260 @@ + + + + + + + + +Applies a 3D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size D x H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_max_pool3d(output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    output_size

the target output size of the image of the form D x H x W. +Can be a tuple (D, H, W) or a single D for a cube D x D x D. +D, H and W can each be an int, or NULL, which means the size will +be the same as that of the input.

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool3d(). Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7x9 +m <- nn_adaptive_max_pool3d(c(5,7,9)) +input <- torch_randn(1, 64, 8, 9, 10) +output <- m(input) +# target output size of 7x7x7 (cube) +m <- nn_adaptive_max_pool3d(7) +input <- torch_randn(1, 64, 10, 9, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_avg_pool1d.html b/static/docs/dev/reference/nn_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..e2cbc95e193aaeebcceb2327d3ea238f90b144a5 --- /dev/null +++ b/static/docs/dev/reference/nn_avg_pool1d.html @@ -0,0 +1,305 @@ + + + + + + + + +Applies a 1D average pooling over an input signal composed of several +input planes. — nn_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, L)\), +output \((N, C, L_{out})\) and kernel_size \(k\) +can be precisely described as:

    +

    $$ + \mbox{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1} +\mbox{input}(N_i, C_j, \mbox{stride} \times l + m) +$$

    +
    + +
    nn_avg_pool1d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation

    + +

    Details

    + +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points.

    +

    The parameters kernel_size, stride, padding can each be +an int or a one-element tuple.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, L_{in})\)

    • +
    • Output: \((N, C, L_{out})\), where

    • +
    + +

    $$ + L_{out} = \left\lfloor \frac{L_{in} + + 2 \times \mbox{padding} - \mbox{kernel\_size}}{\mbox{stride}} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +# pool with window of size=3, stride=2 +m <- nn_avg_pool1d(3, stride=2) +m(torch_randn(1, 1, 8)) + +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.8779 -0.8918 -0.1521 +#> [ CPUFloatType{1,1,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_avg_pool2d.html b/static/docs/dev/reference/nn_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..23f1fa92c1f4f7aca83a661763e07b18713ecccc --- /dev/null +++ b/static/docs/dev/reference/nn_avg_pool2d.html @@ -0,0 +1,318 @@ + + + + + + + + +Applies a 2D average pooling over an input signal composed of several input +planes. — nn_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, H, W)\), +output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) +can be precisely described as:

    +

    $$ + out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} +input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n) +$$

    +
    + +
    nn_avg_pool2d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation

    divisor_override

    if specified, it will be used as divisor, otherwise kernel_size will be used

    + +

    Details

    + +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points.

    +

    The parameters kernel_size, stride, padding can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - + \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - + \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +# pool of square window of size=3, stride=2 +m <- nn_avg_pool2d(3, stride=2) +# pool of non-square window +m <- nn_avg_pool2d(c(3, 2), stride=c(2, 1)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_avg_pool3d.html b/static/docs/dev/reference/nn_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..6353f87d10100b5ea8d7eef8ac2c000d64f3d816 --- /dev/null +++ b/static/docs/dev/reference/nn_avg_pool3d.html @@ -0,0 +1,326 @@ + + + + + + + + +Applies a 3D average pooling over an input signal composed of several input +planes. — nn_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), +output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) +can be precisely described as:

    +

    $$ +\begin{array}{ll} +\mbox{out}(N_i, C_j, d, h, w) = & \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \\ +& \frac{\mbox{input}(N_i, C_j, \mbox{stride}[0] \times d + k, \mbox{stride}[1] \times h + m, \mbox{stride}[2] \times w + n)}{kD \times kH \times kW} +\end{array} +$$

    +
    + +
    nn_avg_pool3d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on all three sides

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation

    divisor_override

    if specified, it will be used as divisor, otherwise kernel_size will be used

    + +

    Details

    + +

    If padding is non-zero, then the input is implicitly zero-padded on all three sides +for padding number of points.

    +

    The parameters kernel_size, stride can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where

    • +
    + +

    $$ + D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - + \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - + \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - + \mbox{kernel\_size}[2]}{\mbox{stride}[2]} + 1\right\rfloor +$$

    + +

    Examples

    +
if (torch_is_installed()) { + +# pool of square window of size=3, stride=2 +m <- nn_avg_pool3d(3, stride=2) +# pool of non-square window +m <- nn_avg_pool3d(c(3, 2, 2), stride=c(2, 1, 2)) +input <- torch_randn(20, 16, 50, 44, 31) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_batch_norm1d.html b/static/docs/dev/reference/nn_batch_norm1d.html new file mode 100644 index 0000000000000000000000000000000000000000..27bf89c1f50de26a4a0e3ded327d827eccfe5e8d --- /dev/null +++ b/static/docs/dev/reference/nn_batch_norm1d.html @@ -0,0 +1,320 @@ + + + + + + + + +BatchNorm1D module — nn_batch_norm1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D +inputs with optional additional channel dimension) as described in the paper +Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

    +
    + +
    nn_batch_norm1d(
    +  num_features,
    +  eps = 1e-05,
    +  momentum = 0.1,
    +  affine = TRUE,
    +  track_running_stats = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    num_features

    \(C\) from an expected input of size +\((N, C, L)\) or \(L\) from input of size \((N, L)\)

    eps

    a value added to the denominator for numerical stability. +Default: 1e-5

    momentum

    the value used for the running_mean and running_var +computation. Can be set to NULL for cumulative moving average +(i.e. simple average). Default: 0.1

    affine

    a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

    track_running_stats

    a boolean value that when set to TRUE, this +module tracks the running mean and variance, and when set to FALSE, +this module does not track such statistics and always uses batch +statistics in both training and eval modes. Default: TRUE

    + +

    Details

    + +

    $$ +y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta +$$

    +

    The mean and standard-deviation are calculated per-dimension over +the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors +of size C (where C is the input size). By default, the elements of \(\gamma\) +are set to 1 and the elements of \(\beta\) are set to 0.

    +

Also by default, during training this layer keeps running estimates of its +computed mean and variance, which are then used for normalization during +evaluation. The running estimates are kept with a default momentum +of 0.1. +If track_running_stats is set to FALSE, this layer then does not +keep running estimates, and batch statistics are instead used during +evaluation time as well.

    +

    Note

    + + + + +

    This momentum argument is different from one used in optimizer +classes and the conventional notion of momentum. Mathematically, the +update rule for running statistics here is +\(\hat{x}_{\mbox{new}} = (1 - \mbox{momentum}) \times \hat{x} + \mbox{momentum} \times x_t\), +where \(\hat{x}\) is the estimated statistic and \(x_t\) is the +new observed value.

    +

Because Batch Normalization is done over the C dimension, computing statistics +on (N, L) slices, it is common terminology to call this Temporal Batch Normalization.

    +

    Shape

    + + + +
      +
    • Input: \((N, C)\) or \((N, C, L)\)

    • +
    • Output: \((N, C)\) or \((N, C, L)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With Learnable Parameters +m <- nn_batch_norm1d(100) +# Without Learnable Parameters +m <- nn_batch_norm1d(100, affine = FALSE) +input <- torch_randn(20, 100) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_batch_norm2d.html b/static/docs/dev/reference/nn_batch_norm2d.html new file mode 100644 index 0000000000000000000000000000000000000000..1d8d25422bd2618d622e65d7e9923487bab385da --- /dev/null +++ b/static/docs/dev/reference/nn_batch_norm2d.html @@ -0,0 +1,319 @@ + + + + + + + + +BatchNorm2D — nn_batch_norm2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs +additional channel dimension) as described in the paper +Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

    +
    + +
    nn_batch_norm2d(
    +  num_features,
    +  eps = 1e-05,
    +  momentum = 0.1,
    +  affine = TRUE,
    +  track_running_stats = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    num_features

    \(C\) from an expected input of size +\((N, C, H, W)\)

    eps

    a value added to the denominator for numerical stability. +Default: 1e-5

    momentum

    the value used for the running_mean and running_var +computation. Can be set to NULL for cumulative moving average +(i.e. simple average). Default: 0.1

    affine

    a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

    track_running_stats

    a boolean value that when set to TRUE, this +module tracks the running mean and variance, and when set to FALSE, +this module does not track such statistics and uses batch statistics instead +in both training and eval modes if the running mean and variance are None. +Default: TRUE

    + +

    Details

    + +

    $$ + y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta +$$

    +

    The mean and standard-deviation are calculated per-dimension over +the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors +of size C (where C is the input size). By default, the elements of \(\gamma\) are set +to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated +via the biased estimator, equivalent to torch_var(input, unbiased=FALSE). +Also by default, during training this layer keeps running estimates of its +computed mean and variance, which are then used for normalization during +evaluation. The running estimates are kept with a default momentum +of 0.1.

    +

    If track_running_stats is set to FALSE, this layer then does not +keep running estimates, and batch statistics are instead used during +evaluation time as well.

    +

    Note

    + +

    This momentum argument is different from one used in optimizer +classes and the conventional notion of momentum. Mathematically, the +update rule for running statistics here is +\(\hat{x}_{\mbox{new}} = (1 - \mbox{momentum}) \times \hat{x} + \mbox{momentum} \times x_t\), +where \(\hat{x}\) is the estimated statistic and \(x_t\) is the +new observed value. +Because the Batch Normalization is done over the C dimension, computing statistics +on (N, H, W) slices, it's common terminology to call this Spatial Batch Normalization.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H, W)\)

    • +
    • Output: \((N, C, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With Learnable Parameters +m <- nn_batch_norm2d(100) +# Without Learnable Parameters +m <- nn_batch_norm2d(100, affine=FALSE) +input <- torch_randn(20, 100, 35, 45) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_bce_loss.html b/static/docs/dev/reference/nn_bce_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..6823dd37e35274a5b283ec52c120ecd6c319a2f2 --- /dev/null +++ b/static/docs/dev/reference/nn_bce_loss.html @@ -0,0 +1,304 @@ + + + + + + + + +Binary cross entropy loss — nn_bce_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the Binary Cross Entropy +between the target and the output:

    +
    + +
    nn_bce_loss(weight = NULL, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + +
    weight

    (Tensor, optional): a manual rescaling weight given to the loss +of each batch element. If given, has to be a Tensor of size nbatch.

    reduction

    (string, optional): Specifies the reduction to apply to the output: +'none' | 'mean' | 'sum'. 'none': no reduction will be applied, +'mean': the sum of the output will be divided by the number of +elements in the output, 'sum': the output will be summed. Default: 'mean'

    + +

    Details

    + +

    The unreduced (i.e. with reduction set to 'none') loss can be described as: +$$ + \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad +l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right] +$$ +where \(N\) is the batch size. If reduction is not 'none' +(default 'mean'), then

    +

    $$ + \ell(x, y) = \left\{ \begin{array}{ll} +\mbox{mean}(L), & \mbox{if reduction} = \mbox{'mean';}\\ +\mbox{sum}(L), & \mbox{if reduction} = \mbox{'sum'.} +\end{array} +\right. +$$

    +

    This is used for measuring the reconstruction error in, for example, +an auto-encoder. Note that the targets \(y\) should be numbers +between 0 and 1.

    +

    Notice that if \(x_n\) is either 0 or 1, one of the log terms would be +mathematically undefined in the above loss equation. PyTorch chooses to set +\(\log (0) = -\infty\), since \(\lim_{x\to 0} \log (x) = -\infty\).

    +

    However, an infinite term in the loss equation is not desirable for several reasons. +For one, if either \(y_n = 0\) or \((1 - y_n) = 0\), then we would be +multiplying 0 with infinity. Secondly, if we have an infinite loss value, then +we would also have an infinite term in our gradient, since +\(\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty\).

    +

    This would make BCELoss's backward method nonlinear with respect to \(x_n\), +and using it for things like linear regression would not be straightforward. +Our solution is that BCELoss clamps its log function outputs to be greater than +or equal to -100. This way, we can always have a finite loss value and a linear +backward method.
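Both the unreduced loss and the clamping can be seen directly. A small sketch (not part of the original documentation):

```r
library(torch)

loss <- nn_bce_loss(reduction = "none")
x <- torch_tensor(c(0.2, 0.9))
y <- torch_tensor(c(0, 1))

# l_n = -[y_n * log(x_n) + (1 - y_n) * log(1 - x_n)]
manual <- -(y * torch_log(x) + (1 - y) * torch_log(1 - x))
print(loss(x, y))
print(manual)

# With x = 0 and y = 1 the log term is clamped at -100, so the
# loss is a finite 100 rather than infinite:
print(loss(torch_tensor(0), torch_tensor(1)))
```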

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where \(*\) means any number of additional +dimensions

    • +
    • Target: \((N, *)\), same shape as the input

    • +
    • Output: scalar. If reduction is 'none', then \((N, *)\), same +shape as input.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_sigmoid() +loss <- nn_bce_loss() +input <- torch_randn(3, requires_grad=TRUE) +target <- torch_rand(3) +output <- loss(m(input), target) +output$backward() + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_bilinear.html b/static/docs/dev/reference/nn_bilinear.html new file mode 100644 index 0000000000000000000000000000000000000000..1b22a1d64da9377eb25701d344f595df8df6baf6 --- /dev/null +++ b/static/docs/dev/reference/nn_bilinear.html @@ -0,0 +1,290 @@ + + + + + + + + +Bilinear module — nn_bilinear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a bilinear transformation to the incoming data +\(y = x_1^T A x_2 + b\)

    +
    + +
    nn_bilinear(in1_features, in2_features, out_features, bias = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    in1_features

    size of each first input sample

    in2_features

    size of each second input sample

    out_features

    size of each output sample

    bias

    If set to FALSE, the layer will not learn an additive bias. +Default: TRUE

    + +

    Shape

    + + + +
      +
    • Input1: \((N, *, H_{in1})\) where \(H_{in1}=\mbox{in1\_features}\) and +\(*\) means any number of additional dimensions. All but the last +dimension of the inputs should be the same.

    • +
    • Input2: \((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\).

    • +
    • Output: \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) +and all but the last dimension are the same shape as the input.

    • +
    + +
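The transformation can be reproduced by hand from the weight and bias, which also makes the shapes above concrete. A sketch (assuming the module exposes its parameters as `m$weight` and `m$bias`, as elsewhere in this package; not part of the original documentation):

```r
library(torch)

m <- nn_bilinear(4, 5, 2)
x1 <- torch_randn(3, 4)
x2 <- torch_randn(3, 5)

# y_{n,o} = sum_{i,j} x1_{n,i} * A_{o,i,j} * x2_{n,j} + b_o
manual <- torch_einsum("ni,oij,nj->no", list(x1, m$weight, x2)) + m$bias
print(torch_allclose(m(x1, x2), manual, atol = 1e-6))
```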

    Attributes

    + + + +
      +
    • weight: the learnable weights of the module of shape +\((\mbox{out\_features}, \mbox{in1\_features}, \mbox{in2\_features})\). +The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where +\(k = \frac{1}{\mbox{in1\_features}}\)

    • +
    • bias: the learnable bias of the module of shape \((\mbox{out\_features})\). +If bias is TRUE, the values are initialized from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where +\(k = \frac{1}{\mbox{in1\_features}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_bilinear(20, 30, 50) +input1 <- torch_randn(128, 20) +input2 <- torch_randn(128, 30) +output <- m(input1, input2) +print(output$size()) + +} +
    #> [1] 128 50
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_celu.html b/static/docs/dev/reference/nn_celu.html new file mode 100644 index 0000000000000000000000000000000000000000..dedaed87bb6c39d6f323f815ec81d06c301d848e --- /dev/null +++ b/static/docs/dev/reference/nn_celu.html @@ -0,0 +1,266 @@ + + + + + + + + +CELU module — nn_celu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_celu(alpha = 1, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    alpha

    the \(\alpha\) value for the CELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)) +$$

    +

    More details can be found in the paper +Continuously Differentiable Exponential Linear Units.
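The formula can be checked element-wise against the module. A sketch with the default \(\alpha = 1\), so that \(\alpha (\exp(x/\alpha) - 1)\) reduces to `expm1(x)` (not part of the original documentation):

```r
library(torch)

m <- nn_celu(alpha = 1)
x <- torch_randn(5)

# CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
manual <- torch_clamp(x, min = 0) + torch_clamp(torch_expm1(x), max = 0)
print(torch_allclose(m(x), manual, atol = 1e-6))
```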

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where \(*\) means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_celu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_conv1d.html b/static/docs/dev/reference/nn_conv1d.html new file mode 100644 index 0000000000000000000000000000000000000000..9f55c5bc63dc37cfaf4d121bdf234e60d1033491 --- /dev/null +++ b/static/docs/dev/reference/nn_conv1d.html @@ -0,0 +1,377 @@ + + + + + + + + +Conv1D module — nn_conv1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D convolution over an input signal composed of several input +planes. +In the simplest case, the output value of the layer with input size +\((N, C_{\mbox{in}}, L)\) and output \((N, C_{\mbox{out}}, L_{\mbox{out}})\) can be +precisely described as:

    +
    + +
    nn_conv1d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1,
    +  bias = TRUE,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): Zero-padding added to both sides of +the input. Default: 0

    dilation

    (int or tuple, optional): Spacing between kernel +elements. Default: 1

    groups

    (int, optional): Number of blocked connections from input +channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the +output. Default: TRUE

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    $$ +\mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + + \sum_{k = 0}^{C_{in} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) +\star \mbox{input}(N_i, k) +$$

    +

    where \(\star\) is the valid +cross-correlation operator, +\(N\) is the batch size, \(C\) denotes the number of channels, and +\(L\) is the length of the signal sequence.

      +
    • stride controls the stride for the cross-correlation, a single +number or a one-element tuple.

    • +
    • padding controls the amount of implicit zero-paddings on both sides +for padding number of points.

    • +
    • dilation controls the spacing between the kernel points; also +known as the à trous algorithm. It is harder to describe, but this +link +has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters, +of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

      • +
    • +
    + +

    Note

    + + + + +

    Depending on the size of your kernel, several (of the last) +columns of the input might be lost, because it is a valid +cross-correlation, and not a full cross-correlation. +It is up to the user to add proper padding.

    +

    When groups == in_channels and out_channels == K * in_channels, +where K is a positive integer, this operation is also termed in +literature as depthwise convolution. +In other words, for an input of size \((N, C_{in}, L_{in})\), +a depthwise convolution with a depthwise multiplier K, can be constructed by arguments +\((C_{\mbox{in}}=C_{in}, C_{\mbox{out}}=C_{in} \times K, ..., \mbox{groups}=C_{in})\).
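Following the note, a depthwise convolution with multiplier K = 2 can be sketched as (not part of the original documentation):

```r
library(torch)

# groups == in_channels and out_channels == K * in_channels (K = 2):
# each of the 16 input channels is convolved with its own 2 filters
m <- nn_conv1d(16, 32, kernel_size = 3, groups = 16)
input <- torch_randn(20, 16, 50)
output <- m(input)
print(output$shape)  # 20 x 32 x 48: the length shrinks by kernel_size - 1
```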

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, L_{in})\)

    • +
    • Output: \((N, C_{out}, L_{out})\) where

    • +
    + +

    $$ + L_{out} = \left\lfloor\frac{L_{in} + 2 \times \mbox{padding} - \mbox{dilation} + \times (\mbox{kernel\_size} - 1) - 1}{\mbox{stride}} + 1\right\rfloor +$$
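As a worked check of this formula, using the settings from the Examples section (kernel_size = 3, stride = 2, padding = 0, dilation = 1, L_in = 50):

```r
l_in <- 50; kernel_size <- 3; stride <- 2; padding <- 0; dilation <- 1
l_out <- floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1
print(l_out)  # 24
```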

    +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}, \mbox{kernel\_size})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \mbox{kernel\_size}}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape +(out_channels). If bias is TRUE, then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \mbox{kernel\_size}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_conv1d(16, 33, 3, stride = 2) +input <- torch_randn(20, 16, 50) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_conv2d.html b/static/docs/dev/reference/nn_conv2d.html new file mode 100644 index 0000000000000000000000000000000000000000..7d3d08ffd7c1966a17e91a6f9d862134b6d035ac --- /dev/null +++ b/static/docs/dev/reference/nn_conv2d.html @@ -0,0 +1,394 @@ + + + + + + + + +Conv2D module — nn_conv2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D convolution over an input signal composed of several input +planes.

    +
    + +
    nn_conv2d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1,
    +  bias = TRUE,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): Zero-padding added to both sides of +the input. Default: 0

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    groups

    (int, optional): Number of blocked connections from input +channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the +output. Default: TRUE

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    In the simplest case, the output value of the layer with input size +\((N, C_{\mbox{in}}, H, W)\) and output \((N, C_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}})\) +can be precisely described as:

    +

    $$ +\mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + + \sum_{k = 0}^{C_{\mbox{in}} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) \star \mbox{input}(N_i, k) +$$

    +

    where \(\star\) is the valid 2D cross-correlation operator, +\(N\) is the batch size, \(C\) denotes the number of channels, +\(H\) is the height of the input planes in pixels, and \(W\) is +the width in pixels.

      +
    • stride controls the stride for the cross-correlation, a single +number or a tuple.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for padding number of points for each dimension.

    • +
    • dilation controls the spacing between the kernel points; also +known as the à trous algorithm. It is harder to describe, but this link +has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters, of size: +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

      • +
    • +
    + +

    The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the height and +width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Note

    + + + + +

    Depending on the size of your kernel, several (of the last) +columns of the input might be lost, because it is a valid cross-correlation, +and not a full cross-correlation. +It is up to the user to add proper padding.

    +

    When groups == in_channels and out_channels == K * in_channels, +where K is a positive integer, this operation is also termed in +literature as depthwise convolution. +In other words, for an input of size \((N, C_{in}, H_{in}, W_{in})\), +a depthwise convolution with a depthwise multiplier K, can be constructed by arguments +\((in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})\).

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting backends_cudnn_deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, H_{out}, W_{out})\) where +$$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] + \times (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] + \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor +$$

    • +
    + +
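Worked through for the dilated example in the Examples section (kernel c(3, 5), stride c(2, 1), padding c(4, 2), dilation c(3, 1), 50 x 100 input):

```r
h_out <- floor((50 + 2 * 4 - 3 * (3 - 1) - 1) / 2) + 1
w_out <- floor((100 + 2 * 2 - 1 * (5 - 1) - 1) / 1) + 1
print(c(h_out, w_out))  # 26 100
```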

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}\), +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape +(out_channels). If bias is TRUE, +then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +# With square kernels and equal stride +m <- nn_conv2d(16, 33, 3, stride = 2) +# non-square kernels and unequal stride and with padding +m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2)) +# non-square kernels and unequal stride and with padding and dilation +m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2), dilation = c(3, 1)) +input <- torch_randn(20, 16, 50, 100) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_conv3d.html b/static/docs/dev/reference/nn_conv3d.html new file mode 100644 index 0000000000000000000000000000000000000000..020a5498d8be6fbc29b8b01f9d7c09fdeb0ef639 --- /dev/null +++ b/static/docs/dev/reference/nn_conv3d.html @@ -0,0 +1,382 @@ + + + + + + + + +Conv3D module — nn_conv3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D convolution over an input signal composed of several input +planes. +In the simplest case, the output value of the layer with input size \((N, C_{in}, D, H, W)\) +and output \((N, C_{out}, D_{out}, H_{out}, W_{out})\) can be precisely described as:

    +
    + +
    nn_conv3d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1,
    +  bias = TRUE,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): Zero-padding added to all three sides of the input. Default: 0

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    padding_mode

    (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    $$ + out(N_i, C_{out_j}) = bias(C_{out_j}) + + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) +$$

    +

    where \(\star\) is the valid 3D cross-correlation operator.

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for padding number of points for each dimension.

    • +
    • dilation controls the spacing between the kernel points; also known as the à trous algorithm. +It is harder to describe, but this link has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

    • +
    • At groups=1, all inputs are convolved to all outputs.

    • +
    • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

    • +
    • At groups= in_channels, each input channel is convolved with +its own set of filters, of size +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

    • +
    + +

    The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Note

    + +

    Depending on the size of your kernel, several (of the last) +columns of the input might be lost, because it is a valid cross-correlation, +and not a full cross-correlation. +It is up to the user to add proper padding.

    +

    When groups == in_channels and out_channels == K * in_channels, +where K is a positive integer, this operation is also termed in +literature as depthwise convolution. +In other words, for an input of size \((N, C_{in}, D_{in}, H_{in}, W_{in})\), +a depthwise convolution with a depthwise multiplier K, can be constructed by arguments +\((in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})\).

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting backends_cudnn_deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where +$$ + D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] + \times (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor + $$ +$$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] + \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor + $$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{dilation}[2] + \times (\mbox{kernel\_size}[2] - 1) - 1}{\mbox{stride}[2]} + 1\right\rfloor + $$

    • +
    + +
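Worked through for the non-square example in the Examples section (kernel c(3, 5, 2), stride c(2, 1, 1), padding c(4, 2, 0), dilation 1, 10 x 50 x 100 input):

```r
d_out <- floor((10 + 2 * 4 - 1 * (3 - 1) - 1) / 2) + 1
h_out <- floor((50 + 2 * 2 - 1 * (5 - 1) - 1) / 1) + 1
w_out <- floor((100 + 2 * 0 - 1 * (2 - 1) - 1) / 1) + 1
print(c(d_out, h_out, w_out))  # 8 50 99
```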

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]}, \mbox{kernel\_size[2]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, +then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With square kernels and equal stride +m <- nn_conv3d(16, 33, 3, stride = 2) +# non-square kernels and unequal stride and with padding +m <- nn_conv3d(16, 33, c(3, 5, 2), stride = c(2, 1, 1), padding = c(4, 2, 0)) +input <- torch_randn(20, 16, 10, 50, 100) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_conv_transpose1d.html b/static/docs/dev/reference/nn_conv_transpose1d.html new file mode 100644 index 0000000000000000000000000000000000000000..559fdc1b3347caee4bf79170c7bd0a3ebb050ab8 --- /dev/null +++ b/static/docs/dev/reference/nn_conv_transpose1d.html @@ -0,0 +1,375 @@ + + + + + + + + +ConvTranspose1D — nn_conv_transpose1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D transposed convolution operator over an input image +composed of several input planes.

    +
    + +
    nn_conv_transpose1d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  bias = TRUE,
    +  dilation = 1,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +will be added to both sides of the input. Default: 0

    output_padding

    (int or tuple, optional): Additional size added to one side +of the output shape. Default: 0

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    This module can be seen as the gradient of nn_conv1d with respect to its input. +It is also known as a fractionally-strided convolution or +a deconvolution (although it is not an actual deconvolution operation).

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for dilation * (kernel_size - 1) - padding number of points. See note +below for details.

    • +
    • output_padding controls the additional size added to one side +of the output shape. See note below for details.

    • +
    • dilation controls the spacing between the kernel points; also known as the +à trous algorithm. It is harder to describe, but this link +has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters (of size +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

      • +
    • +
    + +

    Note

    + +

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

    +

The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a nn_conv1d and a nn_conv_transpose1d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.
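The ambiguity, and how output_padding resolves it, can be seen directly. In a small R sketch (mirroring the example style of these pages), a strided nn_conv1d collapses two different input lengths to the same output length, and output_padding selects which one the transpose recovers:

```r
if (torch_is_installed()) {
  # With stride = 2, inputs of length 11 and 12 both map to length 6
  conv <- nn_conv1d(16, 16, kernel_size = 3, stride = 2, padding = 1)
  conv(torch_randn(1, 16, 11))$size()  # 1 16 6
  conv(torch_randn(1, 16, 12))$size()  # 1 16 6

  # output_padding = 1 makes the transpose recover length 12 instead of 11
  up0 <- nn_conv_transpose1d(16, 16, kernel_size = 3, stride = 2, padding = 1)
  up1 <- nn_conv_transpose1d(16, 16, kernel_size = 3, stride = 2, padding = 1,
                             output_padding = 1)
  up0(torch_randn(1, 16, 6))$size()  # 1 16 11
  up1(torch_randn(1, 16, 6))$size()  # 1 16 12
}
```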

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, L_{in})\)

    • +
    • Output: \((N, C_{out}, L_{out})\) where +$$ + L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} +\times (\mbox{kernel\_size} - 1) + \mbox{output\_padding} + 1 +$$

    • +
    + +
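The output-length formula above can be written as a small R helper and checked against the module. This is a sketch; the helper name l_out is ours, not part of torch:

```r
# Closed-form output length for a transposed 1d convolution
l_out <- function(l_in, stride = 1, padding = 0, dilation = 1,
                  kernel_size, output_padding = 0) {
  (l_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) +
    output_padding + 1
}
l_out(2, kernel_size = 2)  # 3

if (torch_is_installed()) {
  m <- nn_conv_transpose1d(32, 16, kernel_size = 2)
  out <- m(torch_randn(10, 32, 2))
  stopifnot(out$size(3) == l_out(2, kernel_size = 2))
}
```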

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \mbox{kernel\_size}}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape (out_channels). +If bias is TRUE, then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \mbox{kernel\_size}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_conv_transpose1d(32, 16, 2) +input <- torch_randn(10, 32, 2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_conv_transpose2d.html b/static/docs/dev/reference/nn_conv_transpose2d.html new file mode 100644 index 0000000000000000000000000000000000000000..c1ef53b11701373b505d2c0d549d30ad4560e8c7 --- /dev/null +++ b/static/docs/dev/reference/nn_conv_transpose2d.html @@ -0,0 +1,395 @@ + + + + + + + + +ConvTranpose2D module — nn_conv_transpose2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D transposed convolution operator over an input image +composed of several input planes.

    +
    + +
    nn_conv_transpose2d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  bias = TRUE,
    +  dilation = 1,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +will be added to both sides of each dimension in the input. Default: 0

    output_padding

    (int or tuple, optional): Additional size added to one side +of each dimension in the output shape. Default: 0

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    This module can be seen as the gradient of Conv2d with respect to its input. +It is also known as a fractionally-strided convolution or +a deconvolution (although it is not an actual deconvolution operation).

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for dilation * (kernel_size - 1) - padding number of points. See note +below for details.

    • +
    • output_padding controls the additional size added to one side +of the output shape. See note below for details.

    • +
• dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters (of size +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

      • +
    • +
    + +

    The parameters kernel_size, stride, padding, output_padding +can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimensions

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Note

    + +

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

    +

The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a nn_conv2d and a nn_conv_transpose2d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, H_{out}, W_{out})\) where +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{dilation}[0] +\times (\mbox{kernel\_size}[0] - 1) + \mbox{output\_padding}[0] + 1 +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride}[1] - 2 \times \mbox{padding}[1] + \mbox{dilation}[1] +\times (\mbox{kernel\_size}[1] - 1) + \mbox{output\_padding}[1] + 1 +$$

    • +
    + +
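Applying these formulas to the non-square configuration used in this page's Examples section (kernel c(3, 5), stride c(2, 1), padding c(4, 2) on a 50 x 100 input), a quick R check gives:

```r
# H_out and W_out for nn_conv_transpose2d(16, 33, c(3, 5),
#                                         stride = c(2, 1), padding = c(4, 2))
h_out <- (50 - 1) * 2 - 2 * 4 + 1 * (3 - 1) + 0 + 1   # 93
w_out <- (100 - 1) * 1 - 2 * 2 + 1 * (5 - 1) + 0 + 1  # 100

if (torch_is_installed()) {
  m <- nn_conv_transpose2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2))
  out <- m(torch_randn(20, 16, 50, 100))
  stopifnot(all(out$size() == c(20, 33, h_out, w_out)))
}
```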

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
• bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With square kernels and equal stride +m <- nn_conv_transpose2d(16, 33, 3, stride=2) +# non-square kernels and unequal stride and with padding +m <- nn_conv_transpose2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2)) +input <- torch_randn(20, 16, 50, 100) +output <- m(input) +# exact output size can be also specified as an argument +input <- torch_randn(1, 16, 12, 12) +downsample <- nn_conv2d(16, 16, 3, stride=2, padding=1) +upsample <- nn_conv_transpose2d(16, 16, 3, stride=2, padding=1) +h <- downsample(input) +h$size() +output <- upsample(h, output_size=input$size()) +output$size() + +} +
    #> [1] 1 16 12 12
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_conv_transpose3d.html b/static/docs/dev/reference/nn_conv_transpose3d.html new file mode 100644 index 0000000000000000000000000000000000000000..c1d318cd590c6829e22865dba7b3dbddd0eddff6 --- /dev/null +++ b/static/docs/dev/reference/nn_conv_transpose3d.html @@ -0,0 +1,396 @@ + + + + + + + + +ConvTranpose3D module — nn_conv_transpose3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D transposed convolution operator over an input image composed of several input +planes.

    +
    + +
    nn_conv_transpose3d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  bias = TRUE,
    +  dilation = 1,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0

    output_padding

    (int or tuple, optional): Additional size added to one side +of each dimension in the output shape. Default: 0

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    padding_mode

    (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    The transposed convolution operator multiplies each input value element-wise by a learnable kernel, +and sums over the outputs from all input feature planes.

    +

    This module can be seen as the gradient of Conv3d with respect to its input. +It is also known as a fractionally-strided convolution or +a deconvolution (although it is not an actual deconvolution operation).

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for dilation * (kernel_size - 1) - padding number of points. See note +below for details.

    • +
    • output_padding controls the additional size added to one side +of the output shape. See note below for details.

    • +
• dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters (of size +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

      • +
    • +
    + +

    The parameters kernel_size, stride, padding, output_padding +can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimensions

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Note

    + +

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

    +

The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a nn_conv3d and a nn_conv_transpose3d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where +$$ + D_{out} = (D_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{dilation}[0] +\times (\mbox{kernel\_size}[0] - 1) + \mbox{output\_padding}[0] + 1 +$$ +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride}[1] - 2 \times \mbox{padding}[1] + \mbox{dilation}[1] +\times (\mbox{kernel\_size}[1] - 1) + \mbox{output\_padding}[1] + 1 +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride}[2] - 2 \times \mbox{padding}[2] + \mbox{dilation}[2] +\times (\mbox{kernel\_size}[2] - 1) + \mbox{output\_padding}[2] + 1 +$$

    • +
    + +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]}, \mbox{kernel\_size[2]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
• bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +# With square kernels and equal stride +m <- nn_conv_transpose3d(16, 33, 3, stride=2) +# non-square kernels and unequal stride and with padding +m <- nn_conv_transpose3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(0, 4, 2)) +input <- torch_randn(20, 16, 10, 50, 100) +output <- m(input) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_cross_entropy_loss.html b/static/docs/dev/reference/nn_cross_entropy_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..c852a52bda0eb39a47b47840eb062f54afdf85e3 --- /dev/null +++ b/static/docs/dev/reference/nn_cross_entropy_loss.html @@ -0,0 +1,310 @@ + + + + + + + + +CrossEntropyLoss module — nn_cross_entropy_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    This criterion combines nn_log_softmax() and nn_nll_loss() in one single class. +It is useful when training a classification problem with C classes.

    +
    + +
    nn_cross_entropy_loss(weight = NULL, ignore_index = -100, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    weight

    (Tensor, optional): a manual rescaling weight given to each class. +If given, has to be a Tensor of size C

    ignore_index

(int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When reduction is 'mean', the loss is averaged over non-ignored targets.

    reduction

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

    + +

    Details

    + +

    If provided, the optional argument weight should be a 1D Tensor +assigning weight to each of the classes.

    +

    This is particularly useful when you have an unbalanced training set. +The input is expected to contain raw, unnormalized scores for each class. +input has to be a Tensor of size either \((minibatch, C)\) or +\((minibatch, C, d_1, d_2, ..., d_K)\) +with \(K \geq 1\) for the K-dimensional case (described later).

    +

    This criterion expects a class index in the range \([0, C-1]\) as the +target for each value of a 1D tensor of size minibatch; if ignore_index +is specified, this criterion also accepts this class index (this index may not +necessarily be in the class range).

    +

    The loss can be described as: +$$ + \mbox{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) += -x[class] + \log\left(\sum_j \exp(x[j])\right) +$$ +or in the case of the weight argument being specified: +$$ + \mbox{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right) +$$

    +

    The losses are averaged across observations for each minibatch. +Can also be used for higher dimension inputs, such as 2D images, by providing +an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), +where \(K\) is the number of dimensions, and a target of appropriate shape +(see below).
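Since this criterion is log-softmax followed by negative log-likelihood, the combination can be checked directly. This is a sketch using torch's functional helpers nnf_log_softmax() and nnf_nll_loss(); note that in R torch the class dimension is dim = 2 (1-based indexing):

```r
if (torch_is_installed()) {
  input  <- torch_randn(3, 5)
  target <- torch_tensor(c(1, 3, 5), dtype = torch_long())

  loss_combined <- nn_cross_entropy_loss()(input, target)
  loss_manual   <- nnf_nll_loss(nnf_log_softmax(input, dim = 2), target)

  # both compute the same value, up to floating-point error
  loss_combined
  loss_manual
}
```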

    +

    Shape

    + + + +
      +
    • Input: \((N, C)\) where C = number of classes, or +\((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) +in the case of K-dimensional loss.

    • +
    • Target: \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), or +\((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of +K-dimensional loss.

    • +
    • Output: scalar. +If reduction is 'none', then the same size as the target: +\((N)\), or +\((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case +of K-dimensional loss.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +loss <- nn_cross_entropy_loss() +input <- torch_randn(3, 5, requires_grad=TRUE) +target <- torch_randint(low = 1, high = 5, size = 3, dtype = torch_long()) +output <- loss(input, target) +output$backward() + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_dropout.html b/static/docs/dev/reference/nn_dropout.html new file mode 100644 index 0000000000000000000000000000000000000000..68a657e2c8ab04211d20f5d059b72c930374ea2a --- /dev/null +++ b/static/docs/dev/reference/nn_dropout.html @@ -0,0 +1,272 @@ + + + + + + + + +Dropout module — nn_dropout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    During training, randomly zeroes some of the elements of the input +tensor with probability p using samples from a Bernoulli +distribution. Each channel will be zeroed out independently on every forward +call.

    +
    + +
    nn_dropout(p = 0.5, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    p

    probability of an element to be zeroed. Default: 0.5

    inplace

    If set to TRUE, will do this operation in-place. Default: FALSE.

    + +

    Details

    + +

    This has proven to be an effective technique for regularization and +preventing the co-adaptation of neurons as described in the paper +Improving neural networks by preventing co-adaptation of feature detectors.

    +

Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.
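A minimal sketch of the two regimes: in eval() mode the module is the identity, while in train() mode surviving elements are scaled by 1 / (1 - p):

```r
if (torch_is_installed()) {
  m <- nn_dropout(p = 0.5)
  x <- torch_ones(8)

  m$eval()
  # evaluation mode: identity, output equals input
  torch_equal(m(x), x)

  m$train()
  # training mode: each element is either 0 (dropped)
  # or 1 / (1 - 0.5) = 2 (kept and rescaled)
  m(x)
}
```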

    +

    Shape

    + + + +
      +
    • Input: \((*)\). Input can be of any shape

    • +
    • Output: \((*)\). Output is of the same shape as input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_dropout(p = 0.2) +input <- torch_randn(20, 16) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_dropout2d.html b/static/docs/dev/reference/nn_dropout2d.html new file mode 100644 index 0000000000000000000000000000000000000000..9797db17c975b8e28ceb215563380d9485905060 --- /dev/null +++ b/static/docs/dev/reference/nn_dropout2d.html @@ -0,0 +1,276 @@ + + + + + + + + +Dropout2D module — nn_dropout2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randomly zero out entire channels (a channel is a 2D feature map, +e.g., the \(j\)-th channel of the \(i\)-th sample in the +batched input is a 2D tensor \(\mbox{input}[i, j]\)).

    +
    + +
    nn_dropout2d(p = 0.5, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    p

(float, optional): probability of an element to be zeroed. Default: 0.5

    inplace

    (bool, optional): If set to TRUE, will do this operation +in-place

    + +

    Details

    + +

    Each channel will be zeroed out independently on every forward call with +probability p using samples from a Bernoulli distribution. +Usually the input comes from nn_conv2d modules.

    +

    As described in the paper +Efficient Object Localization Using Convolutional Networks , +if adjacent pixels within feature maps are strongly correlated +(as is normally the case in early convolution layers) then i.i.d. dropout +will not regularize the activations and will otherwise just result +in an effective learning rate decrease. +In this case, nn_dropout2d will help promote independence between +feature maps and should be used instead.
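The channel-wise behaviour is easy to observe: after nn_dropout2d, every channel is either entirely zero or entirely rescaled by 1 / (1 - p). A small sketch:

```r
if (torch_is_installed()) {
  m <- nn_dropout2d(p = 0.5)
  m$train()
  x <- torch_ones(1, 8, 4, 4)
  out <- m(x)
  # per-channel sums: 0 for dropped channels,
  # 4 * 4 * (1 / (1 - 0.5)) = 32 for kept ones
  out$sum(dim = c(3, 4))
}
```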

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H, W)\)

    • +
    • Output: \((N, C, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_dropout2d(p = 0.2) +input <- torch_randn(20, 16, 32, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_dropout3d.html b/static/docs/dev/reference/nn_dropout3d.html new file mode 100644 index 0000000000000000000000000000000000000000..524c3235f07254ac16da83a7c05b1209b7461528 --- /dev/null +++ b/static/docs/dev/reference/nn_dropout3d.html @@ -0,0 +1,276 @@ + + + + + + + + +Dropout3D module — nn_dropout3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randomly zero out entire channels (a channel is a 3D feature map, +e.g., the \(j\)-th channel of the \(i\)-th sample in the +batched input is a 3D tensor \(\mbox{input}[i, j]\)).

    +
    + +
    nn_dropout3d(p = 0.5, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    p

    (float, optional): probability of an element to be zeroed.

    inplace

    (bool, optional): If set to TRUE, will do this operation +in-place

    + +

    Details

    + +

Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. Usually the input comes from nn_conv3d modules.

    +

    As described in the paper +Efficient Object Localization Using Convolutional Networks , +if adjacent pixels within feature maps are strongly correlated +(as is normally the case in early convolution layers) then i.i.d. dropout +will not regularize the activations and will otherwise just result +in an effective learning rate decrease.

    +

    In this case, nn_dropout3d will help promote independence between +feature maps and should be used instead.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, D, H, W)\)

    • +
    • Output: \((N, C, D, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_dropout3d(p = 0.2) +input <- torch_randn(20, 16, 4, 32, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_elu.html b/static/docs/dev/reference/nn_elu.html new file mode 100644 index 0000000000000000000000000000000000000000..90185470ffbfba297eeefa2c3cb9feeab07b1ca4 --- /dev/null +++ b/static/docs/dev/reference/nn_elu.html @@ -0,0 +1,264 @@ + + + + + + + + +ELU module — nn_elu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_elu(alpha = 1, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    alpha

    the \(\alpha\) value for the ELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1)) +$$
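For concreteness, the formula can be evaluated at a few points in base R and compared with the module. This is a sketch; the reference function elu_ref is ours, not part of torch:

```r
# ELU(x) = max(0, x) + min(0, alpha * (exp(x) - 1)), with alpha = 1
elu_ref <- function(x, alpha = 1) pmax(0, x) + pmin(0, alpha * (exp(x) - 1))
elu_ref(c(-1, 0, 1))  # approximately -0.6321, 0.0000, 1.0000

if (torch_is_installed()) {
  m <- nn_elu()
  m(torch_tensor(c(-1, 0, 1)))
}
```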

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_elu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_embedding.html b/static/docs/dev/reference/nn_embedding.html new file mode 100644 index 0000000000000000000000000000000000000000..9a9385d10bd27069550acf422c285ceed351cc3b --- /dev/null +++ b/static/docs/dev/reference/nn_embedding.html @@ -0,0 +1,333 @@ + + + + + + + + +Embedding module — nn_embedding • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A simple lookup table that stores embeddings of a fixed dictionary and size. +This module is often used to store word embeddings and retrieve them using indices. +The input to the module is a list of indices, and the output is the corresponding +word embeddings.

    +
    + +
    nn_embedding(
    +  num_embeddings,
    +  embedding_dim,
    +  padding_idx = NULL,
    +  max_norm = NULL,
    +  norm_type = 2,
    +  scale_grad_by_freq = FALSE,
    +  sparse = FALSE,
    +  .weight = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    num_embeddings

    (int): size of the dictionary of embeddings

    embedding_dim

    (int): the size of each embedding vector

    padding_idx

    (int, optional): If given, pads the output with the embedding vector at padding_idx +(initialized to zeros) whenever it encounters the index.

    max_norm

    (float, optional): If given, each embedding vector with norm larger than max_norm +is renormalized to have norm max_norm.

    norm_type

    (float, optional): The p of the p-norm to compute for the max_norm option. Default 2.

    scale_grad_by_freq

(boolean, optional): If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default: FALSE.

    sparse

(bool, optional): If TRUE, gradient w.r.t. weight matrix will be a sparse tensor.

    .weight

    (Tensor) embeddings weights (in case you want to set it manually)

    +

    See Notes for more details regarding sparse gradients.

    + +

    Note

    + +

Keep in mind that only a limited number of optimizers support sparse gradients: currently optim_sgd (CUDA and CPU) and optim_adagrad (CPU)

    +

    With padding_idx set, the embedding vector at +padding_idx is initialized to all zeros. However, note that this +vector can be modified afterwards, e.g., using a customized +initialization method, and thus changing the vector used to pad the +output. The gradient for this vector from nn_embedding +is always zero.
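The zero-gradient behaviour at padding_idx can be verified directly. A sketch; note that in R torch indices are 1-based, so padding_idx = 1 refers to the first embedding row:

```r
if (torch_is_installed()) {
  emb <- nn_embedding(10, 3, padding_idx = 1)
  input <- torch_tensor(matrix(c(1, 2, 3), nrow = 1), dtype = torch_long())
  emb(input)$sum()$backward()
  # the row for padding_idx receives no gradient
  emb$weight$grad[1, ]  # all zeros
}
```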

    +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) +initialized from \(\mathcal{N}(0, 1)\)

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((*)\), LongTensor of arbitrary shape containing the indices to extract

    • +
    • Output: \((*, H)\), where * is the input shape and \(H=\mbox{embedding\_dim}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# an Embedding module containing 10 tensors of size 3 +embedding <- nn_embedding(10, 3) +# a batch of 2 samples of 4 indices each +input <- torch_tensor(rbind(c(1,2,4,5),c(4,3,2,9)), dtype = torch_long()) +embedding(input) +# example with padding_idx +embedding <- nn_embedding(10, 3, padding_idx=1) +input <- torch_tensor(matrix(c(1,3,1,6), nrow = 1), dtype = torch_long()) +embedding(input) + +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.0000 0.0000 0.0000 +#> -1.2943 -1.0279 0.6483 +#> 0.0000 0.0000 0.0000 +#> 0.4053 0.7866 -0.3922 +#> [ CPUFloatType{1,4,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_fractional_max_pool2d.html b/static/docs/dev/reference/nn_fractional_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..5d5a1432d844e5950aa2a54c02b75a9756deec87 --- /dev/null +++ b/static/docs/dev/reference/nn_fractional_max_pool2d.html @@ -0,0 +1,276 @@ + + + + + + + + +Applies a 2D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fractional MaxPooling is described in detail in the paper +Fractional MaxPooling by Ben Graham

    +
    + +
    nn_fractional_max_pool2d(
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over. +Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)

    output_size

    the target output size of the image of the form oH x oW. +Can be a tuple (oH, oW) or a single number oH for a square image oH x oH

    output_ratio

    If one wants to have an output size as a ratio of the input size, this option can be given. +This has to be a number or tuple in the range (0, 1)

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool2d(). Default: FALSE

    + +

    Details

    + +

    The max-pooling operation is applied in \(kH \times kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.

    + +
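    The effect of the target output size can be verified by inspecting the output shape; a small sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  m <- nn_fractional_max_pool2d(3, output_size = c(13, 12))
  input <- torch_randn(20, 16, 50, 32)
  output <- m(input)
  # Batch and channel dimensions are untouched; the spatial
  # dimensions are pooled down to the requested 13 x 12.
  print(output$size())  # 20 16 13 12
}
```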

    Examples

    +
    if (torch_is_installed()) { +# pool of square window of size=3, and target output size 13x12 +m <- nn_fractional_max_pool2d(3, output_size = c(13, 12)) +# pool of square window and target output size being half of input image size +m <- nn_fractional_max_pool2d(3, output_ratio = c(0.5, 0.5)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_fractional_max_pool3d.html b/static/docs/dev/reference/nn_fractional_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..51bec672a27764e25ea098db2dcbff4113e99a68 --- /dev/null +++ b/static/docs/dev/reference/nn_fractional_max_pool3d.html @@ -0,0 +1,276 @@ + + + + + + + + +Applies a 3D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fractional MaxPooling is described in detail in the paper +Fractional MaxPooling by Ben Graham

    +
    + +
    nn_fractional_max_pool3d(
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over. +Can be a single number k (for a cubic kernel of k x k x k) or a tuple (kt, kh, kw)

    output_size

    the target output size of the image of the form oT x oH x oW. +Can be a tuple (oT, oH, oW) or a single number oH for a cubic image oH x oH x oH

    output_ratio

    If one wants to have an output size as a ratio of the input size, this option can be given. +This has to be a number or tuple in the range (0, 1)

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool3d(). Default: FALSE

    + +

    Details

    + +

    The max-pooling operation is applied in \(kT \times kH \times kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.

    + +

    Examples

    +
    if (torch_is_installed()) { +# pool of cubic window of size=3, and target output size 13x12x11 +m <- nn_fractional_max_pool3d(3, output_size = c(13, 12, 11)) +# pool of cubic window and target output size being half of input size +m <- nn_fractional_max_pool3d(3, output_ratio = c(0.5, 0.5, 0.5)) +input <- torch_randn(20, 16, 50, 32, 16) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_gelu.html b/static/docs/dev/reference/nn_gelu.html new file mode 100644 index 0000000000000000000000000000000000000000..7e5c22eb2813212cfc8fdef6fc65c4bc2f21697b --- /dev/null +++ b/static/docs/dev/reference/nn_gelu.html @@ -0,0 +1,252 @@ + + + + + + + + +GELU module — nn_gelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the Gaussian Error Linear Units function: +$$\mbox{GELU}(x) = x * \Phi(x)$$

    +
    + +
    nn_gelu()
    + + +

    Details

    + +

    where \(\Phi(x)\) is the Cumulative Distribution Function for Gaussian Distribution.

    +
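    Because \(\Phi\) is the standard normal CDF, nn_gelu() can be cross-checked against base R's pnorm(); a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  x <- c(-1.5, -0.5, 0, 0.5, 1.5)
  m <- nn_gelu()
  out <- as.numeric(m(torch_tensor(x)))
  manual <- x * pnorm(x)  # GELU(x) = x * Phi(x)
  # The two computations agree up to float32 rounding.
  print(max(abs(out - manual)))
}
```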

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_gelu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_glu.html b/static/docs/dev/reference/nn_glu.html new file mode 100644 index 0000000000000000000000000000000000000000..91216be121e3f5c43d4b19a0922d5511aba13d6a --- /dev/null +++ b/static/docs/dev/reference/nn_glu.html @@ -0,0 +1,259 @@ + + + + + + + + +GLU module — nn_glu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the gated linear unit function +\(\mbox{GLU}(a, b) = a \otimes \sigma(b)\) where \(a\) is the first half +of the input matrices and \(b\) is the second half.

    +
    + +
    nn_glu(dim = -1)
    + +

    Arguments

    + + + + + + +
    dim

    (int): the dimension on which to split the input. Default: -1

    + +

    Shape

    + + + +
      +
    • Input: \((\ast_1, N, \ast_2)\) where * means any number of additional +dimensions

    • +
    • Output: \((\ast_1, M, \ast_2)\) where \(M=N/2\)

    • +
    + + +
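    The split-and-gate computation can be reproduced manually with torch_chunk() and torch_sigmoid(); a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  input <- torch_randn(4, 6)
  m <- nn_glu(dim = 2)  # split the 6 columns into two halves of 3
  out <- m(input)
  halves <- torch_chunk(input, 2, dim = 2)
  manual <- halves[[1]] * torch_sigmoid(halves[[2]])
  # out and manual are elementwise equal; the split dim shrinks from 6 to 3.
  print(torch_allclose(out, manual))
  print(out$size())  # 4 3
}
```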

    Examples

    +
    if (torch_is_installed()) { +m <- nn_glu() +input <- torch_randn(4, 2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_hardshrink.html b/static/docs/dev/reference/nn_hardshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..31000594f2fa28ba5c89d5c7f6f0978282b57d95 --- /dev/null +++ b/static/docs/dev/reference/nn_hardshrink.html @@ -0,0 +1,266 @@ + + + + + + + + +Hardshwink module — nn_hardshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hard shrinkage function element-wise:

    +
    + +
    nn_hardshrink(lambd = 0.5)
    + +

    Arguments

    + + + + + + +
    lambd

    the \(\lambda\) value for the Hardshrink formulation. Default: 0.5

    + +

    Details

    + +

    $$ + \mbox{HardShrink}(x) = + \left\{ \begin{array}{ll} +x, & \mbox{ if } x > \lambda \\ +x, & \mbox{ if } x < -\lambda \\ +0, & \mbox{ otherwise } +\end{array} +\right. +$$

    +
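    Equivalently, values with absolute value at most \(\lambda\) are zeroed and the rest pass through unchanged; a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  x <- c(-1, -0.4, 0, 0.3, 2)
  m <- nn_hardshrink(lambd = 0.5)
  out <- as.numeric(m(torch_tensor(x)))
  manual <- ifelse(abs(x) > 0.5, x, 0)
  print(out)     # -1 0 0 0 2
  print(manual)  # matches
}
```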

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_hardshrink() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_hardsigmoid.html b/static/docs/dev/reference/nn_hardsigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..7641a4e8ca54f8e1fedd30eb12d3b43f2ea9083c --- /dev/null +++ b/static/docs/dev/reference/nn_hardsigmoid.html @@ -0,0 +1,257 @@ + + + + + + + + +Hardsigmoid module — nn_hardsigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_hardsigmoid()
    + + +

    Details

    + +

    $$ +\mbox{Hardsigmoid}(x) = \left\{ \begin{array}{ll} + 0 & \mbox{if~} x \le -3, \\ + 1 & \mbox{if~} x \ge +3, \\ + x / 6 + 1 / 2 & \mbox{otherwise} +\end{array} +\right. +$$

    +
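    A few spot values make the piecewise definition concrete; a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  x <- c(-4, -3, 0, 3, 4)
  m <- nn_hardsigmoid()
  # Clamped to 0 below -3, to 1 above +3, linear (x/6 + 1/2) in between.
  print(as.numeric(m(torch_tensor(x))))  # 0 0 0.5 1 1
}
```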

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_hardsigmoid() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_hardswish.html b/static/docs/dev/reference/nn_hardswish.html new file mode 100644 index 0000000000000000000000000000000000000000..f2fbdc30f565c2c2232946ac98d0be8529330928 --- /dev/null +++ b/static/docs/dev/reference/nn_hardswish.html @@ -0,0 +1,260 @@ + + + + + + + + +Hardswish module — nn_hardswish • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hardswish function, element-wise, as described in the paper: +Searching for MobileNetV3

    +
    + +
    nn_hardswish()
    + + +

    Details

    + +

    $$ \mbox{Hardswish}(x) = \left\{ + \begin{array}{ll} + 0 & \mbox{if } x \le -3, \\ + x & \mbox{if } x \ge +3, \\ + x \cdot (x + 3)/6 & \mbox{otherwise} + \end{array} + \right. $$

    +
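    Hardswish can also be written as \(x \cdot \mbox{Hardsigmoid}(x)\), which gives a quick manual check; a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  x <- torch_randn(5)
  m <- nn_hardswish()
  manual <- x * nn_hardsigmoid()(x)  # x * Hardsigmoid(x)
  print(torch_allclose(m(x), manual))
}
```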

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +m <- nn_hardswish() +input <- torch_randn(2) +output <- m(input) +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_hardtanh.html b/static/docs/dev/reference/nn_hardtanh.html new file mode 100644 index 0000000000000000000000000000000000000000..939462aa11b54e42766f187ac670ec9b8fb03e3f --- /dev/null +++ b/static/docs/dev/reference/nn_hardtanh.html @@ -0,0 +1,277 @@ + + + + + + + + +Hardtanh module — nn_hardtanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the HardTanh function element-wise +HardTanh is defined as:

    +
    + +
    nn_hardtanh(min_val = -1, max_val = 1, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    min_val

    minimum value of the linear region range. Default: -1

    max_val

    maximum value of the linear region range. Default: 1

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ +\mbox{HardTanh}(x) = \left\{ \begin{array}{ll} + 1 & \mbox{ if } x > 1 \\ + -1 & \mbox{ if } x < -1 \\ + x & \mbox{ otherwise } \\ +\end{array} +\right. +$$

    +

    The range of the linear region \([-1, 1]\) can be adjusted using +min_val and max_val.

    +
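    The same clamping can be written with base R's pmin()/pmax(); a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  x <- c(-3, -1.5, 0, 1.5, 3)
  m <- nn_hardtanh(min_val = -2, max_val = 2)
  out <- as.numeric(m(torch_tensor(x)))
  manual <- pmin(pmax(x, -2), 2)  # clamp into [-2, 2]
  print(out)     # -2 -1.5 0 1.5 2
  print(manual)  # matches
}
```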

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_hardtanh(-2, 2) +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_identity.html b/static/docs/dev/reference/nn_identity.html new file mode 100644 index 0000000000000000000000000000000000000000..99566eea8af5ce33543bdd2f91ad6a49392c9065 --- /dev/null +++ b/static/docs/dev/reference/nn_identity.html @@ -0,0 +1,246 @@ + + + + + + + + +Identity module — nn_identity • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A placeholder identity operator that is argument-insensitive.

    +
    + +
    nn_identity(...)
    + +

    Arguments

    + + + + + + +
    ...

    any arguments (unused)

    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_identity(54, unused_argument1 = 0.1, unused_argument2 = FALSE) +input <- torch_randn(128, 20) +output <- m(input) +print(output$size()) + +} +
    #> [1] 128 20
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_calculate_gain.html b/static/docs/dev/reference/nn_init_calculate_gain.html new file mode 100644 index 0000000000000000000000000000000000000000..8ddfa92b88b7e2793470c7e39e99308493116ea3 --- /dev/null +++ b/static/docs/dev/reference/nn_init_calculate_gain.html @@ -0,0 +1,241 @@ + + + + + + + + +Calculate gain — nn_init_calculate_gain • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Return the recommended gain value for the given nonlinearity function.

    +
    + +
    nn_init_calculate_gain(nonlinearity, param = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    nonlinearity

    the non-linear function

    param

    optional parameter for the non-linear function

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_constant_.html b/static/docs/dev/reference/nn_init_constant_.html new file mode 100644 index 0000000000000000000000000000000000000000..21b0b2f2d0d20efc5b7ab8f2eb82350f22e4ce47 --- /dev/null +++ b/static/docs/dev/reference/nn_init_constant_.html @@ -0,0 +1,252 @@ + + + + + + + + +Constant initialization — nn_init_constant_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with the value val.

    +
    + +
    nn_init_constant_(tensor, val)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    val

    the value to fill the tensor with

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_constant_(w, 0.3) + +} +
    #> torch_tensor +#> 0.3000 0.3000 0.3000 0.3000 0.3000 +#> 0.3000 0.3000 0.3000 0.3000 0.3000 +#> 0.3000 0.3000 0.3000 0.3000 0.3000 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_dirac_.html b/static/docs/dev/reference/nn_init_dirac_.html new file mode 100644 index 0000000000000000000000000000000000000000..123d799a67be07f885a961249ad125674ebaffce --- /dev/null +++ b/static/docs/dev/reference/nn_init_dirac_.html @@ -0,0 +1,256 @@ + + + + + + + + +Dirac initialization — nn_init_dirac_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the 3, 4, 5-dimensional input Tensor with the Dirac +delta function. Preserves the identity of the inputs in Convolutional +layers, where as many input channels are preserved as possible. In case +of groups>1, each group of channels preserves identity.

    +
    + +
    nn_init_dirac_(tensor, groups = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    a 3, 4, 5-dimensional torch.Tensor

    groups

    (optional) number of groups in the conv layer (default: 1)

    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +w <- torch_empty(3, 16, 5, 5) +nn_init_dirac_(w) +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_eye_.html b/static/docs/dev/reference/nn_init_eye_.html new file mode 100644 index 0000000000000000000000000000000000000000..60af723378a4d404785fda7d337324451d750e6c --- /dev/null +++ b/static/docs/dev/reference/nn_init_eye_.html @@ -0,0 +1,252 @@ + + + + + + + + +Eye initialization — nn_init_eye_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the 2-dimensional input Tensor with the identity matrix. +Preserves the identity of the inputs in Linear layers, where as +many inputs are preserved as possible.

    +
    + +
    nn_init_eye_(tensor)
    + +

    Arguments

    + + + + + + +
    tensor

    a 2-dimensional torch tensor.

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_eye_(w) + +} +
    #> torch_tensor +#> 1 0 0 0 0 +#> 0 1 0 0 0 +#> 0 0 1 0 0 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_kaiming_normal_.html b/static/docs/dev/reference/nn_init_kaiming_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..f618bd8ecb7a8da9900f47e500face82cd1a05ca --- /dev/null +++ b/static/docs/dev/reference/nn_init_kaiming_normal_.html @@ -0,0 +1,273 @@ + + + + + + + + +Kaiming normal initialization — nn_init_kaiming_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a +normal distribution.

    +
    + +
    nn_init_kaiming_normal_(
    +  tensor,
    +  a = 0,
    +  mode = "fan_in",
    +  nonlinearity = "leaky_relu"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    tensor

    an n-dimensional torch.Tensor

    a

    the negative slope of the rectifier used after this layer (only used +with 'leaky_relu')

    mode

    either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves +the magnitude of the variance of the weights in the forward pass. Choosing +'fan_out' preserves the magnitudes in the backwards pass.

    nonlinearity

    the non-linear function. recommended to use only with 'relu' +or 'leaky_relu' (default).

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_kaiming_normal_(w, mode = "fan_in", nonlinearity = "leaky_relu") + +} +
    #> torch_tensor +#> -0.5594 0.2408 0.3946 0.5860 -0.4834 +#> -0.0442 0.7170 -0.3028 0.4015 -0.8906 +#> -0.5157 -0.1763 0.9366 0.4640 -0.5356 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_kaiming_uniform_.html b/static/docs/dev/reference/nn_init_kaiming_uniform_.html new file mode 100644 index 0000000000000000000000000000000000000000..f76da57af376c83218b9795ef2d115fee6af05bc --- /dev/null +++ b/static/docs/dev/reference/nn_init_kaiming_uniform_.html @@ -0,0 +1,273 @@ + + + + + + + + +Kaiming uniform initialization — nn_init_kaiming_uniform_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a +uniform distribution.

    +
    + +
    nn_init_kaiming_uniform_(
    +  tensor,
    +  a = 0,
    +  mode = "fan_in",
    +  nonlinearity = "leaky_relu"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    tensor

    an n-dimensional torch.Tensor

    a

    the negative slope of the rectifier used after this layer (only used +with 'leaky_relu')

    mode

    either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves +the magnitude of the variance of the weights in the forward pass. Choosing +'fan_out' preserves the magnitudes in the backwards pass.

    nonlinearity

    the non-linear function. recommended to use only with 'relu' +or 'leaky_relu' (default).

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_kaiming_uniform_(w, mode = "fan_in", nonlinearity = "leaky_relu") + +} +
    #> torch_tensor +#> -0.7460 0.2070 -0.1066 -0.4344 -0.4666 +#> -0.5351 -0.4524 0.0950 -1.0077 -0.2169 +#> -0.9525 0.8753 0.0070 -0.4553 -0.3445 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_normal_.html b/static/docs/dev/reference/nn_init_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..083d8213e28ae936f2808e7684807676ae58f22e --- /dev/null +++ b/static/docs/dev/reference/nn_init_normal_.html @@ -0,0 +1,256 @@ + + + + + + + + +Normal initialization — nn_init_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values drawn from the normal distribution \(\mathcal{N}(\mbox{mean}, \mbox{std}^2)\).

    +
    + +
    nn_init_normal_(tensor, mean = 0, std = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    mean

    the mean of the normal distribution

    std

    the standard deviation of the normal distribution

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_normal_(w) + +} +
    #> torch_tensor +#> -1.0569 -1.0900 1.2740 -1.7728 0.0593 +#> -1.7131 -0.1353 0.8191 0.1481 -0.9940 +#> -0.7544 -1.0298 0.4237 1.4650 0.0575 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_ones_.html b/static/docs/dev/reference/nn_init_ones_.html new file mode 100644 index 0000000000000000000000000000000000000000..b86684d786ddf1d632ad3f19932cfbf888dea18c --- /dev/null +++ b/static/docs/dev/reference/nn_init_ones_.html @@ -0,0 +1,248 @@ + + + + + + + + +Ones initialization — nn_init_ones_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with the scalar value 1

    +
    + +
    nn_init_ones_(tensor)
    + +

    Arguments

    + + + + + + +
    tensor

    an n-dimensional Tensor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_ones_(w) + +} +
    #> torch_tensor +#> 1 1 1 1 1 +#> 1 1 1 1 1 +#> 1 1 1 1 1 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_orthogonal_.html b/static/docs/dev/reference/nn_init_orthogonal_.html new file mode 100644 index 0000000000000000000000000000000000000000..37e75e5c79e4a72c56e5e5c6f05f6cbe5c501523 --- /dev/null +++ b/static/docs/dev/reference/nn_init_orthogonal_.html @@ -0,0 +1,258 @@ + + + + + + + + +Orthogonal initialization — nn_init_orthogonal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with a (semi) orthogonal matrix, as +described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have +at least 2 dimensions, and for tensors with more than 2 dimensions the +trailing dimensions are flattened.

    +
    + +
    nn_init_orthogonal_(tensor, gain = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    gain

    optional scaling factor

    + + +
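    The (semi) orthogonality can be verified numerically: for a wide matrix the rows come out orthonormal, so \(W W^\top \approx I\); a sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  w <- nn_init_orthogonal_(torch_empty(3, 5))
  # For 3 rows of length 5, the rows are orthonormal,
  # so the product below is (numerically) the 3 x 3 identity.
  print(torch_mm(w, w$t()))
}
```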

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3,5) +nn_init_orthogonal_(w) + +} +
    #> torch_tensor +#> 0.0648 0.4597 -0.1806 -0.8669 0.0179 +#> 0.5648 0.4042 0.5758 0.1281 -0.4119 +#> -0.1093 -0.5109 0.7101 -0.4227 0.2102 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_sparse_.html b/static/docs/dev/reference/nn_init_sparse_.html new file mode 100644 index 0000000000000000000000000000000000000000..ef8017aa0ef1f0adb00e5d9b04d80c1572d7f933 --- /dev/null +++ b/static/docs/dev/reference/nn_init_sparse_.html @@ -0,0 +1,258 @@ + + + + + + + + +Sparse initialization — nn_init_sparse_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the 2D input Tensor as a sparse matrix, where the +non-zero elements will be drawn from the normal distribution +as described in Deep learning via Hessian-free optimization - Martens, J. (2010).

    +
    + +
    nn_init_sparse_(tensor, sparsity, std = 0.01)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    sparsity

    The fraction of elements in each column to be set to zero

    std

    the standard deviation of the normal distribution used to generate +the non-zero values

    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +w <- torch_empty(3, 5) +nn_init_sparse_(w, sparsity = 0.1) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_trunc_normal_.html b/static/docs/dev/reference/nn_init_trunc_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..6b55c98b33b0aa1b535b5c997bd9c7c2f2480bfd --- /dev/null +++ b/static/docs/dev/reference/nn_init_trunc_normal_.html @@ -0,0 +1,266 @@ + + + + + + + + +Truncated normal initialization — nn_init_trunc_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values drawn from a truncated +normal distribution.

    +
    + +
    nn_init_trunc_normal_(tensor, mean = 0, std = 1, a = -2, b = -2)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    mean

    the mean of the normal distribution

    std

    the standard deviation of the normal distribution

    a

    the minimum cutoff value

    b

    the maximum cutoff value

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_trunc_normal_(w) + +} +
    #> torch_tensor +#> -2 -2 -2 -2 -2 +#> -2 -2 -2 -2 -2 +#> -2 -2 -2 -2 -2 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_uniform_.html b/static/docs/dev/reference/nn_init_uniform_.html new file mode 100644 index 0000000000000000000000000000000000000000..84adc840cf42731ec58055ea883d74dc218e572d --- /dev/null +++ b/static/docs/dev/reference/nn_init_uniform_.html @@ -0,0 +1,256 @@ + + + + + + + + +Uniform initialization — nn_init_uniform_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values drawn from the uniform distribution \(\mathcal{U}(a, b)\).

    +
    + +
    nn_init_uniform_(tensor, a = 0, b = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    a

    the lower bound of the uniform distribution

    b

    the upper bound of the uniform distribution

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_uniform_(w) + +} +
    #> torch_tensor +#> 0.8556 0.9331 0.3515 0.8071 0.4948 +#> 0.6075 0.9042 0.7181 0.7329 0.7563 +#> 0.2584 0.5293 0.9757 0.3030 0.3341 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_xavier_normal_.html b/static/docs/dev/reference/nn_init_xavier_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..36dbce44e2d476e16d31fc14c995e2e4b79d39d7 --- /dev/null +++ b/static/docs/dev/reference/nn_init_xavier_normal_.html @@ -0,0 +1,256 @@ + + + + + + + + +Xavier normal initialization — nn_init_xavier_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal +distribution.

    +
    + +
    nn_init_xavier_normal_(tensor, gain = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    gain

    an optional scaling factor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_xavier_normal_(w) + +} +
    #> torch_tensor +#> 1.2535 -0.2197 0.5425 -3.0052 -4.2446 +#> -0.3570 -1.6970 -2.0154 -0.5348 2.7582 +#> 0.8714 -0.8924 0.7675 3.2553 -1.4333 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_xavier_uniform_.html b/static/docs/dev/reference/nn_init_xavier_uniform_.html new file mode 100644 index 0000000000000000000000000000000000000000..bcd4669892fb976b30a2f3ff84814910328920ac --- /dev/null +++ b/static/docs/dev/reference/nn_init_xavier_uniform_.html @@ -0,0 +1,256 @@ + + + + + + + + +Xavier uniform initialization — nn_init_xavier_uniform_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform +distribution.

    +
    + +
    nn_init_xavier_uniform_(tensor, gain = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    gain

    an optional scaling factor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_xavier_uniform_(w) + +} +
    #> torch_tensor +#> 1.3397 1.1040 -3.0453 -1.7935 0.9545 +#> -0.0194 -2.4483 2.9345 2.2750 -2.4048 +#> -0.4406 -2.2409 0.4155 -0.1573 1.9776 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_init_zeros_.html b/static/docs/dev/reference/nn_init_zeros_.html new file mode 100644 index 0000000000000000000000000000000000000000..325822336a7bc33d75748f99f9b27b62c4fe30a5 --- /dev/null +++ b/static/docs/dev/reference/nn_init_zeros_.html @@ -0,0 +1,248 @@ + + + + + + + + +Zeros initialization — nn_init_zeros_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Fills the input Tensor with the scalar value 0.

    +
    + +
    nn_init_zeros_(tensor)
    + +

    Arguments

    + + + + + + +
    tensor

    an n-dimensional tensor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_zeros_(w) + +} +
    #> torch_tensor +#> 0 0 0 0 0 +#> 0 0 0 0 0 +#> 0 0 0 0 0 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_leaky_relu.html b/static/docs/dev/reference/nn_leaky_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..5c79b53185bf45e2242aa8768025d25f7653c9d0 --- /dev/null +++ b/static/docs/dev/reference/nn_leaky_relu.html @@ -0,0 +1,273 @@ + + + + + + + + +LeakyReLU module — nn_leaky_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_leaky_relu(negative_slope = 0.01, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    negative_slope

    Controls the angle of the negative slope. Default: 1e-2

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{LeakyReLU}(x) = \max(0, x) + \mbox{negative\_slope} * \min(0, x) +$$ +or

    +

$$ + \mbox{LeakyReLU}(x) = + \left\{ \begin{array}{ll} +x, & \mbox{ if } x \geq 0 \\ +\mbox{negative\_slope} \times x, & \mbox{ otherwise } +\end{array} +\right. +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_leaky_relu(0.1) +input <- torch_randn(2) +output <- m(input) + +} +
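The piecewise definition above can be checked directly: positive inputs pass through unchanged, while negative inputs are scaled by negative_slope. A minimal sketch (assuming the torch R package is installed):

```r
library(torch)

m <- nn_leaky_relu(negative_slope = 0.1)
x <- torch_tensor(c(-2, -1, 0, 1, 2))
m(x)
# negative entries are scaled by 0.1: -0.2 -0.1  0.0  1.0  2.0
```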
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_linear.html b/static/docs/dev/reference/nn_linear.html new file mode 100644 index 0000000000000000000000000000000000000000..8a847728c4e551180550b6307709ffeea34faf1a --- /dev/null +++ b/static/docs/dev/reference/nn_linear.html @@ -0,0 +1,281 @@ + + + + + + + + +Linear module — nn_linear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a linear transformation to the incoming data: y = xA^T + b

    +
    + +
    nn_linear(in_features, out_features, bias = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    in_features

    size of each input sample

    out_features

    size of each output sample

    bias

    If set to FALSE, the layer will not learn an additive bias. +Default: TRUE

    + +

    Shape

    + + + +
      +
    • Input: (N, *, H_in) where * means any number of +additional dimensions and H_in = in_features.

    • +
• Output: (N, *, H_out) where all but the last dimension +are the same shape as the input and H_out = out_features.

    • +
    + +

    Attributes

    + + + +
      +
• weight: the learnable weights of the module of shape +(out_features, in_features). The values are +initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where +\(k = \frac{1}{\mbox{in\_features}}\)

    • +
    • bias: the learnable bias of the module of shape \((\mbox{out\_features})\). +If bias is TRUE, the values are initialized from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{1}{\mbox{in\_features}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_linear(20, 30) +input <- torch_randn(128, 20) +output <- m(input) +print(output$size()) + +} +
    #> [1] 128 30
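The Shape section above allows any number of extra dimensions between the batch and feature axes; only the last dimension must equal in_features. A small sketch illustrating this (assuming the torch R package is installed):

```r
library(torch)

m <- nn_linear(20, 30)
# a 3-d input: only the last dimension must match in_features
input <- torch_randn(128, 4, 20)
output <- m(input)
output$size()
# [1] 128   4  30
```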
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_log_sigmoid.html b/static/docs/dev/reference/nn_log_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..39f9426aee285b8f30e10c1e1e993cdaf760c7ea --- /dev/null +++ b/static/docs/dev/reference/nn_log_sigmoid.html @@ -0,0 +1,253 @@ + + + + + + + + +LogSigmoid module — nn_log_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right) + $$

    +
    + +
    nn_log_sigmoid()
    + + +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_log_sigmoid() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_log_softmax.html b/static/docs/dev/reference/nn_log_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..0d498fdbcf64ba9dad3970fd550ad0aadfa48610 --- /dev/null +++ b/static/docs/dev/reference/nn_log_softmax.html @@ -0,0 +1,266 @@ + + + + + + + + +LogSoftmax module — nn_log_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the \(\log(\mbox{Softmax}(x))\) function to an n-dimensional +input Tensor. The LogSoftmax formulation can be simplified as:

    +
    + +
    nn_log_softmax(dim)
    + +

    Arguments

    + + + + + + +
    dim

    (int): A dimension along which LogSoftmax will be computed.

    + +

    Value

    + +

    a Tensor of the same dimension and shape as the input with +values in the range [-inf, 0)

    +

    Details

    + +

    $$ + \mbox{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right) +$$

    +

    Shape

    + + + +
      +
• Input: \((*)\) where * means any number of additional +dimensions

    • +
    • Output: \((*)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_log_softmax(1) +input <- torch_randn(2, 3) +output <- m(input) + +} +
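Because the output holds log-probabilities, exponentiating and summing along the chosen dim should give (approximately) 1 for every slice. A quick sketch of this check (assuming the torch R package is installed):

```r
library(torch)

# normalize along the second (last) dimension of a 2 x 3 tensor
m <- nn_log_softmax(dim = 2)
input <- torch_randn(2, 3)
output <- m(input)
# each row of exp(output) sums to 1, up to floating-point error
torch_sum(torch_exp(output), dim = 2)
```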
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_lp_pool1d.html b/static/docs/dev/reference/nn_lp_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..5803fb9e9949cd2f984c4641549c525849148ca3 --- /dev/null +++ b/static/docs/dev/reference/nn_lp_pool1d.html @@ -0,0 +1,292 @@ + + + + + + + + +Applies a 1D power-average pooling over an input signal composed of several input +planes. — nn_lp_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    On each window, the function computed is:

    +

    $$ + f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} +$$

    +
    + +
    nn_lp_pool1d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling ( +proportional to average pooling)

    kernel_size

    a single int, the size of the window

    stride

    a single int, the stride of the window. Default value is kernel_size

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + + +
      +
    • At p = \(\infty\), one gets Max Pooling

    • +
    • At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)

    • +
    + +

    Note

    + +

    If the sum to the power of p is zero, the gradient of this function is +not defined. This implementation will set the gradient to zero in this case.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, L_{in})\)

    • +
    • Output: \((N, C, L_{out})\), where

    • +
    + +

    $$ + L_{out} = \left\lfloor\frac{L_{in} - \mbox{kernel\_size}}{\mbox{stride}} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +# power-2 pool of window of length 3, with stride 2. +m <- nn_lp_pool1d(2, 3, stride=2) +input <- torch_randn(20, 16, 50) +output <- m(input) + +} +
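Plugging the example's values into the L_out formula above gives floor((50 - 3) / 2) + 1 = 24. A sketch confirming the output length (assuming the torch R package is installed):

```r
library(torch)

# power-2 pooling, window of length 3, stride 2, over sequences of length 50
m <- nn_lp_pool1d(2, 3, stride = 2)
input <- torch_randn(20, 16, 50)
output <- m(input)
output$size()
# [1] 20 16 24   (L_out = floor((50 - 3)/2) + 1)
```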
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_lp_pool2d.html b/static/docs/dev/reference/nn_lp_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..cee096eefefdcad8c2e3e1006a4f576404589183 --- /dev/null +++ b/static/docs/dev/reference/nn_lp_pool2d.html @@ -0,0 +1,304 @@ + + + + + + + + +Applies a 2D power-average pooling over an input signal composed of several input +planes. — nn_lp_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    On each window, the function computed is:

    +

    $$ + f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} +$$

    +
    + +
    nn_lp_pool2d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling ( +proportional to average pooling)

    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + + +
      +
    • At p = \(\infty\), one gets Max Pooling

    • +
    • At p = 1, one gets Sum Pooling (which is proportional to average pooling)

    • +
    + +

    The parameters kernel_size, stride can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Note

    + +

    If the sum to the power of p is zero, the gradient of this function is +not defined. This implementation will set the gradient to zero in this case.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} - \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} - \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +# power-2 pool of square window of size=3, stride=2 +m <- nn_lp_pool2d(2, 3, stride=2) +# pool of non-square window of power 1.2 +m <- nn_lp_pool2d(1.2, c(3, 2), stride=c(2, 1)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_max_pool1d.html b/static/docs/dev/reference/nn_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..8edf0bc32b3cb650ce9fa506d3b2a01612697b2f --- /dev/null +++ b/static/docs/dev/reference/nn_max_pool1d.html @@ -0,0 +1,301 @@ + + + + + + + + +MaxPool1D module — nn_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D max pooling over an input signal composed of several input +planes.

    +
    + +
    nn_max_pool1d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  return_indices = FALSE,
    +  ceil_mode = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    dilation

    a parameter that controls the stride of elements in the window

    return_indices

    if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool1d() later.

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + +

    In the simplest case, the output value of the layer with input size \((N, C, L)\) +and output \((N, C, L_{out})\) can be precisely described as:

    +

    $$ + out(N_i, C_j, k) = \max_{m=0, \ldots, \mbox{kernel\_size} - 1} +input(N_i, C_j, stride \times k + m) +$$

    +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points. dilation controls the spacing between the kernel points. +It is harder to describe, but this link +has a nice visualization of what dilation does.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, L_{in})\)

    • +
    • Output: \((N, C, L_{out})\), where

    • +
    + +

    $$ + L_{out} = \left\lfloor \frac{L_{in} + 2 \times \mbox{padding} - \mbox{dilation} + \times (\mbox{kernel\_size} - 1) - 1}{\mbox{stride}} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +# pool of size=3, stride=2 +m <- nn_max_pool1d(3, stride=2) +input <- torch_randn(20, 16, 50) +output <- m(input) + +} +
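Setting return_indices = TRUE makes the module return both the pooled values and the positions of the maxima, which is what nn_max_unpool1d() expects later. A sketch (assuming the torch R package is installed):

```r
library(torch)

m <- nn_max_pool1d(2, stride = 2, return_indices = TRUE)
out <- m(torch_randn(1, 1, 8))
out[[1]]$size()  # pooled values, shape 1 1 4
out[[2]]$size()  # indices of the maxima, same shape 1 1 4
```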
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_max_pool2d.html b/static/docs/dev/reference/nn_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..3b93be206f870db010a64772ef8c0f771933e282 --- /dev/null +++ b/static/docs/dev/reference/nn_max_pool2d.html @@ -0,0 +1,316 @@ + + + + + + + + +MaxPool2D module — nn_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D max pooling over an input signal composed of several input +planes.

    +
    + +
    nn_max_pool2d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  return_indices = FALSE,
    +  ceil_mode = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    dilation

    a parameter that controls the stride of elements in the window

    return_indices

    if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool2d() later.

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + +

    In the simplest case, the output value of the layer with input size \((N, C, H, W)\), +output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) +can be precisely described as:

    +

    $$ +\begin{array}{ll} +out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ +& \mbox{input}(N_i, C_j, \mbox{stride[0]} \times h + m, + \mbox{stride[1]} \times w + n) +\end{array} +$$

    +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points. dilation controls the spacing between the kernel points. +It is harder to describe, but this link has a nice visualization of what dilation does.

    +

    The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} + 2 * \mbox{padding[0]} - \mbox{dilation[0]} + \times (\mbox{kernel\_size[0]} - 1) - 1}{\mbox{stride[0]}} + 1\right\rfloor +$$

    +

    $$ + W_{out} = \left\lfloor\frac{W_{in} + 2 * \mbox{padding[1]} - \mbox{dilation[1]} + \times (\mbox{kernel\_size[1]} - 1) - 1}{\mbox{stride[1]}} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +# pool of square window of size=3, stride=2 +m <- nn_max_pool2d(3, stride=2) +# pool of non-square window +m <- nn_max_pool2d(c(3, 2), stride=c(2, 1)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_max_pool3d.html b/static/docs/dev/reference/nn_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..c46536060f47f540902bd17871deddb46893731c --- /dev/null +++ b/static/docs/dev/reference/nn_max_pool3d.html @@ -0,0 +1,321 @@ + + + + + + + + +Applies a 3D max pooling over an input signal composed of several input +planes. — nn_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), +output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) +can be precisely described as:

    +
    + +
    nn_max_pool3d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  return_indices = FALSE,
    +  ceil_mode = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on all three sides

    dilation

    a parameter that controls the stride of elements in the window

    return_indices

if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool3d() later.

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + +

    $$ +\begin{array}{ll} +\mbox{out}(N_i, C_j, d, h, w) = & \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ + & \mbox{input}(N_i, C_j, \mbox{stride[0]} \times d + k, \mbox{stride[1]} \times h + m, \mbox{stride[2]} \times w + n) +\end{array} +$$

    +

If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points. dilation controls the spacing between the kernel points. +It is harder to describe, but this link has a nice visualization of what dilation does. +The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where +$$ + D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] \times + (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor +$$

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] \times + (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor +$$

    +

    $$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{dilation}[2] \times + (\mbox{kernel\_size}[2] - 1) - 1}{\mbox{stride}[2]} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +# pool of square window of size=3, stride=2 +m <- nn_max_pool3d(3, stride=2) +# pool of non-square window +m <- nn_max_pool3d(c(3, 2, 2), stride=c(2, 1, 2)) +input <- torch_randn(20, 16, 50,44, 31) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_max_unpool1d.html b/static/docs/dev/reference/nn_max_unpool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..8673d13f5caf0de4daa52402a8376360abbb811d --- /dev/null +++ b/static/docs/dev/reference/nn_max_unpool1d.html @@ -0,0 +1,309 @@ + + + + + + + + +Computes a partial inverse of MaxPool1d. — nn_max_unpool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    MaxPool1d is not fully invertible, since the non-maximal values are lost. +MaxUnpool1d takes in as input the output of MaxPool1d +including the indices of the maximal values and computes a partial inverse +in which all non-maximal values are set to zero.

    +
    + +
    nn_max_unpool1d(kernel_size, stride = NULL, padding = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    kernel_size

    (int or tuple): Size of the max pooling window.

    stride

    (int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

    padding

    (int or tuple): Padding that was added to the input

    + +

    Note

    + +

    MaxPool1d can map several input sizes to the same output +sizes. Hence, the inversion process can get ambiguous. +To accommodate this, you can provide the needed output size +as an additional argument output_size in the forward call. +See the Inputs and Example below.

    +

    Inputs

    + + + +
      +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool1d()

    • +
    • output_size (optional): the targeted output size

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in})\)

    • +
    • Output: \((N, C, H_{out})\), where +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{kernel\_size}[0] +$$ +or as given by output_size in the call operator

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +pool <- nn_max_pool1d(2, stride=2, return_indices=TRUE) +unpool <- nn_max_unpool1d(2, stride=2) + +input <- torch_tensor(array(1:8/1, dim = c(1,1,8))) +out <- pool(input) +unpool(out[[1]], out[[2]]) + +# Example showcasing the use of output_size +input <- torch_tensor(array(1:8/1, dim = c(1,1,8))) +out <- pool(input) +unpool(out[[1]], out[[2]], output_size=input$size()) +unpool(out[[1]], out[[2]]) + +} +
    #> torch_tensor +#> (1,1,.,.) = +#> 0 +#> 2 +#> 0 +#> 4 +#> 0 +#> 6 +#> 0 +#> 8 +#> [ CPUFloatType{1,1,8,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_max_unpool2d.html b/static/docs/dev/reference/nn_max_unpool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..66e43126bfae6b4dbc4e3edc48d2e0ffa592950b --- /dev/null +++ b/static/docs/dev/reference/nn_max_unpool2d.html @@ -0,0 +1,306 @@ + + + + + + + + +Computes a partial inverse of MaxPool2d. — nn_max_unpool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    MaxPool2d is not fully invertible, since the non-maximal values are lost. +MaxUnpool2d takes in as input the output of MaxPool2d +including the indices of the maximal values and computes a partial inverse +in which all non-maximal values are set to zero.

    +
    + +
    nn_max_unpool2d(kernel_size, stride = NULL, padding = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    kernel_size

    (int or tuple): Size of the max pooling window.

    stride

    (int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

    padding

    (int or tuple): Padding that was added to the input

    + +

    Note

    + +

    MaxPool2d can map several input sizes to the same output +sizes. Hence, the inversion process can get ambiguous. +To accommodate this, you can provide the needed output size +as an additional argument output_size in the forward call. +See the Inputs and Example below.

    +

    Inputs

    + + + +
      +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool2d()

    • +
    • output_size (optional): the targeted output size

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride[0]} - 2 \times \mbox{padding[0]} + \mbox{kernel\_size[0]} +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride[1]} - 2 \times \mbox{padding[1]} + \mbox{kernel\_size[1]} +$$ +or as given by output_size in the call operator

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +pool <- nn_max_pool2d(2, stride=2, return_indices=TRUE) +unpool <- nn_max_unpool2d(2, stride=2) +input <- torch_randn(1,1,4,4) +out <- pool(input) +unpool(out[[1]], out[[2]]) + +# specify a different output size than input size +unpool(out[[1]], out[[2]], output_size=c(1, 1, 5, 5)) + +} +
    #> torch_tensor +#> (1,1,.,.) = +#> 0.0000 0.0000 0.0000 0.0000 0.3625 +#> 0.0000 0.0000 0.7802 0.0000 0.0000 +#> 0.0000 0.5085 2.6613 0.0000 0.0000 +#> 0.0000 0.0000 0.0000 0.0000 0.0000 +#> 0.0000 0.0000 0.0000 0.0000 0.0000 +#> [ CPUFloatType{1,1,5,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_max_unpool3d.html b/static/docs/dev/reference/nn_max_unpool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..0451277ed32791c4838b0642e677bb1402091981 --- /dev/null +++ b/static/docs/dev/reference/nn_max_unpool3d.html @@ -0,0 +1,300 @@ + + + + + + + + +Computes a partial inverse of MaxPool3d. — nn_max_unpool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    MaxPool3d is not fully invertible, since the non-maximal values are lost. +MaxUnpool3d takes in as input the output of MaxPool3d +including the indices of the maximal values and computes a partial inverse +in which all non-maximal values are set to zero.

    +
    + +
    nn_max_unpool3d(kernel_size, stride = NULL, padding = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    kernel_size

    (int or tuple): Size of the max pooling window.

    stride

    (int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

    padding

    (int or tuple): Padding that was added to the input

    + +

    Note

    + +

    MaxPool3d can map several input sizes to the same output +sizes. Hence, the inversion process can get ambiguous. +To accommodate this, you can provide the needed output size +as an additional argument output_size in the forward call. +See the Inputs section below.

    +

    Inputs

    + + + +
      +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool3d()

    • +
    • output_size (optional): the targeted output size

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where

    • +
    + +

    $$ + D_{out} = (D_{in} - 1) \times \mbox{stride[0]} - 2 \times \mbox{padding[0]} + \mbox{kernel\_size[0]} +$$ +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride[1]} - 2 \times \mbox{padding[1]} + \mbox{kernel\_size[1]} +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride[2]} - 2 \times \mbox{padding[2]} + \mbox{kernel\_size[2]} +$$

    +

    or as given by output_size in the call operator

    + +

    Examples

    +
    if (torch_is_installed()) { + +# pool of square window of size=3, stride=2 +pool <- nn_max_pool3d(3, stride=2, return_indices=TRUE) +unpool <- nn_max_unpool3d(3, stride=2) +out <- pool(torch_randn(20, 16, 51, 33, 15)) +unpooled_output <- unpool(out[[1]], out[[2]]) +unpooled_output$size() + +} +
    #> [1] 20 16 51 33 15
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_module.html b/static/docs/dev/reference/nn_module.html new file mode 100644 index 0000000000000000000000000000000000000000..c9f118eb37c269b2417f14e3e000f19fa78cf040 --- /dev/null +++ b/static/docs/dev/reference/nn_module.html @@ -0,0 +1,276 @@ + + + + + + + + +Base class for all neural network modules. — nn_module • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Your models should also subclass this class.

    +
    + +
    nn_module(
    +  classname = NULL,
    +  inherit = nn_Module,
    +  ...,
    +  parent_env = parent.frame()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    classname

    an optional name for the module

    inherit

    an optional module to inherit from

    ...

    methods implementation

    parent_env

    passed to R6::R6Class().

    + +

    Details

    + +

Modules can also contain other modules, allowing you to nest them in a tree +structure. You can assign the submodules as regular attributes.

    + +

    Examples

    +
    if (torch_is_installed()) { +model <- nn_module( + initialize = function() { + self$conv1 <- nn_conv2d(1, 20, 5) + self$conv2 <- nn_conv2d(20, 20, 5) + }, + forward = function(input) { + input <- self$conv1(input) + input <- nnf_relu(input) + input <- self$conv2(input) + input <- nnf_relu(input) + input + } +) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_module_list.html b/static/docs/dev/reference/nn_module_list.html new file mode 100644 index 0000000000000000000000000000000000000000..7ab7fc21d966b64aff5eca7b71dcc53b7eac2d7f --- /dev/null +++ b/static/docs/dev/reference/nn_module_list.html @@ -0,0 +1,257 @@ + + + + + + + + +Holds submodules in a list. — nn_module_list • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

nn_module_list can be indexed like a regular R list, but the +modules it contains are properly registered and will be visible to all +nn_module methods.

    +
    + +
    nn_module_list(modules = list())
    + +

    Arguments

    + + + + + + +
    modules

    a list of modules to add

    + + +

    Examples

    +
    if (torch_is_installed()) { + +my_module <- nn_module( + initialize = function() { + self$linears <- nn_module_list(lapply(1:10, function(x) nn_linear(10, 10))) + }, + forward = function(x) { + for (i in 1:length(self$linears)) + x <- self$linears[[i]](x) + x + } +) + +} +
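Because the list registers its submodules, instantiating the module defined above exposes all of their parameters. A sketch of such a check (the parameter count assumes each nn_linear contributes one weight and one bias):

```r
library(torch)

my_module <- nn_module(
  initialize = function() {
    self$linears <- nn_module_list(lapply(1:10, function(x) nn_linear(10, 10)))
  },
  forward = function(x) {
    for (i in 1:length(self$linears))
      x <- self$linears[[i]](x)
    x
  }
)

m <- my_module()
length(m$parameters)   # 20: a weight and a bias for each of the 10 layers
m(torch_randn(2, 10))$size()
# [1]  2 10
```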
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_multihead_attention.html b/static/docs/dev/reference/nn_multihead_attention.html new file mode 100644 index 0000000000000000000000000000000000000000..f15be544229abaabe1b488c0c10d1b98d4898e54 --- /dev/null +++ b/static/docs/dev/reference/nn_multihead_attention.html @@ -0,0 +1,330 @@ + + + + + + + + +MultiHead attention — nn_multihead_attention • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Allows the model to jointly attend to information +from different representation subspaces. +See reference: Attention Is All You Need

    +
    + +
    nn_multihead_attention(
    +  embed_dim,
    +  num_heads,
    +  dropout = 0,
    +  bias = TRUE,
    +  add_bias_kv = FALSE,
    +  add_zero_attn = FALSE,
    +  kdim = NULL,
    +  vdim = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    embed_dim

    total dimension of the model.

    num_heads

    parallel attention heads.

    dropout

the dropout probability applied to attn_output_weights. Default: 0.0.

    bias

add bias as a module parameter. Default: TRUE.

    add_bias_kv

    add bias to the key and value sequences at dim=0.

    add_zero_attn

    add a new batch of zeros to the key and +value sequences at dim=1.

    kdim

    total number of features in key. Default: NULL

    vdim

    total number of features in value. Default: NULL. +Note: if kdim and vdim are NULL, they will be set to embed_dim such that +query, key, and value have the same number of features.

    + +

    Details

    + +

$$ + \mbox{MultiHead}(Q, K, V) = \mbox{Concat}(head_1, \dots, head_h)W^O +\quad \mbox{where } head_i = \mbox{Attention}(QW_i^Q, KW_i^K, VW_i^V) +$$

    +

    Shape

    + + + + +

    Inputs:

      +
    • query: \((L, N, E)\) where L is the target sequence length, N is the batch size, E is +the embedding dimension.

    • +
    • key: \((S, N, E)\), where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    • +
    • value: \((S, N, E)\) where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    • +
• key_padding_mask: \((N, S)\) where N is the batch size, S is the source sequence length. +If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions +will be unchanged. If a BoolTensor is provided, the positions with the value of TRUE will be +ignored while the positions with the value of FALSE will be unchanged.

    • +
• attn_mask: 2D mask \((L, S)\) where L is the target sequence length, S is the source sequence length. +3D mask \((N*num_heads, L, S)\) where N is the batch size, L is the target sequence length, +S is the source sequence length. attn_mask ensures that position i is only allowed to attend +to the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend +while the zero positions will be unchanged. If a BoolTensor is provided, positions with TRUE +are not allowed to attend while FALSE values will be unchanged. If a FloatTensor +is provided, it will be added to the attention weight.

    • +
    + +

    Outputs:

      +
    • attn_output: \((L, N, E)\) where L is the target sequence length, N is the batch size, +E is the embedding dimension.

    • +
    • attn_output_weights: \((N, L, S)\) where N is the batch size, +L is the target sequence length, S is the source sequence length.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +multihead_attn = nn_multihead_attention(embed_dim, num_heads) +out <- multihead_attn(query, key, value) +attn_output <- out[[1]] +attn_output_weights <- out[[2]] +} + +} +
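The example above leaves embed_dim, num_heads, and the inputs undefined. A concrete, hedged sketch with assumed dimensions (target length L = 4, source length S = 5, batch N = 2, embedding E = 16; E must be divisible by num_heads):

```r
library(torch)

embed_dim <- 16
num_heads <- 4
multihead_attn <- nn_multihead_attention(embed_dim, num_heads)

query <- torch_randn(4, 2, embed_dim)  # (L, N, E)
key   <- torch_randn(5, 2, embed_dim)  # (S, N, E)
value <- torch_randn(5, 2, embed_dim)  # (S, N, E)

out <- multihead_attn(query, key, value)
out[[1]]$shape  # attn_output: (4, 2, 16)
out[[2]]$shape  # attn_output_weights: (2, 4, 5)
```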
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_prelu.html b/static/docs/dev/reference/nn_prelu.html new file mode 100644 index 0000000000000000000000000000000000000000..c21bb2d739776f399702532b195b05cc0767e7a3 --- /dev/null +++ b/static/docs/dev/reference/nn_prelu.html @@ -0,0 +1,303 @@ + + + + + + + + +PReLU module — nn_prelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{PReLU}(x) = \max(0,x) + a * \min(0,x) +$$ +or +$$ + \mbox{PReLU}(x) = + \left\{ \begin{array}{ll} +x, & \mbox{ if } x \geq 0 \\ +ax, & \mbox{ otherwise } +\end{array} +\right. +$$

    +
    + +
    nn_prelu(num_parameters = 1, init = 0.25)
    + +

    Arguments

    + + + + + + + + + + +
    num_parameters

(int): number of \(a\) to learn. +Although it takes an int as input, only two values are legitimate: +1, or the number of channels of the input. Default: 1

    init

    (float): the initial value of \(a\). Default: 0.25

    + +

    Details

    + +

Here \(a\) is a learnable parameter. When called without arguments, nn_prelu() uses a single +parameter \(a\) across all input channels. If called with nn_prelu(nChannels), +a separate \(a\) is used for each input channel.

    +

    Note

    + +

For good performance, weight decay should not be used when learning \(a\).

    +

The channel dim is the 2nd dim of the input. When the input has fewer than 2 dims, there is +no channel dim and the number of channels is 1.

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of shape (num_parameters).

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_prelu() +input <- torch_randn(2) +output <- m(input) + +} +
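To learn a separate \(a\) per channel, pass the channel count as num_parameters. A hedged sketch with an assumed (N, C, H, W) input where C = 3 (the channel dim is the 2nd dim):

```r
library(torch)

m <- nn_prelu(num_parameters = 3)  # one learnable a per channel
input <- torch_randn(2, 3, 8, 8)   # (N, C, H, W) with C = 3
output <- m(input)
m$weight$shape                     # (3): one parameter per channel
```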
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_relu.html b/static/docs/dev/reference/nn_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..b6d05816fdd115b385262b0ceec814e4c6d9e51f --- /dev/null +++ b/static/docs/dev/reference/nn_relu.html @@ -0,0 +1,260 @@ + + + + + + + + +ReLU module — nn_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the rectified linear unit function element-wise +$$\mbox{ReLU}(x) = (x)^+ = \max(0, x)$$

    +
    + +
    nn_relu(inplace = FALSE)
    + +

    Arguments

    + + + + + + +
    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_relu() +input <- torch_randn(2) +m(input) + +} +
    #> torch_tensor +#> 0.0000 +#> 0.2676 +#> [ CPUFloatType{2} ]
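With inplace = TRUE the input tensor itself is overwritten, which saves memory but discards the original values. A minimal sketch:

```r
library(torch)

m <- nn_relu(inplace = TRUE)
input <- torch_randn(2)
m(input)  # input now holds the rectified values as well
```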
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_relu6.html b/static/docs/dev/reference/nn_relu6.html new file mode 100644 index 0000000000000000000000000000000000000000..54e90583319df5192cd18c5db9c445258f339830 --- /dev/null +++ b/static/docs/dev/reference/nn_relu6.html @@ -0,0 +1,260 @@ + + + + + + + + +ReLu6 module — nn_relu6 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_relu6(inplace = FALSE)
    + +

    Arguments

    + + + + + + +
    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{ReLU6}(x) = \min(\max(0,x), 6) +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_relu6() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_rnn.html b/static/docs/dev/reference/nn_rnn.html new file mode 100644 index 0000000000000000000000000000000000000000..7373904057cd097cfcfb6d836660bfd88ba10266 --- /dev/null +++ b/static/docs/dev/reference/nn_rnn.html @@ -0,0 +1,479 @@ + + + + + + + + +RNN module — nn_rnn • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a multi-layer Elman RNN with \(\tanh\) or \(\mbox{ReLU}\) non-linearity +to an input sequence.

    +
    + +
    nn_rnn(
    +  input_size,
    +  hidden_size,
    +  num_layers = 1,
    +  nonlinearity = NULL,
    +  bias = TRUE,
    +  batch_first = FALSE,
    +  dropout = 0,
    +  bidirectional = FALSE,
    +  ...
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input_size

    The number of expected features in the input x

    hidden_size

    The number of features in the hidden state h

    num_layers

    Number of recurrent layers. E.g., setting num_layers=2 +would mean stacking two RNNs together to form a stacked RNN, +with the second RNN taking in outputs of the first RNN and +computing the final results. Default: 1

    nonlinearity

    The non-linearity to use. Can be either 'tanh' or +'relu'. Default: 'tanh'

    bias

    If FALSE, then the layer does not use bias weights b_ih and +b_hh. Default: TRUE

    batch_first

    If TRUE, then the input and output tensors are provided +as (batch, seq, feature). Default: FALSE

    dropout

    If non-zero, introduces a Dropout layer on the outputs of each +RNN layer except the last layer, with dropout probability equal to +dropout. Default: 0

    bidirectional

    If TRUE, becomes a bidirectional RNN. Default: FALSE

    ...

    other arguments that can be passed to the super class.

    + +

    Details

    + +

    For each element in the input sequence, each layer computes the following +function:

    +

    $$ +h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh}) +$$

    +

    where \(h_t\) is the hidden state at time t, \(x_t\) is +the input at time t, and \(h_{(t-1)}\) is the hidden state of the +previous layer at time t-1 or the initial hidden state at time 0. +If nonlinearity is 'relu', then \(\mbox{ReLU}\) is used instead of +\(\tanh\).

    +

    Inputs

    + + + +
      +
    • input of shape (seq_len, batch, input_size): tensor containing the features +of the input sequence. The input can also be a packed variable length +sequence.

    • +
    • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor +containing the initial hidden state for each element in the batch. +Defaults to zero if not provided. If the RNN is bidirectional, +num_directions should be 2, else it should be 1.

    • +
    + +

    Outputs

    + + + +
      +
• output of shape (seq_len, batch, num_directions * hidden_size): tensor +containing the output features (h_t) from the last layer of the RNN, +for each t. If an nn_packed_sequence has +been given as the input, the output will also be a packed sequence. +For the unpacked case, the directions can be separated +using output$view(seq_len, batch, num_directions, hidden_size), +with forward and backward being direction 0 and 1 respectively. +Similarly, the directions can be separated in the packed case.

    • +
    • h_n of shape (num_layers * num_directions, batch, hidden_size): tensor +containing the hidden state for t = seq_len. +Like output, the layers can be separated using +h_n$view(num_layers, num_directions, batch, hidden_size).

    • +
    + +

    Shape

    + + + +
      +
• Input1: \((L, N, H_{in})\) tensor containing input features where +\(H_{in}=\mbox{input\_size}\) and L represents the sequence length.

    • +
• Input2: \((S, N, H_{out})\) tensor +containing the initial hidden state for each element in the batch, +where \(S=\mbox{num\_layers} * \mbox{num\_directions}\) and \(H_{out}=\mbox{hidden\_size}\). +Defaults to zero if not provided. +If the RNN is bidirectional, num_directions should be 2, else it should be 1.

    • +
    • Output1: \((L, N, H_{all})\) where \(H_{all}=\mbox{num\_directions} * \mbox{hidden\_size}\)

    • +
    • Output2: \((S, N, H_{out})\) tensor containing the next hidden state +for each element in the batch

    • +
    + +

    Attributes

    + + + +
      +
    • weight_ih_l[k]: the learnable input-hidden weights of the k-th layer, +of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is +(hidden_size, num_directions * hidden_size)

    • +
    • weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer, +of shape (hidden_size, hidden_size)

    • +
    • bias_ih_l[k]: the learnable input-hidden bias of the k-th layer, +of shape (hidden_size)

    • +
    • bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer, +of shape (hidden_size)

    • +
    + +

    Note

    + + + + +

    All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) +where \(k = \frac{1}{\mbox{hidden\_size}}\)

    + +

    Examples

    +
    if (torch_is_installed()) { +rnn <- nn_rnn(10, 20, 2) +input <- torch_randn(5, 3, 10) +h0 <- torch_randn(2, 3, 20) +rnn(input, h0) + +} +
    #> [[1]] +#> torch_tensor +#> (1,.,.) = +#> Columns 1 to 9 0.7161 0.2226 -0.0357 0.7649 -0.8778 -0.1407 0.4345 -0.4041 0.4700 +#> -0.6661 0.5943 -0.2670 0.3763 -0.3201 -0.3885 0.6264 -0.2784 0.6874 +#> 0.7119 0.5669 0.6824 0.6195 -0.4618 0.5758 -0.2675 0.0119 -0.0457 +#> +#> Columns 10 to 18 0.6897 0.7546 0.6845 -0.4144 0.7661 0.4690 -0.1977 0.7332 0.4410 +#> -0.1916 0.3248 -0.2304 0.0221 -0.4793 0.2567 0.0762 0.5599 0.9369 +#> 0.9354 0.4537 -0.0197 -0.1920 0.4417 -0.3814 -0.3907 0.7424 0.6078 +#> +#> Columns 19 to 20 0.4127 -0.0569 +#> -0.3132 -0.0694 +#> 0.6415 0.2514 +#> +#> (2,.,.) = +#> Columns 1 to 9 -0.2898 0.0264 0.3182 -0.0434 -0.5073 -0.1787 -0.1681 0.0146 -0.0225 +#> -0.0181 0.2862 0.3248 0.1155 0.1944 -0.4386 -0.2307 0.3997 0.1378 +#> -0.4991 -0.0315 -0.0003 0.0188 -0.2259 -0.4118 -0.3810 -0.3168 -0.5106 +#> +#> Columns 10 to 18 0.7821 -0.4271 -0.3880 0.0370 0.4379 0.2096 -0.5661 0.8335 0.3392 +#> 0.8353 -0.3238 -0.1885 -0.0234 -0.1032 -0.0077 0.0145 0.2430 0.5450 +#> 0.5930 -0.2843 -0.3047 -0.3095 0.4868 0.5259 -0.6060 0.3336 0.2464 +#> +#> Columns 19 to 20 0.3208 0.1020 +#> 0.1593 0.5415 +#> 0.0749 0.3683 +#> +#> (3,.,.) = +#> Columns 1 to 9 -0.1481 -0.0683 0.2063 0.6848 -0.5957 0.2993 -0.5960 -0.2625 -0.4177 +#> -0.1699 0.3487 0.1203 0.6257 -0.2836 0.3066 -0.3963 0.0289 -0.3004 +#> 0.0403 0.1568 0.3575 0.1806 -0.2720 -0.2082 -0.6743 -0.2219 -0.0111 +#> +#> Columns 10 to 18 0.4896 0.1430 -0.2018 -0.1651 0.3086 0.0932 -0.3173 0.4461 0.2299 +#> 0.3123 -0.1588 -0.4074 -0.2209 0.3587 -0.0577 -0.2296 0.1526 0.0306 +#> 0.8882 0.4440 -0.3783 -0.1405 0.2539 0.3952 -0.4508 0.5254 0.2067 +#> +#> Columns 19 to 20 0.4001 0.0285 +#> 0.0215 0.2779 +#> 0.1267 -0.0228 +#> +#> (4,.,.) 
= +#> Columns 1 to 9 0.3506 0.1406 0.6083 -0.1419 -0.2605 0.0976 -0.5502 -0.2227 -0.3056 +#> -0.4934 0.2197 0.3039 0.3677 -0.3203 -0.1752 -0.3564 -0.3170 -0.2151 +#> 0.0362 0.3331 -0.0831 0.1896 -0.5873 -0.2126 -0.4437 -0.0855 -0.5144 +#> +#> Columns 10 to 18 0.6062 -0.0803 0.2738 -0.0751 0.0101 0.5143 -0.1504 0.4770 0.5594 +#> 0.4784 0.1200 -0.2187 -0.4459 0.3747 0.1906 -0.3137 -0.0697 0.4113 +#> 0.7554 0.3072 -0.2845 -0.2918 0.3419 0.5291 -0.0655 0.5719 -0.0901 +#> +#> Columns 19 to 20 0.4132 0.0458 +#> -0.0708 0.1516 +#> 0.1421 0.0844 +#> +#> (5,.,.) = +#> Columns 1 to 9 -0.2136 -0.2048 0.3624 0.1719 -0.1033 -0.1328 -0.4668 0.0815 -0.2587 +#> -0.6862 0.0822 0.4199 0.7108 -0.6515 0.3491 -0.2806 -0.2946 -0.3405 +#> -0.5081 0.1604 -0.0631 0.6548 -0.3199 0.1808 -0.3563 -0.1484 -0.4249 +#> +#> Columns 10 to 18 0.4771 -0.2079 -0.4903 -0.0846 -0.2072 0.1638 0.0225 0.2321 0.3682 +#> 0.4638 -0.3009 -0.4535 -0.5379 0.0756 -0.2180 -0.0427 0.2593 0.3729 +#> 0.5043 0.1803 -0.4552 -0.4080 0.5145 0.1950 0.1238 0.5248 0.6656 +#> +#> Columns 19 to 20 -0.1138 -0.2101 +#> 0.2529 0.1496 +#> 0.1794 0.0149 +#> [ CPUFloatType{5,3,20} ] +#> +#> [[2]] +#> torch_tensor +#> (1,.,.) = +#> Columns 1 to 9 -0.3036 -0.2660 -0.3928 0.1197 0.3798 0.0506 -0.1945 0.3296 -0.2563 +#> -0.4802 -0.3518 -0.5960 -0.8864 0.3126 -0.5528 -0.3168 -0.0299 0.6901 +#> -0.3496 -0.3312 -0.6669 -0.2741 -0.0690 0.0672 -0.3587 -0.3930 0.6656 +#> +#> Columns 10 to 18 -0.5301 0.1393 0.1400 0.3850 -0.1208 -0.2940 0.0193 -0.2334 0.1071 +#> -0.6637 0.0099 -0.3535 0.4259 -0.0213 -0.7286 0.6091 -0.1422 -0.6873 +#> -0.4200 -0.5866 0.3875 -0.0704 -0.2061 -0.0485 0.0553 -0.5008 0.0471 +#> +#> Columns 19 to 20 -0.2102 0.4555 +#> 0.3224 -0.4292 +#> 0.5104 -0.1123 +#> +#> (2,.,.) 
= +#> Columns 1 to 9 -0.2136 -0.2048 0.3624 0.1719 -0.1033 -0.1328 -0.4668 0.0815 -0.2587 +#> -0.6862 0.0822 0.4199 0.7108 -0.6515 0.3491 -0.2806 -0.2946 -0.3405 +#> -0.5081 0.1604 -0.0631 0.6548 -0.3199 0.1808 -0.3563 -0.1484 -0.4249 +#> +#> Columns 10 to 18 0.4771 -0.2079 -0.4903 -0.0846 -0.2072 0.1638 0.0225 0.2321 0.3682 +#> 0.4638 -0.3009 -0.4535 -0.5379 0.0756 -0.2180 -0.0427 0.2593 0.3729 +#> 0.5043 0.1803 -0.4552 -0.4080 0.5145 0.1950 0.1238 0.5248 0.6656 +#> +#> Columns 19 to 20 -0.1138 -0.2101 +#> 0.2529 0.1496 +#> 0.1794 0.0149 +#> [ CPUFloatType{2,3,20} ] +#>
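A bidirectional variant doubles num_directions, so h0 gains layers and the output feature size doubles. A hedged sketch reusing the sizes from the example above:

```r
library(torch)

rnn <- nn_rnn(10, 20, 2, bidirectional = TRUE)
input <- torch_randn(5, 3, 10)  # (seq_len, batch, input_size)
h0 <- torch_randn(4, 3, 20)     # num_layers * num_directions = 4
out <- rnn(input, h0)
out[[1]]$shape  # (5, 3, 40): num_directions * hidden_size
out[[2]]$shape  # (4, 3, 20)
```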
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_rrelu.html b/static/docs/dev/reference/nn_rrelu.html new file mode 100644 index 0000000000000000000000000000000000000000..d066786e128b136d955375206be05897ccb2e382 --- /dev/null +++ b/static/docs/dev/reference/nn_rrelu.html @@ -0,0 +1,283 @@ + + + + + + + + +RReLU module — nn_rrelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies the randomized leaky rectified linear unit function, element-wise, +as described in the paper:

    +
    + +
    nn_rrelu(lower = 1/8, upper = 1/3, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    lower

    lower bound of the uniform distribution. Default: \(\frac{1}{8}\)

    upper

    upper bound of the uniform distribution. Default: \(\frac{1}{3}\)

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    Empirical Evaluation of Rectified Activations in Convolutional Network.

    +

    The function is defined as:

    +

    $$ +\mbox{RReLU}(x) = +\left\{ \begin{array}{ll} +x & \mbox{if } x \geq 0 \\ +ax & \mbox{ otherwise } +\end{array} +\right. +$$

    +

    where \(a\) is randomly sampled from uniform distribution +\(\mathcal{U}(\mbox{lower}, \mbox{upper})\). +See: https://arxiv.org/pdf/1505.00853.pdf

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_rrelu(0.1, 0.3) +input <- torch_randn(2) +m(input) + +} +
    #> torch_tensor +#> 1.1319 +#> 1.1798 +#> [ CPUFloatType{2} ]
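The slope \(a\) is only sampled during training; in evaluation mode a fixed value (the midpoint of lower and upper) is used, so results become deterministic. A sketch, assuming the eval-mode behavior mirrors PyTorch's RReLU:

```r
library(torch)

m <- nn_rrelu(0.1, 0.3)
input <- torch_randn(2)
m$train()
m(input)  # negative values scaled by a random a in [0.1, 0.3]
m$eval()
m(input)  # negative values scaled by the fixed midpoint 0.2
```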
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_selu.html b/static/docs/dev/reference/nn_selu.html new file mode 100644 index 0000000000000000000000000000000000000000..1e99afe235953155ebbd390aaee352cfecdeb4b4 --- /dev/null +++ b/static/docs/dev/reference/nn_selu.html @@ -0,0 +1,264 @@ + + + + + + + + +SELU module — nn_selu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applied element-wise, as:

    +
    + +
    nn_selu(inplace = FALSE)
    + +

    Arguments

    + + + + + + +
    inplace

    (bool, optional): can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{SELU}(x) = \mbox{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))) +$$

    +

    with \(\alpha = 1.6732632423543772848170429916717\) and +\(\mbox{scale} = 1.0507009873554804934193349852946\).

    +

    More details can be found in the paper +Self-Normalizing Neural Networks.

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_selu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_sequential.html b/static/docs/dev/reference/nn_sequential.html new file mode 100644 index 0000000000000000000000000000000000000000..ba7d119e0fd69533819cc89ce396ac3a356299c6 --- /dev/null +++ b/static/docs/dev/reference/nn_sequential.html @@ -0,0 +1,259 @@ + + + + + + + + +A sequential container — nn_sequential • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A sequential container. +Modules will be added to it in the order they are passed in the constructor. +See examples.

    +
    + +
    nn_sequential(..., name = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    ...

    sequence of modules to be added

    name

    optional name for the generated module.

    + + +

    Examples

    +
    if (torch_is_installed()) { + +model <- nn_sequential( + nn_conv2d(1, 20, 5), + nn_relu(), + nn_conv2d(20, 64, 5), + nn_relu() +) +input <- torch_randn(32, 1, 28, 28) +output <- model(input) + +} +
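In the example above, each 5 × 5 convolution shrinks the spatial size by 4 (28 → 24 → 20). A sketch annotating how the shapes flow through the container:

```r
library(torch)

model <- nn_sequential(
  nn_conv2d(1, 20, 5),   # (32, 1, 28, 28)  -> (32, 20, 24, 24)
  nn_relu(),
  nn_conv2d(20, 64, 5),  # (32, 20, 24, 24) -> (32, 64, 20, 20)
  nn_relu()
)
output <- model(torch_randn(32, 1, 28, 28))
output$shape  # (32, 64, 20, 20)
```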
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_sigmoid.html b/static/docs/dev/reference/nn_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..e096f91b93da316b6d40e85397ba989dbbd708be --- /dev/null +++ b/static/docs/dev/reference/nn_sigmoid.html @@ -0,0 +1,252 @@ + + + + + + + + +Sigmoid module — nn_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_sigmoid()
    + + +

    Details

    + +

    $$ + \mbox{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)} +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_sigmoid() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_softmax.html b/static/docs/dev/reference/nn_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..5accadffa9365b6eb487482d3e3e2f1296862cf4 --- /dev/null +++ b/static/docs/dev/reference/nn_softmax.html @@ -0,0 +1,279 @@ + + + + + + + + +Softmax module — nn_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the Softmax function to an n-dimensional input Tensor +rescaling them so that the elements of the n-dimensional output Tensor +lie in the range [0,1] and sum to 1. +Softmax is defined as:

    +
    + +
    nn_softmax(dim)
    + +

    Arguments

    + + + + + + +
    dim

    (int): A dimension along which Softmax will be computed (so every slice +along dim will sum to 1).

    + +

    Value

    + +

a Tensor of the same dimension and shape as the input, with +values in the range [0, 1]

    +

    Details

    + +

    $$ + \mbox{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} +$$

    +

When the input Tensor is a sparse tensor, the unspecified +values are treated as -Inf.

    +

    Note

    + +

This module doesn't work directly with NLLLoss, +which expects the log to be computed between the softmax and itself. +Use nn_log_softmax instead (it's faster and has better numerical properties).

    +

    Shape

    + + + +
      +
• Input: \((*)\) where * means any number of additional +dimensions

    • +
    • Output: \((*)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softmax(1) +input <- torch_randn(2, 3) +output <- m(input) + +} +
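Every slice along dim sums to 1, which is easy to check numerically. A sketch, using the (2, 3) input from the example and summing over the softmax dimension:

```r
library(torch)

m <- nn_softmax(dim = 2)  # normalize across the 3 columns of each row
input <- torch_randn(2, 3)
output <- m(input)
output$sum(dim = 2)       # both row sums are 1
```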
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_softmax2d.html b/static/docs/dev/reference/nn_softmax2d.html new file mode 100644 index 0000000000000000000000000000000000000000..f143cf3f001f5cb7872be5f4f90891db67a95f5b --- /dev/null +++ b/static/docs/dev/reference/nn_softmax2d.html @@ -0,0 +1,254 @@ + + + + + + + + +Softmax2d module — nn_softmax2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies SoftMax over features to each spatial location. +When given an image of Channels x Height x Width, it will +apply Softmax to each location \((Channels, h_i, w_j)\)

    +
    + +
    nn_softmax2d()
    + + +

    Value

    + +

    a Tensor of the same dimension and shape as the input with +values in the range [0, 1]

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H, W)\)

    • +
    • Output: \((N, C, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softmax2d() +input <- torch_randn(2, 3, 12, 13) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_softmin.html b/static/docs/dev/reference/nn_softmin.html new file mode 100644 index 0000000000000000000000000000000000000000..b3e143a44c8c0fd559506cc568f0fc3a0cbbd34b --- /dev/null +++ b/static/docs/dev/reference/nn_softmin.html @@ -0,0 +1,271 @@ + + + + + + + + +Softmin — nn_softmin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the Softmin function to an n-dimensional input Tensor +rescaling them so that the elements of the n-dimensional output Tensor +lie in the range [0, 1] and sum to 1. +Softmin is defined as:

    +
    + +
    nn_softmin(dim)
    + +

    Arguments

    + + + + + + +
    dim

    (int): A dimension along which Softmin will be computed (so every slice +along dim will sum to 1).

    + +

    Value

    + +

    a Tensor of the same dimension and shape as the input, with +values in the range [0, 1].

    +

    Details

    + +

    $$ + \mbox{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)} +$$

    +

    Shape

    + + + +
      +
• Input: \((*)\) where * means any number of additional +dimensions

    • +
    • Output: \((*)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softmin(dim = 1) +input <- torch_randn(2, 2) +output <- m(input) + +} +
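From the formula, Softmin of x equals Softmax of -x. A sketch verifying the identity:

```r
library(torch)

input <- torch_randn(2, 2)
a <- nn_softmin(dim = 1)(input)
b <- nn_softmax(dim = 1)(-input)
torch_allclose(a, b)  # TRUE
```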
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_softplus.html b/static/docs/dev/reference/nn_softplus.html new file mode 100644 index 0000000000000000000000000000000000000000..5674031dadb91f73544bb97081b00118bba83f60 --- /dev/null +++ b/static/docs/dev/reference/nn_softplus.html @@ -0,0 +1,271 @@ + + + + + + + + +Softplus module — nn_softplus • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)) +$$

    +
    + +
    nn_softplus(beta = 1, threshold = 20)
    + +

    Arguments

    + + + + + + + + + + +
    beta

    the \(\beta\) value for the Softplus formulation. Default: 1

    threshold

    values above this revert to a linear function. Default: 20

    + +

    Details

    + +

    SoftPlus is a smooth approximation to the ReLU function and can be used +to constrain the output of a machine to always be positive. +For numerical stability the implementation reverts to the linear function +when \(input \times \beta > threshold\).

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softplus() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_softshrink.html b/static/docs/dev/reference/nn_softshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..e57eb934f834ac8859478f4b6ce7870f83797d68 --- /dev/null +++ b/static/docs/dev/reference/nn_softshrink.html @@ -0,0 +1,266 @@ + + + + + + + + +Softshrink module — nn_softshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the soft shrinkage function elementwise:

    +
    + +
    nn_softshrink(lambd = 0.5)
    + +

    Arguments

    + + + + + + +
    lambd

the \(\lambda\) value (must be non-negative) for the Softshrink formulation. Default: 0.5

    + +

    Details

    + +

    $$ + \mbox{SoftShrinkage}(x) = + \left\{ \begin{array}{ll} +x - \lambda, & \mbox{ if } x > \lambda \\ +x + \lambda, & \mbox{ if } x < -\lambda \\ +0, & \mbox{ otherwise } +\end{array} +\right. +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softshrink() +input <- torch_randn(2) +output <- m(input) + +} +
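The piecewise definition is easy to verify on hand-picked values with the default \(\lambda = 0.5\). A sketch:

```r
library(torch)

m <- nn_softshrink(lambd = 0.5)
x <- torch_tensor(c(-1, -0.2, 0.3, 0.8))
m(x)  # -0.5, 0, 0, 0.3: values inside [-0.5, 0.5] collapse to 0
```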
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_softsign.html b/static/docs/dev/reference/nn_softsign.html new file mode 100644 index 0000000000000000000000000000000000000000..a48214231f3080247ffd5320dd4b40403dad245b --- /dev/null +++ b/static/docs/dev/reference/nn_softsign.html @@ -0,0 +1,253 @@ + + + + + + + + +Softsign module — nn_softsign • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{SoftSign}(x) = \frac{x}{ 1 + |x|} +$$

    +
    + +
    nn_softsign()
    + + +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softsign() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_tanh.html b/static/docs/dev/reference/nn_tanh.html new file mode 100644 index 0000000000000000000000000000000000000000..bdbc2bbaf596af87e5e91fd01c964cbf6f300aa5 --- /dev/null +++ b/static/docs/dev/reference/nn_tanh.html @@ -0,0 +1,252 @@ + + + + + + + + +Tanh module — nn_tanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_tanh()
    + + +

    Details

    + +

    $$ + \mbox{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) + \exp(-x)} +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_tanh() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_tanhshrink.html b/static/docs/dev/reference/nn_tanhshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..7624148048c38262aaabe2085094a46e12a3f7d7 --- /dev/null +++ b/static/docs/dev/reference/nn_tanhshrink.html @@ -0,0 +1,252 @@ + + + + + + + + +Tanhshrink module — nn_tanhshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_tanhshrink()
    + + +

    Details

    + +

    $$ + \mbox{Tanhshrink}(x) = x - \tanh(x) +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_tanhshrink() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
+ + + + + + + + diff --git a/static/docs/dev/reference/nn_threshold.html b/static/docs/dev/reference/nn_threshold.html new file mode 100644 index 0000000000000000000000000000000000000000..d21695b9e66dd3e73e87becb61fcb27d6a58739a --- /dev/null +++ b/static/docs/dev/reference/nn_threshold.html @@ -0,0 +1,274 @@ + + + + + + + + +Threshold module — nn_threshold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Thresholds each element of the input Tensor.

    +
    + +
    nn_threshold(threshold, value, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    threshold

    The value to threshold at

    value

    The value to replace with

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    Threshold is defined as: +$$ + y = + \left\{ \begin{array}{ll} + x, &\mbox{ if } x > \mbox{threshold} \\ + \mbox{value}, &\mbox{ otherwise } + \end{array} + \right. +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_threshold(0.1, 20) +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_utils_rnn_pack_padded_sequence.html b/static/docs/dev/reference/nn_utils_rnn_pack_padded_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..70da3c57d2407b02f3941d9a11cee349405fd9bd --- /dev/null +++ b/static/docs/dev/reference/nn_utils_rnn_pack_padded_sequence.html @@ -0,0 +1,278 @@ + + + + + + + + +Packs a Tensor containing padded sequences of variable length. — nn_utils_rnn_pack_padded_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    input can be of size T x B x * where T is the length of the +longest sequence (equal to lengths[1]), B is the batch size, and +* is any number of dimensions (including 0). If batch_first is +TRUE, B x T x * input is expected.

    +
    + +
    nn_utils_rnn_pack_padded_sequence(
    +  input,
    +  lengths,
    +  batch_first = FALSE,
    +  enforce_sorted = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    (Tensor): padded batch of variable length sequences.

    lengths

    (Tensor): list of sequences lengths of each batch element.

    batch_first

    (bool, optional): if TRUE, the input is expected in B x T x * +format.

    enforce_sorted

    (bool, optional): if TRUE, the input is expected to +contain sequences sorted by length in a decreasing order. If +FALSE, the input will get sorted unconditionally. Default: TRUE.

    + +

    Value

    + +

    a PackedSequence object

    +

    Details

    + +

    For unsorted sequences, use enforce_sorted = FALSE. If enforce_sorted is +TRUE, the sequences should be sorted by length in a decreasing order, i.e. +input[,1] should be the longest sequence, and input[,B] the shortest +one. enforce_sorted = TRUE is only necessary for ONNX export.

    +

    Note

    + +

    This function accepts any input that has at least two dimensions. You +can apply it to pack the labels, and use the output of the RNN with +them to compute the loss directly. A Tensor can be retrieved from +a PackedSequence object by accessing its .data attribute.

    + +
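A short usage sketch, reusing the padded batch from this package's pad/pack examples (values are illustrative):

```r
if (torch_is_installed()) {
  # Three padded sequences of true lengths 2, 1 and 3, batch-first.
  seq <- torch_tensor(rbind(c(1, 2, 0), c(3, 0, 0), c(4, 5, 6)))
  lens <- c(2, 1, 3)
  packed <- nn_utils_rnn_pack_padded_sequence(
    seq, lens, batch_first = TRUE, enforce_sorted = FALSE
  )
}
```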
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_utils_rnn_pack_sequence.html b/static/docs/dev/reference/nn_utils_rnn_pack_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..e53f0a2b930ff6f2ff7866c785c39d7810c04883 --- /dev/null +++ b/static/docs/dev/reference/nn_utils_rnn_pack_sequence.html @@ -0,0 +1,265 @@ + + + + + + + + +Packs a list of variable length Tensors — nn_utils_rnn_pack_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    sequences should be a list of Tensors of size L x *, where L is +the length of a sequence and * is any number of trailing dimensions, +including zero.

    +
    + +
    nn_utils_rnn_pack_sequence(sequences, enforce_sorted = TRUE)
    + +

    Arguments

    + + + + + + + + + + +
    sequences

    (list[Tensor]): A list of sequences of decreasing length.

    enforce_sorted

    (bool, optional): if TRUE, checks that the input +contains sequences sorted by length in a decreasing order. If +FALSE, this condition is not checked. Default: TRUE.

    + +

    Value

    + +

    a PackedSequence object

    +

    Details

    + +

    For unsorted sequences, use enforce_sorted = FALSE. If enforce_sorted +is TRUE, the sequences should be sorted in the order of decreasing length. +enforce_sorted = TRUE is only necessary for ONNX export.

    + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_tensor(c(1,2,3), dtype = torch_long()) +y <- torch_tensor(c(4, 5), dtype = torch_long()) +z <- torch_tensor(c(6), dtype = torch_long()) + +p <- nn_utils_rnn_pack_sequence(list(x, y, z)) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_utils_rnn_pad_packed_sequence.html b/static/docs/dev/reference/nn_utils_rnn_pad_packed_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..e5f9681f54a154212fca60a2e2214a6ca814c9d0 --- /dev/null +++ b/static/docs/dev/reference/nn_utils_rnn_pad_packed_sequence.html @@ -0,0 +1,299 @@ + + + + + + + + +Pads a packed batch of variable length sequences. — nn_utils_rnn_pad_packed_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    It is an inverse operation to nn_utils_rnn_pack_padded_sequence().

    +
    + +
    nn_utils_rnn_pad_packed_sequence(
    +  sequence,
    +  batch_first = FALSE,
    +  padding_value = 0,
    +  total_length = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    sequence

    (PackedSequence): batch to pad

    batch_first

(bool, optional): if TRUE, the output will be in B x T x * format.

    padding_value

    (float, optional): values for padded elements.

    total_length

    (int, optional): if not NULL, the output will be padded to +have length total_length. This method will throw ValueError +if total_length is less than the max sequence length in +sequence.

    + +

    Value

    + +

    Tuple of Tensor containing the padded sequence, and a Tensor +containing the list of lengths of each sequence in the batch. +Batch elements will be re-ordered as they were ordered originally when +the batch was passed to nn_utils_rnn_pack_padded_sequence() or +nn_utils_rnn_pack_sequence().

    +

    Details

    + +

    The returned Tensor's data will be of size T x B x *, where T is the length +of the longest sequence and B is the batch size. If batch_first is TRUE, +the data will be transposed into B x T x * format.

    +

    Note

    + +

total_length is useful to implement the pack sequence -> recurrent network -> unpack sequence pattern in an nn_module wrapped in torch.nn.DataParallel.

    + +

    Examples

    +
    if (torch_is_installed()) { +seq <- torch_tensor(rbind(c(1,2,0), c(3,0,0), c(4,5,6))) +lens <- c(2,1,3) +packed <- nn_utils_rnn_pack_padded_sequence(seq, lens, batch_first = TRUE, + enforce_sorted = FALSE) +packed +nn_utils_rnn_pad_packed_sequence(packed, batch_first=TRUE) + +} +
    #> [[1]] +#> torch_tensor +#> 1 2 0 +#> 3 0 0 +#> 4 5 6 +#> [ CPUFloatType{3,3} ] +#> +#> [[2]] +#> torch_tensor +#> 2 +#> 1 +#> 3 +#> [ CPULongType{3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nn_utils_rnn_pad_sequence.html b/static/docs/dev/reference/nn_utils_rnn_pad_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..114fcfaf3abf7e911a6dffeac275853e81b30286 --- /dev/null +++ b/static/docs/dev/reference/nn_utils_rnn_pad_sequence.html @@ -0,0 +1,276 @@ + + + + + + + + +Pad a list of variable length Tensors with padding_value — nn_utils_rnn_pad_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

pad_sequence stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences with size L x *, the output is of size T x B x * if batch_first is FALSE, and B x T x * otherwise.

    +
    + +
    nn_utils_rnn_pad_sequence(sequences, batch_first = FALSE, padding_value = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    sequences

    (list[Tensor]): list of variable length sequences.

    batch_first

    (bool, optional): output will be in B x T x * if TRUE, +or in T x B x * otherwise

    padding_value

    (float, optional): value for padded elements. Default: 0.

    + +

    Value

    + +

    Tensor of size T x B x * if batch_first is FALSE. +Tensor of size B x T x * otherwise

    +

    Details

    + +

B is the batch size; it is equal to the number of elements in sequences. T is the length of the longest sequence. L is the length of a given sequence. * is any number of trailing dimensions, including none.

    +

    Note

    + +

This function returns a Tensor of size T x B x * or B x T x *, where T is the length of the longest sequence. It assumes that the trailing dimensions and dtype of all the Tensors in sequences are the same.

    + +

    Examples

    +
    if (torch_is_installed()) { +a <- torch_ones(25, 300) +b <- torch_ones(22, 300) +c <- torch_ones(15, 300) +nn_utils_rnn_pad_sequence(list(a, b, c))$size() + +} +
    #> [1] 25 3 300
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_adaptive_avg_pool1d.html b/static/docs/dev/reference/nnf_adaptive_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..56eb4459895fc7e447069f0f8296cb993c0181e6 --- /dev/null +++ b/static/docs/dev/reference/nnf_adaptive_avg_pool1d.html @@ -0,0 +1,243 @@ + + + + + + + + +Adaptive_avg_pool1d — nnf_adaptive_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D adaptive average pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_avg_pool1d(input, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    output_size

    the target output size (single integer)

    + + +
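A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 4, 8)  # (minibatch, in_channels, iW)
  out <- nnf_adaptive_avg_pool1d(x, output_size = 2)
  out$size()  # 1 4 2
}
```

The last dimension is pooled down to output_size regardless of the input width.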
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_adaptive_avg_pool2d.html b/static/docs/dev/reference/nnf_adaptive_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..6b80f5494427d436dd11d9036c6cdedc3966a487 --- /dev/null +++ b/static/docs/dev/reference/nnf_adaptive_avg_pool2d.html @@ -0,0 +1,243 @@ + + + + + + + + +Adaptive_avg_pool2d — nnf_adaptive_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D adaptive average pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_avg_pool2d(input, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    output_size

    the target output size (single integer or double-integer tuple)

    + + +
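A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 3, 8, 8)  # (minibatch, in_channels, iH, iW)
  out <- nnf_adaptive_avg_pool2d(x, output_size = c(4, 4))
  out$size()  # 1 3 4 4
}
```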
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_adaptive_avg_pool3d.html b/static/docs/dev/reference/nnf_adaptive_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..94d7cd08f3edb82b759cd070c56c5e1e2ec94ef1 --- /dev/null +++ b/static/docs/dev/reference/nnf_adaptive_avg_pool3d.html @@ -0,0 +1,243 @@ + + + + + + + + +Adaptive_avg_pool3d — nnf_adaptive_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D adaptive average pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_avg_pool3d(input, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    input

input tensor (minibatch , in_channels , iT , iH , iW)

    output_size

    the target output size (single integer or triple-integer tuple)

    + + +
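A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 2, 4, 8, 8)  # (minibatch, in_channels, iT, iH, iW)
  out <- nnf_adaptive_avg_pool3d(x, output_size = c(2, 4, 4))
  out$size()  # 1 2 2 4 4
}
```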
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_adaptive_max_pool1d.html b/static/docs/dev/reference/nnf_adaptive_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..c431ccf336975c6c9045b322cd8a81f6ad4929eb --- /dev/null +++ b/static/docs/dev/reference/nnf_adaptive_max_pool1d.html @@ -0,0 +1,247 @@ + + + + + + + + +Adaptive_max_pool1d — nnf_adaptive_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D adaptive max pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_max_pool1d(input, output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    output_size

    the target output size (single integer)

    return_indices

    whether to return pooling indices. Default: FALSE

    + + +
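A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 4, 8)  # (minibatch, in_channels, iW)
  out <- nnf_adaptive_max_pool1d(x, output_size = 2)
  out$size()  # 1 4 2
}
```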
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_adaptive_max_pool2d.html b/static/docs/dev/reference/nnf_adaptive_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..2194d20eafd01c75b6610d7f0c0e2e7797a59b39 --- /dev/null +++ b/static/docs/dev/reference/nnf_adaptive_max_pool2d.html @@ -0,0 +1,247 @@ + + + + + + + + +Adaptive_max_pool2d — nnf_adaptive_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D adaptive max pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_max_pool2d(input, output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    output_size

    the target output size (single integer or double-integer tuple)

    return_indices

    whether to return pooling indices. Default: FALSE

    + + +
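A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 3, 8, 8)  # (minibatch, in_channels, iH, iW)
  out <- nnf_adaptive_max_pool2d(x, output_size = c(4, 4))
  out$size()  # 1 3 4 4
}
```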
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_adaptive_max_pool3d.html b/static/docs/dev/reference/nnf_adaptive_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..4254c42eb3a52bea87b90df66297d10f5f2807e2 --- /dev/null +++ b/static/docs/dev/reference/nnf_adaptive_max_pool3d.html @@ -0,0 +1,247 @@ + + + + + + + + +Adaptive_max_pool3d — nnf_adaptive_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D adaptive max pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_max_pool3d(input, output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

input tensor (minibatch , in_channels , iT , iH , iW)

    output_size

    the target output size (single integer or triple-integer tuple)

    return_indices

whether to return pooling indices. Default: FALSE

    + + +
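A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 2, 4, 8, 8)  # (minibatch, in_channels, iT, iH, iW)
  out <- nnf_adaptive_max_pool3d(x, output_size = c(2, 4, 4))
  out$size()  # 1 2 2 4 4
}
```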
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_affine_grid.html b/static/docs/dev/reference/nnf_affine_grid.html new file mode 100644 index 0000000000000000000000000000000000000000..358f769afec045a806a55707c121b8630341972e --- /dev/null +++ b/static/docs/dev/reference/nnf_affine_grid.html @@ -0,0 +1,261 @@ + + + + + + + + +Affine_grid — nnf_affine_grid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Generates a 2D or 3D flow field (sampling grid), given a batch of +affine matrices theta.

    +
    + +
    nnf_affine_grid(theta, size, align_corners = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    theta

    (Tensor) input batch of affine matrices with shape +(\(N \times 2 \times 3\)) for 2D or (\(N \times 3 \times 4\)) for 3D

    size

(torch.Size) the target output image size (\(N \times C \times H \times W\) for 2D or \(N \times C \times D \times H \times W\) for 3D). Example: c(32, 3, 24, 24)

    align_corners

(bool, optional) if TRUE, consider -1 and 1 to refer to the centers of the corner pixels rather than the image corners. Refer to nnf_grid_sample() for a more complete description. A grid generated by nnf_affine_grid() should be passed to nnf_grid_sample() with the same setting for this option. Default: FALSE

    + +

    Note

    + + + + +

This function is often used in conjunction with nnf_grid_sample() +to build Spatial Transformer Networks.

    + +
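A sketch using an identity affine transform for a batch of one single-channel 4x4 image (values chosen only for illustration):

```r
if (torch_is_installed()) {
  theta <- torch_zeros(1, 2, 3)  # (N x 2 x 3) batch of 2D affine matrices
  theta[1, 1, 1] <- 1            # identity transform
  theta[1, 2, 2] <- 1
  grid <- nnf_affine_grid(theta, size = c(1, 1, 4, 4))
  grid$size()  # 1 4 4 2 -- one (x, y) sampling location per output pixel
}
```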
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_alpha_dropout.html b/static/docs/dev/reference/nnf_alpha_dropout.html new file mode 100644 index 0000000000000000000000000000000000000000..ca460d9376b94502beaae0d9105bca05301121b5 --- /dev/null +++ b/static/docs/dev/reference/nnf_alpha_dropout.html @@ -0,0 +1,250 @@ + + + + + + + + +Alpha_dropout — nnf_alpha_dropout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies alpha dropout to the input.

    +
    + +
    nnf_alpha_dropout(input, p = 0.5, training = FALSE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of an element to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: FALSE

    inplace

    If set to TRUE, will do this operation in-place. +Default: FALSE

    + + +
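A minimal call sketch (input shape chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(4)
  y <- nnf_alpha_dropout(x, p = 0.5, training = TRUE)
}
```

Unlike plain dropout, alpha dropout is designed to keep the input's mean and variance, which matters for self-normalizing (SELU) networks.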
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_avg_pool1d.html b/static/docs/dev/reference/nnf_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..d94290e302c2890a44cfe441b6bc9ea4f898a407 --- /dev/null +++ b/static/docs/dev/reference/nnf_avg_pool1d.html @@ -0,0 +1,271 @@ + + + + + + + + +Avg_pool1d — nnf_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D average pooling over an input signal composed of several +input planes.

    +
    + +
    nnf_avg_pool1d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    kernel_size

    the size of the window. Can be a single number or a +tuple (kW,).

    stride

    the stride of the window. Can be a single number or a tuple +(sW,). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padW,). Default: 0

    ceil_mode

when TRUE, will use ceil instead of floor to compute the +output shape. Default: FALSE

    count_include_pad

when TRUE, will include the zero-padding in the +averaging calculation. Default: TRUE

    + + +
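A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 1, 6)                   # (minibatch, in_channels, iW)
  out <- nnf_avg_pool1d(x, kernel_size = 2)   # averages non-overlapping pairs
  out$size()  # 1 1 3
}
```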
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_avg_pool2d.html b/static/docs/dev/reference/nnf_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..c542a48d402e2ae06ace99396f5f81b0aa4cb3e4 --- /dev/null +++ b/static/docs/dev/reference/nnf_avg_pool2d.html @@ -0,0 +1,279 @@ + + + + + + + + +Avg_pool2d — nnf_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 2D average-pooling operation in \(kH * kW\) regions by step size +\(sH * sW\) steps. The number of output features is equal to the number of +input planes.

    +
    + +
    nnf_avg_pool2d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

    ceil_mode

when TRUE, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

    count_include_pad

when TRUE, will include the zero-padding in the +averaging calculation. Default: TRUE

    divisor_override

    if specified, it will be used as divisor, otherwise +size of the pooling region will be used. Default: NULL

    + + +
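A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 1, 4, 4)                # (minibatch, in_channels, iH, iW)
  out <- nnf_avg_pool2d(x, kernel_size = 2)   # averages each 2x2 region
  out$size()  # 1 1 2 2
}
```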
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_avg_pool3d.html b/static/docs/dev/reference/nnf_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..f8205cf67b849a27ee11af9e7dfa2a3ba7f9a373 --- /dev/null +++ b/static/docs/dev/reference/nnf_avg_pool3d.html @@ -0,0 +1,279 @@ + + + + + + + + +Avg_pool3d — nnf_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 3D average-pooling operation in \(kT * kH * kW\) regions by step +size \(sT * sH * sW\) steps. The number of output features is equal to +\(\lfloor \frac{ \mbox{input planes} }{sT} \rfloor\).

    +
    + +
    nnf_avg_pool3d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

input tensor (minibatch , in_channels , iT , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kT, kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sT, sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW), Default: 0

    ceil_mode

when TRUE, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

    count_include_pad

when TRUE, will include the zero-padding in the +averaging calculation. Default: TRUE

    divisor_override

if specified, it will be used as the divisor; otherwise the +size of the pooling region will be used. Default: NULL

    + + +
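A minimal call sketch (shapes chosen only for illustration):

```r
if (torch_is_installed()) {
  x <- torch_randn(1, 1, 4, 4, 4)             # (minibatch, in_channels, iT, iH, iW)
  out <- nnf_avg_pool3d(x, kernel_size = 2)   # averages each 2x2x2 region
  out$size()  # 1 1 2 2 2
}
```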
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_batch_norm.html b/static/docs/dev/reference/nnf_batch_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..3dc710a6d05e82ad17616bdbc7126e90aa3a280d --- /dev/null +++ b/static/docs/dev/reference/nnf_batch_norm.html @@ -0,0 +1,275 @@ + + + + + + + + +Batch_norm — nnf_batch_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Batch Normalization for each channel across a batch of data.

    +
    + +
    nnf_batch_norm(
    +  input,
    +  running_mean,
    +  running_var,
    +  weight = NULL,
    +  bias = NULL,
    +  training = FALSE,
    +  momentum = 0.1,
    +  eps = 1e-05
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor

    running_mean

    the running_mean tensor

    running_var

    the running_var tensor

    weight

    the weight tensor

    bias

    the bias tensor

    training

bool, whether it is in training mode. Default: FALSE

    momentum

the value used for the running_mean and running_var computation. +Can be set to NULL for cumulative moving average (i.e. simple average). Default: 0.1

    eps

    a value added to the denominator for numerical stability. Default: 1e-5

    + + +
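A minimal call sketch (shapes chosen only for illustration; statistics are per channel):

```r
if (torch_is_installed()) {
  x <- torch_randn(8, 3, 5)        # (N, C, L): one mean/var per channel C
  running_mean <- torch_zeros(3)
  running_var <- torch_ones(3)
  out <- nnf_batch_norm(x, running_mean, running_var, training = TRUE)
}
```

With training = TRUE the batch statistics are used (and the running buffers updated); with training = FALSE the running statistics are used instead.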
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_bilinear.html b/static/docs/dev/reference/nnf_bilinear.html new file mode 100644 index 0000000000000000000000000000000000000000..f37b3b30ef639f2589eb3b0856db2a967783df84 --- /dev/null +++ b/static/docs/dev/reference/nnf_bilinear.html @@ -0,0 +1,258 @@ + + + + + + + + +Bilinear — nnf_bilinear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a bilinear transformation to the incoming data: +\(y = x_1 A x_2 + b\)

    +
    + +
    nnf_bilinear(input1, input2, weight, bias = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input1

    \((N, *, H_{in1})\) where \(H_{in1}=\mbox{in1\_features}\) +and \(*\) means any number of additional dimensions. +All but the last dimension of the inputs should be the same.

    input2

    \((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\)

    weight

    \((\mbox{out\_features}, \mbox{in1\_features}, +\mbox{in2\_features})\)

    bias

    \((\mbox{out\_features})\)

    + +

    Value

    + +

    output \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) +and all but the last dimension are the same shape as the input.

    + +
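A minimal call sketch (feature sizes chosen only for illustration):

```r
if (torch_is_installed()) {
  in1 <- torch_randn(8, 10)    # (N, in1_features)
  in2 <- torch_randn(8, 20)    # (N, in2_features)
  w <- torch_randn(5, 10, 20)  # (out_features, in1_features, in2_features)
  out <- nnf_bilinear(in1, in2, w)
  out$size()  # 8 5
}
```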
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_binary_cross_entropy.html b/static/docs/dev/reference/nnf_binary_cross_entropy.html new file mode 100644 index 0000000000000000000000000000000000000000..5710325943edca86e8cdf25086255f949b91983f --- /dev/null +++ b/static/docs/dev/reference/nnf_binary_cross_entropy.html @@ -0,0 +1,259 @@ + + + + + + + + +Binary_cross_entropy — nnf_binary_cross_entropy • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Function that measures the Binary Cross Entropy +between the target and the output.

    +
    + +
    nnf_binary_cross_entropy(
    +  input,
    +  target,
    +  weight = NULL,
    +  reduction = c("mean", "sum", "none")
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

tensor \((N, *)\) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    weight

    (tensor) weight for each value.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
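A minimal call sketch (values chosen only for illustration):

```r
if (torch_is_installed()) {
  input <- torch_sigmoid(torch_randn(5))  # probabilities in (0, 1)
  target <- torch_rand(5)$round()         # 0/1 targets
  loss <- nnf_binary_cross_entropy(input, target)
}
```

Note that input must already contain probabilities; for raw scores, use nnf_binary_cross_entropy_with_logits() instead.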
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_binary_cross_entropy_with_logits.html b/static/docs/dev/reference/nnf_binary_cross_entropy_with_logits.html new file mode 100644 index 0000000000000000000000000000000000000000..6f149cd7055941cc8f4fcccb085018b70c7f658e --- /dev/null +++ b/static/docs/dev/reference/nnf_binary_cross_entropy_with_logits.html @@ -0,0 +1,266 @@ + + + + + + + + +Binary_cross_entropy_with_logits — nnf_binary_cross_entropy_with_logits • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Function that measures Binary Cross Entropy between target and output +logits.

    +
    + +
    nnf_binary_cross_entropy_with_logits(
    +  input,
    +  target,
    +  weight = NULL,
    +  reduction = c("mean", "sum", "none"),
    +  pos_weight = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    Tensor of arbitrary shape

    target

    Tensor of the same shape as input

    weight

    (Tensor, optional) a manual rescaling weight if provided it's +repeated to match input tensor shape.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    pos_weight

    (Tensor, optional) a weight of positive examples. +Must be a vector with length equal to the number of classes.

    + + +
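A sketch of how this relates to the plain binary cross entropy: the fused form takes raw logits and is the numerically stable way to compute sigmoid followed by `nnf_binary_cross_entropy()`.

```r
library(torch)

logits <- torch_randn(3, 4)        # raw scores; no sigmoid applied
target <- torch_rand(3, 4)$round()

loss <- nnf_binary_cross_entropy_with_logits(logits, target)

# mathematically equivalent, but less stable for extreme logits
manual <- nnf_binary_cross_entropy(torch_sigmoid(logits), target)
```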
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_celu.html b/static/docs/dev/reference/nnf_celu.html new file mode 100644 index 0000000000000000000000000000000000000000..929b1bac0be37bdff0526adfa24bff378dd4f0fd --- /dev/null +++ b/static/docs/dev/reference/nnf_celu.html @@ -0,0 +1,248 @@ + + + + + + + + +Celu — nnf_celu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies element-wise, \(CELU(x) = max(0,x) + min(0, \alpha * (exp(x / \alpha) - 1))\).

    +
    + +
    nnf_celu(input, alpha = 1, inplace = FALSE)
    +
    +nnf_celu_(input, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional dimensions

    alpha

    the alpha value for the CELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
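A minimal sketch of the non-linearity (with `alpha = 1`, the negative branch is bounded below by -1 while positive entries pass through unchanged):

```r
library(torch)

x <- torch_randn(2, 3)

# positive entries pass through unchanged; negative entries are
# squashed by the exponential branch (bounded below by -alpha)
y <- nnf_celu(x, alpha = 1)
```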
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv1d.html b/static/docs/dev/reference/nnf_conv1d.html new file mode 100644 index 0000000000000000000000000000000000000000..663e45bccff85b2f53b7c39b43fa8608b0aa75ad --- /dev/null +++ b/static/docs/dev/reference/nnf_conv1d.html @@ -0,0 +1,275 @@ + + + + + + + + +Conv1d — nnf_conv1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D convolution over an input signal composed of several input +planes.

    +
    + +
    nnf_conv1d(
    +  input,
    +  weight,
    +  bias = NULL,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch, in_channels , iW)

    weight

    filters of shape (out_channels, in_channels/groups , kW)

    bias

    optional bias of shape (out_channels). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or +a one-element tuple (sW,). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a +single number or a one-element tuple (padW,). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or +a one-element tuple (dW,). Default: 1

    groups

    split input into groups, in_channels should be divisible by +the number of groups. Default: 1

    + + +
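A shape sketch (tensor sizes are illustrative): with `padding = 1` and a width-3 kernel, the length dimension is preserved.

```r
library(torch)

input  <- torch_randn(1, 4, 10)   # (minibatch, in_channels, iW)
weight <- torch_randn(8, 4, 3)    # (out_channels, in_channels/groups, kW)

# output width: iW + 2 * padding - (kW - 1) = 10
out <- nnf_conv1d(input, weight, padding = 1)
```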
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv2d.html b/static/docs/dev/reference/nnf_conv2d.html new file mode 100644 index 0000000000000000000000000000000000000000..a4a8ba49da8d833a942056da7fcfbba1576d5dc7 --- /dev/null +++ b/static/docs/dev/reference/nnf_conv2d.html @@ -0,0 +1,275 @@ + + + + + + + + +Conv2d — nnf_conv2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D convolution over an input image composed of several input +planes.

    +
    + +
    nnf_conv2d(
    +  input,
    +  weight,
    +  bias = NULL,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch, in_channels, iH , iW)

    weight

    filters of shape (out_channels , in_channels/groups, kH , kW)

    bias

    optional bias tensor of shape (out_channels). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a +tuple (sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or +a tuple (dH, dW). Default: 1

    groups

    split input into groups, in_channels should be divisible by the +number of groups. Default: 1

    + + +
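A shape sketch (sizes are illustrative): `padding = 1` gives "same"-style output for a 3x3 kernel.

```r
library(torch)

input  <- torch_randn(1, 3, 8, 8)    # (minibatch, in_channels, iH, iW)
weight <- torch_randn(16, 3, 3, 3)   # (out_channels, in_channels/groups, kH, kW)

# "same"-style padding for a 3x3 kernel preserves spatial size
out <- nnf_conv2d(input, weight, padding = 1)
```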
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv3d.html b/static/docs/dev/reference/nnf_conv3d.html new file mode 100644 index 0000000000000000000000000000000000000000..17087ae862aa235d7befb5da50c1d98263457983 --- /dev/null +++ b/static/docs/dev/reference/nnf_conv3d.html @@ -0,0 +1,275 @@ + + + + + + + + +Conv3d — nnf_conv3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D convolution over an input image composed of several input +planes.

    +
    + +
    nnf_conv3d(
    +  input,
    +  weight,
    +  bias = NULL,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch, in_channels , iT , iH , iW)

    weight

    filters of shape (out_channels , in_channels/groups, kT , kH , kW)

    bias

    optional bias tensor of shape (out_channels). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a +tuple (sT, sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or +a tuple (dT, dH, dW). Default: 1

    groups

    split input into groups, in_channels should be divisible by +the number of groups. Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv_tbc.html b/static/docs/dev/reference/nnf_conv_tbc.html new file mode 100644 index 0000000000000000000000000000000000000000..6f567569e493163593d6bd3f7b1086c7f4c163a5 --- /dev/null +++ b/static/docs/dev/reference/nnf_conv_tbc.html @@ -0,0 +1,253 @@ + + + + + + + + +Conv_tbc — nnf_conv_tbc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1-dimensional sequence convolution over an input sequence. +Input and output dimensions are (Time, Batch, Channels) - hence TBC.

    +
    + +
    nnf_conv_tbc(input, weight, bias, pad = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{sequence length} \times +batch \times \mbox{in\_channels})\)

    weight

    filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} +\times \mbox{out\_channels}\))

    bias

    bias of shape (\(\mbox{out\_channels}\))

    pad

    number of timesteps to pad. Default: 0

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv_transpose1d.html b/static/docs/dev/reference/nnf_conv_transpose1d.html new file mode 100644 index 0000000000000000000000000000000000000000..2c17419a8d6eaa89e9d4a72334f247339e34f952 --- /dev/null +++ b/static/docs/dev/reference/nnf_conv_transpose1d.html @@ -0,0 +1,280 @@ + + + + + + + + +Conv_transpose1d — nnf_conv_transpose1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D transposed convolution operator over an input signal +composed of several input planes, sometimes also called "deconvolution".

    +
    + +
    nnf_conv_transpose1d(
    +  input,
    +  weight,
    +  bias = NULL,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  dilation = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch, in_channels , iW)

    weight

filters of shape (in_channels, out_channels/groups , kW)

    bias

    optional bias of shape (out_channels). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or +a one-element tuple (sW,). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a +single number or a one-element tuple (padW,). Default: 0

    output_padding

    padding applied to the output

    groups

    split input into groups, in_channels should be divisible by +the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or +a one-element tuple (dW,). Default: 1

    + + +
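A sketch of the shape arithmetic. Note that for the transposed operator the weight layout is (in_channels, out_channels/groups, kW), i.e. the first two axes are swapped relative to `nnf_conv1d()`:

```r
library(torch)

input  <- torch_randn(1, 8, 10)   # (minibatch, in_channels, iW)
weight <- torch_randn(8, 4, 3)    # (in_channels, out_channels/groups, kW)

# output width: (iW - 1) * stride - 2 * padding + (kW - 1) + 1 = 12
out <- nnf_conv_transpose1d(input, weight)
```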
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv_transpose2d.html b/static/docs/dev/reference/nnf_conv_transpose2d.html new file mode 100644 index 0000000000000000000000000000000000000000..278b06b2c73c97671795d1bb02b891c56047ae55 --- /dev/null +++ b/static/docs/dev/reference/nnf_conv_transpose2d.html @@ -0,0 +1,280 @@ + + + + + + + + +Conv_transpose2d — nnf_conv_transpose2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D transposed convolution operator over an input image +composed of several input planes, sometimes also called "deconvolution".

    +
    + +
    nnf_conv_transpose2d(
    +  input,
    +  weight,
    +  bias = NULL,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  dilation = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch, in_channels, iH , iW)

    weight

filters of shape (in_channels , out_channels/groups, kH , kW)

    bias

    optional bias tensor of shape (out_channels). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a +tuple (sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

    output_padding

    padding applied to the output

    groups

    split input into groups, in_channels should be divisible by the +number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or +a tuple (dH, dW). Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_conv_transpose3d.html b/static/docs/dev/reference/nnf_conv_transpose3d.html new file mode 100644 index 0000000000000000000000000000000000000000..30c476573bb3bba72dc38d6a50fec1385fb9bb17 --- /dev/null +++ b/static/docs/dev/reference/nnf_conv_transpose3d.html @@ -0,0 +1,280 @@ + + + + + + + + +Conv_transpose3d — nnf_conv_transpose3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

    +
    + +
    nnf_conv_transpose3d(
    +  input,
    +  weight,
    +  bias = NULL,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  dilation = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch, in_channels , iT , iH , iW)

    weight

filters of shape (in_channels , out_channels/groups, kT , kH , kW)

    bias

    optional bias tensor of shape (out_channels). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a +tuple (sT, sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW). Default: 0

    output_padding

    padding applied to the output

    groups

    split input into groups, in_channels should be divisible by +the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or +a tuple (dT, dH, dW). Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_cosine_embedding_loss.html b/static/docs/dev/reference/nnf_cosine_embedding_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..e6c7b12f93fbb36d1000483f897b7c16b68e4f1c --- /dev/null +++ b/static/docs/dev/reference/nnf_cosine_embedding_loss.html @@ -0,0 +1,269 @@ + + + + + + + + +Cosine_embedding_loss — nnf_cosine_embedding_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the loss given input tensors x_1, x_2 and a +Tensor label y with values 1 or -1. This is used for measuring whether two inputs +are similar or dissimilar, using the cosine distance, and is typically used +for learning nonlinear embeddings or semi-supervised learning.

    +
    + +
    nnf_cosine_embedding_loss(
    +  input1,
    +  input2,
    +  target,
    +  margin = 0,
    +  reduction = c("mean", "sum", "none")
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input1

    the input x_1 tensor

    input2

    the input x_2 tensor

    target

    the target tensor

    margin

Should be a number from -1 to 1; 0 to 0.5 is suggested. If margin is missing, the default value is 0.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
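A minimal sketch (sizes are illustrative): the target marks each pair as similar (1) or dissimilar (-1).

```r
library(torch)

x1 <- torch_randn(5, 10)
x2 <- torch_randn(5, 10)

# 1 marks pairs that should be similar, -1 pairs that should not
target <- torch_ones(5)

loss <- nnf_cosine_embedding_loss(x1, x2, target, margin = 0.2)
```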
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_cosine_similarity.html b/static/docs/dev/reference/nnf_cosine_similarity.html new file mode 100644 index 0000000000000000000000000000000000000000..55fbfafe17f1ba738c3e33722f22485c30c09ccf --- /dev/null +++ b/static/docs/dev/reference/nnf_cosine_similarity.html @@ -0,0 +1,255 @@ + + + + + + + + +Cosine_similarity — nnf_cosine_similarity • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns cosine similarity between x1 and x2, computed along dim.

    +
    + +
    nnf_cosine_similarity(x1, x2, dim = 1, eps = 1e-08)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) First input.

    x2

    (Tensor) Second input (of size matching x1).

    dim

    (int, optional) Dimension of vectors. Default: 1

    eps

    (float, optional) Small value to avoid division by zero. +Default: 1e-8

    + +

    Details

    + +

    $$ + \mbox{similarity} = \frac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} +$$

    + +
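A sketch of the formula above. Note that `dim` is 1-based in the R package, so `dim = 2` compares the rows of two matrices:

```r
library(torch)

x1 <- torch_randn(4, 8)
x2 <- torch_randn(4, 8)

# one similarity per row; values lie in [-1, 1]
sim <- nnf_cosine_similarity(x1, x2, dim = 2)
```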
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_cross_entropy.html b/static/docs/dev/reference/nnf_cross_entropy.html new file mode 100644 index 0000000000000000000000000000000000000000..037d9e303647f892bee4c5919cade8ea3b413555 --- /dev/null +++ b/static/docs/dev/reference/nnf_cross_entropy.html @@ -0,0 +1,269 @@ + + + + + + + + +Cross_entropy — nnf_cross_entropy • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    This criterion combines log_softmax and nll_loss in a single +function.

    +
    + +
    nnf_cross_entropy(
    +  input,
    +  target,
    +  weight = NULL,
    +  ignore_index = -100,
    +  reduction = c("mean", "sum", "none")
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

(Tensor) \((N, C)\) where C = number of classes, or \((N, C, H, W)\) in the case of 2D loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in the case of K-dimensional loss.

    target

    (Tensor) \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), +or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

    weight

    (Tensor, optional) a manual rescaling weight given to each class. If +given, has to be a Tensor of size C

    ignore_index

    (int, optional) Specifies a target value that is ignored +and does not contribute to the input gradient.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
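A minimal sketch of the combination this shorthand stands for (class indices are 1-based in the R package):

```r
library(torch)

input  <- torch_randn(3, 5)  # (N, C) unnormalized scores
target <- torch_tensor(c(1, 3, 5), dtype = torch_long())

loss <- nnf_cross_entropy(input, target)

# the log_softmax + nll_loss combination it is equivalent to
manual <- nnf_nll_loss(nnf_log_softmax(input, dim = 2), target)
```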
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_ctc_loss.html b/static/docs/dev/reference/nnf_ctc_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..81fd7b574c0d28b81ba84e8996c71849f63db590 --- /dev/null +++ b/static/docs/dev/reference/nnf_ctc_loss.html @@ -0,0 +1,277 @@ + + + + + + + + +Ctc_loss — nnf_ctc_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The Connectionist Temporal Classification loss.

    +
    + +
    nnf_ctc_loss(
    +  log_probs,
    +  targets,
    +  input_lengths,
    +  target_lengths,
    +  blank = 0,
    +  reduction = c("mean", "sum", "none"),
    +  zero_infinity = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    log_probs

    \((T, N, C)\) where C = number of characters in alphabet including blank, +T = input length, and N = batch size. The logarithmized probabilities of +the outputs (e.g. obtained with nnf_log_softmax).

    targets

    \((N, S)\) or (sum(target_lengths)). Targets cannot be blank. +In the second form, the targets are assumed to be concatenated.

    input_lengths

    \((N)\). Lengths of the inputs (must each be \(\leq T\))

    target_lengths

    \((N)\). Lengths of the targets

    blank

    (int, optional) Blank label. Default \(0\).

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    zero_infinity

(bool, optional) Whether to zero infinite losses and the associated gradients. Default: FALSE. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.

    + + +
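A minimal sketch with random data (all sizes are illustrative; target classes are drawn so they never collide with the default blank label 0):

```r
library(torch)

t_len <- 50; n_batch <- 4; n_cls <- 20; s_len <- 10

log_probs <- nnf_log_softmax(torch_randn(t_len, n_batch, n_cls), dim = 3)

# target classes in 1..(n_cls - 1), avoiding the blank label (0)
targets <- torch_randint(low = 1, high = n_cls, size = c(n_batch, s_len),
                         dtype = torch_long())

input_lengths  <- torch_tensor(rep(t_len, n_batch), dtype = torch_long())
target_lengths <- torch_tensor(rep(s_len, n_batch), dtype = torch_long())

loss <- nnf_ctc_loss(log_probs, targets, input_lengths, target_lengths)
```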
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_dropout.html b/static/docs/dev/reference/nnf_dropout.html new file mode 100644 index 0000000000000000000000000000000000000000..df3e44bc8bb78a3354bb763fb384465b96cfbb51 --- /dev/null +++ b/static/docs/dev/reference/nnf_dropout.html @@ -0,0 +1,254 @@ + + + + + + + + +Dropout — nnf_dropout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    During training, randomly zeroes some of the elements of the input +tensor with probability p using samples from a Bernoulli +distribution.

    +
    + +
    nnf_dropout(input, p = 0.5, training = TRUE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of an element to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: TRUE

    inplace

    If set to TRUE, will do this operation in-place. +Default: FALSE

    + + +
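A minimal sketch: during training surviving elements are rescaled by 1 / (1 - p), and at evaluation time the function is the identity.

```r
library(torch)

x <- torch_ones(2, 5)

# surviving elements are rescaled by 1 / (1 - p) = 2 during training
y <- nnf_dropout(x, p = 0.5, training = TRUE)

# identity at evaluation time
z <- nnf_dropout(x, p = 0.5, training = FALSE)
```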
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_dropout2d.html b/static/docs/dev/reference/nnf_dropout2d.html new file mode 100644 index 0000000000000000000000000000000000000000..a4dd2f25651dd0daa5dd202d2bac36a8472b740d --- /dev/null +++ b/static/docs/dev/reference/nnf_dropout2d.html @@ -0,0 +1,258 @@ + + + + + + + + +Dropout2d — nnf_dropout2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Randomly zero out entire channels of the input tensor (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(input[i, j]\)). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

    +
    + +
    nnf_dropout2d(input, p = 0.5, training = TRUE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of a channel to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: TRUE.

    inplace

    If set to TRUE, will do this operation in-place. +Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_dropout3d.html b/static/docs/dev/reference/nnf_dropout3d.html new file mode 100644 index 0000000000000000000000000000000000000000..498cd6d0c11386f757c1865885157efea7a31cf3 --- /dev/null +++ b/static/docs/dev/reference/nnf_dropout3d.html @@ -0,0 +1,258 @@ + + + + + + + + +Dropout3d — nnf_dropout3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Randomly zero out entire channels of the input tensor (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(input[i, j]\)). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

    +
    + +
    nnf_dropout3d(input, p = 0.5, training = TRUE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of a channel to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: TRUE.

    inplace

    If set to TRUE, will do this operation in-place. +Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_elu.html b/static/docs/dev/reference/nnf_elu.html new file mode 100644 index 0000000000000000000000000000000000000000..67f531cf7c9d9d201b6e14b9e8f55b8e49be56f6 --- /dev/null +++ b/static/docs/dev/reference/nnf_elu.html @@ -0,0 +1,259 @@ + + + + + + + + +Elu — nnf_elu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, +$$ELU(x) = max(0,x) + min(0, \alpha * (exp(x) - 1))$$.

    +
    + +
    nnf_elu(input, alpha = 1, inplace = FALSE)
    +
    +nnf_elu_(input, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional dimensions

    alpha

    the alpha value for the ELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_randn(2, 2) +y <- nnf_elu(x, alpha = 1) +nnf_elu_(x, alpha = 1) +torch_equal(x, y) + +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_embedding.html b/static/docs/dev/reference/nnf_embedding.html new file mode 100644 index 0000000000000000000000000000000000000000..aa6f85192b442b5b57bd92c0d9165a2460eaa5bf --- /dev/null +++ b/static/docs/dev/reference/nnf_embedding.html @@ -0,0 +1,282 @@ + + + + + + + + +Embedding — nnf_embedding • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

A simple lookup table that looks up embeddings in a fixed dictionary of a fixed size.

    +
    + +
    nnf_embedding(
    +  input,
    +  weight,
    +  padding_idx = NULL,
    +  max_norm = NULL,
    +  norm_type = 2,
    +  scale_grad_by_freq = FALSE,
    +  sparse = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (LongTensor) Tensor containing indices into the embedding matrix

    weight

    (Tensor) The embedding matrix with number of rows equal to the +maximum possible index + 1, and number of columns equal to the embedding size

    padding_idx

    (int, optional) If given, pads the output with the embedding +vector at padding_idx (initialized to zeros) whenever it encounters the index.

    max_norm

    (float, optional) If given, each embedding vector with norm larger +than max_norm is renormalized to have norm max_norm. Note: this will modify +weight in-place.

    norm_type

    (float, optional) The p of the p-norm to compute for the max_norm +option. Default 2.

    scale_grad_by_freq

    (boolean, optional) If given, this will scale gradients +by the inverse of frequency of the words in the mini-batch. Default FALSE.

    sparse

    (bool, optional) If TRUE, gradient w.r.t. weight will be a +sparse tensor. See Notes under nn_embedding for more details regarding +sparse gradients.

    + +

    Details

    + +

    This module is often used to retrieve word embeddings using indices. +The input to the module is a list of indices, and the embedding matrix, +and the output is the corresponding word embeddings.

    + +
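A minimal sketch of the lookup (indices into the embedding matrix are 1-based in the R package):

```r
library(torch)

weight <- torch_randn(10, 3)   # 10 embedding vectors of size 3

# a (2, 2) matrix of 1-based indices into the rows of weight
input <- torch_tensor(matrix(c(1, 2, 4, 5), nrow = 2), dtype = torch_long())

out <- nnf_embedding(input, weight)   # one size-3 vector per index
```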
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_embedding_bag.html b/static/docs/dev/reference/nnf_embedding_bag.html new file mode 100644 index 0000000000000000000000000000000000000000..9343ab4419fcdc7221b29f7210a072978e9a2b48 --- /dev/null +++ b/static/docs/dev/reference/nnf_embedding_bag.html @@ -0,0 +1,299 @@ + + + + + + + + +Embedding_bag — nnf_embedding_bag • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes sums, means or maxes of bags of embeddings, without instantiating the +intermediate embeddings.

    +
    + +
    nnf_embedding_bag(
    +  input,
    +  weight,
    +  offsets = NULL,
    +  max_norm = NULL,
    +  norm_type = 2,
    +  scale_grad_by_freq = FALSE,
    +  mode = "mean",
    +  sparse = FALSE,
    +  per_sample_weights = NULL,
    +  include_last_offset = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (LongTensor) Tensor containing bags of indices into the embedding matrix

    weight

    (Tensor) The embedding matrix with number of rows equal to the +maximum possible index + 1, and number of columns equal to the embedding size

    offsets

    (LongTensor, optional) Only used when input is 1D. offsets +determines the starting index position of each bag (sequence) in input.

    max_norm

    (float, optional) If given, each embedding vector with norm +larger than max_norm is renormalized to have norm max_norm. +Note: this will modify weight in-place.

    norm_type

    (float, optional) The p in the p-norm to compute for the +max_norm option. Default 2.

    scale_grad_by_freq

    (boolean, optional) if given, this will scale gradients +by the inverse of frequency of the words in the mini-batch. Default FALSE. Note: this option is not supported when mode="max".

    mode

    (string, optional) "sum", "mean" or "max". Specifies +the way to reduce the bag. Default: 'mean'

    sparse

    (bool, optional) if TRUE, gradient w.r.t. weight will be a +sparse tensor. See Notes under nn_embedding for more details regarding +sparse gradients. Note: this option is not supported when mode="max".

    per_sample_weights

    (Tensor, optional) a tensor of float / double weights, +or NULL to indicate all weights should be taken to be 1. If specified, +per_sample_weights must have exactly the same shape as input and is treated +as having the same offsets, if those are not NULL.

    include_last_offset

    (bool, optional) if TRUE, the size of offsets is +equal to the number of bags + 1.

    + + +
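A sketch of the 1D-input form, where `offsets` splits a flat index list into bags (the 1-based offsets here are an assumption, matching the R package's indexing convention):

```r
library(torch)

weight <- torch_randn(10, 3)

# a flat list of eight indices split into two bags of four via offsets
# (offsets assumed 1-based, matching the R indexing convention)
input   <- torch_tensor(c(1, 2, 4, 5, 4, 3, 2, 9), dtype = torch_long())
offsets <- torch_tensor(c(1, 5), dtype = torch_long())

out <- nnf_embedding_bag(input, weight, offsets = offsets, mode = "mean")
```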
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_fold.html b/static/docs/dev/reference/nnf_fold.html new file mode 100644 index 0000000000000000000000000000000000000000..e1fb0d14b4d8ed2ba0e8c0a0351705120900a900 --- /dev/null +++ b/static/docs/dev/reference/nnf_fold.html @@ -0,0 +1,277 @@ + + + + + + + + +Fold — nnf_fold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Combines an array of sliding local blocks into a large containing +tensor.

    +
    + +
    nnf_fold(
    +  input,
    +  output_size,
    +  kernel_size,
    +  dilation = 1,
    +  padding = 0,
    +  stride = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    output_size

    the shape of the spatial dimensions of the output (i.e., +output$sizes()[-c(1,2)])

    kernel_size

    the size of the sliding blocks

    dilation

    a parameter that controls the stride of elements within the +neighborhood. Default: 1

    padding

    implicit zero padding to be added on both sides of input. +Default: 0

    stride

    the stride of the sliding blocks in the input spatial dimensions. +Default: 1

    + +

    Warning

    + + + + +

    Currently, only 4-D output tensors (batched image-like tensors) are +supported.

    + +
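A sketch of the shape arithmetic: the input must be (N, C * prod(kernel_size), L), where L is the number of block positions that a sliding window of `kernel_size` takes over `output_size`.

```r
library(torch)

# for output_size = c(4, 5), kernel_size = c(2, 2) and stride 1,
# there are (4 - 2 + 1) * (5 - 2 + 1) = 12 block positions
blocks <- torch_randn(1, 3 * 2 * 2, 12)

out <- nnf_fold(blocks, output_size = c(4, 5), kernel_size = c(2, 2))
```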
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_fractional_max_pool2d.html b/static/docs/dev/reference/nnf_fractional_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..838b7aa83d23eb68622b5d490afe814d79a4724c --- /dev/null +++ b/static/docs/dev/reference/nnf_fractional_max_pool2d.html @@ -0,0 +1,274 @@ + + + + + + + + +Fractional_max_pool2d — nnf_fractional_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 2D fractional max pooling over an input signal composed of several input planes.

    +
    + +
    nnf_fractional_max_pool2d(
    +  input,
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE,
    +  random_samples = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    kernel_size

    the size of the window to take a max over. Can be a +single number \(k\) (for a square kernel of \(k * k\)) or +a tuple (kH, kW)

    output_size

    the target output size of the image of the form \(oH * oW\). +Can be a tuple (oH, oW) or a single number \(oH\) for a square image \(oH * oH\)

    output_ratio

    If one wants to have an output size as a ratio of the input size, +this option can be given. This has to be a number or tuple in the range (0, 1)

    return_indices

if TRUE, will return the indices along with the outputs.

    random_samples

    optional random samples.

    + +

    Details

    + +

Fractional max pooling is described in detail in the paper Fractional Max-Pooling by Ben Graham.

    +

    The max-pooling operation is applied in \(kH * kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.
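A minimal sketch of both ways to specify the output, assuming the torch R API shown in the usage block above:

```r
library(torch)

x <- torch_randn(1, 3, 32, 32)

# Pool with 2x2 windows down to a fixed 16x16 output.
out <- nnf_fractional_max_pool2d(x, kernel_size = 2, output_size = c(16, 16))
out$shape  # 1 3 16 16

# Alternatively, give the output as a ratio of the input size.
out2 <- nnf_fractional_max_pool2d(x, kernel_size = 2, output_ratio = c(0.5, 0.5))
```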

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_fractional_max_pool3d.html b/static/docs/dev/reference/nnf_fractional_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..871015041ea529a4ee57f02cd7ebf04159af13bf --- /dev/null +++ b/static/docs/dev/reference/nnf_fractional_max_pool3d.html @@ -0,0 +1,275 @@ + + + + + + + + +Fractional_max_pool3d — nnf_fractional_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 3D fractional max pooling over an input signal composed of several input planes.

    +
    + +
    nnf_fractional_max_pool3d(
    +  input,
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE,
    +  random_samples = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    kernel_size

    the size of the window to take a max over. Can be a single number \(k\) +(for a square kernel of \(k * k * k\)) or a tuple (kT, kH, kW)

    output_size

    the target output size of the form \(oT * oH * oW\). +Can be a tuple (oT, oH, oW) or a single number \(oH\) for a cubic output +\(oH * oH * oH\)

    output_ratio

    If one wants to have an output size as a ratio of the +input size, this option can be given. This has to be a number or tuple in the +range (0, 1)

    return_indices

if TRUE, will return the indices along with the outputs.

    random_samples

optional random samples.

    + +

    Details

    + +

Fractional max pooling is described in detail in the paper Fractional Max-Pooling by Ben Graham.

    +

    The max-pooling operation is applied in \(kT * kH * kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_gelu.html b/static/docs/dev/reference/nnf_gelu.html new file mode 100644 index 0000000000000000000000000000000000000000..4a9b17ecd7839dd606c61fa1e0b741b6e5de71ba --- /dev/null +++ b/static/docs/dev/reference/nnf_gelu.html @@ -0,0 +1,248 @@ + + + + + + + + +Gelu — nnf_gelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gelu

    +
    + +
    nnf_gelu(input)
    + +

    Arguments

    + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    + +

    gelu(input) -> Tensor

    + + + + +

    Applies element-wise the function +\(GELU(x) = x * \Phi(x)\)

    +

    where \(\Phi(x)\) is the Cumulative Distribution Function for +Gaussian Distribution.

    +

    See Gaussian Error Linear Units (GELUs).
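A short sketch of the function's behavior at a few points (GELU(0) = 0 * Phi(0) = 0; large positive inputs approach the identity, large negative inputs approach 0):

```r
library(torch)

x <- torch_tensor(c(-2, -1, 0, 1, 2))
nnf_gelu(x)
# The output at 0 is exactly 0; values are close to x for large positive x
# and close to 0 for large negative x.
```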

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_glu.html b/static/docs/dev/reference/nnf_glu.html new file mode 100644 index 0000000000000000000000000000000000000000..84ef56f1b81506d8322de45020ca7bb3e3e8f2e5 --- /dev/null +++ b/static/docs/dev/reference/nnf_glu.html @@ -0,0 +1,248 @@ + + + + + + + + +Glu — nnf_glu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies the gated linear unit. Computes:

    +
    + +
    nnf_glu(input, dim = -1)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (Tensor) input tensor

    dim

    (int) dimension on which to split the input. Default: -1

    + +

    Details

    + +

    $$GLU(a, b) = a \otimes \sigma(b)$$

    +

    where input is split in half along dim to form a and b, \(\sigma\) +is the sigmoid function and \(\otimes\) is the element-wise product +between matrices.

    +

    See Language Modeling with Gated Convolutional Networks.
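Since the input is split in half along dim, that dimension must have even size. A minimal sketch, assuming the torch R API shown above:

```r
library(torch)

# The last dimension (size 4) is split into two halves a and b of size 2,
# and the result is a * sigmoid(b).
x <- torch_randn(3, 4)
out <- nnf_glu(x, dim = -1)
out$shape  # 3 2
```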

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_grid_sample.html b/static/docs/dev/reference/nnf_grid_sample.html new file mode 100644 index 0000000000000000000000000000000000000000..b62972fb5a0f30fb7988baac8916e232d2a2e9f0 --- /dev/null +++ b/static/docs/dev/reference/nnf_grid_sample.html @@ -0,0 +1,309 @@ + + + + + + + + +Grid_sample — nnf_grid_sample • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Given an input and a flow-field grid, computes the +output using input values and pixel locations from grid.

    +
    + +
    nnf_grid_sample(
    +  input,
    +  grid,
    +  mode = c("bilinear", "nearest"),
    +  padding_mode = c("zeros", "border", "reflection"),
    +  align_corners = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) input of shape \((N, C, H_{\mbox{in}}, W_{\mbox{in}})\) (4-D case) or \((N, C, D_{\mbox{in}}, H_{\mbox{in}}, W_{\mbox{in}})\) (5-D case)

    grid

    (Tensor) flow-field of shape \((N, H_{\mbox{out}}, W_{\mbox{out}}, 2)\) (4-D case) or \((N, D_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}}, 3)\) (5-D case)

    mode

    (str) interpolation mode to calculate output values 'bilinear' | 'nearest'. +Default: 'bilinear'

    padding_mode

    (str) padding mode for outside grid values 'zeros' | 'border' +| 'reflection'. Default: 'zeros'

    align_corners

(bool, optional) Geometrically, we consider the pixels of the +input as squares rather than points. If set to TRUE, the extrema (-1 and +1) are considered as referring to the center points of the input's corner pixels. +If set to FALSE, they are instead considered as referring to the corner +points of the input's corner pixels, making the sampling more resolution +agnostic. This option parallels the align_corners option in nnf_interpolate(), and +so whichever option is used here should also be used there to resize the input +image before grid sampling. Default: FALSE

    + +

    Details

    + +

    Currently, only spatial (4-D) and volumetric (5-D) input are +supported.

    +

    In the spatial (4-D) case, for input with shape +\((N, C, H_{\mbox{in}}, W_{\mbox{in}})\) and grid with shape +\((N, H_{\mbox{out}}, W_{\mbox{out}}, 2)\), the output will have shape +\((N, C, H_{\mbox{out}}, W_{\mbox{out}})\).

    +

For each output location output[n, :, h, w], the size-2 vector +grid[n, h, w] specifies input pixel locations x and y, +which are used to interpolate the output value output[n, :, h, w]. +In the case of 5D inputs, grid[n, d, h, w] specifies the +x, y, z pixel locations for interpolating +output[n, :, d, h, w]. The mode argument specifies the nearest or +bilinear interpolation method used to sample the input pixels.

    +

    grid specifies the sampling pixel locations normalized by the +input spatial dimensions. Therefore, it should have most values in +the range of [-1, 1]. For example, values x = -1, y = -1 is the +left-top pixel of input, and values x = 1, y = 1 is the +right-bottom pixel of input.

    +

    If grid has values outside the range of [-1, 1], the corresponding +outputs are handled as defined by padding_mode. Options are

      +
    • padding_mode="zeros": use 0 for out-of-bound grid locations,

    • +
    • padding_mode="border": use border values for out-of-bound grid locations,

    • +
    • padding_mode="reflection": use values at locations reflected by +the border for out-of-bound grid locations. For location far away +from the border, it will keep being reflected until becoming in bound, +e.g., (normalized) pixel location x = -3.5 reflects by border -1 +and becomes x' = 1.5, then reflects by border 1 and becomes +x'' = -0.5.

    • +
    + +

    Note

    + + + + +

This function is often used in conjunction with nnf_affine_grid() +to build Spatial Transformer Networks.
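A hedged sketch of that pairing, assuming the torch R API: sampling with the grid of an identity affine transform reproduces the input (up to interpolation error).

```r
library(torch)

input <- torch_randn(1, 1, 4, 4)

# Identity affine transform theta of shape (N, 2, 3).
theta <- torch_tensor(matrix(c(1, 0, 0,
                               0, 1, 0), nrow = 2, byrow = TRUE))$unsqueeze(1)
grid <- nnf_affine_grid(theta, size = c(1, 1, 4, 4), align_corners = FALSE)

# align_corners must match between nnf_affine_grid() and nnf_grid_sample().
out <- nnf_grid_sample(input, grid, align_corners = FALSE)
```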

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_group_norm.html b/static/docs/dev/reference/nnf_group_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..2eefc5dc4d3289ef2c68aa5c6ab5cec806201203 --- /dev/null +++ b/static/docs/dev/reference/nnf_group_norm.html @@ -0,0 +1,253 @@ + + + + + + + + +Group_norm — nnf_group_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies Group Normalization over a mini-batch of inputs, separating the channels into num_groups groups.

    +
    + +
    nnf_group_norm(input, num_groups, weight = NULL, bias = NULL, eps = 1e-05)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    num_groups

    number of groups to separate the channels into

    weight

    the weight tensor

    bias

    the bias tensor

    eps

    a value added to the denominator for numerical stability. Default: 1e-5

    + + +
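A minimal sketch, assuming the torch R API shown above: the channel dimension must be divisible by num_groups, and normalization statistics are computed per group.

```r
library(torch)

# 20 channels separated into 4 groups of 5; weight and bias, if given,
# would be per-channel tensors of length 20.
x <- torch_randn(2, 20, 8, 8)
out <- nnf_group_norm(x, num_groups = 4)
```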
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_gumbel_softmax.html b/static/docs/dev/reference/nnf_gumbel_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..84200f7584c6c98102fdad0ac2a27ac1380324ec --- /dev/null +++ b/static/docs/dev/reference/nnf_gumbel_softmax.html @@ -0,0 +1,251 @@ + + + + + + + + +Gumbel_softmax — nnf_gumbel_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Samples from the Gumbel-Softmax distribution and +optionally discretizes.

    +
    + +
    nnf_gumbel_softmax(logits, tau = 1, hard = FALSE, dim = -1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    logits

    [..., num_features] unnormalized log probabilities

    tau

    non-negative scalar temperature

    hard

if TRUE, the returned samples will be discretized as one-hot vectors, but will be differentiated as if they were the soft samples in autograd

    dim

    (int) A dimension along which softmax will be computed. Default: -1.

    + + +
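A short sketch contrasting the soft and hard variants, assuming the torch R API shown above:

```r
library(torch)

logits <- torch_randn(4, 10)

# Soft samples: each row sums to 1 and is differentiable w.r.t. logits.
soft <- nnf_gumbel_softmax(logits, tau = 1)

# Hard samples: one-hot in the forward pass, but gradients flow through
# the soft sample (straight-through estimator).
hard <- nnf_gumbel_softmax(logits, tau = 1, hard = TRUE)
```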
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_hardshrink.html b/static/docs/dev/reference/nnf_hardshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..53e62793c28337ec4866fc577ad457ad0efde173 --- /dev/null +++ b/static/docs/dev/reference/nnf_hardshrink.html @@ -0,0 +1,242 @@ + + + + + + + + +Hardshrink — nnf_hardshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hard shrinkage function element-wise

    +
    + +
    nnf_hardshrink(input, lambd = 0.5)
    + +

    Arguments

    + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    lambd

    the lambda value for the Hardshrink formulation. Default: 0.5

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_hardsigmoid.html b/static/docs/dev/reference/nnf_hardsigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..c305ed699f92df71adc87fe38b2b9ca8b1660495 --- /dev/null +++ b/static/docs/dev/reference/nnf_hardsigmoid.html @@ -0,0 +1,242 @@ + + + + + + + + +Hardsigmoid — nnf_hardsigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function \(\mbox{Hardsigmoid}(x) = \frac{ReLU6(x + 3)}{6}\)

    +
    + +
    nnf_hardsigmoid(input, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    inplace

if TRUE, will do this operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_hardswish.html b/static/docs/dev/reference/nnf_hardswish.html new file mode 100644 index 0000000000000000000000000000000000000000..9ab64946bcb67861f1fa8b40b6eb7a32f1197411 --- /dev/null +++ b/static/docs/dev/reference/nnf_hardswish.html @@ -0,0 +1,253 @@ + + + + + + + + +Hardswish — nnf_hardswish • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hardswish function, element-wise, as described in the paper: +Searching for MobileNetV3.

    +
    + +
    nnf_hardswish(input, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ \mbox{Hardswish}(x) = \left\{ + \begin{array}{ll} + 0 & \mbox{if } x \le -3, \\ + x & \mbox{if } x \ge +3, \\ + x \cdot (x + 3)/6 & \mbox{otherwise} + \end{array} + \right. $$
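A short sketch of the piecewise behavior, following directly from the formula above (assuming the torch R API):

```r
library(torch)

x <- torch_tensor(c(-4, -3, 0, 3, 4))
nnf_hardswish(x)
# Inputs <= -3 map to 0; inputs >= 3 map to themselves;
# 0 maps to 0 * (0 + 3) / 6 = 0.
```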

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_hardtanh.html b/static/docs/dev/reference/nnf_hardtanh.html new file mode 100644 index 0000000000000000000000000000000000000000..fdc0a0e01b36fffca0db338ae00d2eb895155844 --- /dev/null +++ b/static/docs/dev/reference/nnf_hardtanh.html @@ -0,0 +1,252 @@ + + + + + + + + +Hardtanh — nnf_hardtanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the HardTanh function element-wise.

    +
    + +
    nnf_hardtanh(input, min_val = -1, max_val = 1, inplace = FALSE)
    +
    +nnf_hardtanh_(input, min_val = -1, max_val = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    min_val

    minimum value of the linear region range. Default: -1

    max_val

    maximum value of the linear region range. Default: 1

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_hinge_embedding_loss.html b/static/docs/dev/reference/nnf_hinge_embedding_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..30628937cf0d2f091b562d9b91c7f021b0b829bb --- /dev/null +++ b/static/docs/dev/reference/nnf_hinge_embedding_loss.html @@ -0,0 +1,258 @@ + + + + + + + + +Hinge_embedding_loss — nnf_hinge_embedding_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). +This is usually used for measuring whether two inputs are similar or dissimilar, e.g. +using the L1 pairwise distance as x, and is typically used for learning nonlinear +embeddings or for semi-supervised learning.

    +
    + +
    nnf_hinge_embedding_loss(input, target, margin = 1, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

tensor (N,*), same shape as the input

    margin

    Has a default value of 1.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_instance_norm.html b/static/docs/dev/reference/nnf_instance_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..bbac15572bdc996054e5bc778ef218f09abc2bb5 --- /dev/null +++ b/static/docs/dev/reference/nnf_instance_norm.html @@ -0,0 +1,276 @@ + + + + + + + + +Instance_norm — nnf_instance_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Instance Normalization for each channel in each data sample in a +batch.

    +
    + +
    nnf_instance_norm(
    +  input,
    +  running_mean = NULL,
    +  running_var = NULL,
    +  weight = NULL,
    +  bias = NULL,
    +  use_input_stats = TRUE,
    +  momentum = 0.1,
    +  eps = 1e-05
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    running_mean

    the running_mean tensor

    running_var

    the running var tensor

    weight

    the weight tensor

    bias

    the bias tensor

    use_input_stats

    whether to use input stats

    momentum

    a double for the momentum

    eps

    an eps double for numerical stability

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_interpolate.html b/static/docs/dev/reference/nnf_interpolate.html new file mode 100644 index 0000000000000000000000000000000000000000..b0a3ff2a81ad73cf4067861d167d85c2d2793812 --- /dev/null +++ b/static/docs/dev/reference/nnf_interpolate.html @@ -0,0 +1,295 @@ + + + + + + + + +Interpolate — nnf_interpolate • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Down/up samples the input to either the given size or the given +scale_factor.

    +
    + +
    nnf_interpolate(
    +  input,
    +  size = NULL,
    +  scale_factor = NULL,
    +  mode = "nearest",
    +  align_corners = FALSE,
    +  recompute_scale_factor = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the input tensor

    size

    (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) +output spatial size.

    scale_factor

    (float or Tuple[float]) multiplier for spatial size. +Has to match input size if it is a tuple.

    mode

    (str) algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' +| 'bicubic' | 'trilinear' | 'area' Default: 'nearest'

    align_corners

(bool, optional) Geometrically, we consider the pixels +of the input and output as squares rather than points. If set to TRUE, +the input and output tensors are aligned by the center points of their corner +pixels, preserving the values at the corner pixels. If set to FALSE, the +input and output tensors are aligned by the corner points of their corner pixels, +and the interpolation uses edge value padding for out-of-boundary values, +making this operation independent of input size when scale_factor is kept +the same. This only has an effect when mode is 'linear', 'bilinear', +'bicubic' or 'trilinear'. Default: FALSE

    recompute_scale_factor

(bool, optional) recompute the scale_factor +for use in the interpolation calculation. When scale_factor is passed +as a parameter, it is used to compute the output_size. If recompute_scale_factor +is TRUE or not specified, a new scale_factor will be computed based on +the output and input sizes for use in the interpolation computation (i.e. the +computation will be identical to passing the computed output_size +explicitly). Otherwise, the passed-in scale_factor will be used in the +interpolation computation. Note that when scale_factor is floating-point, +the recomputed scale_factor may differ from the one passed in due to rounding +and precision issues.

    + +

    Details

    + +

    The algorithm used for interpolation is determined by mode.

    +

    Currently temporal, spatial and volumetric sampling are supported, i.e. +expected inputs are 3-D, 4-D or 5-D in shape.

    +

    The input dimensions are interpreted in the form: +mini-batch x channels x [optional depth] x [optional height] x width.

    +

    The modes available for resizing are: nearest, linear (3D-only), +bilinear, bicubic (4D-only), trilinear (5D-only), area
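A minimal sketch of both ways to specify the target size, assuming the torch R API shown above:

```r
library(torch)

x <- torch_randn(1, 3, 16, 16)

# Resize to an explicit spatial size...
up <- nnf_interpolate(x, size = c(32, 32), mode = "bilinear",
                      align_corners = FALSE)
up$shape  # 1 3 32 32

# ...or by a scale factor.
up2 <- nnf_interpolate(x, scale_factor = 2, mode = "nearest")
```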

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_kl_div.html b/static/docs/dev/reference/nnf_kl_div.html new file mode 100644 index 0000000000000000000000000000000000000000..88ebbb6c2e9d48089bfe411a645a2a3f5021094c --- /dev/null +++ b/static/docs/dev/reference/nnf_kl_div.html @@ -0,0 +1,248 @@ + + + + + + + + +Kl_div — nnf_kl_div • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The Kullback-Leibler divergence Loss.

    +
    + +
    nnf_kl_div(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

tensor (N,*), same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
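A hedged sketch, assuming this function follows the PyTorch convention that input contains log-probabilities while target contains probabilities:

```r
library(torch)

# target: probabilities; input: log-probabilities.
p <- nnf_softmax(torch_randn(3, 5), dim = 2)
q_log <- nnf_log_softmax(torch_randn(3, 5), dim = 2)

loss <- nnf_kl_div(q_log, p, reduction = "mean")
```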
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_l1_loss.html b/static/docs/dev/reference/nnf_l1_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..8268e3188f1011cd0949b60a166c233d33864f7f --- /dev/null +++ b/static/docs/dev/reference/nnf_l1_loss.html @@ -0,0 +1,248 @@ + + + + + + + + +L1_loss — nnf_l1_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Function that takes the mean element-wise absolute value difference.

    +
    + +
    nnf_l1_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

tensor (N,*), same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_layer_norm.html b/static/docs/dev/reference/nnf_layer_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..df20405d7d27abf4e31e91c3a66ef2888528963a --- /dev/null +++ b/static/docs/dev/reference/nnf_layer_norm.html @@ -0,0 +1,261 @@ + + + + + + + + +Layer_norm — nnf_layer_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies Layer Normalization over the last dimensions of the input, as specified by normalized_shape.

    +
    + +
    nnf_layer_norm(
    +  input,
    +  normalized_shape,
    +  weight = NULL,
    +  bias = NULL,
    +  eps = 1e-05
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    normalized_shape

    input shape from an expected input of size. If a single +integer is used, it is treated as a singleton list, and this module will normalize +over the last dimension which is expected to be of that specific size.

    weight

    the weight tensor

    bias

    the bias tensor

    eps

    a value added to the denominator for numerical stability. Default: 1e-5

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_leaky_relu.html b/static/docs/dev/reference/nnf_leaky_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..7b510d142f54276713573d7a17c2612013dc2e10 --- /dev/null +++ b/static/docs/dev/reference/nnf_leaky_relu.html @@ -0,0 +1,248 @@ + + + + + + + + +Leaky_relu — nnf_leaky_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, +\(LeakyReLU(x) = max(0, x) + negative_slope * min(0, x)\)

    +
    + +
    nnf_leaky_relu(input, negative_slope = 0.01, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means, any number of additional +dimensions

    negative_slope

    Controls the angle of the negative slope. Default: 1e-2

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_linear.html b/static/docs/dev/reference/nnf_linear.html new file mode 100644 index 0000000000000000000000000000000000000000..6eb36efde659a026115b6e80e0575dfb1773dbd8 --- /dev/null +++ b/static/docs/dev/reference/nnf_linear.html @@ -0,0 +1,246 @@ + + + + + + + + +Linear — nnf_linear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a linear transformation to the incoming data: \(y = xA^T + b\).

    +
    + +
    nnf_linear(input, weight, bias = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    \((N, *, in\_features)\) where * means any number of +additional dimensions

    weight

    \((out\_features, in\_features)\) the weights tensor.

    bias

    optional tensor \((out\_features)\)

    + + +
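A minimal sketch of the shape contract \(y = xA^T + b\), assuming the torch R API shown above:

```r
library(torch)

x <- torch_randn(8, 10)   # (N, in_features)
w <- torch_randn(5, 10)   # (out_features, in_features)
b <- torch_randn(5)

y <- nnf_linear(x, w, b)  # equivalent to x$matmul(w$t()) + b
y$shape  # 8 5
```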
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_local_response_norm.html b/static/docs/dev/reference/nnf_local_response_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..80dd85fd7d492049c32ca9e4fe0ab02460bf8d65 --- /dev/null +++ b/static/docs/dev/reference/nnf_local_response_norm.html @@ -0,0 +1,257 @@ + + + + + + + + +Local_response_norm — nnf_local_response_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies local response normalization over an input signal composed of +several input planes, where channels occupy the second dimension. +Applies normalization across channels.

    +
    + +
    nnf_local_response_norm(input, size, alpha = 1e-04, beta = 0.75, k = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    size

    amount of neighbouring channels used for normalization

    alpha

    multiplicative factor. Default: 0.0001

    beta

    exponent. Default: 0.75

    k

    additive factor. Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_log_softmax.html b/static/docs/dev/reference/nnf_log_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..2ab49f3f52b87bb4ef3e41615be5692925382953 --- /dev/null +++ b/static/docs/dev/reference/nnf_log_softmax.html @@ -0,0 +1,253 @@ + + + + + + + + +Log_softmax — nnf_log_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a softmax followed by a logarithm.

    +
    + +
    nnf_log_softmax(input, dim = NULL, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (Tensor) input

    dim

    (int) A dimension along which log_softmax will be computed.

    dtype

(torch.dtype, optional) the desired data type of the returned tensor. +If specified, the input tensor is cast to dtype before the operation +is performed. This is useful for preventing data type overflows. +Default: NULL.

    + +

    Details

    + +

While mathematically equivalent to log(softmax(x)), doing these two +operations separately is slower and numerically unstable. This function +uses an alternative formulation to compute the output and gradient correctly.
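A short sketch, assuming the torch R API (note that dims are 1-based in R torch): in exact arithmetic the result equals torch_log(nnf_softmax(x, dim)), and exponentiating the output recovers probabilities that sum to 1 along dim.

```r
library(torch)

x <- torch_randn(2, 5)
out <- nnf_log_softmax(x, dim = 2)

# Each row of exp(out) sums to 1.
torch_exp(out)$sum(dim = 2)
```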

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_logsigmoid.html b/static/docs/dev/reference/nnf_logsigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..32598e00ee67525d12282f26ab2f78d5d440741b --- /dev/null +++ b/static/docs/dev/reference/nnf_logsigmoid.html @@ -0,0 +1,238 @@ + + + + + + + + +Logsigmoid — nnf_logsigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise \(\mbox{LogSigmoid}(x_i) = \log\left(\frac{1}{1 + \exp(-x_i)}\right)\)

    +
    + +
    nnf_logsigmoid(input)
    + +

    Arguments

    + + + + + + +
    input

(N,*) tensor, where * means any number of additional dimensions

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_lp_pool1d.html b/static/docs/dev/reference/nnf_lp_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..025a4532fb23dc293c1911caa44fefc4830c4f5b --- /dev/null +++ b/static/docs/dev/reference/nnf_lp_pool1d.html @@ -0,0 +1,258 @@ + + + + + + + + +Lp_pool1d — nnf_lp_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D power-average pooling over an input signal composed of +several input planes. If the sum of all inputs to the power of p is +zero, the gradient is set to zero as well.

    +
    + +
    nnf_lp_pool1d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling (proportional to average pooling)

    kernel_size

    a single int, the size of the window

    stride

    a single int, the stride of the window. Default value is kernel_size

    ceil_mode

    when True, will use ceil instead of floor to compute the output shape

    + + +
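A short usage sketch (illustrative, assuming the torch package is installed): with norm_type = 2, each window is reduced to the 2-norm of its values.

```r
library(torch)

# shape: (minibatch, channels, width)
x <- torch_abs(torch_randn(1, 1, 8))

# power-average pooling with p = 2 over non-overlapping windows of size 2
# (stride defaults to kernel_size)
out <- nnf_lp_pool1d(x, norm_type = 2, kernel_size = 2)
out$shape  # width is halved: 1 x 1 x 4
```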
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_lp_pool2d.html b/static/docs/dev/reference/nnf_lp_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..174cddec2e5c3a6c0f33fac30d56a093190b625d --- /dev/null +++ b/static/docs/dev/reference/nnf_lp_pool2d.html @@ -0,0 +1,258 @@ + + + + + + + + +Lp_pool2d — nnf_lp_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D power-average pooling over an input signal composed of +several input planes. If the sum of all inputs to the power of p is +zero, the gradient is set to zero as well.

    +
    + +
    nnf_lp_pool2d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling (proportional to average pooling)

    kernel_size

    a single int, the size of the window

    stride

    a single int, the stride of the window. Default value is kernel_size

    ceil_mode

    when True, will use ceil instead of floor to compute the output shape

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_margin_ranking_loss.html b/static/docs/dev/reference/nnf_margin_ranking_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..7d28cbe7c1c59e6d9669c6d35527a6a3b780d54e --- /dev/null +++ b/static/docs/dev/reference/nnf_margin_ranking_loss.html @@ -0,0 +1,258 @@ + + + + + + + + +Margin_ranking_loss — nnf_margin_ranking_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the loss given inputs x1 and x2, two 1D +mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).

    +
    + +
    nnf_margin_ranking_loss(input1, input2, target, margin = 0, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input1

    the first tensor

    input2

    the second input tensor

    target

    the target tensor

    margin

Has a default value of 0.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
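The ranking semantics can be sketched as follows (an illustrative example, assuming the torch package is installed): y = 1 asks for input1 to rank higher than input2, and the per-pair loss is zero when that already holds by at least margin.

```r
library(torch)

x1 <- torch_tensor(c(2.0, 0.5))
x2 <- torch_tensor(c(1.0, 1.5))

# first pair: x1 should rank higher; second pair: x2 should
y <- torch_tensor(c(1, -1))

# both constraints are satisfied here, so the mean loss is zero
nnf_margin_ranking_loss(x1, x2, y, margin = 0)
```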
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_max_pool1d.html b/static/docs/dev/reference/nnf_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..314e8f726a85951134da5bb27ea37fbdb9bb51c0 --- /dev/null +++ b/static/docs/dev/reference/nnf_max_pool1d.html @@ -0,0 +1,276 @@ + + + + + + + + +Max_pool1d — nnf_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D max pooling over an input signal composed of several input +planes.

    +
    + +
    nnf_max_pool1d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  ceil_mode = FALSE,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    kernel_size

    the size of the window. Can be a single number or a +tuple (kW,).

    stride

    the stride of the window. Can be a single number or a tuple +(sW,). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padW,). Default: 0

    dilation

    controls the spacing between the kernel points; also known as +the à trous algorithm.

    ceil_mode

    when True, will use ceil instead of floor to compute the +output shape. Default: FALSE

    return_indices

    whether to return the indices where the max occurs.

    + + +
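A minimal usage sketch (assuming the torch package is installed), showing how the default stride relates to the output width:

```r
library(torch)

x <- torch_randn(1, 1, 10)  # (minibatch, in_channels, iW)

# stride defaults to kernel_size, so a window of 2 halves the width
out <- nnf_max_pool1d(x, kernel_size = 2)
out$shape  # 1 x 1 x 5
```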
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_max_pool2d.html b/static/docs/dev/reference/nnf_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..90176839ef48cd550c7f5b9ce43348d4ac612f22 --- /dev/null +++ b/static/docs/dev/reference/nnf_max_pool2d.html @@ -0,0 +1,276 @@ + + + + + + + + +Max_pool2d — nnf_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D max pooling over an input signal composed of several input +planes.

    +
    + +
    nnf_max_pool2d(
    +  input,
    +  kernel_size,
    +  stride = kernel_size,
    +  padding = 0,
    +  dilation = 1,
    +  ceil_mode = FALSE,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

    dilation

    controls the spacing between the kernel points; also known as +the à trous algorithm.

    ceil_mode

    when True, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

    return_indices

    whether to return the indices where the max occurs.

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_max_pool3d.html b/static/docs/dev/reference/nnf_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..01b77c612813e2652550e8b41b28378c991d7dcd --- /dev/null +++ b/static/docs/dev/reference/nnf_max_pool3d.html @@ -0,0 +1,276 @@ + + + + + + + + +Max_pool3d — nnf_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D max pooling over an input signal composed of several input +planes.

    +
    + +
    nnf_max_pool3d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  ceil_mode = FALSE,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

input tensor (minibatch , in_channels , iT , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kT, kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sT, sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW), Default: 0

    dilation

    controls the spacing between the kernel points; also known as +the à trous algorithm.

    ceil_mode

    when True, will use ceil instead of floor in the formula +to compute the output shape

    return_indices

    whether to return the indices where the max occurs.

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_max_unpool1d.html b/static/docs/dev/reference/nnf_max_unpool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..e11da41ec2844345449eaa0c06bf43b4fd0d554e --- /dev/null +++ b/static/docs/dev/reference/nnf_max_unpool1d.html @@ -0,0 +1,264 @@ + + + + + + + + +Max_unpool1d — nnf_max_unpool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes a partial inverse of MaxPool1d.

    +
    + +
    nnf_max_unpool1d(
    +  input,
    +  indices,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  output_size = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input Tensor to invert

    indices

    the indices given out by max pool

    kernel_size

    Size of the max pooling window.

    stride

    Stride of the max pooling window. It is set to kernel_size by default.

    padding

    Padding that was added to the input

    output_size

    the targeted output size

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_max_unpool2d.html b/static/docs/dev/reference/nnf_max_unpool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..574d56f9619047fa0d3d1a36ec8c3db6b3c01630 --- /dev/null +++ b/static/docs/dev/reference/nnf_max_unpool2d.html @@ -0,0 +1,264 @@ + + + + + + + + +Max_unpool2d — nnf_max_unpool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes a partial inverse of MaxPool2d.

    +
    + +
    nnf_max_unpool2d(
    +  input,
    +  indices,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  output_size = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input Tensor to invert

    indices

    the indices given out by max pool

    kernel_size

    Size of the max pooling window.

    stride

    Stride of the max pooling window. It is set to kernel_size by default.

    padding

    Padding that was added to the input

    output_size

    the targeted output size

    + + +
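The round trip can be sketched as follows (illustrative, assuming the torch package is installed, and that nnf_max_pool2d() returns a list of values and indices when return_indices = TRUE, as in the R interface):

```r
library(torch)

x <- torch_randn(1, 1, 4, 4)

# pool with return_indices = TRUE to record where each maximum came from
pooled <- nnf_max_pool2d(x, kernel_size = 2, return_indices = TRUE)

# invert the pooling: maxima return to their recorded positions,
# all other entries are filled with zeros
restored <- nnf_max_unpool2d(pooled[[1]], pooled[[2]], kernel_size = 2)
restored$shape  # back to 1 x 1 x 4 x 4
```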
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_max_unpool3d.html b/static/docs/dev/reference/nnf_max_unpool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..3ff723d7f992539038f49abd9ba1d32a19c34c33 --- /dev/null +++ b/static/docs/dev/reference/nnf_max_unpool3d.html @@ -0,0 +1,264 @@ + + + + + + + + +Max_unpool3d — nnf_max_unpool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes a partial inverse of MaxPool3d.

    +
    + +
    nnf_max_unpool3d(
    +  input,
    +  indices,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  output_size = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input Tensor to invert

    indices

    the indices given out by max pool

    kernel_size

    Size of the max pooling window.

    stride

    Stride of the max pooling window. It is set to kernel_size by default.

    padding

    Padding that was added to the input

    output_size

    the targeted output size

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_mse_loss.html b/static/docs/dev/reference/nnf_mse_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..e8127c59dd101576067627db84d6ecc7ca9998d6 --- /dev/null +++ b/static/docs/dev/reference/nnf_mse_loss.html @@ -0,0 +1,248 @@ + + + + + + + + +Mse_loss — nnf_mse_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Measures the element-wise mean squared error.

    +
    + +
    nnf_mse_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*), where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
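The reduction options can be sketched with a tiny example (assuming the torch package is installed):

```r
library(torch)

input  <- torch_tensor(c(1, 2, 3))
target <- torch_tensor(c(1, 2, 5))

# squared errors are 0, 0, and 4
nnf_mse_loss(input, target)                      # mean: (0 + 0 + 4) / 3
nnf_mse_loss(input, target, reduction = "sum")   # sum: 4
nnf_mse_loss(input, target, reduction = "none")  # element-wise losses
```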
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_multi_head_attention_forward.html b/static/docs/dev/reference/nnf_multi_head_attention_forward.html new file mode 100644 index 0000000000000000000000000000000000000000..83a484eb071183b388b71f1d89b8edef23fa4781 --- /dev/null +++ b/static/docs/dev/reference/nnf_multi_head_attention_forward.html @@ -0,0 +1,366 @@ + + + + + + + + +Multi head attention forward — nnf_multi_head_attention_forward • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Allows the model to jointly attend to information from different representation +subspaces. See reference: Attention Is All You Need

    +
    + +
    nnf_multi_head_attention_forward(
    +  query,
    +  key,
    +  value,
    +  embed_dim_to_check,
    +  num_heads,
    +  in_proj_weight,
    +  in_proj_bias,
    +  bias_k,
    +  bias_v,
    +  add_zero_attn,
    +  dropout_p,
    +  out_proj_weight,
    +  out_proj_bias,
    +  training = TRUE,
    +  key_padding_mask = NULL,
    +  need_weights = TRUE,
    +  attn_mask = NULL,
    +  use_separate_proj_weight = FALSE,
    +  q_proj_weight = NULL,
    +  k_proj_weight = NULL,
    +  v_proj_weight = NULL,
    +  static_k = NULL,
    +  static_v = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    query

    \((L, N, E)\) where L is the target sequence length, N is the batch size, E is +the embedding dimension.

    key

    \((S, N, E)\), where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    value

    \((S, N, E)\) where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    embed_dim_to_check

    total dimension of the model.

    num_heads

    parallel attention heads.

    in_proj_weight

    input projection weight and bias.

    in_proj_bias

    currently undocumented.

    bias_k

    bias of the key and value sequences to be added at dim=0.

    bias_v

    currently undocumented.

    add_zero_attn

    add a new batch of zeros to the key and +value sequences at dim=1.

    dropout_p

    probability of an element to be zeroed.

    out_proj_weight

    the output projection weight and bias.

    out_proj_bias

    currently undocumented.

    training

    apply dropout if is TRUE.

    key_padding_mask

\((N, S)\) where N is the batch size, S is the source sequence length. +If a ByteTensor is provided, the non-zero positions will be ignored while the +zero positions will be unchanged. If a BoolTensor is provided, the positions with the +value of True will be ignored while the positions with the value of False will be unchanged.

    need_weights

    output attn_output_weights.

    attn_mask

2D mask \((L, S)\) where L is the target sequence length and S is the source sequence length. +3D mask \((N*num_heads, L, S)\) where N is the batch size, L is the target sequence length, and +S is the source sequence length. attn_mask ensures that position i is allowed to attend only the unmasked +positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend +while the zero positions will be unchanged. If a BoolTensor is provided, positions with the value True +are not allowed to attend while positions with the value False will be unchanged. If a FloatTensor +is provided, it will be added to the attention weight.

    use_separate_proj_weight

the function accepts the projection weights for +query, key, and value in different forms. If FALSE, in_proj_weight will be used, +which is a combination of q_proj_weight, k_proj_weight, and v_proj_weight.

    q_proj_weight

    input projection weight and bias.

    k_proj_weight

    currently undocumented.

    v_proj_weight

    currently undocumented.

    static_k

    static key and value used for attention operators.

    static_v

    currently undocumented.

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_multi_margin_loss.html b/static/docs/dev/reference/nnf_multi_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..3b288405ae5ea3cd7f97664152bdf40d3e03fc79 --- /dev/null +++ b/static/docs/dev/reference/nnf_multi_margin_loss.html @@ -0,0 +1,272 @@ + + + + + + + + +Multi_margin_loss — nnf_multi_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a multi-class classification hinge loss +(margin-based loss) between input x (a 2D mini-batch Tensor) and output y +(which is a 1D tensor of target class indices, 0 <= y <= x$size(2) - 1 ).

    +
    + +
    nnf_multi_margin_loss(
    +  input,
    +  target,
    +  p = 1,
    +  margin = 1,
    +  weight = NULL,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*), where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    p

    Has a default value of 1. 1 and 2 are the only supported values.

    margin

    Has a default value of 1.

    weight

    a manual rescaling weight given to each class. If given, it has to +be a Tensor of size C. Otherwise, it is treated as if having all ones.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_multilabel_margin_loss.html b/static/docs/dev/reference/nnf_multilabel_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..21ceae250d58ac502051f3946e30bc7fc003f55c --- /dev/null +++ b/static/docs/dev/reference/nnf_multilabel_margin_loss.html @@ -0,0 +1,252 @@ + + + + + + + + +Multilabel_margin_loss — nnf_multilabel_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a multi-class multi-classification hinge loss +(margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which +is a 2D Tensor of target class indices).

    +
    + +
    nnf_multilabel_margin_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*), where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_multilabel_soft_margin_loss.html b/static/docs/dev/reference/nnf_multilabel_soft_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..36fe68438f251f67ca2547c946856bd6356d0c05 --- /dev/null +++ b/static/docs/dev/reference/nnf_multilabel_soft_margin_loss.html @@ -0,0 +1,254 @@ + + + + + + + + +Multilabel_soft_margin_loss — nnf_multilabel_soft_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a multi-label one-versus-all loss based on +max-entropy, between input x and target y of size (N, C).

    +
    + +
    nnf_multilabel_soft_margin_loss(input, target, weight, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*), where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    weight

    weight tensor to apply on the loss.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_nll_loss.html b/static/docs/dev/reference/nnf_nll_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..03ef3df53c3dd3005b44b703ce78aa12b2c36636 --- /dev/null +++ b/static/docs/dev/reference/nnf_nll_loss.html @@ -0,0 +1,267 @@ + + + + + + + + +Nll_loss — nnf_nll_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The negative log likelihood loss.

    +
    + +
    nnf_nll_loss(
    +  input,
    +  target,
    +  weight = NULL,
    +  ignore_index = -100,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    \((N, C)\) where C = number of classes or \((N, C, H, W)\) in +case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in +the case of K-dimensional loss.

    target

    \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), +or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

    weight

    (Tensor, optional) a manual rescaling weight given to each class. +If given, has to be a Tensor of size C

    ignore_index

    (int, optional) Specifies a target value that is ignored and +does not contribute to the input gradient.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
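A minimal sketch (assuming the torch package is installed). Note that nnf_nll_loss() expects log-probabilities, typically produced by nnf_log_softmax(); the 1-based class indices below are an assumption following the usual R interface convention.

```r
library(torch)

# three samples, four classes, as log-probabilities
logp <- nnf_log_softmax(torch_randn(3, 4), dim = 2)

# target class indices (assumed 1-based in the R interface)
target <- torch_tensor(c(1, 3, 4), dtype = torch_long())

nnf_nll_loss(logp, target)
```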
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_normalize.html b/static/docs/dev/reference/nnf_normalize.html new file mode 100644 index 0000000000000000000000000000000000000000..7c3084161ba356de228325fa8a33fdde02eb63d5 --- /dev/null +++ b/static/docs/dev/reference/nnf_normalize.html @@ -0,0 +1,262 @@ + + + + + + + + +Normalize — nnf_normalize • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Performs \(L_p\) normalization of inputs over specified dimension.

    +
    + +
    nnf_normalize(input, p = 2, dim = 1, eps = 1e-12, out = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of any shape

    p

    (float) the exponent value in the norm formulation. Default: 2

    dim

    (int) the dimension to reduce. Default: 1

    eps

    (float) small value to avoid division by zero. Default: 1e-12

    out

    (Tensor, optional) the output tensor. If out is used, this operation won't be differentiable.

    + +

    Details

    + +

    For a tensor input of sizes \((n_0, ..., n_{dim}, ..., n_k)\), each +\(n_{dim}\) -element vector \(v\) along dimension dim is transformed as

    +

    $$ + v = \frac{v}{\max(\Vert v \Vert_p, \epsilon)}. +$$

    +

    With the default arguments it uses the Euclidean norm over vectors along +dimension \(1\) for normalization.

    + +
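The formula above can be sketched concretely (assuming the torch package is installed): each row vector is divided by its \(L_2\) norm.

```r
library(torch)

x <- torch_tensor(rbind(c(3, 4), c(0, 5)))

# L2-normalize each row vector (reduce over dim = 2):
# (3, 4) / 5 = (0.6, 0.8) and (0, 5) / 5 = (0, 1)
nnf_normalize(x, p = 2, dim = 2)
```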
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_one_hot.html b/static/docs/dev/reference/nnf_one_hot.html new file mode 100644 index 0000000000000000000000000000000000000000..958d1157e02d73663ac12a8e93924f3c27147bc0 --- /dev/null +++ b/static/docs/dev/reference/nnf_one_hot.html @@ -0,0 +1,252 @@ + + + + + + + + +One_hot — nnf_one_hot • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Takes a LongTensor with index values of shape (*) and returns a tensor +of shape (*, num_classes) that has zeros everywhere except where the +index of the last dimension matches the corresponding value of the input tensor, +in which case it will be 1.

    +
    + +
    nnf_one_hot(tensor, num_classes = -1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    (LongTensor) class values of any shape.

    num_classes

    (int) Total number of classes. If set to -1, the number +of classes will be inferred as one greater than the largest class value in +the input tensor.

    + +

    Details

    + +

    One-hot on Wikipedia: https://en.wikipedia.org/wiki/One-hot

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_pad.html b/static/docs/dev/reference/nnf_pad.html new file mode 100644 index 0000000000000000000000000000000000000000..de1e931a3ec9c2bc424093c9370b667f3753de5b --- /dev/null +++ b/static/docs/dev/reference/nnf_pad.html @@ -0,0 +1,280 @@ + + + + + + + + +Pad — nnf_pad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pads tensor.

    +
    + +
    nnf_pad(input, pad, mode = "constant", value = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) N-dimensional tensor

    pad

    (tuple) m-elements tuple, where \(\frac{m}{2} \leq\) input dimensions +and \(m\) is even.

    mode

    'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'

    value

    fill value for 'constant' padding. Default: 0.

    + +

    Padding size

    + + + + +

    The padding size by which to pad some dimensions of input +are described starting from the last dimension and moving forward. +\(\left\lfloor\frac{\mbox{len(pad)}}{2}\right\rfloor\) dimensions +of input will be padded. +For example, to pad only the last dimension of the input tensor, then +pad has the form +\((\mbox{padding\_left}, \mbox{padding\_right})\); +to pad the last 2 dimensions of the input tensor, then use +\((\mbox{padding\_left}, \mbox{padding\_right},\) +\(\mbox{padding\_top}, \mbox{padding\_bottom})\); +to pad the last 3 dimensions, use +\((\mbox{padding\_left}, \mbox{padding\_right},\) +\(\mbox{padding\_top}, \mbox{padding\_bottom}\) +\(\mbox{padding\_front}, \mbox{padding\_back})\).

    +

    Padding mode

    + + + + +

    See nn_constant_pad_2d, nn_reflection_pad_2d, and +nn_replication_pad_2d for concrete examples on how each of the +padding modes works. Constant padding is implemented for arbitrary dimensions. +Replicate padding is implemented for padding the last 3 dimensions of a 5D input +tensor, the last 2 dimensions of a 4D input tensor, or the last dimension of a +3D input tensor. Reflect padding is only implemented for padding the last 2 +dimensions of a 4D input tensor, or the last dimension of a 3D input tensor.

    + +
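The pad-ordering convention above can be sketched as follows (illustrative, assuming the torch package is installed):

```r
library(torch)

x <- torch_ones(1, 1, 2, 2)

# pad only the last dimension: (left, right)
nnf_pad(x, pad = c(1, 2))$shape        # 1 x 1 x 2 x 5

# pad the last two dimensions: (left, right, top, bottom)
nnf_pad(x, pad = c(1, 1, 2, 2))$shape  # 1 x 1 x 6 x 4
```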
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_pairwise_distance.html b/static/docs/dev/reference/nnf_pairwise_distance.html new file mode 100644 index 0000000000000000000000000000000000000000..0cfb0d4f107d6b3ade687eb4810a4729fae59c96 --- /dev/null +++ b/static/docs/dev/reference/nnf_pairwise_distance.html @@ -0,0 +1,254 @@ + + + + + + + + +Pairwise_distance — nnf_pairwise_distance • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes the batchwise pairwise distance between vectors using the p-norm.

    +
    + +
    nnf_pairwise_distance(x1, x2, p = 2, eps = 1e-06, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) First input.

    x2

    (Tensor) Second input (of size matching x1).

    p

    the norm degree. Default: 2

    eps

(float, optional) Small value to avoid division by zero. +Default: 1e-6

    keepdim

Determines whether or not to keep the vector dimension. Default: FALSE

    + + +
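A quick sketch (assuming the torch package is installed): the distance is computed row by row between matching vectors of x1 and x2.

```r
library(torch)

x1 <- torch_tensor(rbind(c(0, 0), c(1, 1)))
x2 <- torch_tensor(rbind(c(3, 4), c(1, 1)))

# Euclidean (p = 2) distance per row: 5 for the first pair,
# approximately 0 for the identical second pair (eps keeps it nonzero)
nnf_pairwise_distance(x1, x2, p = 2)
```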
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_pdist.html b/static/docs/dev/reference/nnf_pdist.html new file mode 100644 index 0000000000000000000000000000000000000000..b1c057bbfc8d771420de8dc598a73ca712f1248a --- /dev/null +++ b/static/docs/dev/reference/nnf_pdist.html @@ -0,0 +1,252 @@ + + + + + + + + +Pdist — nnf_pdist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes the p-norm distance between every pair of row vectors in the input. +This is identical to the upper triangular portion, excluding the diagonal, of +torch_norm(input[:, None] - input, dim=2, p=p). This function will be faster +if the rows are contiguous.

    +
    + +
    nnf_pdist(input, p = 2)
    + +

    Arguments

    + + + + + + + + + + +
    input

    input tensor of shape \(N \times M\).

    p

    p value for the p-norm distance to calculate between each vector pair +\(\in [0, \infty]\).

    + +

    Details

    + +

    If input has shape \(N \times M\) then the output will have shape +\(\frac{1}{2} N (N - 1)\).
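For instance (a hedged sketch assuming torch is installed; the shapes are illustrative), 4 row vectors yield choose(4, 2) = 6 pairwise distances:

```r
library(torch)

x <- torch_randn(4, 3)   # 4 row vectors of length 3
d <- nnf_pdist(x, p = 2)
d  # length 6 = 4 * (4 - 1) / 2: the upper triangle, excluding the diagonal
```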

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_pixel_shuffle.html b/static/docs/dev/reference/nnf_pixel_shuffle.html new file mode 100644 index 0000000000000000000000000000000000000000..9a2c89f76a75488c896e19b6eac3da5d31ff3c76 --- /dev/null +++ b/static/docs/dev/reference/nnf_pixel_shuffle.html @@ -0,0 +1,243 @@ + + + + + + + + +Pixel_shuffle — nnf_pixel_shuffle • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a +tensor of shape \((*, C, H \times r, W \times r)\).

    +
    + +
    nnf_pixel_shuffle(input, upscale_factor)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (Tensor) the input tensor

    upscale_factor

    (int) factor to increase spatial resolution by

    + + +
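As a quick illustration (assuming torch is installed; the shapes are made up for the example), an input of shape (1, 4, 2, 2) with upscale_factor = 2 is rearranged to (1, 1, 4, 4): the r^2 = 4 channel groups are traded for a 2x higher spatial resolution.

```r
library(torch)

x <- torch_randn(1, 4, 2, 2)                  # (*, C * r^2, H, W) with C = 1, r = 2
y <- nnf_pixel_shuffle(x, upscale_factor = 2)
y$shape                                       # (1, 1, 4, 4)
```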
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_poisson_nll_loss.html b/static/docs/dev/reference/nnf_poisson_nll_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..52fd8d421809fb9bb3d75208245c7e9b9372e0f2 --- /dev/null +++ b/static/docs/dev/reference/nnf_poisson_nll_loss.html @@ -0,0 +1,271 @@ + + + + + + + + +Poisson_nll_loss — nnf_poisson_nll_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Poisson negative log likelihood loss.

    +
    + +
    nnf_poisson_nll_loss(
    +  input,
    +  target,
    +  log_input = TRUE,
    +  full = FALSE,
    +  eps = 1e-08,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    tensor (N,*) where * means, any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    log_input

    if TRUE the loss is computed as \(\exp(\mbox{input}) - \mbox{target} * \mbox{input}\), +if FALSE then loss is \(\mbox{input} - \mbox{target} * \log(\mbox{input}+\mbox{eps})\). +Default: TRUE.

    full

    whether to compute full loss, i.e. to add the Stirling approximation +term. Default: FALSE.

    eps

    (float, optional) Small value to avoid evaluation of \(\log(0)\) when +log_input=FALSE. Default: 1e-8

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
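A short sketch (assuming torch is installed; values are illustrative): with the default log_input = TRUE the first argument is interpreted as log-rates, so the loss is exp(input) - target * input, averaged under the default reduction.

```r
library(torch)

input  <- torch_tensor(c(0.5, 1.0, 0.2))  # log-rates (log_input = TRUE)
target <- torch_tensor(c(1, 2, 0))        # observed counts
loss <- nnf_poisson_nll_loss(input, target)
loss  # scalar: mean of exp(input) - target * input over the 3 elements
```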
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_prelu.html b/static/docs/dev/reference/nnf_prelu.html new file mode 100644 index 0000000000000000000000000000000000000000..dd367e9f46bd80bc9983c68cdee890406b3d3af0 --- /dev/null +++ b/static/docs/dev/reference/nnf_prelu.html @@ -0,0 +1,246 @@ + + + + + + + + +Prelu — nnf_prelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise the function +\(PReLU(x) = max(0,x) + weight * min(0,x)\) +where weight is a learnable parameter.

    +
    + +
    nnf_prelu(input, weight)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    weight

    (Tensor) the learnable weights

    + + +
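The formula above is easy to check by hand. A minimal sketch (assuming torch is installed; the slope 0.25 is an illustrative value, not a default):

```r
library(torch)

x <- torch_tensor(c(-2, -0.5, 1, 3))
w <- torch_tensor(0.25)  # a single shared slope for the negative part
nnf_prelu(x, w)          # -0.5, -0.125, 1, 3: negative entries scaled by 0.25
```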
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_relu.html b/static/docs/dev/reference/nnf_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..ee200337eac8ef12d543dd2f9992592db522022b --- /dev/null +++ b/static/docs/dev/reference/nnf_relu.html @@ -0,0 +1,244 @@ + + + + + + + + +Relu — nnf_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the rectified linear unit function element-wise.

    +
    + +
    nnf_relu(input, inplace = FALSE)
    +
    +nnf_relu_(input)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
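A minimal sketch (assuming torch is installed) contrasting the out-of-place and in-place variants:

```r
library(torch)

x <- torch_tensor(c(-1, 0, 2.5))
y <- nnf_relu(x)   # new tensor: 0, 0, 2.5
nnf_relu_(x)       # in-place variant: x itself now holds 0, 0, 2.5
torch_equal(x, y)  # TRUE
```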
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_relu6.html b/static/docs/dev/reference/nnf_relu6.html new file mode 100644 index 0000000000000000000000000000000000000000..68f9e6b29202ca091a00d025a266a4fc34cdcc2b --- /dev/null +++ b/static/docs/dev/reference/nnf_relu6.html @@ -0,0 +1,242 @@ + + + + + + + + +Relu6 — nnf_relu6 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function \(ReLU6(x) = min(max(0,x), 6)\).

    +
    + +
    nnf_relu6(input, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_rrelu.html b/static/docs/dev/reference/nnf_rrelu.html new file mode 100644 index 0000000000000000000000000000000000000000..d11fdeff75c81990798759fdde52191c1b3dba9f --- /dev/null +++ b/static/docs/dev/reference/nnf_rrelu.html @@ -0,0 +1,256 @@ + + + + + + + + +Rrelu — nnf_rrelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randomized leaky ReLU.

    +
    + +
    nnf_rrelu(input, lower = 1/8, upper = 1/3, training = FALSE, inplace = FALSE)
    +
    +nnf_rrelu_(input, lower = 1/8, upper = 1/3, training = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    lower

    lower bound of the uniform distribution. Default: 1/8

    upper

    upper bound of the uniform distribution. Default: 1/3

    training

    bool, whether it's a training pass. Default: FALSE

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_selu.html b/static/docs/dev/reference/nnf_selu.html new file mode 100644 index 0000000000000000000000000000000000000000..26e7e0b4ce5fb023a9cf4061357589d9d0bb5961 --- /dev/null +++ b/static/docs/dev/reference/nnf_selu.html @@ -0,0 +1,259 @@ + + + + + + + + +Selu — nnf_selu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, +$$SELU(x) = scale * (max(0,x) + min(0, \alpha * (exp(x) - 1)))$$, +with \(\alpha=1.6732632423543772848170429916717\) and +\(scale=1.0507009873554804934193349852946\).

    +
    + +
    nnf_selu(input, inplace = FALSE)
    +
    +nnf_selu_(input)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_randn(2, 2) +y <- nnf_selu(x) +nnf_selu_(x) +torch_equal(x, y) + +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_sigmoid.html b/static/docs/dev/reference/nnf_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..97dff7ee39cfaaf9cb7cfc2b6e3ce70f7752545c --- /dev/null +++ b/static/docs/dev/reference/nnf_sigmoid.html @@ -0,0 +1,238 @@ + + + + + + + + +Sigmoid — nnf_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise \(Sigmoid(x_i) = \frac{1}{1 + exp(-x_i)}\)

    +
    + +
    nnf_sigmoid(input)
    + +

    Arguments

    + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_smooth_l1_loss.html b/static/docs/dev/reference/nnf_smooth_l1_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..d8d531cda87f52d374e3eea6d8fa31b7b6dbb078 --- /dev/null +++ b/static/docs/dev/reference/nnf_smooth_l1_loss.html @@ -0,0 +1,250 @@ + + + + + + + + +Smooth_l1_loss — nnf_smooth_l1_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Function that uses a squared term if the absolute +element-wise error falls below 1 and an L1 term otherwise.

    +
    + +
    nnf_smooth_l1_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    tensor (N,*) where * means, any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
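With reduction = "none" the two regimes are visible directly (a hedged sketch assuming torch is installed; the inputs are illustrative): an absolute error below 1 uses 0.5 * error^2, otherwise |error| - 0.5.

```r
library(torch)

input  <- torch_tensor(c(0.5, 3))
target <- torch_tensor(c(0, 0))
nnf_smooth_l1_loss(input, target, reduction = "none")
# 0.125 (= 0.5 * 0.5^2, squared regime) and 2.5 (= 3 - 0.5, L1 regime)
```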
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_soft_margin_loss.html b/static/docs/dev/reference/nnf_soft_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..e476566afd96076babca062962eaffe95bcc3714 --- /dev/null +++ b/static/docs/dev/reference/nnf_soft_margin_loss.html @@ -0,0 +1,250 @@ + + + + + + + + +Soft_margin_loss — nnf_soft_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a two-class classification logistic loss +between input tensor x and target tensor y (containing 1 or -1).

    +
    + +
    nnf_soft_margin_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    tensor (N,*) where * means, any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_softmax.html b/static/docs/dev/reference/nnf_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..3118c0ddf604be5ab7be3cb3659f957bb8600fc3 --- /dev/null +++ b/static/docs/dev/reference/nnf_softmax.html @@ -0,0 +1,252 @@ + + + + + + + + +Softmax — nnf_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a softmax function.

    +
    + +
    nnf_softmax(input, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (Tensor) input

    dim

    (int) A dimension along which softmax will be computed.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. +Default: NULL.

    + +

    Details

    + +

    Softmax is defined as:

    +

    $$Softmax(x_{i}) = exp(x_i)/\sum_j exp(x_j)$$

    +

    It is applied to all slices along dim, and will re-scale them so that the elements +lie in the range [0, 1] and sum to 1.
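The re-scaling can be checked numerically (a sketch assuming torch is installed; note that, as elsewhere in the R package, dim is 1-based):

```r
library(torch)

x <- torch_randn(2, 3)
p <- nnf_softmax(x, dim = 2)  # softmax over the last dimension of each row
p$sum(dim = 2)                # each of the 2 rows sums to 1
```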

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_softmin.html b/static/docs/dev/reference/nnf_softmin.html new file mode 100644 index 0000000000000000000000000000000000000000..6148d299a365c5c8b97c3d307de4cef00216930f --- /dev/null +++ b/static/docs/dev/reference/nnf_softmin.html @@ -0,0 +1,252 @@ + + + + + + + + +Softmin — nnf_softmin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a softmin function.

    +
    + +
    nnf_softmin(input, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (Tensor) input

    dim

    (int) A dimension along which softmin will be computed +(so every slice along dim will sum to 1).

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. +This is useful for preventing data type overflows. Default: NULL.

    + +

    Details

    + +

    Note that

    +

    $$Softmin(x) = Softmax(-x)$$.

    +

    See nnf_softmax definition for mathematical formula.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_softplus.html b/static/docs/dev/reference/nnf_softplus.html new file mode 100644 index 0000000000000000000000000000000000000000..e58234ac4c77c7571f4805adf2fa5dc108c0cca6 --- /dev/null +++ b/static/docs/dev/reference/nnf_softplus.html @@ -0,0 +1,250 @@ + + + + + + + + +Softplus — nnf_softplus • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, the function \(Softplus(x) = 1/\beta * log(1 + exp(\beta * x))\).

    +
    + +
    nnf_softplus(input, beta = 1, threshold = 20)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    beta

    the beta value for the Softplus formulation. Default: 1

    threshold

    values above this revert to a linear function. Default: 20

    + +

    Details

    + +

    For numerical stability the implementation reverts to the linear function +when \(input * \beta > threshold\).
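Both regimes can be seen in one call (a sketch assuming torch is installed; the input values are illustrative):

```r
library(torch)

x <- torch_tensor(c(-2, 0, 25))
nnf_softplus(x)
# log(1 + exp(-2)) ~ 0.127 and log(2) ~ 0.693; the entry 25 exceeds the
# default threshold of 20, so the linear function returns it unchanged
```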

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_softshrink.html b/static/docs/dev/reference/nnf_softshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..962321d58bb4f591cd72701f565b0ce6eff82f43 --- /dev/null +++ b/static/docs/dev/reference/nnf_softshrink.html @@ -0,0 +1,243 @@ + + + + + + + + +Softshrink — nnf_softshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the soft shrinkage function elementwise.

    +
    + +
    nnf_softshrink(input, lambd = 0.5)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    lambd

    the lambda (must be no less than zero) value for the Softshrink +formulation. Default: 0.5

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_softsign.html b/static/docs/dev/reference/nnf_softsign.html new file mode 100644 index 0000000000000000000000000000000000000000..80c5bd6bcbb98e7471c83ebb61030d9b76fdd208 --- /dev/null +++ b/static/docs/dev/reference/nnf_softsign.html @@ -0,0 +1,238 @@ + + + + + + + + +Softsign — nnf_softsign • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, the function \(SoftSign(x) = x/(1 + |x|)\)

    +
    + +
    nnf_softsign(input)
    + +

    Arguments

    + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_tanhshrink.html b/static/docs/dev/reference/nnf_tanhshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..f28344eb83b489503a1d97cc8de96695a2c5e75e --- /dev/null +++ b/static/docs/dev/reference/nnf_tanhshrink.html @@ -0,0 +1,238 @@ + + + + + + + + +Tanhshrink — nnf_tanhshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, \(Tanhshrink(x) = x - Tanh(x)\)

    +
    + +
    nnf_tanhshrink(input)
    + +

    Arguments

    + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_threshold.html b/static/docs/dev/reference/nnf_threshold.html new file mode 100644 index 0000000000000000000000000000000000000000..c630d71043faa7aacfc41d98bdfbbbc5cd54f878 --- /dev/null +++ b/static/docs/dev/reference/nnf_threshold.html @@ -0,0 +1,252 @@ + + + + + + + + +Threshold — nnf_threshold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Thresholds each element of the input Tensor.

    +
    + +
    nnf_threshold(input, threshold, value, inplace = FALSE)
    +
    +nnf_threshold_(input, threshold, value)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional +dimensions

    threshold

    The value to threshold at

    value

    The value to replace with

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
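A minimal sketch (assuming torch is installed; the threshold and replacement values are illustrative): elements not strictly above the threshold are replaced by value.

```r
library(torch)

x <- torch_tensor(c(0.1, 0.5, 0.9))
nnf_threshold(x, threshold = 0.5, value = 0)
# 0, 0, 0.9: only entries strictly greater than 0.5 pass through
```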
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_triplet_margin_loss.html b/static/docs/dev/reference/nnf_triplet_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..7d095f737e175677fbd1a0a46534d48a99515fff --- /dev/null +++ b/static/docs/dev/reference/nnf_triplet_margin_loss.html @@ -0,0 +1,287 @@ + + + + + + + + +Triplet_margin_loss — nnf_triplet_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the triplet loss given input tensors x1, +x2, x3 and a margin with a value greater than 0. This is used for measuring +a relative similarity between samples. A triplet is composed of a, p and n (i.e., +anchor, positive examples and negative examples respectively). The shapes of all +input tensors should be (N, D).

    +
    + +
    nnf_triplet_margin_loss(
    +  anchor,
    +  positive,
    +  negative,
    +  margin = 1,
    +  p = 2,
    +  eps = 1e-06,
    +  swap = FALSE,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    anchor

    the anchor input tensor

    positive

    the positive input tensor

    negative

    the negative input tensor

    margin

    Default: 1.

    p

    The norm degree for pairwise distance. Default: 2.

    eps

    (float, optional) Small value to avoid division by zero. Default: 1e-6.

    swap

    The distance swap is described in detail in the paper Learning shallow +convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. +Default: FALSE.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
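A minimal sketch (assuming torch is installed; the (N, D) = (8, 16) embedding shapes are illustrative):

```r
library(torch)

anchor   <- torch_randn(8, 16)  # (N, D) embeddings
positive <- torch_randn(8, 16)
negative <- torch_randn(8, 16)
loss <- nnf_triplet_margin_loss(anchor, positive, negative, margin = 1)
loss  # a non-negative scalar under the default "mean" reduction
```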
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/nnf_unfold.html b/static/docs/dev/reference/nnf_unfold.html new file mode 100644 index 0000000000000000000000000000000000000000..a4b819a95e394bc95d5059da67e5fa3f0771d273 --- /dev/null +++ b/static/docs/dev/reference/nnf_unfold.html @@ -0,0 +1,269 @@ + + + + + + + + +Unfold — nnf_unfold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Extracts sliding local blocks from a batched input tensor.

    +
    + +
    nnf_unfold(input, kernel_size, dilation = 1, padding = 0, stride = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    kernel_size

    the size of the sliding blocks

    dilation

    a parameter that controls the stride of elements within the +neighborhood. Default: 1

    padding

    implicit zero padding to be added on both sides of input. +Default: 0

    stride

    the stride of the sliding blocks in the input spatial dimensions. +Default: 1

    + +

    Warning

    + + + + +

    Currently, only 4-D input tensors (batched image-like tensors) are +supported.

    + + +

    More than one element of the unfolded tensor may refer to a single +memory location. As a result, in-place operations (especially ones that +are vectorized) may result in incorrect behavior. If you need to write +to the tensor, please clone it first.
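The output layout is easiest to see on a tiny 4-D input (a sketch assuming torch is installed; shapes are illustrative): each column of the result holds one flattened kernel-sized block.

```r
library(torch)

x <- torch_randn(1, 2, 4, 4)             # only 4-D (batched image-like) input is supported
blocks <- nnf_unfold(x, kernel_size = c(2, 2))
blocks$shape                             # (1, 8, 9): 2 channels * 2 * 2 kernel values
                                         # per block, and 3 * 3 = 9 block positions
```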

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/optim_adam.html b/static/docs/dev/reference/optim_adam.html new file mode 100644 index 0000000000000000000000000000000000000000..02fa265f1994d7a1e5c4b41e65d150912a633910 --- /dev/null +++ b/static/docs/dev/reference/optim_adam.html @@ -0,0 +1,280 @@ + + + + + + + + +Implements Adam algorithm. — optim_adam • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    It has been proposed in Adam: A Method for Stochastic Optimization.

    +
    + +
    optim_adam(
    +  params,
    +  lr = 0.001,
    +  betas = c(0.9, 0.999),
    +  eps = 1e-08,
    +  weight_decay = 0,
    +  amsgrad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    params

    (iterable): iterable of parameters to optimize or dicts defining +parameter groups

    lr

    (float, optional): learning rate (default: 1e-3)

    betas

    (Tuple[float, float], optional): coefficients used for computing +running averages of gradient and its square (default: (0.9, 0.999))

    eps

    (float, optional): term added to the denominator to improve +numerical stability (default: 1e-8)

    weight_decay

    (float, optional): weight decay (L2 penalty) (default: 0)

    amsgrad

    (boolean, optional): whether to use the AMSGrad variant of this +algorithm from the paper On the Convergence of Adam and Beyond +(default: FALSE)

    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +optimizer <- optim_adam(model$parameters(), lr=0.1) +optimizer$zero_grad() +loss_fn(model(input), target)$backward() +optimizer$step() +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/optim_required.html b/static/docs/dev/reference/optim_required.html new file mode 100644 index 0000000000000000000000000000000000000000..3904eea399e9efbe8c0d61ec87784837532d1369 --- /dev/null +++ b/static/docs/dev/reference/optim_required.html @@ -0,0 +1,229 @@ + + + + + + + + +Dummy value indicating a required value. — optim_required • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A sentinel value used as the default for optimizer arguments that the user is required to supply (for example, lr in optim_sgd()).

    +
    + +
    optim_required()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/optim_sgd.html b/static/docs/dev/reference/optim_sgd.html new file mode 100644 index 0000000000000000000000000000000000000000..68e772dbaa7d262a52882c56648432ef1bf79cdc --- /dev/null +++ b/static/docs/dev/reference/optim_sgd.html @@ -0,0 +1,305 @@ + + + + + + + + +SGD optimizer — optim_sgd • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Implements stochastic gradient descent (optionally with momentum). +Nesterov momentum is based on the formula from +On the importance of initialization and momentum in deep learning.

    +
    + +
    optim_sgd(
    +  params,
    +  lr = optim_required(),
    +  momentum = 0,
    +  dampening = 0,
    +  weight_decay = 0,
    +  nesterov = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    params

    (iterable): iterable of parameters to optimize or dicts defining +parameter groups

    lr

    (float): learning rate

    momentum

    (float, optional): momentum factor (default: 0)

    dampening

    (float, optional): dampening for momentum (default: 0)

    weight_decay

    (float, optional): weight decay (L2 penalty) (default: 0)

    nesterov

    (bool, optional): enables Nesterov momentum (default: FALSE)

    + +

    Note

    + + + + +

    The implementation of SGD with Momentum-Nesterov subtly differs from +Sutskever et al. and implementations in some other frameworks.

    +

    Considering the specific case of Momentum, the update can be written as +$$ + \begin{array}{ll} +v_{t+1} & = \mu * v_{t} + g_{t+1}, \\ +p_{t+1} & = p_{t} - \mbox{lr} * v_{t+1}, +\end{array} +$$

    +

    where \(p\), \(g\), \(v\) and \(\mu\) denote the +parameters, gradient, velocity, and momentum respectively.

    +

    This is in contrast to Sutskever et al. and +other frameworks which employ an update of the form

    +

    $$ + \begin{array}{ll} +v_{t+1} & = \mu * v_{t} + \mbox{lr} * g_{t+1}, \\ +p_{t+1} & = p_{t} - v_{t+1}. +\end{array} +$$ +The Nesterov version is analogously modified.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +optimizer <- optim_sgd(model$parameters(), lr=0.1, momentum=0.9) +optimizer$zero_grad() +loss_fn(model(input), target)$backward() +optimizer$step() +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/pipe.html b/static/docs/dev/reference/pipe.html new file mode 100644 index 0000000000000000000000000000000000000000..32c04a00725e4a9961eb2fc6e4dce3af7ace8c74 --- /dev/null +++ b/static/docs/dev/reference/pipe.html @@ -0,0 +1,229 @@ + + + + + + + + +Pipe operator — %>% • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    See magrittr::%>% for details.

    +
    + +
    lhs %>% rhs
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/tensor_dataset.html b/static/docs/dev/reference/tensor_dataset.html new file mode 100644 index 0000000000000000000000000000000000000000..91d9439f23a987fffed280a853e7d7f39bfbc63a --- /dev/null +++ b/static/docs/dev/reference/tensor_dataset.html @@ -0,0 +1,237 @@ + + + + + + + + +Dataset wrapping tensors. — tensor_dataset • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Each sample will be retrieved by indexing tensors along the first dimension.

    +
    + +
    tensor_dataset(...)
    + +

    Arguments

    + + + + + + +
    ...

    tensors that have the same size in the first dimension.

    + + +
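A minimal sketch (assuming torch is installed; the indexing API shown follows the package's dataset convention, where `[` retrieves one sample as a list):

```r
library(torch)

x <- torch_randn(10, 3)
y <- torch_randn(10)
ds <- tensor_dataset(x, y)  # pairs rows of x with elements of y
length(ds)                  # 10 samples
ds[1]                       # a list holding the first row of x and the first element of y
```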
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_abs.html b/static/docs/dev/reference/torch_abs.html new file mode 100644 index 0000000000000000000000000000000000000000..9d049210288de8ddea7fc413210468709e8dadff --- /dev/null +++ b/static/docs/dev/reference/torch_abs.html @@ -0,0 +1,256 @@ + + + + + + + + +Abs — torch_abs • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Abs

    +
    + +
    torch_abs(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    abs(input) -> Tensor

    + + + + +

    Computes the element-wise absolute value of the given input tensor.

    +

    $$ + \mbox{out}_{i} = |\mbox{input}_{i}| +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_abs(torch_tensor(c(-1, -2, 3))) +} +
    #> torch_tensor +#> 1 +#> 2 +#> 3 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_acos.html b/static/docs/dev/reference/torch_acos.html new file mode 100644 index 0000000000000000000000000000000000000000..cdff7bc3eb3c020db4745e15edf52ee766bbc606 --- /dev/null +++ b/static/docs/dev/reference/torch_acos.html @@ -0,0 +1,259 @@ + + + + + + + + +Acos — torch_acos • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Acos

    +
    + +
    torch_acos(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    acos(input) -> Tensor

    + + + + +

    Returns a new tensor with the arccosine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \cos^{-1}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_acos(a) +} +
    #> torch_tensor +#> 0.8976 +#> 1.4614 +#> 1.3778 +#> 2.7205 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_adaptive_avg_pool1d.html b/static/docs/dev/reference/torch_adaptive_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..83814fec4cf762d6f46b1a1ed4d3d1a9b6f8ad7b --- /dev/null +++ b/static/docs/dev/reference/torch_adaptive_avg_pool1d.html @@ -0,0 +1,249 @@ + + + + + + + + +Adaptive_avg_pool1d — torch_adaptive_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Adaptive_avg_pool1d

    +
    + +
    torch_adaptive_avg_pool1d(self, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    output_size

    the target output size (single integer)

    + +

    adaptive_avg_pool1d(input, output_size) -> Tensor

    + + + + +

    Applies a 1D adaptive average pooling over an input signal composed of +several input planes.

    +

    See nn_adaptive_avg_pool1d() for details and output shape.
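This entry has no example; a short sketch (shapes chosen for illustration), assuming the torch R package is installed:

```r
if (torch_is_installed()) {
  library(torch)
  # input of shape (minibatch, in_channels, iW) = (1, 1, 8)
  x <- torch_randn(1, 1, 8)
  # adaptively pool to a target width of 4: each output element
  # averages a window of the input chosen to hit that size exactly
  torch_adaptive_avg_pool1d(x, 4)
}
```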

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_add.html b/static/docs/dev/reference/torch_add.html new file mode 100644 index 0000000000000000000000000000000000000000..490331432d9c2c41714ea1f5d2ec5ea7c4e99aec --- /dev/null +++ b/static/docs/dev/reference/torch_add.html @@ -0,0 +1,292 @@ + + + + + + + + +Add — torch_add • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Add

    +
    + +
    torch_add(self, other, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor/Number) the second input tensor/number.

    alpha

    (Number) the scalar multiplier for other

    + +

    add(input, other, out=NULL)

    + + + + +

    Adds the scalar other to each element of the input tensor +and returns a new resulting tensor.

    +

    $$ + \mbox{out} = \mbox{input} + \mbox{other} +$$ +If input is of type FloatTensor or DoubleTensor, other must be +a real number, otherwise it should be an integer.

    +

    add(input, other, *, alpha=1, out=NULL)

    + + + + +

    Each element of the tensor other is multiplied by the scalar +alpha and added to each element of the tensor input. +The resulting tensor is returned.

    +

    The shapes of input and other must be +broadcastable .

    +

    $$ + \mbox{out} = \mbox{input} + \mbox{alpha} \times \mbox{other} +$$ +If other is of type FloatTensor or DoubleTensor, alpha must be +a real number, otherwise it should be an integer.
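The examples below do not exercise alpha; a small sketch of the scaled form (values chosen for illustration), assuming the torch R package is installed:

```r
if (torch_is_installed()) {
  library(torch)
  a <- torch_tensor(c(1, 2, 3))
  b <- torch_tensor(c(10, 20, 30))
  # out = a + alpha * b, so each element of b is scaled by 0.1
  # before being added: expected values 2, 4, 6
  torch_add(a, b, alpha = 0.1)
}
```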

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_add(a, 20) + + +a = torch_randn(c(4)) +a +b = torch_randn(c(4, 1)) +b +torch_add(a, b) +} +
    #> torch_tensor +#> -0.3169 0.3608 -1.4928 0.4897 +#> 0.5190 1.1967 -0.6570 1.3255 +#> -1.0508 -0.3731 -2.2268 -0.2443 +#> -0.9534 -0.2757 -2.1294 -0.1469 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_addbmm.html b/static/docs/dev/reference/torch_addbmm.html new file mode 100644 index 0000000000000000000000000000000000000000..e6ac58f2ad04da264fbe2a96d0fb91b9d666a8cb --- /dev/null +++ b/static/docs/dev/reference/torch_addbmm.html @@ -0,0 +1,287 @@ + + + + + + + + +Addbmm — torch_addbmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addbmm

    +
    + +
    torch_addbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) matrix to be added

    batch1

    (Tensor) the first batch of matrices to be multiplied

    batch2

    (Tensor) the second batch of matrices to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for batch1 @ batch2 (\(\alpha\))

    + +

    addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a batch matrix-matrix product of matrices stored +in batch1 and batch2, +with a reduced add step (all matrix multiplications get accumulated +along the first dimension). +input is added to the final result.

    +

    batch1 and batch2 must be 3-D tensors each containing the +same number of matrices.

    +

    If batch1 is a \((b \times n \times m)\) tensor, batch2 is a +\((b \times m \times p)\) tensor, input must be +broadcastable with a \((n \times p)\) tensor +and out will be a \((n \times p)\) tensor.

    +

    $$ + out = \beta\ \mbox{input} + \alpha\ (\sum_{i=0}^{b-1} \mbox{batch1}_i \mathbin{@} \mbox{batch2}_i) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha +must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(3, 5)) +batch1 = torch_randn(c(10, 3, 4)) +batch2 = torch_randn(c(10, 4, 5)) +torch_addbmm(M, batch1, batch2) +} +
    #> torch_tensor +#> -0.0019 -2.7238 14.9011 0.2573 1.0161 +#> -4.0793 -7.3787 -3.1121 -3.8778 7.2987 +#> 8.5516 -8.2610 -2.8574 1.6860 6.3424 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_addcdiv.html b/static/docs/dev/reference/torch_addcdiv.html new file mode 100644 index 0000000000000000000000000000000000000000..21a5526949dace7e1e9cb74bd5c51c3239eb42c2 --- /dev/null +++ b/static/docs/dev/reference/torch_addcdiv.html @@ -0,0 +1,290 @@ + + + + + + + + +Addcdiv — torch_addcdiv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addcdiv

    +
    + +
    torch_addcdiv(self, tensor1, tensor2, value = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to be added

    tensor1

    (Tensor) the numerator tensor

    tensor2

    (Tensor) the denominator tensor

    value

    (Number, optional) multiplier for \(\mbox{tensor1} / \mbox{tensor2}\)

    + +

    addcdiv(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

    + + + + +

    Performs the element-wise division of tensor1 by tensor2, +multiplies the result by the scalar value, and adds it to input.

    +

    Warning

    + + + +

    Integer division with addcdiv is deprecated, and in a future release +addcdiv will perform a true division of tensor1 and tensor2. +The current addcdiv behavior can be replicated using torch_floor_divide() +for integral inputs +(input + value * tensor1 // tensor2) +and torch_div() for float inputs +(input + value * tensor1 / tensor2). +The new addcdiv behavior can be implemented with torch_true_divide() +(input + value * torch_true_divide(tensor1, tensor2)).

    +

    $$ + \mbox{out}_i = \mbox{input}_i + \mbox{value} \times \frac{\mbox{tensor1}_i}{\mbox{tensor2}_i} +$$

    +

    The shapes of input, tensor1, and tensor2 must be +broadcastable .

    +

    For inputs of type FloatTensor or DoubleTensor, value must be +a real number, otherwise an integer.
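For float inputs, the definition above can be checked directly against the arithmetic it abbreviates; a sketch (names chosen for illustration), assuming the torch R package is installed:

```r
if (torch_is_installed()) {
  library(torch)
  input <- torch_randn(3)
  t1 <- torch_randn(3)
  t2 <- torch_randn(3)
  value <- 0.5
  # addcdiv should agree with the explicit expression for float inputs
  out1 <- torch_addcdiv(input, t1, t2, value)
  out2 <- input + value * t1 / t2
  torch_allclose(out1, out2)
}
```

The final comparison should return TRUE up to floating-point tolerance.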

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_randn(c(1, 3)) +t1 = torch_randn(c(3, 1)) +t2 = torch_randn(c(1, 3)) +torch_addcdiv(t, t1, t2, 0.1) +} +
    #> torch_tensor +#> -0.5168 -1.0842 -0.2506 +#> -0.4703 -1.5705 -0.2148 +#> -0.5364 -0.8793 -0.2657 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_addcmul.html b/static/docs/dev/reference/torch_addcmul.html new file mode 100644 index 0000000000000000000000000000000000000000..c974313125d9a567ebaf569854c55e7a9caaa822 --- /dev/null +++ b/static/docs/dev/reference/torch_addcmul.html @@ -0,0 +1,277 @@ + + + + + + + + +Addcmul — torch_addcmul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addcmul

    +
    + +
    torch_addcmul(self, tensor1, tensor2, value = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to be added

    tensor1

    (Tensor) the tensor to be multiplied

    tensor2

    (Tensor) the tensor to be multiplied

    value

    (Number, optional) multiplier for \(tensor1 .* tensor2\)

    + +

    addcmul(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

    + + + + +

    Performs the element-wise multiplication of tensor1 +by tensor2, multiplies the result by the scalar value, +and adds it to input.

    +

    $$ + \mbox{out}_i = \mbox{input}_i + \mbox{value} \times \mbox{tensor1}_i \times \mbox{tensor2}_i +$$ +The shapes of input, tensor1, and tensor2 must be +broadcastable .

    +

    For inputs of type FloatTensor or DoubleTensor, value must be +a real number, otherwise an integer.

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_randn(c(1, 3)) +t1 = torch_randn(c(3, 1)) +t2 = torch_randn(c(1, 3)) +torch_addcmul(t, t1, t2, 0.1) +} +
    #> torch_tensor +#> 0.0254 0.7760 0.0074 +#> -0.0154 0.9929 0.0002 +#> 0.0596 0.5947 0.0134 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_addmm.html b/static/docs/dev/reference/torch_addmm.html new file mode 100644 index 0000000000000000000000000000000000000000..a60407a1843c0abd933aa86a0eda033b6aa2f120 --- /dev/null +++ b/static/docs/dev/reference/torch_addmm.html @@ -0,0 +1,283 @@ + + + + + + + + +Addmm — torch_addmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addmm

    +
    + +
    torch_addmm(self, mat1, mat2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) matrix to be added

    mat1

    (Tensor) the first matrix to be multiplied

    mat2

    (Tensor) the second matrix to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(mat1 @ mat2\) (\(\alpha\))

    + +

    addmm(input, mat1, mat2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a matrix multiplication of the matrices mat1 and mat2. +The matrix input is added to the final result.

    +

    If mat1 is a \((n \times m)\) tensor, mat2 is a +\((m \times p)\) tensor, then input must be +broadcastable with a \((n \times p)\) tensor +and out will be a \((n \times p)\) tensor.

    +

    alpha and beta are scaling factors on the matrix-matrix product between +mat1 and mat2 and the added matrix input respectively.

    +

    $$ + \mbox{out} = \beta\ \mbox{input} + \alpha\ (\mbox{mat1} \mathbin{@} \mbox{mat2}) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(2, 3)) +mat1 = torch_randn(c(2, 3)) +mat2 = torch_randn(c(3, 3)) +torch_addmm(M, mat1, mat2) +} +
    #> torch_tensor +#> -1.3813 -0.7177 -0.6912 +#> 1.3986 -0.4737 -0.7401 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_addmv.html b/static/docs/dev/reference/torch_addmv.html new file mode 100644 index 0000000000000000000000000000000000000000..afcebb82d7196938355572ac4b62110efe6dabcc --- /dev/null +++ b/static/docs/dev/reference/torch_addmv.html @@ -0,0 +1,284 @@ + + + + + + + + +Addmv — torch_addmv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addmv

    +
    + +
    torch_addmv(self, mat, vec, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) vector to be added

    mat

    (Tensor) matrix to be multiplied

    vec

    (Tensor) vector to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(mat @ vec\) (\(\alpha\))

    + +

    addmv(input, mat, vec, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a matrix-vector product of the matrix mat and +the vector vec. +The vector input is added to the final result.

    +

    If mat is a \((n \times m)\) tensor, vec is a 1-D tensor of +size m, then input must be +broadcastable with a 1-D tensor of size n and +out will be a 1-D tensor of size n.

    +

    alpha and beta are scaling factors on matrix-vector product between +mat and vec and the added tensor input respectively.

    +

    $$ + \mbox{out} = \beta\ \mbox{input} + \alpha\ (\mbox{mat} \mathbin{@} \mbox{vec}) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(2)) +mat = torch_randn(c(2, 3)) +vec = torch_randn(c(3)) +torch_addmv(M, mat, vec) +} +
    #> torch_tensor +#> -0.6406 +#> -0.0850 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_addr.html b/static/docs/dev/reference/torch_addr.html new file mode 100644 index 0000000000000000000000000000000000000000..2dc15a9d1da5d375e792c9b57fa2014c1f16770f --- /dev/null +++ b/static/docs/dev/reference/torch_addr.html @@ -0,0 +1,286 @@ + + + + + + + + +Addr — torch_addr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addr

    +
    + +
    torch_addr(self, vec1, vec2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) matrix to be added

    vec1

    (Tensor) the first vector of the outer product

    vec2

    (Tensor) the second vector of the outer product

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(\mbox{vec1} \otimes \mbox{vec2}\) (\(\alpha\))

    + +

    addr(input, vec1, vec2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs the outer-product of vectors vec1 and vec2 +and adds it to the matrix input.

    +

    Optional values beta and alpha are scaling factors on the +outer product between vec1 and vec2 and the added matrix +input respectively.

    +

    $$ + \mbox{out} = \beta\ \mbox{input} + \alpha\ (\mbox{vec1} \otimes \mbox{vec2}) +$$ +If vec1 is a vector of size n and vec2 is a vector +of size m, then input must be +broadcastable with a matrix of size +\((n \times m)\) and out will be a matrix of size +\((n \times m)\).

    +

    For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +vec1 = torch_arange(1., 4.) +vec2 = torch_arange(1., 3.) +M = torch_zeros(c(3, 2)) +torch_addr(M, vec1, vec2) +} +
    #> torch_tensor +#> 1 2 +#> 2 4 +#> 3 6 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_allclose.html b/static/docs/dev/reference/torch_allclose.html new file mode 100644 index 0000000000000000000000000000000000000000..2f6dd57f75f22f77629193bbca832ec969db2378 --- /dev/null +++ b/static/docs/dev/reference/torch_allclose.html @@ -0,0 +1,273 @@ + + + + + + + + +Allclose — torch_allclose • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Allclose

    +
    + +
    torch_allclose(self, other, rtol = 1e-05, atol = 1e-08, equal_nan = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) first tensor to compare

    other

    (Tensor) second tensor to compare

    rtol

    (float, optional) relative tolerance. Default: 1e-05

    atol

    (float, optional) absolute tolerance. Default: 1e-08

    equal_nan

    (bool, optional) if TRUE, then two NaN s will be compared as equal. Default: FALSE

    + +

    allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool

    + + + + +

    This function checks if input and other satisfy the condition:

    +

    $$ + \vert \mbox{input} - \mbox{other} \vert \leq \mbox{atol} + \mbox{rtol} \times \vert \mbox{other} \vert +$$ +elementwise, for all elements of input and other. The behaviour of this function is analogous to +numpy.allclose.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_allclose(torch_tensor(c(10000., 1e-07)), torch_tensor(c(10000.1, 1e-08))) +torch_allclose(torch_tensor(c(10000., 1e-08)), torch_tensor(c(10000.1, 1e-09))) +torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN))) +torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN)), equal_nan=TRUE) +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_angle.html b/static/docs/dev/reference/torch_angle.html new file mode 100644 index 0000000000000000000000000000000000000000..7105df85ebb88dd162fa01d22849748b8b2fa681 --- /dev/null +++ b/static/docs/dev/reference/torch_angle.html @@ -0,0 +1,254 @@ + + + + + + + + +Angle — torch_angle • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Angle

    +
    + +
    torch_angle(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    angle(input) -> Tensor

    + + + + +

    Computes the element-wise angle (in radians) of the given input tensor.

    +

    $$ + \mbox{out}_{i} = angle(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_angle(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))*180/3.14159 +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_arange.html b/static/docs/dev/reference/torch_arange.html new file mode 100644 index 0000000000000000000000000000000000000000..933379133d3780ec9d093ddcb60fbacbd518368e --- /dev/null +++ b/static/docs/dev/reference/torch_arange.html @@ -0,0 +1,295 @@ + + + + + + + + +Arange — torch_arange • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Arange

    +
    + +
    torch_arange(
    +  start,
    +  end,
    +  step = 1,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (Number) the starting value for the set of points. Default: 0.

    end

    (Number) the ending value for the set of points

    step

    (Number) the gap between each pair of adjacent points. Default: 1.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see torch_get_default_dtype. Otherwise, the dtype is inferred to be torch_int64.

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    arange(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a 1-D tensor of size \(\left\lceil \frac{\mbox{end} - \mbox{start}}{\mbox{step}} \right\rceil\) +with values from the interval [start, end) taken with common difference +step beginning from start.

    +

    Note that non-integer step is subject to floating point rounding errors when +comparing against end; to avoid inconsistency, we advise adding a small epsilon to end +in such cases.

    +

    $$ + \mbox{out}_{{i+1}} = \mbox{out}_{i} + \mbox{step} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_arange(start = 0, end = 5) +torch_arange(1, 4) +torch_arange(1, 2.5, 0.5) +} +
    #> torch_tensor +#> 1.0000 +#> 1.5000 +#> 2.0000 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_argmax.html b/static/docs/dev/reference/torch_argmax.html new file mode 100644 index 0000000000000000000000000000000000000000..1f91684a13fa2929ed9dcfd56f7695edbda6b0a6 --- /dev/null +++ b/static/docs/dev/reference/torch_argmax.html @@ -0,0 +1,281 @@ + + + + + + + + +Argmax — torch_argmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Argmax

    +
    + +
    torch_argmax(self, dim = NULL, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce. If NULL, the argmax of the flattened input is returned.

    keepdim

    (bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

    + +

    argmax(input) -> LongTensor

    + + + + +

    Returns the indices of the maximum value of all elements in the input tensor.

    +

    This is the second value returned by torch_max. See its +documentation for the exact semantics of this method.

    +

    argmax(input, dim, keepdim=False) -> LongTensor

    + + + + +

    Returns the indices of the maximum values of a tensor across a dimension.

    +

    This is the second value returned by torch_max. See its +documentation for the exact semantics of this method.

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +a = torch_randn(c(4, 4)) +a +torch_argmax(a) +} + + +a = torch_randn(c(4, 4)) +a +torch_argmax(a, dim=1) +} +
    #> torch_tensor +#> 2 +#> 3 +#> 3 +#> 3 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_argmin.html b/static/docs/dev/reference/torch_argmin.html new file mode 100644 index 0000000000000000000000000000000000000000..469b779f5d521adabf9e1062f0e650184d13b33c --- /dev/null +++ b/static/docs/dev/reference/torch_argmin.html @@ -0,0 +1,279 @@ + + + + + + + + +Argmin — torch_argmin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Argmin

    +
    + +
    torch_argmin(self, dim = NULL, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce. If NULL, the argmin of the flattened input is returned.

    keepdim

    (bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

    + +

    argmin(input) -> LongTensor

    + + + + +

    Returns the indices of the minimum value of all elements in the input tensor.

    +

    This is the second value returned by torch_min. See its +documentation for the exact semantics of this method.

    +

    argmin(input, dim, keepdim=False, out=NULL) -> LongTensor

    + + + + +

    Returns the indices of the minimum values of a tensor across a dimension.

    +

    This is the second value returned by torch_min. See its +documentation for the exact semantics of this method.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4, 4)) +a +torch_argmin(a) + + +a = torch_randn(c(4, 4)) +a +torch_argmin(a, dim=1) +} +
    #> torch_tensor +#> 2 +#> 3 +#> 3 +#> 3 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_argsort.html b/static/docs/dev/reference/torch_argsort.html new file mode 100644 index 0000000000000000000000000000000000000000..28d9cedccabc815b47c9fb784c21ef5350bb8f2a --- /dev/null +++ b/static/docs/dev/reference/torch_argsort.html @@ -0,0 +1,267 @@ + + + + + + + + +Argsort — torch_argsort • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Argsort

    +
    + +
    torch_argsort(self, dim = -1L, descending = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int, optional) the dimension to sort along

    descending

    (bool, optional) controls the sorting order (ascending or descending)

    + +

    argsort(input, dim=-1, descending=False) -> LongTensor

    + + + + +

    Returns the indices that sort a tensor along a given dimension in ascending +order by value.

    +

    This is the second value returned by torch_sort. See its documentation +for the exact semantics of this method.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4, 4)) +a +torch_argsort(a, dim=1) +} +
    #> torch_tensor +#> 3 1 1 2 +#> 2 2 0 3 +#> 1 3 2 1 +#> 0 0 3 0 +#> [ CPULongType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_as_strided.html b/static/docs/dev/reference/torch_as_strided.html new file mode 100644 index 0000000000000000000000000000000000000000..4f96e72a3ef4d40846d46bddb8d5c92241cbf2c3 --- /dev/null +++ b/static/docs/dev/reference/torch_as_strided.html @@ -0,0 +1,283 @@ + + + + + + + + +As_strided — torch_as_strided • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    As_strided

    +
    + +
    torch_as_strided(self, size, stride, storage_offset = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    size

    (tuple or ints) the shape of the output tensor

    stride

    (tuple or ints) the stride of the output tensor

    storage_offset

    (int, optional) the offset in the underlying storage of the output tensor

    + +

    as_strided(input, size, stride, storage_offset=0) -> Tensor

    + + + + +

    Create a view of an existing torch_Tensor input with specified +size, stride and storage_offset.

    +

    Warning

    + + + +

    More than one element of a created tensor may refer to a single memory +location. As a result, in-place operations (especially ones that are +vectorized) may result in incorrect behavior. If you need to write to +the tensors, please clone them first.

    Many PyTorch functions, which return a view of a tensor, are internally
    +implemented with this function. Those functions, like
    +`torch_Tensor.expand`, are easier to read and are therefore more
    +advisable to use.
    +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 3)) +x +t = torch_as_strided(x, list(2, 2), list(1, 2)) +t +t = torch_as_strided(x, list(2, 2), list(1, 2), 1) +t +} +
    #> torch_tensor +#> 1.2727 0.7072 +#> -0.7263 0.1785 +#> [ CPUFloatType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_asin.html b/static/docs/dev/reference/torch_asin.html new file mode 100644 index 0000000000000000000000000000000000000000..472e8f4cb108abf5d212646109b0610e83609038 --- /dev/null +++ b/static/docs/dev/reference/torch_asin.html @@ -0,0 +1,259 @@ + + + + + + + + +Asin — torch_asin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Asin

    +
    + +
    torch_asin(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    asin(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the arcsine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \sin^{-1}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_asin(a) +} +
    #> torch_tensor +#> -1.3162 +#> 0.6367 +#> -1.0489 +#> nan +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_atan.html b/static/docs/dev/reference/torch_atan.html new file mode 100644 index 0000000000000000000000000000000000000000..d5f7ecd53ae4961c7c5584dfd5a2968dd525a883 --- /dev/null +++ b/static/docs/dev/reference/torch_atan.html @@ -0,0 +1,259 @@ + + + + + + + + +Atan — torch_atan • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Atan

    +
    + +
    torch_atan(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    atan(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the arctangent of the elements of input.

    +

    $$ + \mbox{out}_{i} = \tan^{-1}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_atan(a) +} +
    #> torch_tensor +#> -1.2005 +#> -0.9267 +#> -0.8862 +#> -0.8779 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_atan2.html b/static/docs/dev/reference/torch_atan2.html new file mode 100644 index 0000000000000000000000000000000000000000..b6818b8bed2bed357ce04d37d14fe801eb2d42cd --- /dev/null +++ b/static/docs/dev/reference/torch_atan2.html @@ -0,0 +1,267 @@ + + + + + + + + +Atan2 — torch_atan2 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Atan2

    +
    + +
    torch_atan2(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first input tensor

    other

    (Tensor) the second input tensor

    + +

    atan2(input, other, out=NULL) -> Tensor

    + + + + +

    Element-wise arctangent of \(\mbox{input}_{i} / \mbox{other}_{i}\) +with consideration of the quadrant. Returns a new tensor with the signed angles +in radians between vector \((\mbox{other}_{i}, \mbox{input}_{i})\) +and vector \((1, 0)\). (Note that \(\mbox{other}_{i}\), the second +parameter, is the x-coordinate, while \(\mbox{input}_{i}\), the first +parameter, is the y-coordinate.)

    +

The shapes of input and other must be +broadcastable.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_atan2(a, torch_randn(c(4))) +} +
    #> torch_tensor +#> -1.8237 +#> 0.1158 +#> -0.1993 +#> -1.5104 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_avg_pool1d.html b/static/docs/dev/reference/torch_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..c6ec26b3ac21be734c81f37dd143693f5f2e966b --- /dev/null +++ b/static/docs/dev/reference/torch_avg_pool1d.html @@ -0,0 +1,272 @@ + + + + + + + + +Avg_pool1d — torch_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Avg_pool1d

    +
    + +
    torch_avg_pool1d(
    +  self,
    +  kernel_size,
    +  stride = list(),
    +  padding = 0L,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    self

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

    kernel_size

    the size of the window. Can be a single number or a tuple (kW,)

    stride

    the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation. Default: TRUE

    + +

    avg_pool1d(input, kernel_size, stride=NULL, padding=0, ceil_mode=FALSE, count_include_pad=TRUE) -> Tensor

    + + + + +

    Applies a 1D average pooling over an input signal composed of several +input planes.

    +

    See nn_avg_pool1d() for details and output shape.
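This page has no Examples section; the following is a minimal hedged sketch in the style of the other reference pages (input shape and parameters are illustrative):

```r
if (torch_is_installed()) {
  # 1 sample, 1 channel, signal of length 7
  x <- torch_randn(c(1, 1, 7))
  # with padding = 0 and ceil_mode = FALSE, the output width is
  # floor((7 - 3) / 2) + 1 = 3
  torch_avg_pool1d(x, kernel_size = 3, stride = 2)
}
```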

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_baddbmm.html b/static/docs/dev/reference/torch_baddbmm.html new file mode 100644 index 0000000000000000000000000000000000000000..bc182261ef925736f8eac1b40674ac10c2ace162 --- /dev/null +++ b/static/docs/dev/reference/torch_baddbmm.html @@ -0,0 +1,333 @@ + + + + + + + + +Baddbmm — torch_baddbmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Baddbmm

    +
    + +
    torch_baddbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to be added

    batch1

    (Tensor) the first batch of matrices to be multiplied

    batch2

    (Tensor) the second batch of matrices to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(\mbox{batch1} \mathbin{@} \mbox{batch2}\) (\(\alpha\))

    + +

    baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a batch matrix-matrix product of matrices in batch1 +and batch2. +input is added to the final result.

    +

    batch1 and batch2 must be 3-D tensors each containing the same +number of matrices.

    +

    If batch1 is a \((b \times n \times m)\) tensor, batch2 is a +\((b \times m \times p)\) tensor, then input must be +broadcastable with a +\((b \times n \times p)\) tensor and out will be a +\((b \times n \times p)\) tensor. Both alpha and beta mean the +same as the scaling factors used in torch_addbmm.

    +

    $$ + \mbox{out}_i = \beta\ \mbox{input}_i + \alpha\ (\mbox{batch1}_i \mathbin{@} \mbox{batch2}_i) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(10, 3, 5)) +batch1 = torch_randn(c(10, 3, 4)) +batch2 = torch_randn(c(10, 4, 5)) +torch_baddbmm(M, batch1, batch2) +} +
    #> torch_tensor +#> (1,.,.) = +#> 2.7673 3.8142 -2.1566 0.8392 -0.1574 +#> -3.5008 -3.0245 1.5608 0.8823 -3.8403 +#> -0.6895 2.3852 -1.9370 -0.4578 3.7591 +#> +#> (2,.,.) = +#> 2.8193 4.0901 -1.0565 -1.4749 4.1633 +#> -1.5332 2.0761 0.5535 0.1496 0.8969 +#> -3.9711 2.0648 2.1003 2.6881 -5.2599 +#> +#> (3,.,.) = +#> 7.2532 0.1823 -3.8754 -1.7195 0.9933 +#> 0.3700 0.0226 -2.8153 1.1894 -0.8335 +#> 7.0359 -0.9349 3.8406 -1.1739 7.6222 +#> +#> (4,.,.) = +#> -0.0215 -1.5237 1.3790 -1.2662 -1.5822 +#> -0.8591 4.3549 -1.2652 0.5812 -1.1035 +#> 1.0322 2.8543 -1.4274 -1.8826 2.4363 +#> +#> (5,.,.) = +#> -2.5454 3.1478 1.1194 -0.0983 -1.0287 +#> 0.0967 -0.5739 -4.6500 2.1894 -1.3741 +#> 1.2304 0.7907 2.5194 -4.3185 -2.9295 +#> +#> (6,.,.) = +#> 1.1406 2.5135 -1.7900 3.7070 -0.5922 +#> -0.3170 -1.6689 1.1494 -1.8042 2.0719 +#> 0.1365 -3.1741 -0.1377 0.1946 0.7151 +#> +#> (7,.,.) = +#> -4.1217 -1.1835 1.0868 -0.7996 1.3881 +#> -0.9917 1.4596 0.1372 -0.8100 0.4499 +#> -2.5818 -3.8951 -3.5722 -0.5064 -0.2227 +#> +#> (8,.,.) = +#> 2.7398 -1.6400 -2.1485 -2.2978 -3.7031 +#> -2.1908 -1.4017 1.6094 1.8951 -0.0367 +#> -1.2369 3.2816 1.2695 3.6934 -0.3198 +#> +#> (9,.,.) = +#> 1.3105 1.0761 0.6626 0.9699 -0.2097 +#> -0.5385 0.6569 -1.8404 -0.2079 0.6920 +#> -0.0850 -1.7620 -0.3243 -1.3616 1.3040 +#> +#> (10,.,.) = +#> -2.5976 -3.9216 -1.1513 1.6483 1.9870 +#> 0.3967 -1.2132 1.1914 1.6283 0.8361 +#> 2.9039 1.0315 1.8009 -3.4176 -0.4645 +#> [ CPUFloatType{10,3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bartlett_window.html b/static/docs/dev/reference/torch_bartlett_window.html new file mode 100644 index 0000000000000000000000000000000000000000..f636807a8bf98ada52262b94458788d85a020d67 --- /dev/null +++ b/static/docs/dev/reference/torch_bartlett_window.html @@ -0,0 +1,292 @@ + + + + + + + + +Bartlett_window — torch_bartlett_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bartlett_window

    +
    + +
    torch_bartlett_window(
    +  window_length,
    +  periodic = TRUE,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
If `window_length` = 1, the returned window contains a single value 1.
    +
    + +

bartlett_window(window_length, periodic=TRUE, dtype=NULL, layout=torch_strided, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +

    Bartlett window function.

    +

    $$ + w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \left\{ \begin{array}{ll} + \frac{2n}{N - 1} & \mbox{if } 0 \leq n \leq \frac{N - 1}{2} \\ + 2 - \frac{2n}{N - 1} & \mbox{if } \frac{N - 1}{2} < n < N \\ + \end{array} + \right. , +$$ +where \(N\) is the full window size.

    +

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch_stft. Therefore, if periodic is TRUE, the \(N\) in the above formula is in fact \(\mbox{window\_length} + 1\). Also, torch_bartlett_window(L, periodic=TRUE) is always equal to torch_bartlett_window(L + 1, periodic=FALSE) with the last element removed.
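A short hedged sketch of both variants (window length chosen for illustration):

```r
if (torch_is_installed()) {
  torch_bartlett_window(5, periodic = FALSE)  # symmetric triangular window
  torch_bartlett_window(5)                    # periodic variant (N = window_length + 1)
}
```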

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bernoulli.html b/static/docs/dev/reference/torch_bernoulli.html new file mode 100644 index 0000000000000000000000000000000000000000..3f20322b69d087d5099f02cada9c70ec93ef49da --- /dev/null +++ b/static/docs/dev/reference/torch_bernoulli.html @@ -0,0 +1,283 @@ + + + + + + + + +Bernoulli — torch_bernoulli • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bernoulli

    +
    + +
    torch_bernoulli(self, p, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of probability values for the Bernoulli +distribution

    p

(Number) a probability value. If p is passed, then it is used instead of the values in the self tensor.

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

    bernoulli(input, *, generator=NULL, out=NULL) -> Tensor

    + + + + +

    Draws binary random numbers (0 or 1) from a Bernoulli distribution.

    +

    The input tensor should be a tensor containing probabilities +to be used for drawing the binary random number. +Hence, all values in input have to be in the range: +\(0 \leq \mbox{input}_i \leq 1\).

    +

    The \(\mbox{i}^{th}\) element of the output tensor will draw a +value \(1\) according to the \(\mbox{i}^{th}\) probability value given +in input.

    +

    $$ + \mbox{out}_{i} \sim \mathrm{Bernoulli}(p = \mbox{input}_{i}) +$$ +The returned out tensor only has values 0 or 1 and is of the same +shape as input.

    +

    out can have integral dtype, but input must have floating +point dtype.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_empty(c(3, 3))$uniform_(0, 1) # generate a uniform random matrix with range c(0, 1) +a +torch_bernoulli(a) +a = torch_ones(c(3, 3)) # probability of drawing "1" is 1 +torch_bernoulli(a) +a = torch_zeros(c(3, 3)) # probability of drawing "1" is 0 +torch_bernoulli(a) +} +
    #> torch_tensor +#> 0 0 0 +#> 0 0 0 +#> 0 0 0 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bincount.html b/static/docs/dev/reference/torch_bincount.html new file mode 100644 index 0000000000000000000000000000000000000000..5621ecde4fcefcc0b2eaddf2ac881742bf79063d --- /dev/null +++ b/static/docs/dev/reference/torch_bincount.html @@ -0,0 +1,279 @@ + + + + + + + + +Bincount — torch_bincount • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bincount

    +
    + +
    torch_bincount(self, weights = list(), minlength = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) 1-d int tensor

    weights

    (Tensor) optional, weight for each value in the input tensor. Should be of same size as input tensor.

    minlength

    (int) optional, minimum number of bins. Should be non-negative.

    + +

    bincount(input, weights=NULL, minlength=0) -> Tensor

    + + + + +

    Count the frequency of each value in an array of non-negative ints.

    +

    The number of bins (size 1) is one larger than the largest value in +input unless input is empty, in which case the result is a +tensor of size 0. If minlength is specified, the number of bins is at least +minlength and if input is empty, then the result is tensor of size +minlength filled with zeros. If n is the value at position i, +out[n] += weights[i] if weights is specified else +out[n] += 1.

    +


    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randint(0, 8, list(5), dtype=torch_int64()) +weights = torch_linspace(0, 1, steps=5) +input +weights +torch_bincount(input, weights) +input$bincount(weights) +} +
    #> torch_tensor +#> 0.0000 +#> 0.7500 +#> 0.0000 +#> 0.2500 +#> 0.0000 +#> 0.0000 +#> 0.5000 +#> 1.0000 +#> [ CPUFloatType{8} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bitwise_and.html b/static/docs/dev/reference/torch_bitwise_and.html new file mode 100644 index 0000000000000000000000000000000000000000..b91f99e33562e842dd19fb2572114dd958b511e9 --- /dev/null +++ b/static/docs/dev/reference/torch_bitwise_and.html @@ -0,0 +1,248 @@ + + + + + + + + +Bitwise_and — torch_bitwise_and • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_and

    +
    + +
    torch_bitwise_and(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) the first input tensor

    other

(Tensor) the second input tensor

    + +

    bitwise_and(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise AND of input and other. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical AND.
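This page has no Examples section; here is a minimal hedged sketch (values and dtypes chosen for illustration):

```r
if (torch_is_installed()) {
  # bitwise AND on 8-bit integer tensors
  a <- torch_tensor(c(-1L, -2L, 3L), dtype = torch_int8())
  b <- torch_tensor(c(1L, 0L, 3L), dtype = torch_int8())
  torch_bitwise_and(a, b)

  # on boolean tensors this computes the logical AND
  torch_bitwise_and(
    torch_tensor(c(TRUE, TRUE, FALSE)),
    torch_tensor(c(FALSE, TRUE, FALSE))
  )
}
```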

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bitwise_not.html b/static/docs/dev/reference/torch_bitwise_not.html new file mode 100644 index 0000000000000000000000000000000000000000..2e9b82ba548b71a31995a3846a2d8f4b7e95f3e1 --- /dev/null +++ b/static/docs/dev/reference/torch_bitwise_not.html @@ -0,0 +1,244 @@ + + + + + + + + +Bitwise_not — torch_bitwise_not • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_not

    +
    + +
    torch_bitwise_not(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    bitwise_not(input, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise NOT of the given input tensor. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical NOT.
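This page has no Examples section; here is a minimal hedged sketch (values and dtypes chosen for illustration):

```r
if (torch_is_installed()) {
  # bitwise complement of 8-bit integers (~x equals -x - 1 in two's complement)
  torch_bitwise_not(torch_tensor(c(-1L, -2L, 3L), dtype = torch_int8()))

  # on a boolean tensor this computes the logical NOT
  torch_bitwise_not(torch_tensor(c(TRUE, FALSE)))
}
```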

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bitwise_or.html b/static/docs/dev/reference/torch_bitwise_or.html new file mode 100644 index 0000000000000000000000000000000000000000..dc297ef1d28d0155e0c4c3c5afe30e80b1a0cdb9 --- /dev/null +++ b/static/docs/dev/reference/torch_bitwise_or.html @@ -0,0 +1,248 @@ + + + + + + + + +Bitwise_or — torch_bitwise_or • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_or

    +
    + +
    torch_bitwise_or(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) the first input tensor

    other

(Tensor) the second input tensor

    + +

    bitwise_or(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise OR of input and other. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical OR.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bitwise_xor.html b/static/docs/dev/reference/torch_bitwise_xor.html new file mode 100644 index 0000000000000000000000000000000000000000..90ca7ac6da50e0c9c098f0e125866347e4f2b58e --- /dev/null +++ b/static/docs/dev/reference/torch_bitwise_xor.html @@ -0,0 +1,248 @@ + + + + + + + + +Bitwise_xor — torch_bitwise_xor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_xor

    +
    + +
    torch_bitwise_xor(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) the first input tensor

    other

(Tensor) the second input tensor

    + +

    bitwise_xor(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise XOR of input and other. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical XOR.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_blackman_window.html b/static/docs/dev/reference/torch_blackman_window.html new file mode 100644 index 0000000000000000000000000000000000000000..b9291353530b90d13285105fa215c4ee136595e9 --- /dev/null +++ b/static/docs/dev/reference/torch_blackman_window.html @@ -0,0 +1,288 @@ + + + + + + + + +Blackman_window — torch_blackman_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Blackman_window

    +
    + +
    torch_blackman_window(
    +  window_length,
    +  periodic = TRUE,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
If `window_length` = 1, the returned window contains a single value 1.
    +
    + +

blackman_window(window_length, periodic=TRUE, dtype=NULL, layout=torch_strided, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +

    Blackman window function.

    +

    $$ + w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right) +$$ +where \(N\) is the full window size.

    +

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch_stft. Therefore, if periodic is TRUE, the \(N\) in the above formula is in fact \(\mbox{window\_length} + 1\). Also, torch_blackman_window(L, periodic=TRUE) is always equal to torch_blackman_window(L + 1, periodic=FALSE) with the last element removed.
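A short hedged sketch of both variants (window length chosen for illustration):

```r
if (torch_is_installed()) {
  torch_blackman_window(8)                    # periodic window, e.g. for use with torch_stft
  torch_blackman_window(8, periodic = FALSE)  # symmetric variant
}
```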

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_bmm.html b/static/docs/dev/reference/torch_bmm.html new file mode 100644 index 0000000000000000000000000000000000000000..930104cded2a21b6f14435f54df1c51d694c263a --- /dev/null +++ b/static/docs/dev/reference/torch_bmm.html @@ -0,0 +1,319 @@ + + + + + + + + +Bmm — torch_bmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bmm

    +
    + +
    torch_bmm(self, mat2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first batch of matrices to be multiplied

    mat2

    (Tensor) the second batch of matrices to be multiplied

    + +

    Note

    + +

This function does not broadcast. +For broadcasting matrix products, see torch_matmul.

    +

    bmm(input, mat2, out=NULL) -> Tensor

    + + + + +

    Performs a batch matrix-matrix product of matrices stored in input +and mat2.

    +

    input and mat2 must be 3-D tensors each containing +the same number of matrices.

    +

    If input is a \((b \times n \times m)\) tensor, mat2 is a +\((b \times m \times p)\) tensor, out will be a +\((b \times n \times p)\) tensor.

    +

    $$ + \mbox{out}_i = \mbox{input}_i \mathbin{@} \mbox{mat2}_i +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randn(c(10, 3, 4)) +mat2 = torch_randn(c(10, 4, 5)) +res = torch_bmm(input, mat2) +res +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.1611 3.2873 -2.7979 0.5577 -0.6578 +#> 0.4613 -0.3262 0.8856 1.8179 0.5950 +#> 0.3713 -1.0938 -0.0862 -0.2558 -0.2166 +#> +#> (2,.,.) = +#> 1.7443 -0.3017 -4.1491 -3.1967 -0.9234 +#> 0.3326 -0.1091 -0.1212 -1.4089 -0.1950 +#> -0.5194 1.0499 -2.9961 -0.3035 0.2586 +#> +#> (3,.,.) = +#> -4.2654 3.1454 5.5731 0.0939 -0.2378 +#> 2.7372 -1.7179 -4.6256 0.3196 -1.2640 +#> -2.6296 1.7937 3.1864 0.0494 -1.7055 +#> +#> (4,.,.) = +#> 0.2021 1.0450 -1.9129 -0.0005 -0.9617 +#> 1.6322 -2.6664 0.4473 -2.3600 1.0421 +#> -0.8634 2.2971 -1.7369 1.3270 -1.4233 +#> +#> (5,.,.) = +#> 0.9213 -1.8910 -2.8053 0.2184 -0.4071 +#> 0.1079 0.2815 -0.0232 -0.0372 0.7164 +#> 0.4005 -0.3513 -0.0439 -0.3791 -0.6494 +#> +#> (6,.,.) = +#> -0.4160 -2.6689 -1.4779 -0.2370 3.2740 +#> 1.1628 2.6820 -0.3655 2.7943 -3.7204 +#> -0.0693 0.9010 -0.0905 -1.5431 -1.8316 +#> +#> (7,.,.) = +#> -0.6875 -6.1682 -0.6918 -0.4328 1.8613 +#> 1.3210 0.0232 0.2676 0.9049 -1.5467 +#> -0.5596 5.2267 -0.2623 -1.0450 -0.3493 +#> +#> (8,.,.) = +#> -0.2389 0.4121 -0.2401 0.1842 0.9670 +#> 0.4465 -1.6942 2.7656 1.1619 -3.8290 +#> 0.6562 -2.4667 -1.6473 -1.5007 -2.2954 +#> +#> (9,.,.) = +#> -0.1681 -1.9747 2.9924 1.4263 -1.4143 +#> -0.9687 1.3590 -2.0486 -0.9397 -0.0612 +#> 0.7662 -0.1328 1.9483 0.1000 1.3884 +#> +#> (10,.,.) = +#> -0.9815 -0.6052 -0.7606 0.2180 -3.1664 +#> 0.0858 1.0146 0.1378 -1.6044 0.3625 +#> -2.1684 -2.1501 -2.8241 -1.3085 0.4587 +#> [ CPUFloatType{10,3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_broadcast_tensors.html b/static/docs/dev/reference/torch_broadcast_tensors.html new file mode 100644 index 0000000000000000000000000000000000000000..afb3f7e17e823f8f7a64d3148b167765b0386d5f --- /dev/null +++ b/static/docs/dev/reference/torch_broadcast_tensors.html @@ -0,0 +1,255 @@ + + + + + + + + +Broadcast_tensors — torch_broadcast_tensors • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Broadcast_tensors

    +
    + +
    torch_broadcast_tensors(tensors)
    + +

    Arguments

    + + + + + + +
    tensors

    a list containing any number of tensors of the same type

    + +

    broadcast_tensors(tensors) -> List of Tensors

    + + + + +

Broadcasts the given tensors according to broadcasting semantics.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(0, 3)$view(c(1, 3)) +y = torch_arange(0, 2)$view(c(2, 1)) +out = torch_broadcast_tensors(list(x, y)) +out[[1]] +} +
    #> torch_tensor +#> 0 1 2 +#> 0 1 2 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_can_cast.html b/static/docs/dev/reference/torch_can_cast.html new file mode 100644 index 0000000000000000000000000000000000000000..17e91d95d2a70763af8daf06d4ec62dfc533455e --- /dev/null +++ b/static/docs/dev/reference/torch_can_cast.html @@ -0,0 +1,255 @@ + + + + + + + + +Can_cast — torch_can_cast • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Can_cast

    +
    + +
    torch_can_cast(from, to)
    + +

    Arguments

    + + + + + + + + + + +
    from

    (dtype) The original torch_dtype.

    to

    (dtype) The target torch_dtype.

    + +

    can_cast(from, to) -> bool

    + + + + +

Determines if a type conversion is allowed under PyTorch casting rules +described in the type promotion documentation.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_can_cast(torch_double(), torch_float()) +torch_can_cast(torch_float(), torch_int()) +} +
    #> [1] FALSE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cartesian_prod.html b/static/docs/dev/reference/torch_cartesian_prod.html new file mode 100644 index 0000000000000000000000000000000000000000..4409b7195bea05d7f102e77fbbf03d83a8c9612e --- /dev/null +++ b/static/docs/dev/reference/torch_cartesian_prod.html @@ -0,0 +1,254 @@ + + + + + + + + +Cartesian_prod — torch_cartesian_prod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Computes the Cartesian product of the given sequence of tensors.

    +
    + +
    torch_cartesian_prod(tensors)
    + +

    Arguments

    + + + + + + +
    tensors

a list containing any number of 1-dimensional tensors.

    + + +

    Examples

    +
    if (torch_is_installed()) { + +a = c(1, 2, 3) +b = c(4, 5) +tensor_a = torch_tensor(a) +tensor_b = torch_tensor(b) +torch_cartesian_prod(list(tensor_a, tensor_b)) +} +
    #> torch_tensor +#> 1 4 +#> 1 5 +#> 2 4 +#> 2 5 +#> 3 4 +#> 3 5 +#> [ CPUFloatType{6,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cat.html b/static/docs/dev/reference/torch_cat.html new file mode 100644 index 0000000000000000000000000000000000000000..99f758ab4b2afc2d506bd14daa15962bd51aa173 --- /dev/null +++ b/static/docs/dev/reference/torch_cat.html @@ -0,0 +1,264 @@ + + + + + + + + +Cat — torch_cat • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cat

    +
    + +
    torch_cat(tensors, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + +
    tensors

(sequence of Tensors) any R list of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

    dim

    (int, optional) the dimension over which the tensors are concatenated

    + +

    cat(tensors, dim=0, out=NULL) -> Tensor

    + + + + +

Concatenates the given sequence of tensors in the given dimension. +All tensors must either have the same shape (except in the concatenating +dimension) or be empty.

    +

    torch_cat can be seen as an inverse operation for torch_split() +and torch_chunk.

    +

    torch_cat can be best understood via examples.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(2, 3)) +x +torch_cat(list(x, x, x), 1) +torch_cat(list(x, x, x), 2) +} +
    #> torch_tensor +#> 0.8422 0.3027 0.4810 0.8422 0.3027 0.4810 0.8422 0.3027 0.4810 +#> -1.2264 0.6089 1.3649 -1.2264 0.6089 1.3649 -1.2264 0.6089 1.3649 +#> [ CPUFloatType{2,9} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cdist.html b/static/docs/dev/reference/torch_cdist.html new file mode 100644 index 0000000000000000000000000000000000000000..c342cd5365b692e0fda052352a1041bd25d29eac --- /dev/null +++ b/static/docs/dev/reference/torch_cdist.html @@ -0,0 +1,255 @@ + + + + + + + + +Cdist — torch_cdist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cdist

    +
    + +
    torch_cdist(x1, x2, p = 2L, compute_mode = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) input tensor of shape \(B \times P \times M\).

    x2

    (Tensor) input tensor of shape \(B \times R \times M\).

    p

(double, optional) p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\). Default: 2.

    compute_mode

(string, optional) One of 'use_mm_for_euclid_dist_if_necessary' (use the matrix multiplication approach to calculate Euclidean distance (p = 2) if P > 25 or R > 25), 'use_mm_for_euclid_dist' (always use the matrix multiplication approach), or 'donot_use_mm_for_euclid_dist' (never use the matrix multiplication approach). Default: 'use_mm_for_euclid_dist_if_necessary'.

    + +

cdist(x1, x2, p=2, compute_mode='use_mm_for_euclid_dist_if_necessary') -> Tensor

    + + + + +

Computes, in batched fashion, the p-norm distance between each pair of the two collections of row vectors.
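This page has no Examples section; here is a minimal hedged sketch with two small collections of 2-dimensional row vectors (values are illustrative):

```r
if (torch_is_installed()) {
  x1 <- torch_tensor(rbind(c(0.9, 0.0), c(-0.3, -2.4)))
  x2 <- torch_tensor(rbind(c(-2.2, -0.5), c(-0.7, 1.4)))
  torch_cdist(x1, x2, p = 2)  # 2 x 2 matrix of pairwise Euclidean distances
}
```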

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ceil.html b/static/docs/dev/reference/torch_ceil.html new file mode 100644 index 0000000000000000000000000000000000000000..d41fd44d6c64a3c1f32730e64f67523d4a261b0b --- /dev/null +++ b/static/docs/dev/reference/torch_ceil.html @@ -0,0 +1,260 @@ + + + + + + + + +Ceil — torch_ceil • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ceil

    +
    + +
    torch_ceil(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    ceil(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the ceil of the elements of input, +the smallest integer greater than or equal to each element.

    +

$$ + \mbox{out}_{i} = \left\lceil \mbox{input}_{i} \right\rceil +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_ceil(a) +} +
    #> torch_tensor +#> -0 +#> 2 +#> 1 +#> 1 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_celu.html b/static/docs/dev/reference/torch_celu.html new file mode 100644 index 0000000000000000000000000000000000000000..0eceaaf285bf8f2e34fda85c3653d4289aec0607 --- /dev/null +++ b/static/docs/dev/reference/torch_celu.html @@ -0,0 +1,247 @@ + + + + + + + + +Celu — torch_celu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Celu

    +
    + +
    torch_celu(self, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    alpha

    the alpha value for the CELU formulation. Default: 1.0

    + +

    celu(input, alpha=1.) -> Tensor

    + + + + +

    See nnf_celu() for more info.
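For reference, the CELU formulation (as given in the PyTorch docs) is \(\mbox{CELU}(x) = \max(0, x) + \min(0, \alpha (e^{x/\alpha} - 1))\); a scalar plain-Python sketch, illustrative only and not the torch API:

```python
import math

# CELU: identity for x >= 0, smooth exponential saturation toward -alpha
# for x < 0.
def celu(x, alpha=1.0):
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))

celu(2.0)   # positive inputs pass through unchanged: 2.0
celu(-1.0)  # exp(-1) - 1, about -0.632
```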

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_celu_.html b/static/docs/dev/reference/torch_celu_.html new file mode 100644 index 0000000000000000000000000000000000000000..d82161cd695b9057f85d7d1fa75c081f6c9ab8a9 --- /dev/null +++ b/static/docs/dev/reference/torch_celu_.html @@ -0,0 +1,247 @@ + + + + + + + + +Celu_ — torch_celu_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Celu_

    +
    + +
    torch_celu_(self, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    alpha

    the alpha value for the CELU formulation. Default: 1.0

    + +

    celu_(input, alpha=1.) -> Tensor

    + + + + +

    In-place version of torch_celu().

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_chain_matmul.html b/static/docs/dev/reference/torch_chain_matmul.html new file mode 100644 index 0000000000000000000000000000000000000000..0a1526d214573bf1c357c1c4c4c6ac7b72b05c73 --- /dev/null +++ b/static/docs/dev/reference/torch_chain_matmul.html @@ -0,0 +1,261 @@ + + + + + + + + +Chain_matmul — torch_chain_matmul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Chain_matmul

    +
    + +
    torch_chain_matmul(matrices)
    + +

    Arguments

    + + + + + + +
    matrices

    (Tensors...) a sequence of 2 or more 2-D tensors whose product is to be determined.

    + +

    chain_matmul(matrices) -> Tensor

    + + + + +

    Returns the matrix product of the \(N\) 2-D tensors. This product is efficiently computed +using the matrix chain order algorithm, which selects the order that incurs the lowest cost in terms +of arithmetic operations (CLRS). Note that since this is a function to compute the product, \(N\) +needs to be greater than or equal to 2; if equal to 2 then a trivial matrix-matrix product is returned. +If \(N\) is 1, then this is a no-op - the original matrix is returned as is.
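The matrix chain order algorithm mentioned above is the classic dynamic program from CLRS; a minimal sketch of its cost computation (illustrative, not torch's implementation), where matrix \(i\) has shape d[i] x d[i+1]:

```python
# Minimal scalar-multiplication cost over all parenthesizations of a
# matrix chain with dimension sequence d (matrix i is d[i] x d[i+1]).
def matrix_chain_cost(d):
    n = len(d) - 1                      # number of matrices in the chain
    cost = [[0] * n for _ in range(n)]  # cost[i][j]: best cost for chain i..j
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# For the 3x4, 4x5, 5x6, 6x7 chain in the example below:
matrix_chain_cost([3, 4, 5, 6, 7])
```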

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 4)) +b = torch_randn(c(4, 5)) +c = torch_randn(c(5, 6)) +d = torch_randn(c(6, 7)) +torch_chain_matmul(list(a, b, c, d)) +} +
    #> torch_tensor +#> -3.4661 0.3385 1.5381 -5.5143 -5.5374 -1.9805 -6.2549 +#> 8.6568 -0.4593 -0.5739 5.2981 4.8523 1.4588 4.3676 +#> 27.5014 -3.0642 0.2539 -0.8520 1.0841 -1.8489 4.2574 +#> [ CPUFloatType{3,7} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cholesky.html b/static/docs/dev/reference/torch_cholesky.html new file mode 100644 index 0000000000000000000000000000000000000000..ee1f3bd44ac072e1c3b8b589f316034b89c04e65 --- /dev/null +++ b/static/docs/dev/reference/torch_cholesky.html @@ -0,0 +1,283 @@ + + + + + + + + +Cholesky — torch_cholesky • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cholesky

    +
    + +
    torch_cholesky(self, upper = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor \(A\) of size \((*, n, n)\) where \(*\) is zero or more +batch dimensions consisting of symmetric positive-definite matrices.

    upper

    (bool, optional) flag that indicates whether to return an +upper or lower triangular matrix. Default: FALSE

    + +

    cholesky(input, upper=False, out=NULL) -> Tensor

    + + + + +

    Computes the Cholesky decomposition of a symmetric positive-definite +matrix \(A\) or for batches of symmetric positive-definite matrices.

    +

    If upper is TRUE, the returned matrix U is upper-triangular, and +the decomposition has the form:

    +

    $$ + A = U^TU +$$ +If upper is FALSE, the returned matrix L is lower-triangular, and +the decomposition has the form:

    +

    $$ + A = LL^T +$$ +If upper is TRUE, and \(A\) is a batch of symmetric positive-definite +matrices, then the returned tensor will be composed of upper-triangular Cholesky factors +of each of the individual matrices. Similarly, when upper is FALSE, the returned +tensor will be composed of lower-triangular Cholesky factors of each of the individual +matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a = torch_mm(a, a$t()) # make symmetric positive-definite +l = torch_cholesky(a) +a +l +torch_mm(l, l$t()) +a = torch_randn(c(3, 2, 2)) +if (FALSE) { +a = torch_matmul(a, a$transpose(-1, -2)) + 1e-03 # make symmetric positive-definite +l = torch_cholesky(a) +z = torch_matmul(l, l$transpose(-1, -2)) +torch_max(torch_abs(z - a)) # Max non-zero +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cholesky_inverse.html b/static/docs/dev/reference/torch_cholesky_inverse.html new file mode 100644 index 0000000000000000000000000000000000000000..99562296539d0fd55511efad95429f417acb4eb2 --- /dev/null +++ b/static/docs/dev/reference/torch_cholesky_inverse.html @@ -0,0 +1,272 @@ + + + + + + + + +Cholesky_inverse — torch_cholesky_inverse • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cholesky_inverse

    +
    + +
    torch_cholesky_inverse(self, upper = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input 2-D tensor \(u\), an upper or lower triangular Cholesky factor

    upper

    (bool, optional) whether to return a lower (default) or upper triangular matrix

    + +

    cholesky_inverse(input, upper=False, out=NULL) -> Tensor

    + + + + +

    Computes the inverse of a symmetric positive-definite matrix \(A\) using its +Cholesky factor \(u\): returns matrix inv. The inverse is computed using +LAPACK routines dpotri and spotri (and the corresponding MAGMA routines).

    +

    If upper is FALSE, \(u\) is lower triangular +such that the returned tensor is

    +

    $$ + inv = (uu^{{T}})^{{-1}} +$$ +If upper is TRUE or not provided, \(u\) is upper +triangular such that the returned tensor is

    +

    $$ + inv = (u^T u)^{{-1}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +a = torch_randn(c(3, 3)) +a = torch_mm(a, a$t()) + 1e-05 * torch_eye(3) # make symmetric positive definite +u = torch_cholesky(a) +a +torch_cholesky_inverse(u) +a$inverse() +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cholesky_solve.html b/static/docs/dev/reference/torch_cholesky_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..5a3bf06fd6c2619b521b7bc6891bb22335251ee5 --- /dev/null +++ b/static/docs/dev/reference/torch_cholesky_solve.html @@ -0,0 +1,282 @@ + + + + + + + + +Cholesky_solve — torch_cholesky_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cholesky_solve

    +
    + +
    torch_cholesky_solve(self, input2, upper = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) input matrix \(b\) of size \((*, m, k)\), where \(*\) is zero or more batch dimensions

    input2

    (Tensor) input matrix \(u\) of size \((*, m, m)\), where \(*\) is zero or more batch dimensions composed of upper or lower triangular Cholesky factor

    upper

    (bool, optional) whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: FALSE.

    + +

    cholesky_solve(input, input2, upper=False, out=NULL) -> Tensor

    + + + + +

    Solves a linear system of equations whose positive semidefinite coefficient +matrix is given via its Cholesky factor matrix \(u\).

    +

    If upper is FALSE, \(u\) is lower triangular and c is +returned such that:

    +

    $$ + c = (u u^T)^{{-1}} b +$$ +If upper is TRUE or not provided, \(u\) is upper triangular +and c is returned such that:

    +

    $$ + c = (u^T u)^{{-1}} b +$$ +torch_cholesky_solve(b, u) can take in 2D inputs b, u or inputs that are +batches of 2D matrices. If the inputs are batches, then it returns +batched outputs c.
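Given a lower-triangular factor \(u = L\), solving \((LL^T)c = b\) amounts to one forward and one backward substitution; a plain-Python sketch for a single right-hand side (illustrative, not the LAPACK routine torch dispatches to):

```python
# Solve (L L^T) x = b given a lower-triangular Cholesky factor L.
def cholesky_solve(b, L):
    n = len(L)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Backward substitution: L^T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# A = [[4, 2], [2, 3]] has L = [[2, 0], [1, sqrt(2)]]; A x = [6, 5]
# gives x = [1, 1].
cholesky_solve([6.0, 5.0], [[2.0, 0.0], [1.0, 2.0 ** 0.5]])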

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a = torch_mm(a, a$t()) # make symmetric positive definite +u = torch_cholesky(a) +a +b = torch_randn(c(3, 2)) +b +torch_cholesky_solve(b, u) +torch_mm(a$inverse(), b) +} +
    #> torch_tensor +#> 0.4124 0.7031 +#> 1.7785 1.9837 +#> -1.8951 -2.2718 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_chunk.html b/static/docs/dev/reference/torch_chunk.html new file mode 100644 index 0000000000000000000000000000000000000000..a4bee0db66fdea802272a1a093d539dbaef7d94d --- /dev/null +++ b/static/docs/dev/reference/torch_chunk.html @@ -0,0 +1,254 @@ + + + + + + + + +Chunk — torch_chunk • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Chunk

    +
    + +
    torch_chunk(self, chunks, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to split

    chunks

    (int) number of chunks to return

    dim

    (int) dimension along which to split the tensor

    + +

    chunk(input, chunks, dim=0) -> List of Tensors

    + + + + +

    Splits a tensor into a specific number of chunks. Each chunk is a view of +the input tensor.

    +

    The last chunk will be smaller if the tensor size along the given dimension +dim is not divisible by chunks.
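The chunk sizes follow directly from ceiling division; a plain-Python sketch of the splitting rule on a list (illustrative only — torch returns views, and may return fewer than `chunks` pieces when the size is small):

```python
# Split xs into up to `chunks` pieces of size ceil(len(xs) / chunks);
# only the last piece can be smaller.
def chunk(xs, chunks):
    size = -(-len(xs) // chunks)  # ceiling division via negation trick
    return [xs[i:i + size] for i in range(0, len(xs), size)]

chunk(list(range(5)), 2)  # [[0, 1, 2], [3, 4]] - last chunk is smaller
chunk(list(range(6)), 3)  # [[0, 1], [2, 3], [4, 5]] - even split
```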

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_clamp.html b/static/docs/dev/reference/torch_clamp.html new file mode 100644 index 0000000000000000000000000000000000000000..4da2290b658618e5da7f48e46e791d31cd92a4f8 --- /dev/null +++ b/static/docs/dev/reference/torch_clamp.html @@ -0,0 +1,301 @@ + + + + + + + + +Clamp — torch_clamp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Clamp

    +
    + +
    torch_clamp(self, min = NULL, max = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    min

    (Number) lower-bound of the range to be clamped to

    max

    (Number) upper-bound of the range to be clamped to

    + +

    clamp(input, min, max, out=NULL) -> Tensor

    + + + + +

    Clamps all elements in input into the range [min, max] and returns +the resulting tensor:

    +

    $$ + y_i = \left\{ \begin{array}{ll} + \mbox{min} & \mbox{if } x_i < \mbox{min} \\ + x_i & \mbox{if } \mbox{min} \leq x_i \leq \mbox{max} \\ + \mbox{max} & \mbox{if } x_i > \mbox{max} + \end{array} + \right. +$$ +If input is of type FloatTensor or DoubleTensor, args min +and max must be real numbers, otherwise they should be integers.
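The piecewise rule above is easy to verify element-wise; a scalar plain-Python sketch covering all three signatures, min-only, max-only, and both (illustrative, not the torch API):

```python
# Clamp x into [lo, hi]; either bound may be omitted, matching the
# min-only and max-only variants described below.
def clamp(x, lo=None, hi=None):
    if lo is not None and x < lo:
        return lo
    if hi is not None and x > hi:
        return hi
    return x

clamp(0.9, lo=-0.5, hi=0.5)  # above the range: 0.5
clamp(0.1, lo=-0.5, hi=0.5)  # inside the range: 0.1
clamp(-2.0, lo=0.5)          # min-only variant: 0.5
```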

    +

    clamp(input, *, min, out=NULL) -> Tensor

    + + + + +

    Clamps all elements in input to be larger than or equal to min.

    +

    If input is of type FloatTensor or DoubleTensor, value +should be a real number, otherwise it should be an integer.

    +

    clamp(input, *, max, out=NULL) -> Tensor

    + + + + +

    Clamps all elements in input to be smaller than or equal to max.

    +

    If input is of type FloatTensor or DoubleTensor, value +should be a real number, otherwise it should be an integer.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_clamp(a, min=-0.5, max=0.5) + + +a = torch_randn(c(4)) +a +torch_clamp(a, min=0.5) + + +a = torch_randn(c(4)) +a +torch_clamp(a, max=0.5) +} +
    #> torch_tensor +#> 0.0344 +#> -0.8505 +#> 0.5000 +#> 0.0837 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_combinations.html b/static/docs/dev/reference/torch_combinations.html new file mode 100644 index 0000000000000000000000000000000000000000..1ff51b1776c3b8fb19348451277e3b6a53fd2d76 --- /dev/null +++ b/static/docs/dev/reference/torch_combinations.html @@ -0,0 +1,270 @@ + + + + + + + + +Combinations — torch_combinations • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Combinations

    +
    + +
    torch_combinations(self, r = 2L, with_replacement = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) 1D vector.

    r

    (int, optional) number of elements to combine

    with_replacement

    (boolean, optional) whether to allow duplication in combination

    + +

    combinations(input, r=2, with_replacement=False) -> seq

    + + + + +

    Computes combinations of length \(r\) of the given tensor. The behavior is similar to +Python's itertools.combinations when with_replacement is set to FALSE, and to +itertools.combinations_with_replacement when with_replacement is set to TRUE.
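Since the behavior mirrors Python's itertools, the two modes can be illustrated directly (plain Python, not the torch API):

```python
import itertools

# with_replacement = FALSE: plain r-combinations, no repeated elements
pairs = list(itertools.combinations([1, 2, 3], 2))
# [(1, 2), (1, 3), (2, 3)]

# with_replacement = TRUE: elements may repeat within a combination
with_rep = list(itertools.combinations_with_replacement([1, 2, 3], 2))
# [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```

The second result matches the six rows printed in the example output below.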

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = c(1, 2, 3) +tensor_a = torch_tensor(a) +torch_combinations(tensor_a) +torch_combinations(tensor_a, r=3) +torch_combinations(tensor_a, with_replacement=TRUE) +} +
    #> torch_tensor +#> 1 1 +#> 1 2 +#> 1 3 +#> 2 2 +#> 2 3 +#> 3 3 +#> [ CPUFloatType{6,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conj.html b/static/docs/dev/reference/torch_conj.html new file mode 100644 index 0000000000000000000000000000000000000000..b2219db6d3931b13a27ec38ccfa6b3b7c05d5f90 --- /dev/null +++ b/static/docs/dev/reference/torch_conj.html @@ -0,0 +1,253 @@ + + + + + + + + +Conj — torch_conj • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conj

    +
    + +
    torch_conj(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    conj(input) -> Tensor

    + + + + +

    Computes the element-wise conjugate of the given input tensor.

    +

    $$ + \mbox{out}_{i} = conj(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_conj(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i))) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv1d.html b/static/docs/dev/reference/torch_conv1d.html new file mode 100644 index 0000000000000000000000000000000000000000..683fa21490e25fbd3644635fec76fabbc9728fe5 --- /dev/null +++ b/static/docs/dev/reference/torch_conv1d.html @@ -0,0 +1,4589 @@ + + + + + + + + +Conv1d — torch_conv1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv1d

    +
    + +
    torch_conv1d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  dilation = 1L,
    +  groups = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

    weight

    filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1

    groups

    split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    + +

    conv1d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

    + + + + +

    Applies a 1D convolution over an input signal composed of several input +planes.

    +

    See nn_conv1d() for details and output shape.

    + +

    Examples

    +
    if (torch_is_installed()) { + +filters = torch_randn(c(33, 16, 3)) +inputs = torch_randn(c(20, 16, 50)) +nnf_conv1d(inputs, filters) +} +
    #> torch_tensor +#> (1,.,.) = +#> Columns 1 to 8 -1.9392 5.0621 3.0437 -3.5636 -7.5239 0.8669 -1.8524 5.9130 +#> 2.7449 1.6015 -7.6731 -11.0644 5.8939 -1.2001 3.9313 -10.1132 +#> -11.4874 2.1056 -1.0310 -0.7005 12.9240 13.1916 -4.5442 -7.9363 +#> 6.0818 -9.9529 0.0051 0.6025 3.2260 4.7597 -4.4539 -10.4886 +#> -1.4388 -8.6063 1.5505 2.9923 2.8992 14.6708 6.9700 -3.7005 +#> -9.8520 12.1492 1.1354 4.2429 9.8903 -2.6861 -3.1849 -10.1582 +#> 0.9200 -8.3914 9.7954 0.3859 3.4860 1.9175 5.8735 -2.8894 +#> 4.4610 9.0711 17.1165 6.0199 2.8014 -5.3435 6.4462 4.5218 +#> 11.4783 -7.2977 1.7851 -4.6470 -1.2914 -9.0278 -1.2200 -6.6805 +#> -3.9618 1.4190 -2.4918 -10.8161 -8.2211 -5.6850 2.8294 13.3770 +#> -10.4270 0.3910 0.6235 -5.0395 21.1592 -8.8681 17.0434 -5.3047 +#> 1.7043 3.1305 2.8700 -7.3307 9.2678 -18.8262 -3.5185 -12.6776 +#> 1.6884 0.3176 -3.8358 -6.1441 -3.3914 1.6720 -7.0779 3.5488 +#> 3.2735 16.9618 17.5538 0.2881 -10.9703 -12.2581 3.3125 4.4180 +#> 8.6331 3.8098 5.7529 -1.6187 -7.9152 -8.1661 5.5171 -5.6580 +#> 0.6991 1.0305 -2.0063 -0.6770 0.6176 4.7773 0.3323 -3.2240 +#> -9.2985 -1.7995 -12.3496 -3.1609 5.0106 7.7908 -1.5697 2.7021 +#> 3.3292 5.3498 -8.4542 8.5496 3.7287 8.7445 -0.4681 -2.2675 +#> -2.9159 3.5349 6.1476 5.6574 4.7744 8.0564 10.0144 1.8210 +#> -0.5171 2.9428 0.2183 -5.6238 -0.0016 1.3590 -11.7849 5.4779 +#> 1.1912 -4.2315 6.3105 11.3349 -3.7946 -2.0639 -4.9689 -3.7005 +#> -0.1363 9.0812 4.5614 -11.8651 3.0685 -3.2603 3.8185 13.1912 +#> 3.5125 5.6862 12.7178 -2.9094 -0.3736 -8.8316 -1.4607 14.7057 +#> -1.3676 8.3705 4.6200 -3.2310 -0.7064 -8.3868 3.6637 -4.2383 +#> 5.9052 0.8962 -5.4688 -3.9047 3.9755 -7.4258 -3.7159 -4.1862 +#> 5.6656 -2.3616 4.2609 4.4746 -5.8625 0.7051 -10.9567 -3.4700 +#> 4.3938 -6.4445 2.0676 -6.3462 -6.4908 4.7513 4.0558 8.4510 +#> 0.3763 5.5158 1.8604 -8.6866 5.9295 4.9654 4.8272 2.5466 +#> 17.9655 8.7774 -1.5719 -14.2141 -20.2145 2.7196 9.0862 15.8173 +#> 5.4497 0.2770 2.3034 -5.7729 6.9052 -10.3341 6.1157 12.3847 +#> 
0.3482 -0.2238 -2.3005 2.0549 -6.7614 5.0227 1.1398 -2.9690 +#> 0.6327 2.3686 -12.6709 7.0034 -8.0163 -5.9293 -11.2388 -14.8064 +#> 0.2650 -7.3227 1.5752 -4.9176 3.5857 -0.4669 3.1913 9.6913 +#> +#> Columns 9 to 16 -0.6154 1.3591 -1.9612 0.4248 -1.6151 1.2296 4.3672 -4.2460 +#> -7.1212 -13.5480 -14.6091 -3.4657 5.4540 -4.5291 -9.4951 0.2237 +#> 1.6766 -3.5983 11.4963 6.9899 -8.0171 9.3252 -8.0778 8.0385 +#> -0.5801 0.8058 0.0117 6.5013 -14.7542 -4.9332 -5.9705 -4.4002 +#> -1.5819 5.1493 -1.1527 -3.3323 -0.4756 4.6146 7.0533 0.6563 +#> 5.3804 3.8913 -0.5100 -9.8165 0.2300 -3.5009 -6.6559 3.3777 +#> -0.0898 5.3951 -7.4505 -5.7412 -9.1515 -1.4072 3.7023 -0.0851 +#> 14.1511 -1.9829 3.4283 -18.5947 14.6786 11.9702 -3.5608 6.4718 +#> -4.5561 -4.5728 3.2706 11.1624 -8.9752 11.4919 2.2541 1.3316 +#> 4.0303 -4.7324 -12.9543 6.6408 -5.0110 4.1167 7.7090 2.6313 +#> 4.4952 -5.2520 1.7760 -6.0764 -6.2064 -1.1744 8.9036 -2.6773 +#> -2.8759 -5.6710 2.2459 15.7782 -6.0662 7.1497 -6.0907 -2.9721 +#> -9.1744 2.8917 1.9773 0.9547 -0.6993 -3.8925 -0.6842 2.3955 +#> 3.8871 -0.8197 -13.3794 -14.7574 0.7448 15.9117 -9.6009 -7.5170 +#> 2.6610 6.0994 -0.0780 -4.1576 -5.7099 -6.5641 2.3007 -2.9237 +#> 7.6397 6.8257 12.3299 1.2675 6.8290 -5.7057 -2.1383 10.3878 +#> -2.1706 -1.7073 5.0162 -6.2541 13.7454 12.6861 1.8961 0.4629 +#> -1.2440 -7.6432 -0.4992 -1.2575 10.5352 2.9736 -10.3638 5.8272 +#> 14.9095 4.4100 -2.4995 3.1485 3.0959 -0.2141 -9.3531 -1.4908 +#> -6.5168 -7.7695 -1.8647 -1.1292 -0.2509 -5.0355 1.0347 4.6722 +#> 11.7160 0.7139 9.5218 6.5213 0.6176 2.5592 -8.4594 11.1248 +#> 5.5684 3.4908 -1.9734 -7.9368 -3.8589 -3.0954 1.1574 2.2952 +#> 2.7959 3.8238 1.0927 3.6093 -8.5999 11.6195 -13.2602 -10.0014 +#> -14.1328 1.1295 2.3065 -4.9148 -2.6217 -4.5627 3.0950 -8.5041 +#> -10.2239 0.4371 4.8232 -8.7538 11.7149 10.3083 -8.6122 3.4551 +#> -5.8450 -2.0972 8.2066 9.2659 -6.3882 7.8466 5.1257 -12.5129 +#> 7.3825 7.2185 13.2157 14.9490 1.9005 -4.8808 6.6319 -18.2276 +#> -3.9029 1.7394 -3.1885 
0.6199 0.5334 2.0353 -3.9030 -3.5253 +#> -8.5593 14.4150 7.3401 11.3958 11.8662 5.8342 6.8932 -3.1852 +#> -2.5997 -0.6474 -2.3001 -10.4394 -3.2690 -3.9822 -1.4075 -1.2428 +#> 8.3899 6.7432 -2.6195 -1.0633 5.4585 -1.1791 -8.8594 8.3103 +#> -4.9787 -5.6692 0.3629 -9.5947 0.4275 -1.2912 6.3715 11.1544 +#> -7.0017 3.6194 4.6785 0.8878 -13.7756 -18.8658 -3.2674 -0.3311 +#> +#> Columns 17 to 24 -6.0395 -4.7292 -1.4530 -4.8371 -6.1932 -9.3966 0.8125 2.0888 +#> -0.5817 10.3544 -4.8773 -3.6496 -6.6131 6.7948 4.6345 -2.0666 +#> 14.9900 9.2408 5.8285 -1.6623 2.7469 2.4549 -1.3833 1.8332 +#> 14.1976 10.3182 -5.4717 1.2565 -0.1235 12.0829 1.2812 -7.6003 +#> 6.9293 -16.4129 2.7948 2.7357 8.2863 -4.6122 3.1267 6.8616 +#> -5.1138 1.7644 -6.3663 3.8744 -5.9589 -7.8464 -0.0432 0.4551 +#> 5.1275 -7.7724 -4.9620 -2.2237 1.0306 -7.3156 -10.0563 9.8865 +#> -14.1796 -9.8568 6.6058 9.1759 -11.6857 -16.6484 2.2584 1.1150 +#> 11.9193 -0.6228 3.4260 7.1265 -0.4470 3.1540 -2.8272 -7.4899 +#> 2.8415 -12.4887 -3.0011 0.7632 3.5436 -9.2418 -1.0072 1.4292 +#> -9.4365 5.8049 -1.2157 -3.8951 -5.8822 5.8152 4.6455 -0.5333 +#> 18.2522 -1.5752 -4.8467 1.0388 2.2340 5.2267 -1.0198 11.7728 +#> 17.5104 2.4909 -3.4580 0.9312 5.0226 -2.0512 -2.9678 -5.2989 +#> -13.4170 -21.2946 13.3405 -11.3831 -8.1909 -15.7700 -4.5563 8.0102 +#> -6.4040 16.9159 7.6974 -13.6738 0.2983 2.0692 4.1760 -3.5950 +#> 0.1877 3.2537 6.0387 -3.7541 -1.9845 6.3741 3.9154 0.3985 +#> 11.2750 13.8730 2.6284 -8.0898 4.9685 3.8585 -1.6134 0.7810 +#> -1.6800 -0.2496 2.2689 9.4880 2.9581 12.4249 14.5996 -4.2152 +#> -15.7468 4.1156 -3.2533 4.7198 -4.8878 -1.3850 3.2005 1.4606 +#> 7.0839 2.5130 -8.0207 -3.6627 -1.8095 -0.1537 4.1842 1.8173 +#> 2.4382 0.9138 8.8720 4.3209 10.5049 -9.9046 10.8212 -7.2577 +#> -0.4045 2.7304 -1.1687 -2.7728 -3.1842 1.7765 -0.3995 -4.3213 +#> -6.0051 -18.5712 9.9075 -0.0024 -6.7645 -8.8143 -4.6291 6.9310 +#> -9.9842 5.2304 2.5916 -4.3784 6.4098 -0.5394 -11.6019 -8.8339 +#> 11.7127 -5.1780 -0.0861 -4.5638 3.1813 
-0.6096 0.8383 13.1129 +#> 4.9336 -8.6516 -0.7450 -5.7607 -1.4587 -4.2274 -1.1462 -9.8302 +#> -2.3611 14.8394 5.1314 -8.4575 0.8626 4.2962 1.5338 -4.2622 +#> 2.2767 -2.0903 -2.6380 -5.4283 -2.6349 -1.3169 2.9036 -0.2084 +#> -10.6204 3.2733 7.2570 -4.0742 -12.2701 -6.4754 7.6803 -14.7380 +#> -5.0862 -8.7578 -2.1929 3.4163 -4.6247 7.9152 3.4955 -1.4093 +#> -6.3402 6.7090 4.4255 -3.5398 -6.3926 -3.1649 -1.6904 -1.0705 +#> 0.6983 3.3284 -1.1082 5.6871 4.6380 9.5359 0.1505 -9.2344 +#> 15.3411 1.7856 -0.4800 2.8846 0.5410 11.9808 8.6936 -2.9945 +#> +#> Columns 25 to 32 8.4758 3.2485 1.3263 -2.1658 7.2784 -10.1472 1.9357 -0.4875 +#> 4.0942 -6.9749 -13.5927 9.9495 -10.8870 -1.3890 3.6737 -3.3366 +#> -2.4957 -11.0305 -6.0567 -0.1481 9.4782 -5.2385 -5.3943 4.6520 +#> 2.2831 -3.7755 5.0576 1.4039 -6.0726 6.9990 7.9663 -10.6788 +#> 5.5881 -10.9483 8.5682 -11.4979 21.9150 -9.3776 -12.8329 -1.7029 +#> -12.9199 2.2646 1.3169 4.1906 6.0932 5.2066 -2.3822 -7.1427 +#> 0.8463 2.1142 3.4666 -6.1619 -7.3718 9.1168 -3.5694 1.0355 +#> 10.7768 2.4936 11.9295 1.5696 -10.3159 -2.7693 2.2250 -8.7409 +#> -1.3807 -5.8368 -15.0976 3.2400 -10.8282 3.8730 3.9030 -0.4394 +#> -9.3837 8.0380 10.6763 -8.6889 7.4962 -2.3875 -0.1039 3.1041 +#> 15.0613 -7.2106 -5.8828 4.8567 -0.0310 1.0219 -7.0270 -0.9296 +#> -1.4862 -14.9527 -4.5142 3.0598 -6.2907 -4.4331 14.7986 -8.1580 +#> 2.6705 6.5091 -1.2496 -6.2086 -2.7199 -2.8468 7.2839 3.7043 +#> 8.7040 -9.3363 11.9846 -8.2819 -6.8337 6.1654 -11.6724 -3.5310 +#> 14.9324 13.4981 -8.9747 -4.8106 -10.7791 12.1841 2.5933 0.9230 +#> 13.6497 -5.9087 7.7415 -0.5427 2.5481 4.3269 5.2614 -0.4958 +#> 5.9884 -9.1013 13.2570 -7.2317 -8.4937 -5.0591 -2.1794 2.3220 +#> 5.3694 -14.7493 -2.2215 3.0352 3.3232 -11.6518 3.3380 -2.8690 +#> 3.3107 -2.0289 -4.8124 -1.7406 -6.7335 1.2736 10.8123 -5.5763 +#> -8.7857 7.7820 1.7681 -5.7144 2.1087 -1.2935 6.2793 -2.0212 +#> -15.4372 -3.5755 0.4401 4.6833 2.8549 -6.9048 0.5300 6.9146 +#> -5.8586 17.0472 3.2858 1.9170 3.8974 0.8677 
-10.0318 16.0038 +#> -6.5340 1.3313 -3.3355 2.3734 -4.4620 1.1076 5.0707 -4.9228 +#> -1.1953 3.2453 6.1584 -9.9091 -8.1963 8.7176 -5.4719 -0.6555 +#> 9.7245 -5.9719 -5.5202 -8.8272 -2.5195 0.6149 3.3568 5.8306 +#> 10.0505 -13.1304 5.4540 -11.7057 -3.8276 -6.2989 5.6240 -16.6291 +#> 3.9620 -14.6415 -4.4895 5.0164 3.0707 -6.9295 6.5739 3.2594 +#> 6.4580 -9.0752 -8.0518 -3.2322 4.8367 -0.1585 1.1076 2.3886 +#> 16.6152 8.2122 -15.0584 1.5698 2.2651 -2.3922 10.5806 -2.1872 +#> 3.6539 6.7948 -3.7955 1.5235 -0.1542 2.7298 7.3141 0.8973 +#> -5.4835 3.2712 1.4302 -7.2277 -3.1467 7.3340 -1.2581 6.5278 +#> 8.8211 6.4511 0.4508 -2.5657 0.8169 -5.3580 0.0615 -0.6817 +#> -3.7658 -6.2467 -0.7776 11.3720 5.1722 7.1388 -7.9943 4.8153 +#> +#> Columns 33 to 40 8.6171 7.4242 1.8350 2.3095 1.4498 -8.4101 1.4130 0.0554 +#> -1.6881 -3.1905 -10.1875 -14.0532 2.1352 -0.6725 -7.3952 -1.2065 +#> -11.9393 -10.0697 -5.0396 5.4406 7.1564 5.4031 6.5729 7.1854 +#> -1.1113 -3.1415 -6.1276 0.8331 -2.2253 0.4918 -2.5745 -8.2295 +#> -1.8134 -13.2332 -12.2761 -7.6760 -8.7211 -0.2142 1.8931 10.3959 +#> -6.3212 9.9672 6.5620 1.7724 5.8659 -2.9735 -6.2767 -10.8691 +#> 6.6070 -14.5145 4.7604 2.6366 -1.2538 -2.4690 -2.2550 -11.2636 +#> 5.7745 1.3711 -1.4180 -12.2406 5.5435 -2.6493 3.3535 10.0547 +#> -2.2914 -4.7679 -2.6354 -5.2423 0.4516 2.4769 0.4755 5.5777 +#> 13.9415 0.2480 0.8020 13.4989 -7.3414 0.6097 -3.6648 -4.7836 +#> -1.9710 2.4796 -14.5737 3.7869 -4.3465 1.6049 8.0136 5.6078 +#> 8.9937 -10.2986 0.4741 0.9593 -0.0722 6.5673 -15.7288 -0.0970 +#> -3.0580 -4.8259 2.6330 4.5046 1.2412 1.2833 -6.8520 6.0733 +#> 19.7095 10.2863 -1.2508 7.7044 3.7236 -6.2476 -5.4432 6.9685 +#> -6.2856 5.0749 -4.6435 -5.6642 -6.4963 1.3583 5.8303 -5.2723 +#> -6.8327 7.1009 1.0545 13.0128 0.6232 -4.4436 5.3381 2.9114 +#> 4.9522 -7.9362 -16.4873 2.2234 1.4765 2.3503 9.0933 8.3663 +#> 6.3653 -3.8334 -7.6281 -11.2745 4.7442 -1.6257 7.3943 15.8016 +#> -7.4445 -1.4786 -2.2314 -12.0941 -0.6881 10.1857 2.8515 10.0926 +#> -5.7967 
-6.4754 4.8557 3.4277 3.8330 -1.9905 -18.7623 -12.2126 +#> -12.7489 -8.2549 4.3713 -7.0047 -0.9216 1.1699 2.6289 3.5111 +#> -6.1219 4.8058 -3.4355 7.8423 -1.7072 -9.7325 -6.4437 4.2030 +#> 7.7782 4.0045 19.9890 3.0283 4.5056 16.5469 -5.7926 3.4264 +#> 13.4204 10.1940 -1.3420 7.6205 1.3880 -2.0436 10.2775 6.1513 +#> -5.7765 -6.8705 8.4631 -1.6095 8.7580 -0.0233 -11.4192 -4.0085 +#> 10.2234 -5.4722 -5.4740 7.0522 12.6428 -0.8548 8.3546 14.4354 +#> 5.5446 9.5760 -7.5310 7.8422 -5.6884 8.1071 10.2373 9.7893 +#> 2.6095 1.1579 -0.6507 4.8192 3.0473 -5.1181 -4.4558 6.0310 +#> 9.2626 6.7385 14.8097 -3.9879 4.5355 -14.5104 0.1930 2.5541 +#> 4.9256 4.8284 10.0454 -0.5807 -4.0650 -3.6238 -15.5092 -1.8986 +#> -0.5726 0.3441 5.7173 8.2974 3.6005 -8.6032 -1.3940 -12.4486 +#> 4.1821 9.0040 -3.3337 1.6628 1.3810 -5.4552 0.3616 -4.8888 +#> -0.4909 6.2546 -0.9669 6.9366 -0.3558 -6.7471 -3.0725 -12.0050 +#> +#> Columns 41 to 48 -8.2843 4.1627 -9.2066 1.1866 -8.6254 -2.0750 1.5929 -0.6789 +#> 0.9104 -15.8556 -8.8004 -2.5957 1.4070 -12.5140 -3.4297 -1.8037 +#> 1.6306 -2.2232 -1.4897 24.0795 13.0168 7.4404 -5.5602 -1.6311 +#> -5.1574 17.7188 6.3567 -8.4696 -0.9507 6.3167 -26.5885 14.2686 +#> 7.1526 6.4220 -7.5260 8.3354 -17.7470 6.9888 5.8661 5.3227 +#> 7.3520 9.0862 5.2303 4.3617 2.3903 -13.2851 -8.7022 9.4803 +#> -1.0648 7.2604 -9.7395 -8.9616 2.4903 -9.9758 4.4388 5.3299 +#> 6.3518 12.8072 -9.5355 -0.6330 8.0765 -3.3519 25.0816 11.4433 +#> -11.2174 -5.8939 5.3469 3.2387 8.7287 15.2322 -0.9303 10.4744 +#> 14.3205 -6.1806 -4.3585 12.0610 -18.1874 0.7745 -2.0500 -6.2361 +#> -1.9409 6.0303 -14.0147 3.1559 5.8128 -9.2374 2.7300 -8.4599 +#> 3.3133 -0.6815 -20.0191 3.4106 1.8044 -1.5587 -4.6563 1.2868 +#> 3.8531 -4.9154 8.3035 -13.7146 3.5576 5.2042 -20.7035 11.6480 +#> -17.5621 6.9807 8.8214 -6.4896 -8.2771 11.1657 1.2894 7.0215 +#> 0.3366 7.2566 -1.7966 -23.2965 6.2045 6.7148 -10.0865 12.0572 +#> -1.8600 10.6898 -0.7430 7.6799 11.3518 2.5778 -0.5178 -5.7240 +#> -2.2901 5.0679 12.7800 
#> ... [ lengthy numeric tensor output truncated: slices (2,.,.) through (4,.,.), columns 1 to 48 omitted ] ...
11.4617 3.9088 15.7300 +#> 5.4615 11.3182 -0.9004 -10.1088 -5.5132 -12.9972 -13.6019 -6.4688 +#> -0.8671 -2.3171 -21.9437 -7.3357 -0.8372 -5.7185 -7.5967 -4.5414 +#> 9.1727 -10.9384 -5.1395 -4.3122 8.1625 -3.0933 4.3229 5.8023 +#> 11.9205 3.1939 -7.0176 -4.2303 3.2226 -2.9097 -0.5847 -11.5925 +#> -0.5190 5.2495 5.5915 6.6845 6.1935 -6.8317 6.3080 3.2115 +#> 8.1689 12.3485 -7.0101 0.4498 14.1312 16.4469 13.2634 -7.4017 +#> -11.1469 -10.3666 0.5546 4.3177 -0.9639 5.5811 -5.3490 1.2830 +#> -0.8181 -0.8712 -3.5820 2.8442 2.0308 -8.6577 2.0294 -3.3143 +#> -2.6309 -10.4239 -11.9457 3.5868 -1.8183 3.0974 -2.9380 4.6135 +#> 4.8803 -10.9800 -10.3520 -6.0651 -6.1950 -1.9074 0.8738 -4.3113 +#> 0.1633 -2.6691 -9.3632 -3.7809 3.1943 -2.9829 -4.8624 -1.2437 +#> 3.5621 10.9607 0.5768 16.1664 1.4864 3.3760 15.1032 -4.1169 +#> -7.9020 4.5644 7.8298 -0.8661 -2.3574 -2.6209 -6.4854 13.1534 +#> +#> Columns 41 to 48 4.2696 -10.3398 9.8238 4.2835 7.1176 3.2086 -4.8103 -4.7768 +#> 4.6720 -12.1254 8.2943 -9.1417 -7.3231 -14.2636 5.1581 6.0995 +#> -9.4891 -17.9925 15.2434 -13.9374 -13.2522 -2.1237 -10.3948 14.2589 +#> 9.9647 -19.6312 14.1153 -18.9570 -12.0324 -1.3652 7.8780 8.0777 +#> 3.3484 -8.8792 7.4224 -4.1295 -7.4020 -2.4369 -1.8368 -11.3500 +#> -4.8635 -10.7434 -3.5431 -10.1043 -16.2888 1.6730 8.8689 21.3476 +#> -9.5860 5.9653 14.7955 -4.6409 0.7384 -10.4116 -0.7186 -5.9172 +#> -8.3793 4.7615 -4.9623 7.8984 0.9772 -0.2394 -4.7146 1.8378 +#> 3.5802 -2.7157 13.5225 -8.0359 0.0310 -6.2094 -5.0738 2.6893 +#> 7.5559 6.5254 9.2457 -1.8114 8.8511 8.7851 -2.5641 6.4545 +#> -8.1791 -4.4865 -1.6442 17.4838 -14.0743 10.0539 -14.0073 1.7466 +#> -10.7323 -4.3227 0.6978 -10.4341 2.5087 -12.0362 0.2711 -6.4079 +#> 4.7360 0.7439 1.9109 -3.7773 -2.6147 -6.3711 12.8484 -5.3801 +#> 20.3780 -2.7399 14.9806 -1.5724 2.9260 -0.0032 -14.2525 1.7104 +#> -2.2088 -1.3158 -2.1839 6.6328 -9.5388 4.3666 -1.6111 0.6622 +#> 7.3581 -6.4935 1.7580 5.1221 -7.6628 10.5954 -4.7309 14.6242 +#> -9.5354 4.9613 -1.1469 
-4.6229 -1.4870 -1.8687 -9.3710 -6.9784 +#> 12.1129 -17.8627 5.2556 -9.1915 -4.8105 -9.4641 4.0789 2.8252 +#> 6.8028 -9.4066 -8.1274 -0.9585 -2.7877 -12.1167 4.1587 2.0131 +#> 1.5452 20.2106 -9.0438 1.1733 7.0840 -5.9307 -0.9919 1.4335 +#> -5.7004 -3.4985 -9.4653 -19.3110 -7.3291 6.2590 -7.9861 10.7984 +#> -4.4106 20.2895 -2.6798 -4.3932 7.2682 3.7751 -7.3893 7.0946 +#> -2.6276 5.8938 7.2839 -17.2202 16.3209 4.5467 8.6853 0.1249 +#> -9.7645 2.4152 -12.1889 18.6565 8.5801 1.8104 0.8734 5.1252 +#> -0.1649 10.0940 5.6633 -9.5501 11.7118 -16.7442 -4.4452 1.0700 +#> 5.2197 -8.8343 14.1881 0.3360 9.9435 0.9216 3.0978 -11.8234 +#> 0.1632 -3.7635 0.4943 7.8677 -13.9478 6.3833 -2.3129 4.2759 +#> 7.4062 -6.4231 7.2962 -7.2470 0.5831 -5.7529 -2.3694 7.5074 +#> 13.1673 -10.4342 -0.7006 0.6684 11.4740 -6.3343 10.9019 -3.7141 +#> 1.2306 5.4821 -7.6930 7.1085 4.4791 -4.2813 4.3127 -9.0234 +#> 10.4695 6.8462 -3.1281 -12.0219 3.2788 -2.4583 0.0424 9.2522 +#> 4.2526 -5.3190 5.6858 13.3091 16.0954 1.7858 2.0846 -3.7272 +#> -5.0684 -4.0662 -5.4795 -4.0989 -8.7127 6.9814 -7.9572 12.0195 +#> +#> (5,.,.) 
= +#> Columns 1 to 6 5.8336e+00 1.4469e+00 1.0283e+01 2.8077e+00 -3.6094e+00 -1.5380e-01 +#> 4.4940e+00 7.0896e+00 -2.2995e-04 -9.4848e+00 3.5691e+00 -6.2526e-01 +#> -7.6153e+00 -8.3259e+00 -5.7668e+00 6.1621e+00 1.3068e+01 1.2865e+00 +#> -2.9380e+00 -6.5334e-02 -2.3914e-01 -2.6353e+00 5.0308e+00 -2.9705e+00 +#> 2.1598e+00 -2.5440e+00 -2.9192e+00 9.8470e+00 2.7998e+00 -6.0515e-01 +#> -2.2917e+00 -3.6183e+00 1.3518e+01 1.4176e+01 8.4649e+00 1.1038e+01 +#> -9.1844e+00 1.2930e+00 2.3062e+00 -7.7915e+00 -5.1413e+00 -2.6392e+00 +#> -2.3330e+00 -6.8423e+00 5.9670e+00 8.3960e+00 -6.5738e-01 1.0778e+01 +#> 2.1833e+00 -3.8961e+00 -1.4628e+01 -7.7291e+00 9.0423e+00 -1.1237e+01 +#> -5.5193e+00 4.8869e+00 1.0202e+01 -1.2893e+01 -8.4842e+00 7.4763e+00 +#> -4.1294e+00 -7.2557e+00 1.0769e+01 -7.6877e+00 4.7998e+00 8.2344e+00 +#> 4.5731e+00 2.8154e+00 -3.8774e+00 -2.7598e+00 -4.8445e+00 3.2821e+00 +#> 8.7788e+00 -2.6621e+00 -1.2134e+01 3.6804e-01 -1.8133e+01 -1.3970e+01 +#> 1.1779e+01 -4.0730e+00 1.5088e+01 2.6929e+01 -2.6832e+00 9.8234e-01 +#> -3.6524e-01 -3.1626e+00 -5.6657e-01 -1.2514e+01 -1.3301e+01 -1.1408e+01 +#> -1.0719e+01 -8.0286e+00 1.0051e+00 -3.0784e-01 -7.7974e+00 8.4203e+00 +#> -5.8883e+00 -3.9020e+00 -1.9682e+00 7.4097e+00 -6.7653e+00 -1.0275e+00 +#> 8.7535e-01 -2.5602e+00 -1.4949e+00 3.3937e+00 6.4309e+00 6.5225e+00 +#> 7.1358e+00 2.9573e+00 -1.0967e+01 -9.2460e+00 7.7102e+00 -7.1687e+00 +#> 2.9202e+00 3.9027e+00 2.9854e+00 -1.0808e+01 -1.0620e+01 5.8729e+00 +#> -8.5519e+00 -3.5855e+00 1.1963e+01 -5.1478e+00 -1.5484e+00 3.6894e+00 +#> -4.3179e+00 3.8282e+00 1.0106e+01 -3.9487e+00 5.7075e-01 -3.4411e+00 +#> 1.0022e+01 1.6128e+00 2.7114e+00 1.1418e+01 4.2626e+00 6.4562e+00 +#> 7.8232e-01 1.2545e+01 -6.5005e+00 -1.1441e+00 5.2024e+00 -8.6110e+00 +#> 9.8817e+00 -1.0433e+01 -1.1409e+01 3.2136e+00 -3.5242e+00 -1.0030e+01 +#> 9.1674e-01 9.9108e+00 -7.4078e+00 1.3646e+01 7.4239e+00 -7.9529e+00 +#> -1.8578e+01 3.7623e-01 6.8215e+00 9.9675e-02 -4.9704e+00 -5.8879e+00 +#> 
1.9956e+00 4.8642e+00 2.0417e+00 1.7247e+00 2.2569e+00 3.4207e+00 +#> 7.1978e+00 5.8948e+00 4.4842e+00 -1.3967e+01 -4.8855e+00 -8.6459e-01 +#> 1.0278e+01 -7.0889e+00 1.7776e+00 1.8428e+00 3.3566e+00 1.4120e+00 +#> -9.7108e+00 -3.2075e+00 5.6131e+00 -6.6512e+00 -2.8241e+00 -7.9182e+00 +#> 4.5832e+00 -2.6150e+00 -1.1328e+01 -2.5173e+00 -6.5660e+00 -1.0109e+01 +#> -1.2307e+01 1.3603e+00 1.0558e+01 7.8136e-01 -1.3191e+00 5.4988e+00 +#> +#> Columns 7 to 12 -2.0645e+00 -6.6983e+00 -6.1994e+00 -1.6564e+00 -4.5502e+00 -2.8016e+00 +#> -1.3859e+01 -3.6551e+00 2.5218e+00 4.3770e+00 -1.4367e+01 -6.3245e+00 +#> 1.2623e+01 1.2601e+01 6.3387e+00 2.2144e+01 3.9178e+00 -3.8783e+00 +#> -6.7408e+00 1.9949e+01 1.5427e+01 -4.1423e+00 -6.7698e+00 -6.9732e-01 +#> 1.1015e+01 2.6099e-01 -8.3667e+00 -9.5767e+00 9.1215e+00 -3.1890e+00 +#> 3.6462e+00 8.5545e+00 6.3157e+00 1.6946e+01 -1.4022e+01 -4.6502e+00 +#> -5.3064e+00 8.5455e+00 1.0837e+00 -2.1713e+00 1.0200e+01 1.3736e+01 +#> 1.0995e+01 -6.3629e+00 -1.4443e+01 -2.3231e+00 5.3971e+00 1.7415e+01 +#> -1.0402e+01 1.0123e+01 -2.2828e+00 5.5813e+00 7.9876e+00 5.5451e+00 +#> -4.6158e+00 -9.5204e+00 -1.0898e+00 -5.7203e-01 -1.0733e+01 2.5479e+00 +#> -1.5395e-01 -2.5729e+00 -6.5111e+00 -6.7184e+00 -4.9331e+00 1.2344e+01 +#> -1.2416e+01 -5.9973e+00 -4.7462e+00 7.3203e+00 1.8486e+00 -2.4777e+00 +#> -1.2116e+00 5.0463e+00 1.1736e+01 -4.4435e-01 -1.2165e+00 1.0625e+00 +#> -2.2685e+00 -2.2781e+01 -1.9375e+01 -3.1510e-01 6.4274e+00 2.7375e+00 +#> -1.2431e+01 8.4353e+00 4.3142e+00 -9.1101e+00 4.7028e+00 1.2753e+01 +#> 1.3462e+01 2.9800e+00 1.1975e+01 3.4457e+00 -5.2758e+00 -4.2949e+00 +#> 8.3694e+00 -3.9662e+00 -3.5001e+00 -4.9132e+00 2.8551e+00 9.5853e+00 +#> 1.1037e+01 2.3242e+00 -4.4274e-01 -1.8270e+00 -8.3952e+00 -8.1380e+00 +#> -4.9490e+00 -2.2897e+00 -3.0064e+00 9.1670e-01 1.5170e+01 6.6728e+00 +#> -8.2737e+00 -1.6769e+01 4.1306e+00 5.2120e+00 3.9413e+00 -4.8320e-01 +#> 2.8436e+00 -5.8850e+00 -3.9048e+00 9.0619e+00 6.8739e+00 1.0564e+01 +#> 
7.0394e-01 5.0636e+00 1.0411e+01 1.8480e+01 2.1017e+00 4.0144e+00 +#> -6.5799e-01 -7.0307e+00 -9.2050e+00 -2.4637e+00 1.6049e+00 1.5621e-01 +#> -1.7425e+01 2.3908e+00 1.0688e+01 -1.8332e+00 -4.7906e+00 -3.3172e+00 +#> -3.5468e+00 -6.2092e+00 -9.6285e+00 2.9284e+00 8.3679e+00 -2.9525e+00 +#> -7.5318e+00 -3.8661e+00 -2.3946e+00 -2.8203e+00 1.3830e+00 -1.0135e+01 +#> 5.2110e+00 -9.7605e+00 8.4143e-01 9.1831e+00 1.2640e+01 -1.0897e+01 +#> 3.0835e-01 -2.8095e+00 5.8838e+00 1.2429e+01 -3.4084e+00 -7.6109e+00 +#> -1.6576e+01 -5.6398e+00 9.6835e+00 -5.9646e-01 4.0844e+00 8.8613e+00 +#> 1.3851e+00 6.4901e+00 7.8120e+00 -1.1958e+01 -2.6831e-01 6.2586e+00 +#> 2.4582e+00 6.5239e+00 4.0140e+00 7.7614e+00 6.1779e+00 5.0389e+00 +#> 4.8228e-01 9.7339e+00 -3.6486e+00 -9.3653e+00 -1.9010e+01 -4.9112e+00 +#> 1.7829e+00 6.7601e-01 1.4006e+01 1.0871e+01 -6.7866e+00 -9.9635e+00 +#> +#> Columns 13 to 18 4.0804e+00 -4.6274e+00 -4.0114e+00 -5.9758e+00 6.3923e+00 7.5205e+00 +#> -3.7319e+00 2.9354e+00 -4.8929e+00 1.6966e+01 8.5034e+00 5.2201e+00 +#> 4.6159e+00 1.1864e+01 -4.0793e+00 -3.4063e-01 9.1695e+00 2.1266e+00 +#> -3.8896e+00 -1.0185e+01 1.2285e+01 -6.5640e+00 3.7502e+00 2.7837e+00 +#> 4.8891e+00 -5.5146e+00 1.8514e+01 3.9871e+00 9.7924e-01 1.3543e+00 +#> 1.0535e-01 3.7238e+00 -1.6232e+01 -1.2843e-01 7.9798e+00 1.4718e+01 +#> 1.0952e+00 1.9138e+00 -2.8797e-01 -8.5654e+00 9.5860e+00 1.1858e+00 +#> 4.6907e+00 3.2192e+00 -1.1024e+01 2.7595e+00 5.3012e+00 3.4819e+00 +#> -5.4489e+00 6.3026e-01 1.0845e+01 -1.5170e+01 -4.0282e+00 -1.3003e+01 +#> 9.3268e-01 -1.1519e+01 -1.8905e+00 -1.4264e+01 -7.1144e-01 4.4427e+00 +#> -1.0634e+01 2.2329e+01 -1.6086e+01 1.0390e+01 4.2664e+00 8.4624e+00 +#> 5.9640e+00 -7.8229e+00 2.2603e+00 2.0475e+00 1.4151e+01 6.4597e-01 +#> 5.0788e+00 -9.5068e+00 -3.3177e+00 -7.0264e+00 3.7865e+00 6.8228e+00 +#> 1.2794e+01 -1.0997e+01 7.9029e+00 -1.6953e+01 -6.4281e+00 -2.7841e+00 +#> -9.8346e+00 -6.9945e+00 2.0859e+00 5.6787e+00 2.7236e+00 9.3853e+00 +#> 6.5656e+00 
-1.4124e+00 -6.5000e+00 9.4045e+00 -4.7943e+00 4.2398e+00 +#> 9.9943e+00 -1.2300e-01 -8.9361e+00 1.6648e+00 4.5557e+00 3.6385e+00 +#> 5.6028e+00 3.6349e+00 -7.6901e-01 1.3754e+01 -1.9914e+00 8.9277e-02 +#> -1.6124e+00 2.1531e+00 2.8876e+00 7.7484e+00 -1.1959e+01 -9.7570e-01 +#> -3.1444e+00 5.9413e+00 -6.1925e+00 -1.2860e+01 2.8159e+00 -6.0415e+00 +#> -5.8180e+00 4.9203e+00 -1.8490e+00 6.6601e-01 6.4857e-01 -7.6449e-01 +#> -1.6410e+01 2.0862e+00 -4.4882e+00 2.3420e+00 -6.1667e-01 -1.0148e+01 +#> 5.1785e+00 -7.9062e+00 5.6711e+00 -5.0990e+00 2.1522e+00 -1.4937e+00 +#> 7.2697e+00 -6.1065e+00 5.7418e+00 8.9145e+00 -1.0260e+01 -1.1386e+00 +#> -3.6550e+00 -1.0537e+00 -1.2647e+01 -6.2766e+00 -1.7208e+00 -9.7136e+00 +#> 9.5335e+00 -3.6806e+00 1.8300e+01 4.3355e+00 9.6918e+00 -3.4956e+00 +#> -4.4250e+00 4.3727e+00 -1.2206e+00 -1.1575e+01 -6.9490e-01 4.0831e+00 +#> 3.5346e+00 2.9208e-01 -3.0783e+00 3.0060e+00 -1.8771e+00 -2.6873e+00 +#> -1.2386e+00 -1.7936e+00 -1.4961e+01 -1.1672e+00 -3.5595e+00 1.6893e+00 +#> -9.2694e+00 9.4057e-01 -6.9681e+00 6.0922e-01 -8.3566e+00 -7.3884e+00 +#> -7.1675e+00 1.3295e+00 -4.0344e+00 -2.7340e+00 -8.4022e+00 -4.8550e+00 +#> 4.4742e+00 4.4855e+00 1.8283e+00 -4.6183e+00 -5.4392e+00 1.9086e+00 +#> -6.6812e+00 1.1725e+01 -7.5816e+00 -1.5030e+00 -5.6127e+00 -6.3232e+00 +#> +#> Columns 19 to 24 1.9075e+00 -5.1671e+00 -2.8861e+00 3.8863e+00 -1.0865e+00 -4.4868e+00 +#> -2.7110e+00 -2.7595e+00 7.3627e+00 6.8774e+00 5.3323e+00 -2.3386e+00 +#> 6.4896e+00 -5.9896e+00 6.2405e+00 4.5535e+00 -5.5806e+00 2.2119e+00 +#> 3.6726e+00 3.6630e+00 1.3051e+01 5.6107e+00 -1.4811e+00 6.2675e+00 +#> 9.7341e+00 1.8405e+00 -3.9884e-01 4.5383e+00 7.2716e+00 -8.9318e+00 +#> 1.6564e+00 -8.6805e+00 1.4102e+00 4.6522e-01 -6.2129e+00 2.2132e+01 +#> -3.2298e+00 -8.6574e+00 6.4470e+00 -6.9426e-01 -3.5898e+00 -1.3447e+00 +#> 2.7328e+00 -4.1683e+00 -6.0934e+00 -1.8344e+00 7.0255e+00 2.4824e+00 +#> 4.2133e+00 2.8368e+00 2.2248e+00 -2.2850e+00 6.1526e+00 -5.8541e+00 +#> 5.0729e+00 
4.4385e+00 -7.1746e+00 1.5144e+00 -9.0863e-01 5.8397e-01 +#> -2.6362e+00 -5.5934e+00 5.1918e+00 1.8751e+01 -2.0175e+01 -3.6741e+00 +#> -3.5194e+00 -1.6747e+00 4.5705e+00 -2.7844e+00 8.1209e-01 -8.7155e+00 +#> -2.0420e+00 2.1456e+00 -2.7647e+00 -1.0230e-01 5.2683e+00 -5.0584e-01 +#> 5.6779e+00 4.0394e-01 -7.8927e+00 1.2400e+01 -8.8077e+00 -5.2084e-01 +#> -1.3497e+01 5.7529e+00 3.1664e+00 3.0371e+00 -9.5848e-01 2.2515e+00 +#> 4.5942e+00 -6.2759e+00 -2.5370e+00 3.0009e-01 5.1784e-01 9.5865e-01 +#> -4.3198e+00 3.6395e+00 7.1711e+00 5.6559e+00 -7.3939e+00 1.6095e+00 +#> 7.7820e+00 1.4765e+00 4.6742e+00 4.6878e+00 3.5061e+00 1.2920e+00 +#> -6.8726e+00 3.3282e+00 -4.0796e+00 3.6112e+00 -3.8519e+00 -4.4891e+00 +#> -6.0432e+00 3.8831e+00 -2.7348e+00 -5.3159e+00 -1.5399e+00 -2.8460e+00 +#> -2.5613e-01 7.7684e+00 -3.2013e+00 -8.2297e+00 2.1071e+00 2.4986e+00 +#> -3.3099e+00 4.5167e+00 -7.7357e+00 -1.7795e+00 5.5184e+00 5.7502e+00 +#> -2.8577e+00 3.6075e+00 -4.3231e+00 -3.0083e+00 2.1724e-01 -3.0792e+00 +#> -5.4847e-01 -8.1615e+00 -8.6641e+00 -7.0225e-01 -2.9733e+00 -1.7660e-01 +#> 6.4026e-01 -3.5293e+00 4.6284e+00 -6.9687e+00 9.9917e-01 -1.6749e+00 +#> 9.1805e-01 5.7766e+00 6.8738e-01 7.5628e+00 -4.7703e+00 -1.4434e+01 +#> 3.9465e+00 7.3027e+00 -8.6644e+00 1.0841e+01 -1.1006e+01 -1.1051e+01 +#> 4.9543e+00 -5.5705e+00 -2.7287e+00 -1.6229e-01 3.6888e-01 -5.1247e+00 +#> 4.2543e+00 -4.6977e+00 -1.2617e+01 -6.9965e+00 -1.3153e+00 7.0018e-01 +#> -9.3969e+00 4.5462e+00 -3.1446e+00 1.9827e+00 -7.9423e-01 3.1600e+00 +#> 1.6407e+00 3.5578e+00 4.7857e-01 -7.9521e+00 -2.6705e+00 3.1548e+00 +#> -4.9212e+00 -5.4186e+00 -7.7663e+00 -1.7866e+00 7.3785e+00 3.7391e+00 +#> 1.0988e+01 -3.5739e+00 2.3192e+00 7.7857e-01 -6.1537e+00 -1.6571e+00 +#> +#> Columns 25 to 30 3.5060e+00 -6.6555e+00 6.6771e+00 -1.2302e+00 -1.3100e+00 -1.2005e+00 +#> -4.3183e+00 1.8904e+00 9.3727e+00 -1.4739e+01 4.6993e+00 -3.0541e+00 +#> 1.2514e+01 -1.2208e-01 3.9735e+00 -6.9719e+00 -1.6826e+00 8.0822e-01 +#> -6.0562e+00 
-8.4440e+00 2.2050e+00 -4.1130e+00 3.6959e+00 9.7097e-01 +#> 9.8301e+00 -1.2278e+01 7.3278e+00 6.7034e-01 -5.4279e+00 1.7780e+00 +#> -7.4711e-01 1.1437e+00 1.3250e+01 -2.1135e+00 -6.4001e+00 -1.5373e+00 +#> 2.6650e+00 -7.7118e+00 7.3130e+00 2.5775e+00 8.6126e+00 -7.0690e+00 +#> 1.0283e+00 3.5418e+00 3.3780e+00 7.3297e+00 -9.9980e+00 1.2897e+00 +#> 3.5313e+00 4.8007e+00 -2.0213e+00 -7.9567e+00 1.1169e+01 -6.5328e+00 +#> -1.3208e+00 -1.5740e+01 1.4278e+01 -1.6370e+01 1.0310e+01 8.5325e+00 +#> -4.1625e-01 -5.1591e+00 8.1490e+00 4.5922e+00 -4.2217e+00 -8.2004e+00 +#> 1.0554e+01 5.0466e+00 -4.3511e+00 1.6601e+00 9.6846e-02 -4.3014e+00 +#> 4.2997e+00 -1.0469e+01 -3.0786e+00 3.6426e+00 3.3674e+00 6.3131e+00 +#> 7.9593e+00 -1.7446e+01 1.0004e+01 3.2395e+00 -1.7300e+01 3.9032e+00 +#> 1.8155e+00 -7.1766e+00 -1.7172e-01 1.5534e+01 -7.6793e+00 -7.3116e+00 +#> 4.7103e+00 -2.9560e+00 -6.8889e+00 6.8551e+00 -6.6955e+00 2.3126e+00 +#> 4.2955e+00 -4.4392e+00 1.3627e+00 1.7298e-01 -9.5954e+00 1.4855e+01 +#> -6.1302e+00 2.0692e-01 -2.7076e+00 -1.4112e+01 -7.9239e+00 7.6179e+00 +#> -8.4681e+00 -5.5159e-03 -6.0258e+00 5.8443e+00 -1.1785e+01 -3.1252e+00 +#> -4.6787e+00 2.4320e+00 2.2983e+00 5.3531e+00 1.4982e+00 3.5188e+00 +#> 3.6223e+00 1.5777e+00 5.3822e-01 -6.7858e+00 -3.5065e-01 -1.8204e+00 +#> 8.0087e+00 -2.8586e+00 4.0240e+00 -1.1221e+01 4.6164e+00 -6.6989e+00 +#> -4.8832e+00 -3.2109e+00 -3.7940e+00 3.4574e+00 2.9035e+00 -3.2206e+00 +#> 1.0745e+01 -3.6220e+00 -4.1194e+00 1.6118e+01 -5.1622e+00 3.7233e+00 +#> -1.7687e+00 4.3935e+00 -3.8096e+00 -2.1342e+00 -8.3650e-01 -3.9519e+00 +#> 1.4233e+01 -5.6465e+00 6.9745e-01 7.9877e+00 2.0713e-01 9.6992e+00 +#> 4.9125e+00 -1.0809e+01 6.4334e-03 -1.4920e+00 -9.3754e-01 8.2461e-01 +#> 6.0181e+00 -5.9128e+00 7.4054e+00 -7.4188e+00 -4.3923e-01 -7.3626e+00 +#> 5.6962e-01 -1.0416e+00 7.1098e+00 -1.2487e+00 1.3191e+01 -1.4202e+01 +#> 3.8604e-02 1.5730e+00 -1.1650e+01 4.1796e+00 -8.5456e+00 -3.7950e+00 +#> -4.3958e+00 -2.9770e-01 4.9461e+00 
-1.1407e+00 2.4189e+00 -3.2541e+00 +#> 1.1090e+00 2.1914e+00 -1.9405e-02 4.0812e+00 -1.3151e+00 5.7624e+00 +#> -5.2244e+00 -6.3511e-01 -6.2743e+00 -8.5156e+00 3.3536e+00 -1.6210e+01 +#> +#> Columns 31 to 36 6.1982e+00 5.5916e+00 2.7885e+00 1.0189e+01 1.1431e+01 4.9566e+00 +#> 1.0178e+01 4.4730e+00 1.5423e+00 3.3104e+00 5.3882e+00 3.8084e+00 +#> -2.0036e+00 -1.2035e+01 8.6341e-01 -4.6895e+00 -4.4213e+00 -2.1763e+00 +#> -8.4662e+00 -7.7066e+00 1.1197e+01 -5.2408e+00 2.4517e+00 5.3138e+00 +#> 5.7486e+00 -1.4716e+01 3.6035e-01 2.1045e+01 -1.1319e+01 1.5318e+00 +#> -9.4682e+00 2.8308e+00 1.3540e+01 -1.7959e+00 1.5806e+01 1.0672e+01 +#> 9.4510e+00 -3.5718e+00 -8.4978e-01 4.5584e+00 1.2796e+01 -1.5654e+00 +#> -1.1028e+01 7.9593e-01 1.3840e+01 -2.7557e+00 6.1373e+00 -1.2787e+01 +#> 7.2767e+00 -1.7013e+01 -1.1136e+01 -8.7254e+00 -6.3872e+00 -1.1509e+01 +#> 5.8077e+00 1.5022e+00 1.7553e+00 2.4263e+00 3.5528e+00 -5.0187e+00 +#> 1.7892e+01 1.8655e+00 3.1090e+00 -2.4692e+00 1.1657e+01 1.1899e+00 +#> 2.2901e+01 1.9642e+00 -6.3054e+00 -4.4449e-01 6.8761e-01 1.7599e+01 +#> -1.6541e+00 -4.3045e+00 -6.2966e-01 -5.8996e+00 -9.5683e+00 1.0512e+01 +#> -3.4407e+00 3.6485e+00 1.9150e+01 1.2891e+01 -1.2427e-01 4.2755e+00 +#> -4.0717e-01 5.4538e+00 -2.6612e+00 -5.5491e+00 1.7631e+00 1.7065e+00 +#> -1.6581e+01 3.3456e+00 7.9727e+00 -8.9622e-01 -9.1770e+00 6.0081e+00 +#> -3.6655e+00 4.2207e+00 8.4427e+00 -9.9253e+00 -2.2105e+00 -4.1150e+00 +#> 1.6683e+00 -6.3756e-01 3.6346e+00 -4.0476e-01 -1.3779e+01 -7.5887e+00 +#> -1.9393e+00 -9.3332e+00 5.2066e-01 6.6675e+00 -1.9191e+01 5.3200e+00 +#> -7.8613e+00 1.3930e+01 2.7022e+00 -1.4145e+01 7.8802e+00 2.6400e+00 +#> -6.8321e+00 -9.7115e+00 -5.3862e-01 -1.0395e+01 -6.3867e-01 -5.0146e+00 +#> -6.9120e+00 -5.6570e+00 -4.9373e+00 1.2915e+00 -2.3262e+00 -2.1438e+01 +#> -1.6716e+00 6.6241e+00 9.7926e+00 2.5665e+00 -7.9879e+00 5.3613e+00 +#> 8.8860e+00 3.0840e+00 -1.8972e+01 5.5391e+00 -1.3550e-01 -3.1654e+00 +#> 3.7155e+00 8.4405e-01 -4.5309e+00 
-2.8464e+00 -6.6865e+00 4.7325e+00 +#> 4.9660e+00 -1.1579e+01 5.5520e+00 8.9030e+00 -8.2167e-02 -6.5898e-01 +#> -4.4488e+00 -1.1496e+01 3.3262e-01 -9.9417e-01 -1.5805e+01 -1.3600e+01 +#> 2.1313e+00 -1.5164e+00 1.5813e+00 5.9362e+00 -5.3972e+00 -2.6751e+00 +#> 3.9508e+00 5.6909e+00 1.2322e+00 -6.3192e-01 -4.4705e-01 -1.5460e+01 +#> 9.0135e+00 4.3258e+00 -4.4283e+00 6.1289e+00 1.0678e+00 6.2983e+00 +#> -1.5593e+01 -5.7544e+00 4.4779e+00 -1.4258e+00 2.6896e+00 -4.6289e+00 +#> 1.0181e+01 5.1590e+00 -8.8165e+00 -5.6228e+00 -2.1333e+00 5.3188e+00 +#> 5.9610e+00 7.9238e+00 -1.0611e+01 -3.9474e+00 1.2250e+01 -4.6364e+00 +#> +#> Columns 37 to 42 -7.3558e+00 1.8478e+00 -1.3565e+01 -3.9673e+00 4.8040e+00 -4.2955e+00 +#> 1.5991e+01 2.5160e+00 -6.4857e+00 2.9707e+00 3.3668e+00 1.6766e+01 +#> 5.3307e+00 1.6203e+01 1.1577e+01 1.1139e+01 6.6203e+00 1.8992e+00 +#> -3.6759e+00 1.3892e+01 1.6713e+01 1.7167e+01 1.4447e+01 3.9491e+00 +#> -9.7422e+00 8.7446e-01 -9.0631e+00 6.8964e+00 -1.6838e+01 -1.0605e+01 +#> -2.2836e+00 2.2875e+01 7.8057e+00 1.1797e-01 -5.8903e+00 -1.8103e+00 +#> -5.7325e-01 -2.3644e+00 4.9360e+00 5.3952e+00 -4.1858e+00 -1.0905e+01 +#> -2.0070e+01 -2.1911e+00 -1.6904e+01 -1.2237e+01 -1.2856e+01 -8.6475e+00 +#> 1.6517e+00 -8.2912e+00 -3.2250e+00 -4.3587e+00 7.6742e+00 7.9976e-01 +#> 4.4946e+00 5.4656e+00 -9.9353e-01 7.4549e+00 -9.2468e+00 6.6536e+00 +#> -1.0185e+01 3.3628e+00 1.3991e+01 -3.1431e-01 7.8805e-01 -2.8357e+00 +#> -1.1942e+01 -5.9051e+00 -5.0902e+00 -8.9735e-01 5.6305e+00 -8.4694e+00 +#> 9.8223e+00 7.3412e+00 3.5796e+00 5.7735e+00 1.9905e+00 9.3903e+00 +#> -2.2032e+01 4.9234e+00 -1.4373e+01 -4.7423e+00 -4.5105e+00 -5.1470e+00 +#> 7.5400e-01 -2.7651e+00 1.7615e+00 -4.3036e+00 2.1885e+00 -9.9168e+00 +#> -4.3423e+00 -9.7685e-01 1.5548e+00 1.2219e+01 1.2048e+00 5.0471e+00 +#> 1.3638e+00 1.4873e+01 8.5603e+00 6.7804e+00 6.8853e+00 5.3534e+00 +#> -5.7750e+00 1.0367e+01 -9.4022e+00 8.1251e+00 -4.1837e+00 6.0742e+00 +#> -1.4540e+01 -1.1080e+01 -7.2195e+00 
-7.9642e+00 -1.1162e+01 -3.3161e+00 +#> -5.3722e+00 -6.4250e+00 5.9252e+00 1.1901e+00 -6.4091e+00 9.8618e+00 +#> 8.5537e+00 8.0950e+00 2.3728e+00 -1.0542e+00 -4.2487e+00 -1.0509e+00 +#> 1.3275e+01 -3.9895e+00 -1.9680e+00 -1.2985e+01 -1.3368e+01 -3.7960e+00 +#> 3.4736e+00 -2.1850e+00 -3.7348e+00 -6.5829e-01 -2.7725e+00 -3.2504e+00 +#> 3.1980e+00 6.1895e+00 5.6776e-01 -7.4301e+00 4.7530e-01 -4.2809e+00 +#> -1.7356e+00 -4.0475e+00 -1.5024e+01 -3.2310e-01 -3.7233e+00 2.0674e+00 +#> -1.5031e+01 4.4899e+00 -7.4734e+00 3.8080e+00 1.2273e+01 -5.1166e+00 +#> -2.8133e+00 3.3891e+00 1.1317e+01 4.7004e+00 2.1072e+00 4.1002e+00 +#> 2.6798e+00 -8.6531e-01 -9.9506e+00 -3.2235e+00 -9.8978e+00 -6.3470e-02 +#> -3.7034e+00 -1.5443e+01 -1.6219e+01 -2.6023e+00 -6.0245e+00 -9.2024e+00 +#> -7.9181e+00 -1.0742e+01 -5.6723e+00 -7.1277e+00 -2.9193e-01 -1.3050e+01 +#> 1.0241e+01 3.1233e+00 3.9747e+00 9.2563e+00 -1.0152e+00 6.2321e+00 +#> -2.0754e+00 2.9852e+00 -1.1903e+01 1.1825e+00 -7.8679e+00 1.7905e+00 +#> 2.1825e+01 7.8318e-02 1.3404e+01 8.4809e+00 1.3420e+01 4.5805e+00 +#> +#> Columns 43 to 48 -1.8168e-01 -1.1012e+00 -7.1910e+00 -1.4037e+00 -1.7236e+00 1.2356e+00 +#> 2.8378e-02 1.8888e+00 7.9651e+00 -1.2617e+00 -2.4814e+00 -4.5150e+00 +#> -6.1803e-01 7.5025e-01 -8.1981e+00 -1.4032e+00 3.2832e+00 -5.0151e+00 +#> 1.1292e+01 7.0137e+00 -4.8292e+00 7.0170e+00 4.8443e+00 -3.9258e+00 +#> 1.0790e+00 1.2487e+01 -2.6888e+00 3.3796e+00 1.9949e+00 -3.6263e+00 +#> -8.9870e-01 -1.6069e+01 -1.2749e+01 -8.4440e+00 -5.3007e+00 -2.0938e+01 +#> -2.7425e+00 6.9554e-01 9.1293e-01 -1.6858e+00 4.3919e-01 9.5408e+00 +#> -4.1454e+00 -1.6091e+01 -6.6829e+00 -3.1834e+00 -1.4984e+00 -8.7432e+00 +#> 1.1823e+01 1.4894e+01 6.4397e-01 1.3332e+00 8.0127e+00 7.9523e+00 +#> 1.2801e+00 -2.6508e+00 6.0092e+00 7.6999e+00 -5.3024e+00 1.2944e+01 +#> -4.6366e+00 -4.0845e-01 -2.9419e+00 -5.5225e+00 6.1049e+00 -3.9872e+00 +#> -4.0252e+00 -2.7645e+00 5.7635e+00 -8.9263e+00 3.6107e+00 -4.5948e+00 +#> -1.4826e+00 7.4472e+00 
1.7991e+00 1.1321e-01 4.5220e+00 -3.4062e+00 +#> 2.9610e+00 6.0132e-01 -1.1946e+01 9.7302e+00 -3.0436e+00 1.9629e+00 +#> -6.7898e+00 -2.9701e+00 -8.9971e+00 2.4158e+00 -2.8677e+00 8.1492e-01 +#> -9.7827e+00 -5.8758e+00 -9.7358e+00 7.3615e+00 2.2624e+00 -1.7039e+00 +#> -4.4273e+00 -3.6459e+00 -5.4875e-01 6.2453e+00 -7.5960e-01 1.4502e+00 +#> 5.1361e-01 -3.6357e+00 7.6778e-01 1.0157e+01 1.0523e+01 -9.4993e+00 +#> 6.1523e-01 2.7263e+00 4.0079e+00 -1.0994e+01 2.4581e+00 2.9155e+00 +#> 8.1481e+00 4.0605e+00 1.8207e+01 -1.7273e+00 5.1386e+00 8.0329e+00 +#> 3.2023e+00 2.3604e+00 -2.7896e+00 -3.9848e+00 -9.7489e+00 3.1861e+00 +#> -6.7553e+00 7.6011e+00 5.3619e+00 2.2031e+00 -6.0388e+00 1.1303e+01 +#> -7.0094e-01 -8.7572e+00 -3.5856e+00 1.3358e-01 -2.0777e+00 -4.5189e+00 +#> -9.5313e+00 5.6471e-01 -6.8210e+00 -9.6729e+00 -1.4461e+01 -7.9584e+00 +#> -1.6693e+00 7.4138e+00 4.3911e+00 4.3128e+00 8.3905e+00 -2.9648e-01 +#> 3.5896e+00 1.1364e+01 -1.3229e+01 -8.8098e+00 6.6256e+00 -4.3239e+00 +#> 1.0316e+01 3.7142e-01 1.2612e+00 6.3484e-01 -2.2128e-01 5.8600e+00 +#> -6.0903e+00 6.3052e+00 -2.4741e+00 -1.1545e+00 -2.2059e+00 -1.1759e+00 +#> -3.2627e+00 -2.4473e+00 -7.9768e+00 -2.2971e+00 1.7707e+00 -3.2508e+00 +#> -9.4117e+00 -7.5099e-01 6.0180e+00 -2.0775e+00 1.0671e+01 -2.2179e+00 +#> 8.1073e+00 -1.9226e+00 -1.5990e+00 3.2765e+00 -8.1194e+00 5.6850e+00 +#> -1.9664e+00 -1.1811e+01 -8.8313e-01 1.0266e+01 7.1744e-01 -8.2211e+00 +#> -3.7228e-01 7.0762e+00 3.0507e+00 1.0752e+01 3.8020e+00 2.8253e+00 +#> +#> (6,.,.) 
= +#> Columns 1 to 8 -3.8431 7.3834 -5.6651 5.0708 -1.4517 -4.6431 -0.8883 7.4393 +#> -1.2381 -11.5041 -0.6932 4.2547 -4.0637 -2.9001 2.5411 4.2100 +#> -6.3624 -5.3346 -4.7683 -0.3192 -3.9026 -7.1352 1.4164 6.1695 +#> 1.6059 1.4589 -9.0341 9.8772 -0.2623 9.6224 7.9349 24.2502 +#> -4.0875 2.9824 -14.0975 -13.2114 5.4580 -2.1398 -2.3853 -2.6447 +#> -1.3363 -18.6336 -5.9642 7.3051 -0.4355 -4.6710 2.6051 0.8527 +#> -7.5498 -4.6894 -8.6756 4.6694 0.6881 8.1117 3.9346 -2.5690 +#> -2.2236 -4.7131 -3.3760 -3.0355 -2.1877 -8.5541 -0.2228 -3.6215 +#> 2.3723 8.7867 -2.0226 3.5694 -7.1310 -3.2541 -6.1476 9.9991 +#> 6.3060 4.2074 -1.3340 8.1741 -10.7549 4.7283 8.2079 -9.2118 +#> -8.5755 -7.6608 6.8131 -5.8313 5.3371 4.4907 -4.8163 -8.6892 +#> -7.9691 1.5445 -1.8785 2.3053 -7.6918 6.7353 -13.6094 5.5143 +#> 15.5013 5.4054 8.1804 4.2990 -0.1030 -4.2687 2.5195 7.4986 +#> -17.2673 0.8237 -16.5174 4.2845 3.3804 -5.8239 4.2698 5.4976 +#> 6.2063 7.0952 10.5727 8.8347 4.9185 -3.6430 -1.8074 -5.1455 +#> 3.3922 1.3143 0.6030 -1.0616 -0.5150 -5.7062 17.7802 0.9328 +#> 5.6760 3.8643 2.0599 -3.4919 -2.9444 -5.4062 0.8802 -8.1386 +#> 0.0862 0.0255 1.9659 -1.3832 -1.9223 1.4673 0.4598 5.9191 +#> 1.4391 -2.7191 -3.6907 5.8302 6.3299 -1.9486 -9.4407 0.9470 +#> 11.6963 -8.8488 2.8334 -1.9257 0.8096 6.1601 1.1925 -1.6437 +#> -5.1210 6.9766 -2.8494 0.2123 -0.9609 1.1285 -0.4353 -6.7621 +#> 4.7367 -3.7483 2.3989 -2.3040 3.1682 1.1741 5.7830 0.1155 +#> -5.1974 -3.2208 -1.1463 5.1683 -5.4692 11.4663 -6.5940 -2.0914 +#> -1.8723 3.3740 -3.7008 2.6528 10.6767 -3.1249 -11.2269 -8.2101 +#> 8.8385 6.7610 1.4517 -5.8783 -2.4141 -4.0425 -7.8986 -0.9080 +#> -10.1123 -2.7159 -17.9195 4.9543 1.3166 0.7267 -7.6073 11.2488 +#> -0.5219 13.5147 -5.8599 -2.4669 7.9078 -0.0490 5.8223 6.0661 +#> -3.6575 -3.2303 -7.4904 4.6595 1.8652 -4.7055 0.7859 -0.4844 +#> 8.4408 11.4584 7.4887 5.0873 -7.1360 -0.4913 -9.3323 -3.5790 +#> 0.4671 -1.1088 12.0600 -1.5976 1.3205 5.9996 -10.3857 7.2800 +#> 3.3133 -2.4864 -1.4580 5.9497 
3.3518 2.1192 11.6811 0.5496 +#> 5.3611 0.5139 7.0425 1.4836 -3.1040 -6.4254 -9.0574 -7.4924 +#> -7.3011 -2.0470 -1.6937 -4.3892 2.0341 11.0654 14.2605 6.6830 +#> +#> Columns 9 to 16 -0.8120 7.4782 19.8912 7.9250 5.3954 10.3623 -8.3094 -4.5597 +#> 4.0740 13.3418 1.6228 0.0288 4.8979 4.6911 -5.6091 -7.1796 +#> -3.8880 -6.3417 -17.9954 -10.8148 -4.0093 6.5191 -9.4727 5.2284 +#> -8.0646 -5.8118 -0.1473 12.2662 -2.3938 -6.0465 -5.6511 7.1533 +#> -1.3761 -0.7350 -4.5809 -8.0448 -5.2520 6.9067 5.5636 9.8490 +#> -16.5892 -2.6397 8.6574 8.2825 -1.1968 -10.3730 -2.1449 -3.6058 +#> -8.2305 -1.0772 8.4838 4.4443 -2.0186 -2.4281 -7.0722 -3.3671 +#> -10.1104 3.0282 14.4971 -4.5735 -5.7620 -5.8830 13.1251 -3.0741 +#> -2.6544 3.5008 -19.3625 -8.2454 -2.7320 9.9883 -0.9108 11.3159 +#> -6.7775 13.9699 2.8887 -0.1321 2.9471 4.9556 -2.7823 -2.2237 +#> -4.7958 1.1634 2.4740 -0.2882 5.4787 -6.7352 -8.1452 -4.8458 +#> 8.6103 14.6142 -0.4122 8.9540 -5.1263 -4.9815 -4.4847 -0.6936 +#> -2.3469 -7.4132 -2.3151 4.9861 -3.6393 -5.7564 -3.9316 0.4889 +#> -10.0148 13.3231 6.2298 10.7633 7.8557 18.2105 -12.0861 -5.3622 +#> -1.1570 4.9031 9.5066 3.8215 -18.6123 -15.8685 -4.3792 -0.2539 +#> 1.6511 -9.7728 3.4944 0.3973 -6.2884 -8.5203 3.0773 0.9752 +#> -5.0465 -11.3065 -7.4616 -0.1316 5.7957 -4.1953 -1.9611 -1.8844 +#> -0.3280 0.5115 -1.4591 -8.2698 -7.8082 -2.3492 1.2596 9.7844 +#> 0.8887 -2.5462 3.2239 -12.8953 -4.2650 -3.7035 -8.1158 2.2499 +#> 4.8492 -7.3734 -3.7737 6.6471 8.6360 -6.4765 9.5265 -4.8609 +#> 0.5326 -6.0841 -5.3797 -3.3401 -9.5274 -6.0842 -0.4826 3.6246 +#> 6.6203 3.3530 -8.8277 -10.0762 4.6060 19.2481 0.1302 -6.2784 +#> -7.6193 23.1043 -6.3003 5.5355 0.1201 5.3113 -2.4701 -2.0311 +#> 1.4992 -5.5342 -0.1859 -6.0348 -6.4621 4.8979 -10.7997 -10.0918 +#> 1.5731 -0.6728 -8.2198 3.5697 6.4091 0.7899 -2.1718 -0.9641 +#> 0.2540 -0.2781 -0.2170 0.6706 11.4201 15.2664 -4.3636 -3.6404 +#> 2.6073 -0.5216 -8.0851 -2.3002 10.7249 5.4236 -4.6967 -10.5057 +#> 2.5230 3.8005 -1.1218 -4.7448 
+#> ... [lengthy numeric tensor output truncated: slices (7,.,.) through (9,.,.), columns 1 to 48 omitted] ...
+#> 7.9665 -8.2360 -3.6457 0.1929 0.0513 4.3488 -6.7815 1.8229 +#> 8.2299 3.8260 1.7625 0.0936 -10.6032 20.4361 1.5287 -2.6170 +#> 9.5839 0.4424 -1.7036 2.1978 7.0456 -6.7660 -6.6564 -1.6229 +#> -11.3345 -9.5121 -3.7039 4.2622 0.3526 5.8893 14.7855 -0.0100 +#> 9.3577 -4.2840 3.5418 -5.2092 -7.5807 5.2774 2.8565 6.9503 +#> +#> Columns 17 to 24 -1.1178 -5.7008 5.9227 6.2578 -2.9593 -2.4386 -2.4347 0.0266 +#> 8.8592 0.1163 -0.8759 7.1805 3.2065 3.1771 0.5320 -10.5708 +#> 3.3209 -3.6363 4.7289 -2.2140 5.6121 -0.1514 0.9353 1.3090 +#> -4.0686 -4.7128 -7.4325 3.3564 7.8527 -9.6121 -3.3140 -0.8054 +#> 1.9529 -2.7376 14.5873 -2.6028 9.5261 8.4248 -11.6912 3.6645 +#> 11.4815 4.3680 -6.4507 6.2285 11.1233 -3.5196 9.1821 -2.0697 +#> -11.0958 7.5177 10.8553 -3.7892 -3.6787 -1.0752 -1.3976 -1.7058 +#> 7.3960 2.9949 5.6724 9.3524 -3.9775 3.0908 14.5466 -0.2611 +#> -0.9222 -0.8765 -2.0030 -2.0776 -4.7682 -3.0885 -0.7070 -0.7704 +#> 1.1207 -9.1757 7.6451 -8.5055 -10.1224 -0.3377 7.9083 4.9216 +#> 1.3136 -0.8340 16.4055 0.2280 9.2684 -1.8509 -3.4839 -3.6320 +#> 1.5548 6.2844 7.3618 -1.0777 7.4141 -9.8509 -0.1014 -3.9631 +#> -2.5788 -2.9492 -9.8552 -6.0596 -2.2006 0.4549 1.3011 0.4578 +#> 3.1423 -4.5543 9.1342 6.1294 3.6404 -5.0029 -1.3460 0.3086 +#> -3.1659 9.7089 -7.4630 -1.8297 13.9918 -11.4251 -0.7111 7.1855 +#> -4.0978 -11.9546 0.4081 -3.7149 6.9790 4.6450 -1.3808 8.5917 +#> -7.5492 -4.6058 0.6708 3.8632 -1.5932 2.6781 -1.2377 -3.7517 +#> 4.3857 -4.5138 0.1166 3.3124 -4.9680 -0.8735 2.4706 -5.2127 +#> 8.3726 12.4269 -5.2034 2.3746 4.9432 -3.9395 5.0369 9.4501 +#> -1.8028 1.0127 1.6399 5.2724 -0.4377 10.1737 3.4244 -4.2763 +#> 0.3288 0.7244 2.7199 -3.2596 -2.9759 -4.2100 4.0019 7.3318 +#> 3.9588 0.3420 0.8685 -9.8066 -13.4092 6.7500 -1.0379 2.6942 +#> 2.9159 3.9992 -0.0392 -11.7874 1.1412 -0.9075 8.6785 -2.6231 +#> 7.3890 14.5941 -5.1672 -12.2903 8.6983 -2.5509 -3.1014 0.8061 +#> 0.5636 4.2588 13.8632 0.7863 6.2155 14.8437 1.6649 -8.9861 +#> 3.5527 5.6034 7.2890 5.6278 3.2482 
-1.6864 -14.5621 -1.5483 +#> 3.3780 -3.9101 2.6302 -10.9969 -9.1415 1.6487 -6.3810 11.2122 +#> 7.1961 -0.5232 1.3476 -3.9508 0.5511 8.8228 -3.5775 -1.3342 +#> -8.6096 1.2976 -8.8711 -15.1234 -7.6538 13.2325 3.1092 -1.0854 +#> 1.0315 -4.0530 2.4703 -6.2399 -8.1853 -0.9545 -0.2738 -1.9879 +#> -1.0833 0.3655 -10.8048 -5.1990 -6.1180 -0.3845 -3.4704 1.9010 +#> 6.7201 4.3863 -1.9426 16.8580 9.8743 0.6026 6.0198 4.3914 +#> -7.5689 -7.8332 7.3660 -1.1380 -7.6509 7.4930 -3.3141 1.5017 +#> +#> Columns 25 to 32 -2.6328 -4.5319 -7.1864 4.8594 4.1551 -1.6908 16.8185 -2.0045 +#> 4.2067 -9.0978 -5.9054 -5.2251 2.4363 17.6033 7.5333 -9.7814 +#> -2.7554 -2.3861 5.9767 1.9709 14.0948 -10.1872 -1.7323 -8.9705 +#> 1.9248 -8.7555 5.2803 -3.3823 -6.6890 0.9320 2.7809 11.6424 +#> -1.1710 -8.0971 -1.5792 5.1196 6.0805 -15.9685 -9.8254 2.1945 +#> 3.4229 13.3123 1.7412 7.7443 -12.8203 0.1005 18.0222 0.6522 +#> -2.4779 -4.0404 12.9208 5.1077 -3.9396 -13.3427 4.9993 0.5800 +#> 0.7383 0.5066 0.5028 5.0332 -15.0639 13.3385 8.5822 8.0929 +#> 4.1591 -13.0543 3.7959 -4.2798 3.8052 -4.1710 -9.2550 0.0112 +#> -19.3894 -1.0482 12.2905 2.6449 -1.2021 2.8008 2.7482 -11.1357 +#> -5.2213 -2.2390 0.3899 6.5409 -4.4120 12.9182 2.6424 -12.3500 +#> 5.1960 0.1905 -1.8664 -0.4012 11.7711 3.6295 11.3020 -9.2769 +#> 0.8557 4.5771 -1.8557 5.4125 4.7146 -16.4669 2.4940 8.4388 +#> -6.5435 -10.4828 -7.4447 9.1780 2.5317 2.5086 16.7430 10.3668 +#> 18.0602 -0.5392 -4.3564 1.4777 0.6322 9.7552 17.1769 0.3597 +#> 1.8624 -2.1766 -0.1327 7.9368 -3.7893 0.1420 1.6865 6.7944 +#> -11.7524 3.9455 -0.4495 7.7694 0.2370 -2.8936 -6.5377 4.2342 +#> -2.8456 -6.5280 -0.5771 1.7983 -0.3862 10.8229 -7.4962 -9.9794 +#> 12.8165 3.8421 3.3570 5.9936 -5.2011 12.9821 5.5467 -15.9329 +#> -10.5982 10.2841 -4.3786 -6.7834 -7.9043 -11.2157 -5.7437 13.0609 +#> 8.4495 9.0868 13.1325 1.7125 -6.6945 -0.2319 -0.6027 -11.0652 +#> 6.3865 5.2043 -6.4957 3.5516 3.2219 -12.3862 0.1227 -5.2055 +#> -5.2216 12.6346 3.7573 -7.7498 10.6991 3.1593 8.4007 
-4.3679 +#> 12.5380 7.2983 -11.7928 2.5340 12.8789 -0.3407 2.7112 -1.0381 +#> -0.9214 6.4992 -13.0042 6.0661 3.9832 -7.1026 5.6302 13.4000 +#> -2.1768 -6.6846 -13.8017 1.8433 4.4714 1.3138 4.8917 14.7906 +#> -4.2788 3.8993 13.7874 -4.6234 8.0018 12.5315 -5.2471 0.7957 +#> 7.5338 2.3114 -4.8775 11.4832 7.3880 -5.8596 6.9446 -5.0459 +#> 17.4258 8.2201 3.7529 12.9822 -5.2978 -5.2244 9.5446 -4.8894 +#> 0.1072 4.9824 -9.7295 7.1216 3.4568 -2.5361 -4.1698 0.6386 +#> 9.3592 0.1783 11.7393 8.9554 -11.0262 -4.5985 4.8218 2.9281 +#> -0.3339 -9.8969 -12.6341 -3.9049 -5.7417 2.9067 1.3615 6.0554 +#> -5.0467 1.7893 -5.2092 -2.5334 16.4103 2.5706 -15.9351 -9.0296 +#> +#> Columns 33 to 40 6.2091 0.1991 3.5344 -0.9969 -8.6031 4.7295 -6.6684 -3.4680 +#> 17.4568 -2.6171 -9.9754 -0.0237 4.8433 4.7948 10.5709 -7.4558 +#> -9.0177 0.9439 3.9046 10.7104 9.0576 -1.2505 12.4538 14.4930 +#> -4.1310 -9.1202 -6.1499 -3.6964 1.2821 9.2477 3.8688 -0.2891 +#> -23.1013 0.4968 1.2061 12.0732 -5.0711 -2.3337 -1.7073 1.1353 +#> 6.6329 3.2435 5.2979 4.1493 -5.4980 0.2584 9.4383 15.8394 +#> -7.8822 4.2218 12.1040 1.0965 -1.9303 2.0269 0.1280 1.8397 +#> 0.7127 12.8749 -5.7977 -0.1436 -7.1988 -1.4078 1.6558 5.8912 +#> 9.7196 0.6410 -1.4691 5.3841 5.9562 2.7362 -2.6035 -0.7117 +#> 15.7405 -5.8802 15.0010 -13.5364 4.4596 -5.1669 3.4519 2.5614 +#> -1.9596 -11.9555 -2.3222 7.3614 -0.3040 5.6089 -6.7231 13.2914 +#> 12.0305 1.3015 15.0780 -3.8119 8.8774 -10.5482 2.1028 7.0329 +#> -12.0409 9.9197 -0.9035 1.5472 0.0250 6.7408 2.9665 -7.3522 +#> 15.3074 -5.0977 -0.5890 -3.2326 -11.9586 5.3730 0.3302 -7.4397 +#> -5.2714 0.3554 -10.4245 -0.2297 -1.6945 11.1466 -3.9075 -14.8633 +#> -12.3549 3.3810 -8.4166 0.6233 6.1302 -6.4055 4.9078 0.5862 +#> -16.7691 -15.3161 2.1979 9.4139 7.0512 9.3076 -2.2918 7.5642 +#> 3.4194 -0.4941 -3.7123 4.0891 2.5900 -2.9446 7.2037 5.0012 +#> 2.6715 -9.1567 -1.0215 -2.7875 1.2827 -7.7540 -6.0772 -5.9831 +#> -10.1948 11.5526 -3.2811 -12.5704 5.6780 -1.6276 -1.2724 5.6591 +#> -11.3361 
4.0491 5.9046 2.5699 -0.1597 -1.3786 1.2125 -0.9712 +#> 2.1922 7.5539 -4.0624 -5.0876 -13.0993 6.8749 -0.6148 -12.8928 +#> 15.7846 11.2008 0.2007 -6.8360 -1.3051 -4.3575 4.8508 -3.4005 +#> 16.5002 -6.7961 2.8574 4.1257 -0.2058 -1.5497 6.5875 -8.5155 +#> -13.5434 2.0469 -6.2474 11.2225 -2.8352 -0.5849 -6.3699 2.1378 +#> 5.5689 6.3972 6.5323 5.2079 -2.2170 12.5505 -3.4878 8.8321 +#> 0.2990 -12.0754 3.3492 -2.5455 -1.8806 7.3532 -14.6075 7.3780 +#> 6.0513 -0.3128 0.7940 1.6838 2.5102 0.3200 3.7384 -2.9474 +#> 15.1318 4.2079 -10.3092 -9.1043 0.4524 7.4112 -9.8660 -9.8504 +#> -6.5203 10.6804 -12.3515 4.4717 -13.9374 2.9997 -8.6741 -8.5732 +#> 2.0869 -7.0619 2.1865 -4.9065 6.5477 1.0061 2.3397 -5.1603 +#> 5.8797 1.4344 1.5147 5.2506 -1.2364 1.2565 3.4842 1.2513 +#> -13.3068 -3.7998 -10.6292 1.2736 -4.6309 -2.5332 -5.4764 6.7562 +#> +#> Columns 41 to 48 -3.5574 7.5599 2.0042 3.4539 -0.8589 -3.3040 3.3101 -11.0791 +#> 5.4038 7.0159 -6.2993 -3.0544 3.6550 1.1032 -5.8003 1.3008 +#> 2.7843 -1.0163 7.7313 8.6528 8.1484 15.0378 4.8930 7.2796 +#> 5.7043 -7.2706 2.1721 13.6769 -5.2003 -4.8240 -0.6589 -10.6869 +#> 6.5594 1.1363 -2.4785 -1.1522 -7.2464 1.6686 -0.2455 -6.7628 +#> -28.2058 -8.7507 6.6609 -3.9997 5.7921 3.9306 -4.3053 7.3204 +#> 7.9689 -9.8504 7.8741 6.0412 -4.4477 6.1368 5.7541 -2.4991 +#> -2.6163 7.9938 -1.1072 -6.1312 2.9033 8.3875 0.8004 10.6133 +#> 19.3417 -5.7705 -5.2083 7.7241 -2.8094 4.8042 6.2480 5.1922 +#> -2.4543 2.1262 -6.9594 -5.6646 0.9048 6.0474 4.4061 -8.3221 +#> 0.7589 0.3771 9.3862 -1.1914 0.0915 11.6177 -1.5250 -8.1866 +#> 1.8998 -2.8191 11.5197 -7.9579 -10.5968 3.2860 -2.8600 -4.5691 +#> 1.3142 -1.5478 1.9273 2.2043 0.2335 -3.0520 4.8588 1.8683 +#> -1.5740 0.3910 -7.8552 -1.5160 0.6581 -3.3445 3.0745 -6.4489 +#> 3.7713 -3.5507 9.1197 11.5505 -17.4841 -9.6546 1.2523 -8.5845 +#> -0.8830 5.1521 -1.8431 8.3170 -3.1605 -1.8789 0.2834 1.7550 +#> -0.2878 2.9302 2.4239 5.1226 15.0652 8.3873 3.6716 1.3386 +#> -3.3260 20.2701 -8.4066 -11.0285 9.3202 4.9653 
-7.8801 6.4595 +#> 7.3999 -1.3009 -4.7659 -3.0299 -4.3380 -0.7846 -0.0295 0.2334 +#> -1.2469 -2.6782 5.1911 -5.6863 -3.0801 -5.2409 -4.3599 -1.7330 +#> -3.7087 -3.1212 -5.9343 1.8252 -3.9633 -1.2791 0.7218 3.9059 +#> 3.5941 0.7342 -3.0248 -4.0062 -5.4251 -6.4783 3.4834 -3.6418 +#> -5.5708 -7.3934 0.5518 -4.8714 1.8200 3.0631 4.8407 -0.5243 +#> -4.6621 -2.6307 -2.7314 -1.5075 0.3348 0.6291 8.1803 3.7558 +#> 3.7802 2.4882 1.2630 -3.1052 -4.1978 -9.7503 -11.2655 4.2330 +#> 1.9003 -1.8347 2.2605 12.1232 2.9234 -1.2920 9.1332 -2.7259 +#> 9.4641 -7.7054 -3.1332 2.6496 2.0928 0.5605 3.8503 -3.1029 +#> 0.6528 3.8899 -4.9479 3.0866 0.6270 -0.2846 7.4450 -2.0769 +#> -9.5134 3.7089 -7.7251 4.0346 -8.5719 -3.4793 11.6490 -10.6066 +#> 0.7984 0.9333 -1.8003 -10.2657 -3.6976 -5.1811 0.4680 -1.1693 +#> -0.9469 -3.2107 -4.2615 6.3643 1.2269 -0.2931 1.4162 -0.5164 +#> -10.6698 18.0495 5.9027 -11.9230 -3.0949 1.2751 -9.0796 12.7728 +#> -1.7290 4.4678 2.5966 -5.5755 -6.3106 -7.8328 -2.7642 -8.1503 +#> +#> (10,.,.) 
= +#> Columns 1 to 8 14.4710 4.6432 -6.8133 3.7200 -2.6654 -4.0008 -9.0046 -0.3256 +#> 2.6093 0.3767 10.7315 3.0589 -4.4227 -9.5532 5.4612 9.7847 +#> 2.9513 6.5225 -1.8419 -5.8784 4.0442 1.9532 -6.0345 4.1286 +#> -3.2467 -0.1254 7.7224 -2.7941 -8.4919 16.0064 0.4471 -3.8678 +#> -10.8613 3.1102 7.4981 -2.7834 0.5129 10.9916 -5.3026 5.3011 +#> 11.3749 10.6835 5.6861 5.2611 14.7803 6.1482 -0.0210 -5.5726 +#> 4.0023 0.2153 -3.2174 -8.3354 4.2527 2.1872 -2.9610 2.3508 +#> 2.4431 -1.0216 0.5010 -7.4281 -7.5019 3.3549 -0.7031 -3.9651 +#> -6.0749 -11.0627 1.2948 5.7971 -17.0622 -2.2280 -7.2597 -1.4245 +#> 3.7310 -1.6254 1.3617 3.5650 -6.1422 -2.6705 -0.4709 4.5567 +#> -9.8843 0.6354 4.8846 -4.8381 4.8865 2.0070 -4.4283 15.0839 +#> 13.7834 -6.3143 2.9675 5.2545 -11.7613 -6.4473 7.2150 8.5128 +#> -9.4301 -3.4960 -1.4475 2.3521 5.7713 -3.7340 -2.5474 3.0973 +#> 9.2346 13.2348 -4.6010 -14.8336 -1.9623 9.7488 -11.0311 6.8809 +#> -6.4975 -8.5517 0.3187 1.9106 8.6420 3.1085 -3.8641 4.4096 +#> 4.7967 -3.6075 5.8131 -14.5341 -4.3010 7.2033 5.2022 2.8585 +#> -16.6634 7.5446 -5.7558 -6.7430 11.2488 5.6837 3.0768 4.0601 +#> -3.8207 -7.3227 5.7054 -2.1779 -10.0372 -1.7432 3.8939 4.9310 +#> -5.1015 -10.4106 9.9179 -2.8573 8.5237 4.0813 -1.1077 2.8891 +#> -1.4281 -2.5566 -9.9267 5.1031 1.8095 -3.2307 8.0328 -4.4473 +#> 2.0832 -1.8587 -3.9835 8.5245 3.7592 5.9499 8.6746 -9.7774 +#> 1.0120 2.9970 -10.7454 -0.7120 9.0160 -3.3007 -19.0601 -3.6214 +#> 6.0270 4.3670 0.2912 6.4658 -4.9769 5.0067 2.1219 1.2629 +#> 3.2517 -6.9647 5.6150 -1.4422 10.5213 0.8685 -14.3389 -1.6863 +#> 4.7981 3.6669 2.1774 6.5129 6.3029 -4.2212 -7.1775 0.6849 +#> 4.1268 9.8775 0.3653 1.7378 -18.7752 3.0751 -3.1661 -5.7616 +#> -9.5046 1.5973 -6.7126 1.0743 0.1221 -0.4248 -6.5908 7.3890 +#> 7.7763 -0.7585 1.1607 1.1947 0.5593 -4.3177 -7.9144 4.9266 +#> -6.5896 -22.8091 -5.1244 8.9467 -5.3432 6.1592 7.9922 2.2006 +#> -3.3691 -3.1167 -1.4385 6.0750 -4.5615 3.9180 -4.1986 -6.5412 +#> 5.1560 2.3632 2.0595 -9.7522 5.4654 
3.2598 0.3563 2.0999 +#> 2.0908 -5.3153 11.3245 2.5395 -1.7546 -3.3645 -12.8333 -8.2892 +#> 8.8760 1.2254 -5.4517 11.1036 -5.5584 2.3060 -0.5097 -3.2205 +#> +#> Columns 9 to 16 14.5479 6.6808 -2.0072 4.2442 -0.1356 0.8253 6.9246 -2.2184 +#> -0.9735 7.6945 2.7680 11.0302 4.8427 6.4257 10.5309 4.8268 +#> -5.8721 3.2921 -5.6136 -5.5610 5.9241 -7.7979 -10.2809 -3.3400 +#> -7.4690 7.2934 1.1521 -4.7303 -9.2411 -0.9633 11.4470 -2.2516 +#> -5.9393 5.4131 -6.2276 4.5821 1.3449 12.5757 6.3698 3.5507 +#> -11.1041 2.2527 4.1273 14.2510 -4.9141 -2.6712 -6.8055 -2.1377 +#> 3.3734 3.5293 -5.9588 -4.5543 -11.7529 -1.3317 5.3928 -7.5389 +#> 15.7623 -2.3318 -3.0392 12.6354 -3.0868 8.5040 -6.3759 8.9538 +#> 0.6534 -7.9175 -5.0673 -8.4351 -3.4650 -6.8837 -0.1515 -1.9018 +#> 4.6077 -3.1372 -1.9121 -1.9368 -4.0526 1.7457 -6.9543 1.4152 +#> 3.6549 5.7417 -3.6029 0.4043 -3.4691 16.9400 10.5767 7.9510 +#> -9.4562 15.0964 -5.0772 7.7207 3.4396 8.2146 -3.8646 -2.3953 +#> 0.0823 7.2684 -9.4364 5.7086 -4.1273 -6.8211 2.5935 4.2336 +#> 8.7147 -0.8841 -8.3409 7.1285 -8.4872 -7.4519 3.1871 4.1935 +#> 5.3103 1.8878 17.6789 -5.2104 1.2377 5.0434 21.0108 13.5516 +#> -3.9218 -6.2199 13.1058 -8.2961 -0.5789 -10.8592 -10.2193 0.0549 +#> 2.8405 -4.7218 -4.1269 -2.0875 -7.2561 7.3622 2.0732 5.6603 +#> -0.9309 1.1359 -8.0774 6.8599 6.1417 3.7662 -3.6261 4.2135 +#> -10.3160 11.8156 -6.0465 -4.8211 9.5632 6.6152 11.8579 -1.8774 +#> -1.8500 -7.1162 -6.5278 -0.8735 -13.7540 2.1832 -7.1184 -3.5752 +#> -0.0181 -9.3558 3.7128 -1.4525 7.1921 4.6078 -3.3366 -0.9247 +#> 5.4197 -10.4160 2.7022 -4.6919 -2.1032 -7.5483 -4.9965 -2.0029 +#> 4.1998 -3.4277 -7.8864 13.4715 0.7598 -4.7825 -7.5446 2.3903 +#> 0.9815 -1.8990 17.6363 -1.1371 7.7331 -11.9590 -2.6291 4.4274 +#> -1.2596 -6.1834 -8.3681 10.2132 -5.4146 3.1174 -0.3621 -4.6778 +#> 7.4613 12.8624 -4.8624 2.9646 -7.0720 10.1873 1.1685 3.4204 +#> 5.3303 -4.5529 -10.5102 -5.4973 -1.3891 4.8987 0.6605 3.0140 +#> -3.1125 -4.1321 -1.0305 1.3574 -3.4780 -6.3012 -0.8300 
-3.5251 +#> -0.1821 -14.8435 0.8051 6.1117 -10.4716 -5.6095 2.3493 -2.9083 +#> 0.2080 -0.7521 -10.1554 12.1975 -8.0559 -5.7300 -4.0059 -5.5448 +#> -8.6250 -11.8101 9.1740 -9.0937 -5.1054 -7.0094 -4.4310 -11.1649 +#> -3.8040 11.3338 10.4777 -0.9818 -0.9765 -8.1743 -9.9641 -0.8615 +#> 6.4034 -19.4679 1.4102 2.1337 1.8203 -8.4996 -9.2010 -6.2137 +#> +#> Columns 17 to 24 4.1192 10.9130 -6.0056 -2.1138 8.5664 5.1662 2.6595 -1.3731 +#> 4.0597 1.1002 -10.7103 -0.0916 14.5790 6.5852 -4.4298 4.4207 +#> -1.4257 13.8238 0.7385 1.6014 -1.3178 1.8495 -7.2067 9.7789 +#> 3.9501 8.0051 4.3776 0.5641 -5.2933 -0.1487 -9.0504 1.1126 +#> -6.9081 9.5299 -0.6400 -6.7744 4.4750 1.6003 -0.6061 6.1316 +#> -5.1433 19.4295 6.6673 -4.3950 5.2266 21.6277 6.4043 -0.2752 +#> -5.0103 1.6013 3.9599 -7.4589 2.6976 -0.9549 4.2999 -6.8065 +#> 7.4476 2.7529 -8.5339 2.3087 6.9197 3.1030 7.2101 -0.2283 +#> 3.3662 -9.1169 0.4361 3.1868 -6.0411 -11.1444 -13.0576 1.2490 +#> 4.5949 -6.2269 -15.8484 8.1420 -0.9314 -8.6579 5.3700 -5.1534 +#> -5.6058 4.9588 -6.7369 -1.1946 6.5778 8.9600 4.0150 14.8578 +#> 2.0496 6.4323 3.3880 -1.0165 6.4750 0.2090 -8.1803 10.0744 +#> -1.8714 3.9368 6.5303 -12.0373 1.5525 -4.4925 -7.8129 -3.0793 +#> -3.1583 2.3940 -1.7749 8.2701 16.7172 -4.3440 19.2828 4.4032 +#> -5.6533 8.0105 3.2189 -10.4303 6.5888 1.4778 -2.9537 -12.4291 +#> -0.1350 5.0839 2.8423 -0.2953 -8.4328 -1.9774 -6.2660 -0.4665 +#> -2.4046 -4.0025 -1.5798 8.4617 3.0757 -9.2885 8.9053 0.2256 +#> 10.1778 8.4762 -11.0694 2.4767 -2.3942 -1.1300 -9.0895 7.2365 +#> 6.0020 -3.4611 -0.4594 0.9693 -1.6207 5.6691 -9.7336 8.2720 +#> -4.1412 -11.0522 5.2162 0.8411 -7.0415 -2.3906 10.2475 -0.7341 +#> 4.9153 9.5439 -3.9681 4.3850 -1.4356 -10.0032 7.7016 -7.3656 +#> 6.4802 -4.0935 -10.7369 -1.4834 -4.3873 -8.8123 -4.6086 -15.6554 +#> 6.2670 -9.3744 1.1737 7.7070 8.1785 -11.6734 7.9465 6.8682 +#> -8.1977 -2.7518 7.7775 0.7281 0.3783 16.3828 8.0922 -5.0288 +#> -10.7717 -7.5790 13.4127 1.0349 -0.0678 -3.4017 7.5944 5.2857 +#> -0.9464 
2.7255 -1.0871 2.3097 11.4077 -9.7370 1.1826 3.8529 +#> 11.7870 4.3044 -3.5921 -1.1888 -6.8060 -16.7895 -0.6630 -1.1693 +#> -1.0368 2.5376 -0.7813 -1.5229 1.7481 -1.2655 -3.1261 -0.6197 +#> -4.1932 -0.5911 7.1051 3.7213 -4.3230 -6.3889 -4.1842 -16.6919 +#> 1.0108 -9.6746 6.1159 -4.2510 4.1172 -4.3838 -4.8486 2.5938 +#> -2.0697 0.1736 5.6889 8.3053 -11.4090 -6.4399 -3.8511 -16.8787 +#> -7.3775 8.6551 -6.2978 -8.3835 -0.3105 14.6197 -0.5400 4.4779 +#> 3.8251 4.0045 2.8259 -1.9221 -8.0333 0.4056 8.8029 1.6251 +#> +#> Columns 25 to 32 -4.0010 5.2869 -3.7124 -0.3850 -3.4061 -6.2136 -2.9467 1.8112 +#> 6.8547 -3.1490 8.5090 -5.2604 -11.8212 -10.6668 -5.7048 10.8873 +#> -14.7036 -6.5548 1.6447 0.0889 5.9533 2.4828 3.0379 6.4476 +#> -14.1212 9.7051 9.6831 -3.6053 6.0167 5.2231 3.8789 -2.0557 +#> 4.5233 -3.6778 10.6239 -7.6003 0.1486 1.1952 -13.5926 3.7119 +#> -13.0996 0.2291 0.1330 1.1188 4.8989 -2.9820 -7.7579 -8.0266 +#> -2.3823 -1.5344 -4.1569 -10.4308 5.7587 -0.8466 0.3815 7.2276 +#> -2.2049 12.2634 -12.7209 14.9204 -0.0622 4.5250 -15.3472 -10.9432 +#> -12.5341 -0.4378 -6.1007 -6.7649 3.6043 3.8794 6.9019 2.4826 +#> 7.9965 -8.9160 -0.5156 -7.4211 8.0894 -3.8668 0.0773 -6.1718 +#> 5.9205 -1.6424 5.9659 -0.5204 -5.9036 2.7446 0.0560 1.6184 +#> -8.7911 -3.9587 -1.6014 -7.6753 -11.9352 1.3038 0.7882 6.5391 +#> -5.1887 -5.5627 3.9967 -0.1711 7.8940 -4.6065 0.4770 5.9022 +#> -7.2264 12.3871 1.4501 -10.5747 4.2974 -8.2536 -12.8194 1.3355 +#> 3.5932 4.4770 16.5684 -0.1417 -2.6059 -9.0042 4.2586 8.0855 +#> -0.5715 -1.7804 8.4751 12.0436 -7.5025 9.8106 -0.4650 7.0134 +#> -6.5063 6.9795 -11.7314 0.3454 2.3116 0.8328 2.1417 -3.1126 +#> 2.0151 2.1232 -0.6683 7.5267 -2.6835 5.6750 -9.6292 4.5255 +#> 16.3539 3.2921 3.0363 0.3554 -5.7509 4.3470 -4.5798 -1.6581 +#> 5.5927 1.7704 -9.3390 -2.6007 1.8508 -0.0748 11.5658 -5.3670 +#> -2.5411 2.5914 1.2881 2.5088 2.1225 -4.2355 -1.0349 -11.8804 +#> -4.2727 1.1581 -2.0757 5.9721 5.6380 -9.3281 2.2306 -4.2831 +#> 0.2923 0.8767 5.1367 -3.8978 
0.7931 -7.1608 0.3274 -3.0203 +#> 0.5253 -0.7469 -2.1695 4.9469 -0.0681 -0.1998 -6.7128 9.1915 +#> 1.4316 6.7870 -11.9403 -1.0241 2.7523 4.2159 -5.2916 -1.1015 +#> -11.7556 9.3717 -7.0557 2.2428 -4.9803 -3.5776 8.9442 8.4118 +#> -1.3536 -8.4380 11.1821 6.6431 7.1123 -8.1119 -0.0093 1.8174 +#> -1.5371 -3.4120 -0.2186 -0.4295 -9.3453 -2.9871 -3.1140 11.2028 +#> 4.3040 -10.9187 -3.9711 -0.6291 -11.9327 0.6901 -2.0339 7.4203 +#> 5.5496 -4.4382 -9.7760 0.0982 -6.9906 6.1753 3.3855 -9.9118 +#> 0.1587 1.0764 -7.0968 0.8729 1.8725 -1.1229 -0.2826 2.1209 +#> 4.8244 5.7039 -7.7527 0.6460 7.2443 4.3282 2.4718 3.3759 +#> 6.2145 3.8019 0.1981 7.4547 -7.1679 -5.8358 4.5452 -5.1186 +#> +#> Columns 33 to 40 -2.7463 -2.0798 -2.5194 7.7455 -0.6367 -7.5936 -5.9359 -8.5481 +#> -3.3242 -0.7103 3.6360 -5.1043 -11.5722 0.8882 -4.3279 -10.3382 +#> 5.4391 1.6034 -13.6770 5.5982 -6.3312 2.5836 13.0854 3.0990 +#> -6.0943 12.4081 -17.8393 -3.5582 6.9505 5.7650 -0.7693 -3.1509 +#> -10.7622 10.3388 6.1969 1.7298 -0.2892 -8.7279 7.2891 4.4228 +#> 5.4087 -14.6842 -17.9690 5.9490 -2.2659 13.3579 4.7606 -2.1772 +#> 2.5449 13.5166 -2.2249 4.7754 11.4285 -14.5705 8.6252 7.5728 +#> 19.4670 -5.2286 7.7477 7.1915 6.3136 -4.3305 -4.1927 -8.4972 +#> 4.4044 10.9534 -6.1862 1.3248 -2.4400 0.9649 2.6163 -2.4498 +#> -1.7708 8.6231 1.7543 8.6069 -4.8777 -3.1206 -3.1282 1.0544 +#> 3.7684 -4.9195 9.6077 -13.7415 6.4011 -6.7006 -4.1248 2.6999 +#> -17.3474 6.9439 -2.9371 1.2146 -4.3929 -9.4780 -5.4103 1.3074 +#> -5.3908 7.3266 -1.6835 3.3961 -2.2084 10.3198 -4.2306 3.3221 +#> 7.5038 5.8080 -9.9477 16.2983 -0.9450 -7.1822 -13.0129 -3.8743 +#> -9.1869 -3.6647 2.8171 -5.9277 8.1256 2.9991 -7.9105 -10.7075 +#> -4.0340 -5.9935 8.3321 -8.2468 7.8384 0.0060 -0.6545 4.3919 +#> 9.0310 -3.1660 -2.9173 -1.1388 -3.1402 0.4344 -8.0797 8.5828 +#> 0.4831 5.4021 4.9846 -0.3507 -10.9405 5.4938 -2.6585 0.7319 +#> -7.5524 15.0568 -1.5017 -5.3615 -5.8247 1.2927 9.2929 -2.6711 +#> 10.3731 -9.5647 0.9368 3.4030 1.5107 -0.1655 -2.6225 
10.0625 +#> -0.6429 -9.5170 -5.2768 -0.2312 0.9537 -2.8683 12.2291 -1.5264 +#> 3.5289 -11.2751 11.3402 9.9709 -1.9313 3.9201 7.5383 1.0913 +#> 5.3155 7.0822 -7.8948 10.0645 -10.6934 3.1523 -3.9992 -7.4225 +#> -8.3727 5.3745 10.6670 -3.6268 -0.9629 0.4118 7.7708 -0.6398 +#> 3.7513 -17.4524 -4.8192 4.7726 -3.1548 -8.3748 -2.4123 5.3698 +#> 4.3340 16.0960 -2.9203 4.5094 -6.5853 -6.5885 -5.4492 -4.2702 +#> -2.1476 3.2810 5.6629 -10.6544 -4.9819 -1.7934 -2.1906 2.5553 +#> -2.3297 -2.1629 2.2620 1.2632 -7.7004 -4.1165 4.5297 2.8979 +#> -7.5091 -11.5038 11.4662 5.7960 -7.0260 -8.6066 -4.9869 2.1815 +#> -0.8860 -7.0743 8.7135 -6.5663 3.8496 1.9958 0.8628 -1.2757 +#> 4.4538 -4.4086 -3.7311 2.2965 -1.3821 -4.7465 6.0835 2.8220 +#> -8.2009 1.5255 5.5705 1.2606 -4.9644 17.2563 -10.5917 1.6127 +#> -4.1038 -18.1976 6.0696 -7.1599 3.0302 -1.8089 2.2677 6.4528 +#> +#> Columns 41 to 48 -5.0606 6.2417 -2.8217 5.1493 -3.5163 -5.6563 -3.6681 -2.1512 +#> -4.0298 5.3893 -1.1602 1.5378 -8.1873 -8.5062 -2.0137 16.1354 +#> -4.4442 0.7066 -1.4783 -1.9802 12.1887 9.8826 7.7054 1.4902 +#> -4.4092 2.3773 -7.5726 6.1283 -4.4703 0.7849 -3.1640 13.5514 +#> 1.8827 1.6585 6.9649 1.0963 -7.3246 9.6848 11.5215 -10.1604 +#> -20.4653 0.4185 10.8977 8.1515 9.7242 -13.3273 -6.8315 2.1960 +#> -0.5615 -7.1728 0.5999 11.9708 1.6972 4.7447 -6.3877 -1.4654 +#> -4.5520 -3.8538 7.9964 -1.3561 1.0739 2.5283 9.4726 -9.1893 +#> 3.7516 -3.8982 -13.4918 -5.1956 -13.5435 10.5221 1.4650 6.7804 +#> 4.4838 0.6042 0.4535 9.5781 -4.8421 6.4397 7.0565 -4.1577 +#> 1.5536 -0.4799 12.1631 3.3254 14.9297 -3.2445 -4.3268 14.8057 +#> 4.6843 2.0066 -8.5335 0.4175 -9.3473 -13.4129 12.0193 3.7965 +#> -1.3756 6.6072 0.3018 -0.5654 1.6305 7.2815 -1.4047 -3.2000 +#> -6.8800 -2.6158 -2.4326 0.5977 -2.6295 -3.3078 -11.7381 -17.4652 +#> 7.0267 12.5131 3.4610 -6.0618 12.0014 -5.4000 -8.3710 5.1105 +#> 0.9499 0.8749 -2.2749 -8.7281 16.2070 3.9815 2.2909 -7.3912 +#> 1.4547 4.1245 5.9095 8.6042 19.5148 6.1538 12.7950 -3.8956 +#> 2.6229 
8.2897 0.5670 -1.1295 -9.9689 1.0132 11.4646 11.5797 +#> 2.0644 3.0247 9.6181 -0.0726 -7.5189 -1.8044 -13.0203 -1.8426 +#> -1.6236 -16.4245 -4.8513 1.6835 0.2332 -7.5492 -11.0121 -0.0169 +#> 4.3706 -0.3532 0.9759 -1.4393 -6.0106 6.8486 0.1169 -4.7158 +#> -1.3000 -3.4431 -4.1769 0.9913 -2.8124 10.1722 -9.3012 -10.8872 +#> 8.3708 6.1809 -5.0569 3.9997 -10.6746 8.9321 -9.1896 -8.8022 +#> -7.5716 0.4196 4.5086 -11.1168 1.9562 -11.1500 5.5407 -8.1387 +#> -2.4173 -4.9560 1.5303 8.2358 -9.4596 -2.5966 -7.0269 -5.9465 +#> -6.8669 3.2344 -13.4096 -3.5862 -7.4090 2.1559 9.1505 -7.6138 +#> 5.5773 -5.7943 -0.3198 -9.7214 3.1791 4.4402 -8.7427 -5.3370 +#> -5.3191 2.7368 -2.3253 2.5054 -4.4633 -4.3220 -7.9961 -2.8243 +#> 8.4724 4.7369 -6.1556 -0.8707 -7.7329 6.5241 -15.5860 2.7061 +#> 10.8530 3.3671 1.5378 9.2073 -13.9800 6.7152 -10.1386 9.1299 +#> 0.7500 -5.5101 -1.0907 -0.3966 1.0970 4.4621 -12.0540 -8.8876 +#> -4.4861 -0.3344 -8.7108 -2.5508 0.6081 -5.0729 12.3159 2.2751 +#> 4.0570 0.1595 -12.4873 -0.3785 -4.9597 -3.9137 -8.8278 7.8069 +#> +#> (11,.,.) 
= +#> Columns 1 to 8 3.7937 3.3443 -2.4370 -0.7493 2.5299 0.4709 6.0562 -0.3626 +#> 10.6307 11.9516 9.8038 -0.7408 -3.8674 -4.6190 10.3573 -1.0870 +#> -10.6999 -3.8307 -0.2548 -10.6186 16.3173 -1.7586 11.4501 -5.5764 +#> -7.8635 -4.3516 3.3598 -10.4165 3.8293 7.3386 5.5245 0.6330 +#> -7.1622 -8.3438 4.7686 0.1439 13.4165 -8.6863 -3.6452 3.5247 +#> 1.9894 1.6171 0.9422 1.7833 0.1280 -4.0095 3.9278 -11.9568 +#> -11.5697 -2.7070 -0.9483 4.8115 1.9425 7.3465 14.8355 4.7956 +#> 1.2308 0.3228 -0.8247 8.3839 1.2995 1.8578 -5.8216 1.6002 +#> -0.7065 -3.7597 1.1004 -4.9003 7.1400 2.3689 2.0262 -6.3006 +#> -0.6387 -5.4112 0.7874 10.8427 -11.5375 -2.7082 -0.1494 -4.1081 +#> 3.1051 -4.3198 9.7297 -9.6335 9.2616 2.5857 1.3609 3.3331 +#> 2.5867 12.2692 -7.2361 5.3503 -8.8797 -0.2300 4.9374 4.7238 +#> -3.8307 -0.3206 -1.1478 -4.8669 -1.1158 1.6285 2.0032 2.0974 +#> 0.6222 10.6310 -0.8089 -1.0949 -3.1346 -3.2931 8.5881 4.4816 +#> 5.0675 -4.1099 -5.7666 -5.5653 7.9136 3.2587 0.1896 1.7996 +#> -5.7315 -7.7680 -2.8600 -8.7944 8.2526 -2.3587 -1.6003 0.5170 +#> -5.9382 1.3654 -3.4797 3.0315 -0.8547 0.4773 -6.1367 -3.9820 +#> 3.6287 -1.2956 6.1237 -0.9495 4.2141 -3.6351 -0.1131 1.4238 +#> -0.0918 6.8673 3.0499 -2.8238 -5.1383 1.6259 -2.6161 14.0690 +#> -4.7757 6.0958 0.4287 5.5608 -14.7232 8.1290 3.6246 -2.3185 +#> 5.0560 -4.1627 0.3608 3.1132 -3.0299 -8.4261 -4.3013 -10.9617 +#> -8.4905 -9.3292 6.5858 6.3036 7.6876 4.1438 1.9595 -2.7165 +#> -4.8891 13.0413 -5.9035 6.4807 -10.3121 -7.3228 5.7568 5.6401 +#> 7.2047 -1.7132 -5.7729 -2.8612 -6.7071 5.8677 -8.1895 5.1844 +#> -4.3016 9.8202 -0.6803 -0.6256 -0.9157 -13.1843 0.1588 -12.9502 +#> -7.4285 3.6688 -1.6415 0.9138 -5.5771 8.7736 6.6882 1.6432 +#> -4.4428 -9.3043 -0.6134 1.9890 0.4854 0.2168 0.8523 5.6972 +#> -4.3809 3.4880 6.5377 1.8884 3.3033 3.2709 5.9732 -1.5907 +#> 0.6638 -7.0466 0.8941 -2.5001 -1.2203 5.6382 -5.7607 1.6291 +#> 1.0479 5.6997 -2.9219 -2.9017 5.6748 0.8129 -0.8491 8.1921 +#> -4.0707 -1.5495 -0.4425 -0.2246 -0.4532 
5.4448 1.6117 1.8572 +#> 2.2864 -6.7336 -0.1332 -1.3750 -1.8187 -10.8077 -6.0010 -7.3450 +#> 2.0508 -9.1659 -1.8159 -9.7242 4.7545 2.1221 6.2191 3.5645 +#> +#> Columns 9 to 16 8.3584 0.9567 0.6699 -0.5915 -2.5452 -13.7650 -4.9275 5.8179 +#> -2.4895 -2.6484 -14.1346 3.4432 -5.7197 -1.3415 -4.3216 -5.3908 +#> -18.3576 -4.5139 4.7621 4.8123 6.5106 13.7852 -3.4981 -4.8045 +#> 9.2857 -1.9781 -7.5667 -5.5825 -1.8052 -0.4736 3.6928 -5.4759 +#> -2.0240 -10.3960 5.4997 0.7885 -5.6982 3.8130 5.7453 -3.5903 +#> 1.8793 19.2363 -11.2523 2.2805 5.5907 -0.9210 -17.2159 -12.7291 +#> 6.9888 -1.4846 0.4863 5.9328 3.3185 3.3167 -0.3102 1.2225 +#> 2.3845 15.0868 -8.9965 4.0807 7.5237 -3.5945 11.1996 -3.3507 +#> -3.0860 -8.1743 2.3691 2.3801 1.7114 8.3665 6.5482 -2.0371 +#> 15.6948 -7.8191 13.9383 -2.4419 -4.7495 -1.9464 5.8553 -0.1548 +#> -15.9031 5.0678 -13.2995 2.1257 -2.7215 -7.4292 4.4587 4.5165 +#> -16.9304 -5.6543 6.8485 5.5245 5.5420 -6.3474 15.3867 -4.1328 +#> -2.4458 -1.8298 6.2050 -0.8572 10.9190 5.0564 -5.9771 5.1932 +#> 21.3183 24.2537 -2.5838 -3.2858 -1.3310 -2.3132 -5.4575 -1.7254 +#> 2.4244 -8.1272 -20.0456 1.9128 4.9572 -3.7852 -8.8739 14.7509 +#> -7.7226 7.3655 -4.0851 -0.8517 -1.0395 8.1234 0.8589 -0.0638 +#> 0.7190 -9.0523 -4.8855 -1.0134 11.8126 5.6483 10.1249 7.5263 +#> -6.6636 0.0449 -5.7391 -0.3796 -3.3397 11.2182 13.0471 -7.0371 +#> -13.1443 6.5986 -2.9793 5.8134 -1.6611 -4.4457 -2.6233 -0.1726 +#> -4.1520 -0.7067 11.2879 -16.1209 5.7117 -4.4844 12.8697 -1.6114 +#> 0.5737 -4.6153 5.4920 6.8388 1.4059 0.1756 -3.6811 9.9860 +#> 8.5067 -2.7990 12.9833 4.9783 -0.9485 -0.5419 -7.0666 4.7147 +#> 11.7475 8.2290 16.3922 -7.0700 1.0669 -6.5360 -6.9774 -3.5325 +#> 10.1249 2.4669 -9.1289 4.8102 1.4256 1.2654 -3.4537 -8.7730 +#> -11.1659 -6.1313 -7.3278 -3.7938 6.7892 5.7347 -6.2605 -9.0258 +#> 5.8518 -8.8304 6.6573 -5.1409 -0.4738 -9.2835 8.0228 -5.5465 +#> 3.2390 -5.8512 6.1396 -1.0107 -3.1562 -9.1976 4.1317 6.5381 +#> -1.3794 -0.1016 0.4497 1.0687 -4.4595 -3.7331 
#> ... [ lengthy tensor print output omitted: remaining values for slices (11,.,.) through (13,.,.), columns 1 to 48 ] ...
-9.0615 1.7940 11.0528 +#> 5.8765 3.1526 2.6281 5.4769 10.3717 7.5385 2.4004 -8.3088 +#> 4.6767 1.8066 0.1697 3.5908 1.6994 4.1112 -5.9328 -4.9845 +#> 3.4727 -3.0859 0.9391 -6.9982 6.6553 -13.1016 -9.6493 1.4379 +#> 2.4858 -8.1152 -3.2048 -0.5419 -0.9222 9.8688 1.8992 -2.2098 +#> -2.5455 -4.3628 -11.5113 10.2497 2.7842 0.7743 3.8009 -1.7875 +#> 3.6694 -14.8864 3.9832 -11.6735 -13.5658 -1.5386 1.4304 -6.9780 +#> -1.4678 -6.4834 -2.2244 -12.9281 -0.8812 -0.0655 -10.2224 -2.8203 +#> 3.9205 -6.0842 4.4426 5.9737 -4.7561 -3.2656 2.9719 -0.1410 +#> 4.9297 -0.9006 -0.1484 -8.9344 -7.9250 -13.6350 -2.1800 -12.5243 +#> 1.4256 8.0593 6.3374 4.8553 5.4058 -3.2912 4.1656 -5.6653 +#> -4.2563 10.1613 1.5826 -0.1803 2.4414 -4.1139 -3.5719 0.2643 +#> -4.9414 -14.7304 -1.8926 -8.8752 6.2763 8.3244 6.4805 7.4341 +#> -4.4275 -4.0429 5.5091 12.0350 2.2849 4.7607 1.4306 3.0357 +#> +#> Columns 41 to 48 1.3425 2.4318 0.2297 3.9487 0.8340 -3.4892 -1.8934 -2.5187 +#> 2.5465 6.1363 4.7546 -1.1347 0.2565 4.8179 -6.1110 3.9354 +#> -0.9445 -5.7972 5.6068 0.1764 0.0661 5.9946 4.5057 4.3280 +#> 15.6166 -9.8865 1.3323 6.3277 -8.4087 3.5651 -8.7914 -11.7654 +#> 1.3859 1.2135 -6.7168 -0.0515 -2.8396 7.7291 3.0882 5.1256 +#> 1.1402 -9.2210 14.8552 -6.0190 3.4786 5.9440 1.1656 -9.2151 +#> 7.1774 7.7675 -4.4483 0.9239 -6.6282 2.3573 -4.6418 -6.5257 +#> -2.3929 -11.7843 -2.3346 -7.4551 3.8152 1.3559 7.4466 5.8157 +#> 1.2616 -3.3774 -10.5876 -4.8113 -2.2306 6.2377 3.9167 3.2517 +#> -5.8087 3.9114 1.9733 -2.9279 0.6541 3.8726 3.8843 -10.0951 +#> 3.9405 -2.6526 5.0703 4.1638 8.4561 2.9692 -5.7006 -8.5118 +#> 3.6775 0.7867 18.1272 -15.8702 -4.6949 -8.0838 6.5894 16.8597 +#> 0.6880 4.3539 -0.4666 -5.6496 -4.5788 0.7218 -4.0261 -3.4822 +#> -8.6434 -1.5937 0.8873 0.5744 10.9685 1.3127 0.1695 -6.8751 +#> 8.5375 4.0668 -6.1528 -1.6856 -6.2984 -9.0808 -2.0758 8.7333 +#> 1.9280 -0.7524 1.1469 4.2321 -5.0411 -2.8487 1.8767 12.3459 +#> -8.1954 -9.7367 4.8013 9.2316 13.0555 -4.6173 -6.3980 -2.1876 +#> 2.9570 -3.0677 
7.4724 -0.1404 -1.2668 3.9883 1.5399 4.4235 +#> 14.3549 -4.9833 -5.5107 -0.5534 -1.3172 3.7463 8.8271 11.3629 +#> -3.7792 -0.2707 2.4581 -3.1972 -3.6777 5.9267 -4.8994 -14.2793 +#> -3.6525 8.5514 -8.2225 2.7873 -8.2155 -7.0475 11.2389 9.3106 +#> -12.0075 1.9691 -13.6354 -4.5603 1.7476 3.5187 -4.3323 -1.3716 +#> 1.2376 0.8189 7.9525 -6.1161 1.2412 9.3051 5.9537 -2.9717 +#> 5.0425 7.8080 3.6463 -7.0141 6.8462 -3.0663 0.1731 3.5539 +#> -6.2023 2.6357 4.5537 4.1061 0.4813 4.1727 -4.3997 8.9735 +#> 9.4631 -2.4638 -1.9929 8.5272 7.0650 0.9482 1.4085 -1.6877 +#> 0.3908 -2.8212 -2.1498 8.3580 0.0389 -3.4834 6.8670 -2.2132 +#> -1.8257 4.6533 0.2535 2.8972 0.3081 7.1296 -3.2923 2.2787 +#> 1.3109 -0.3302 -13.9930 1.8319 -14.3261 5.9126 -2.5189 -4.8976 +#> 2.0578 0.7190 -4.5069 -0.4589 -2.7491 2.7208 -2.3110 -0.6910 +#> 1.3674 -1.9422 -7.2157 6.5781 -0.6388 -4.9945 -12.9072 -0.9459 +#> 6.6912 3.1621 10.0633 -9.3664 5.1656 1.3459 1.4042 11.3953 +#> -1.4085 14.7674 -0.2491 5.4398 -12.1071 4.9412 -1.9091 -6.8413 +#> +#> (14,.,.) 
= +#> Columns 1 to 8 2.2600 -2.8558 -10.8461 11.0187 6.9782 -4.3478 -0.5539 -1.5358 +#> 2.4124 0.4629 5.8994 0.1709 -6.1074 5.0492 -5.0420 11.9547 +#> 7.1065 10.2237 -8.5146 -6.7201 -8.6567 -5.7960 7.6112 0.0106 +#> -3.6799 -1.3248 3.8999 4.1496 -6.4840 0.1998 2.3449 12.9582 +#> 4.9348 -7.8063 2.6865 0.9718 -13.2628 1.7727 -7.2850 5.8127 +#> 2.2097 7.6322 -3.9095 -4.9124 4.5758 5.9961 5.0590 -14.9655 +#> -10.5864 4.7010 7.0016 -0.9775 -3.7136 -1.1254 3.3070 5.5694 +#> -11.8419 -14.6315 5.5866 -3.0063 2.1711 1.3174 11.3221 -18.4676 +#> 2.5500 2.4397 -4.5283 4.2481 -3.9721 0.8438 4.8044 16.6811 +#> -3.6356 -13.4748 12.1426 3.4413 -12.4099 15.8512 -8.8948 -9.8018 +#> -6.2980 1.1834 3.4216 0.1284 -3.2141 -15.6710 -2.9212 -12.4916 +#> -1.7433 5.9026 -11.5393 11.1219 -12.1876 15.6494 -0.2747 29.1426 +#> 5.5712 -4.7204 -2.8677 1.5505 -4.0286 6.5050 -0.8830 16.4093 +#> -13.9082 -5.1252 8.5360 23.4810 2.1408 12.4500 -8.9152 0.8395 +#> 7.6985 -1.4053 -3.2569 2.8858 2.0554 -14.6744 2.0157 6.5206 +#> -0.9287 -10.3851 9.1769 -6.0633 2.6984 -3.3501 7.5678 -5.8300 +#> -4.7567 -2.6000 5.0655 -1.2978 -9.6344 -12.7557 -0.9538 -8.4263 +#> -6.4511 -6.8126 9.9144 -3.1847 -17.0333 -6.7505 -10.2389 -1.3184 +#> 8.3838 3.7580 -5.1037 -0.2602 -1.0659 1.9762 5.2730 0.7907 +#> -3.5293 -3.5601 5.3804 -4.2517 11.9572 14.2685 13.5047 0.6003 +#> 15.3931 2.2727 4.3013 -15.7087 1.1502 2.3006 20.0940 -3.1557 +#> 8.4442 1.0775 -10.7549 0.8859 14.6411 -2.5016 1.9741 -3.8478 +#> -8.8823 5.2919 -0.3432 4.1783 -6.0039 21.0042 -14.6209 -4.4580 +#> -0.8538 11.9851 -10.1445 13.7776 15.2895 -9.2509 -16.1284 3.7772 +#> -3.9917 -0.6193 -5.6516 -0.5427 4.6323 4.8671 9.3253 8.2588 +#> -1.6397 -8.5018 -3.3495 21.7717 1.2697 -3.9017 -2.1084 4.3235 +#> 4.4836 -2.3586 15.3751 5.0692 -10.0125 3.3953 -7.7981 -6.8305 +#> 3.2021 2.6053 -4.7890 5.3281 -0.1095 -1.1120 -7.5739 4.9639 +#> 0.7935 -4.9545 1.3251 11.3611 9.0432 1.2695 -2.4978 -0.2546 +#> -0.5736 -3.8872 -7.7943 5.9322 2.4866 -6.5556 -3.9849 -6.4153 +#> 3.0998 
0.6605 4.3801 -0.3700 6.1907 -1.8227 14.0377 -6.7645 +#> -3.9354 -15.6906 3.4533 3.0959 -0.7309 -6.0984 -4.9267 5.3492 +#> 0.9763 8.2680 2.8493 -6.9326 10.0556 -0.5642 1.0170 -0.6914 +#> +#> Columns 9 to 16 -1.4014 -3.8321 5.1040 5.4307 -7.5006 1.7403 9.2552 1.6595 +#> 19.7852 -7.0015 -0.5662 3.0511 -0.7668 -6.8566 5.2111 -13.4175 +#> 2.1381 3.3892 -9.8033 4.2055 -0.3977 -10.0748 -4.3383 -6.9594 +#> -1.0467 13.0893 -2.8339 -16.7426 -0.6585 -2.6023 -14.9551 -7.2855 +#> -14.9770 -1.0909 6.8344 -13.0934 -1.1271 12.6072 2.4682 9.3305 +#> 6.3926 14.0606 -11.6504 7.0526 6.6679 -21.2735 -7.1550 -5.2344 +#> -10.0229 -1.6623 9.4548 -7.2335 3.8185 -1.4016 1.8229 2.1612 +#> -2.7789 13.5777 -2.1077 -0.8922 12.0194 3.5338 4.5966 -0.3275 +#> -8.2648 2.1932 10.4050 -5.2969 -3.5747 1.7460 0.5028 3.1883 +#> 4.7326 -0.7221 -4.2855 -12.4920 -0.5439 14.1273 5.7346 -12.9161 +#> -1.4505 -1.4576 -8.1009 -8.1000 8.6470 0.5084 7.7073 0.7724 +#> -6.2689 -2.1733 10.3319 2.3879 7.4911 -13.9156 3.7235 -8.3498 +#> -4.3175 -0.9807 -0.8975 -5.6346 -4.6694 -3.4608 -10.7194 9.9853 +#> -10.1202 17.7336 3.3008 1.6841 -3.6197 8.7401 -3.9455 -18.8883 +#> 4.1658 0.3644 -1.2469 -0.6469 -5.9142 -13.7340 -0.4149 10.0050 +#> -0.9701 6.6391 -9.5478 -2.9086 3.2133 -3.3332 -6.0938 0.6281 +#> 2.8207 -2.7162 -14.8283 -0.2728 1.1317 4.3732 -8.9964 3.7575 +#> 8.6114 4.2575 -8.0006 -2.7123 0.3408 9.4673 9.5063 -6.7093 +#> -18.3700 7.5151 4.1535 -3.4655 -1.7342 -2.5393 13.5231 10.7161 +#> 6.8329 -4.0682 -0.6335 -3.5361 8.2864 6.3652 -21.4809 -1.1721 +#> 6.2769 2.0555 0.3967 1.2667 -2.5802 -5.9748 -5.8337 -0.8392 +#> 10.7900 -2.8396 9.8649 6.9794 -9.6329 -2.0757 -2.0149 -2.7491 +#> -3.5650 -0.8591 -0.1728 -0.2713 9.2215 6.3208 1.4432 -9.7395 +#> 1.4093 0.2759 4.3701 11.6951 -4.7707 -16.6113 -2.4031 0.9886 +#> -8.2225 -9.9281 9.1841 3.6379 -0.0029 -3.8688 -12.0253 10.3216 +#> -9.6009 -0.7327 11.5823 -4.7354 -4.7562 10.2660 -1.7002 4.9996 +#> 7.2495 9.9516 -0.1641 -15.7963 -2.8375 19.9997 -2.5914 -2.9705 +#> 3.5448 
-2.2903 6.5692 1.3634 -0.3906 1.1528 1.4527 -1.9760 +#> 5.6957 -10.7337 11.6013 -3.2464 4.9270 9.5724 -1.8049 -3.0091 +#> -9.4110 -6.9045 16.7280 0.7745 -5.6757 1.5603 4.1562 10.4568 +#> 0.4569 8.0082 2.0614 1.0410 -1.5383 -4.3348 -2.9376 -5.4775 +#> 0.6965 -1.9393 -6.3455 9.7045 -5.7857 -12.4620 1.8762 8.5201 +#> 17.7996 -7.9529 0.6533 -0.5032 -3.7281 -5.9325 -11.4193 -1.8686 +#> +#> Columns 17 to 24 -5.4768 4.9582 -5.0723 -1.1487 1.2050 2.4675 -12.0072 1.8139 +#> -5.8026 11.0532 11.4259 1.2884 5.8616 2.1643 3.0137 -12.2740 +#> -0.9038 -5.5463 9.6585 -10.6045 -4.5570 11.9275 -2.8928 7.6934 +#> -3.4287 1.2766 8.6902 14.7956 2.1803 4.1642 7.9175 -1.5596 +#> 13.5894 7.7346 0.5477 -0.7536 -4.7169 10.0166 -2.0151 -0.2430 +#> -24.3139 5.1565 15.2477 -0.1144 4.2824 25.6640 16.5368 -6.7587 +#> 7.5259 -2.6374 -3.7976 8.1169 -1.1820 -3.7018 5.2150 -4.7325 +#> -19.2848 -1.5864 3.5810 3.9467 -0.5479 -6.7748 0.3467 -11.3724 +#> 11.6433 -10.6654 -8.9308 1.0838 -2.7418 -7.8942 -6.5439 11.6651 +#> 7.1318 -1.3480 -7.2803 8.8523 -6.0541 -3.1280 -12.0955 7.0331 +#> 3.7680 10.2324 -0.4844 -3.9459 -1.5617 -0.8020 4.5583 -11.5668 +#> -7.1870 6.4922 -14.8820 2.5774 5.4022 -0.3494 -1.0015 -11.7287 +#> -0.2821 -1.6047 2.6962 2.9507 -1.8278 -12.9270 -1.3581 0.9846 +#> 2.3997 7.0725 -1.0677 9.4333 0.5657 1.0665 -8.7655 -2.6380 +#> 3.0206 1.9449 5.2204 4.9902 5.4449 -14.4223 -1.8163 -15.3560 +#> -14.0065 -3.6739 12.6390 -8.1281 -5.1985 -4.4967 10.6623 3.3192 +#> 5.4631 11.2123 -2.5677 -5.2898 -2.5867 -3.1398 0.2112 -5.9672 +#> -3.5132 5.2737 -0.4602 -3.8034 -3.5406 3.3348 -2.2505 3.2473 +#> 4.8308 -4.0960 2.4735 3.0338 -5.2545 -11.1473 -6.1799 -5.1850 +#> 0.6530 -2.3355 4.2858 8.0186 -2.0377 -6.1213 9.4021 0.5604 +#> 9.7189 -9.7026 -3.8299 1.9838 9.6862 4.2679 -9.6970 2.4873 +#> 4.9712 0.2527 0.0769 -0.6832 9.8139 0.5805 -7.5233 7.9888 +#> -3.3289 -1.1279 -4.6609 4.8636 -1.5157 -4.0133 -0.9233 3.9986 +#> 7.0947 3.0456 -8.1862 -9.2550 -0.3196 -1.0323 -0.3905 -5.1577 +#> 2.6294 0.5371 0.8569 
-5.5430 -4.8138 -9.3191 2.2431 -1.7935 +#> 0.1546 7.6907 -15.3715 4.2409 2.1672 -0.2653 -5.3247 2.7335 +#> 5.6578 -6.4600 -11.3122 0.3388 -10.8876 -5.5536 0.7111 14.0562 +#> -0.0120 4.1150 0.6879 -8.6990 -2.3047 3.2561 3.3856 1.6094 +#> -1.6072 -0.9016 -4.6860 1.5532 -12.3131 -16.8786 3.2448 15.1862 +#> -3.2762 5.0642 -5.6370 5.6169 5.8870 -9.2878 2.9192 0.6714 +#> 5.3539 -8.4478 3.3212 1.2104 -4.6266 -1.4858 3.5183 4.5395 +#> -5.1764 3.9487 -7.7713 -3.8079 2.3851 1.6164 -1.6969 -1.3513 +#> -1.2971 2.1094 5.8598 -3.8874 1.6435 2.4079 4.9317 9.3203 +#> +#> Columns 25 to 32 -2.6964 -1.1113 -1.4024 -4.4222 0.4144 9.6956 4.1457 -0.7109 +#> -2.4912 -4.8149 -7.1492 8.6070 10.3873 10.9661 15.5382 10.3474 +#> -4.3286 8.9290 3.6952 4.2518 4.0972 4.2747 1.9436 5.8343 +#> -10.3608 9.0934 -0.0294 -3.8719 3.6704 20.6640 -1.7353 -11.6331 +#> -3.2692 -12.3798 -8.7836 -1.2800 -8.2774 -2.7337 6.7594 5.1531 +#> -10.8345 17.4734 3.7571 -16.9622 -8.7740 8.9832 0.4001 8.4319 +#> 3.5035 0.2823 10.0342 -0.4373 2.9493 5.5865 -0.7728 -9.2221 +#> 2.6898 4.1650 -2.8318 -4.0260 -4.7243 0.6167 -6.4298 6.5094 +#> 6.9303 2.7851 3.6933 2.7355 -2.6194 2.4837 1.1551 -1.1337 +#> 5.5545 -2.9953 6.3803 -6.6386 -5.6648 -5.3221 -3.4175 -2.6752 +#> -13.3938 -10.6384 -1.1191 3.0185 7.5009 4.1620 6.9191 -12.4121 +#> 9.8980 1.5878 14.3361 19.2343 5.1820 2.4226 11.3789 -1.4119 +#> 5.7504 15.8322 -0.7426 2.8932 -1.3498 6.3948 -1.5811 -1.3767 +#> 1.6785 -7.7196 -0.2856 2.1753 -18.4288 5.6670 5.3621 0.3818 +#> 8.5308 9.8964 -10.8518 0.2661 10.4546 11.3260 -10.8128 -7.6275 +#> 2.2311 11.8294 -7.7885 -1.7342 6.7255 0.0074 -2.4929 2.4669 +#> -4.5218 -2.5291 -2.2452 9.9933 1.6961 3.4532 -13.5014 -8.1877 +#> -12.8427 -3.1186 -5.3725 8.3846 4.8210 6.8567 1.7881 10.9682 +#> -2.3149 -7.5824 -6.9274 0.6322 -2.4300 -7.7349 8.2114 6.6951 +#> -9.0935 6.9586 12.4879 -3.2093 -3.3315 1.5685 -3.2253 3.4307 +#> 2.5824 5.1793 6.4223 -1.0388 -5.2914 2.5218 -8.0272 2.3941 +#> 18.3362 4.2767 -6.0339 -6.8537 -5.2728 -17.0345 -5.7337 
3.2746 +#> 3.2076 -2.0787 -7.9874 7.2432 -3.0282 -1.8623 -4.2151 7.4416 +#> 14.9657 3.8570 0.1775 -0.9479 -3.9806 -8.6715 9.1971 -3.2055 +#> -8.5756 4.6311 -0.7387 8.2013 -8.3355 -6.6855 -2.8408 15.0039 +#> -0.6835 -1.5262 -5.0150 5.5284 1.3317 25.7027 1.1385 0.1626 +#> 1.1703 -13.3847 3.2461 -3.9855 -1.0056 5.1104 8.3900 -8.6340 +#> 1.9177 1.8562 -6.4541 -1.2059 -3.5174 -3.2030 13.2447 10.5126 +#> 8.6436 17.8570 -7.1956 -12.4798 -2.5568 -4.4195 -9.2614 0.9336 +#> -1.2621 4.2302 -3.5884 2.9379 -9.5656 -13.0744 -6.9742 -0.7941 +#> 5.1301 7.6236 4.3145 -5.1867 1.4815 -5.4339 -5.8556 2.0453 +#> -4.8896 5.7395 -3.8022 2.3572 4.5721 4.0151 -8.8917 10.3065 +#> -12.4749 -2.3375 5.8765 0.8283 4.7226 -1.8815 -0.2609 1.4435 +#> +#> Columns 33 to 40 -4.7302 -0.7203 -2.1366 -5.1505 -5.5650 -1.5344 2.8628 1.8069 +#> -10.0343 3.3875 6.5205 -4.0177 9.9163 -9.8909 9.0677 1.2172 +#> -6.3158 -4.3120 3.1926 5.0718 -1.0227 0.2026 -5.6831 7.3821 +#> 0.9460 10.9301 -2.3823 -2.4416 -2.5480 -0.6224 9.1862 5.6608 +#> 4.6530 7.5677 -12.8621 15.5798 1.2732 3.5899 -13.6831 -0.2983 +#> -12.2439 -6.5280 7.7131 -8.9455 -0.6136 -16.8031 13.5525 15.3239 +#> 13.0103 7.8079 -2.0780 -3.9817 -1.9197 -4.8449 3.7999 2.5186 +#> 1.6175 -14.0658 -3.0648 -9.7692 -1.3258 -3.1297 2.8956 -11.8217 +#> 6.4766 2.5869 0.1051 6.9717 5.7080 3.5031 6.1024 -12.0477 +#> 2.5707 6.8687 -1.8644 -2.8663 -4.0887 -1.3995 -4.8373 -4.1561 +#> -3.3495 7.8976 5.1014 3.9156 2.0625 -7.1106 10.1714 -7.5096 +#> -3.6825 -0.8773 -5.8448 -5.9601 2.6245 -9.4794 7.4325 -8.0925 +#> -1.6669 9.0294 -3.7196 -2.1797 -3.8907 2.0714 -5.2986 6.8930 +#> -4.4502 5.6893 -6.2273 6.1676 -9.2446 -3.0287 1.0945 4.0618 +#> 0.9663 10.6646 -3.0525 -8.6235 2.1806 6.7143 1.0023 -4.7460 +#> -6.8601 -6.9040 -2.1362 -3.8128 -3.6172 6.8327 -11.7972 4.5291 +#> 2.3521 7.5579 8.6607 4.4343 -0.7224 6.2762 -1.0277 -2.2570 +#> -7.8887 6.1623 0.2058 4.3952 2.9239 0.0894 -1.9110 -1.5387 +#> 9.1913 -7.2619 4.7334 7.7999 5.8007 -4.7579 -12.2513 -10.2077 +#> 6.6632 
-5.6959 3.4835 -3.4153 -5.5553 -1.5467 2.9655 14.1140 +#> 7.1015 -1.6608 1.8035 2.9720 -0.2277 5.2976 -8.8339 -14.5872 +#> -0.0507 -5.1413 -6.4168 -3.6183 1.7529 -2.9126 3.9377 -2.5140 +#> -7.9588 2.2475 -6.8236 2.3756 3.6878 -8.0904 -10.5043 4.3607 +#> -6.0041 -3.4032 -2.3349 2.0346 -0.0680 5.5060 1.9226 5.7583 +#> 2.7211 3.8999 -3.5304 8.1900 4.6290 -1.9789 -0.2079 -0.1383 +#> -5.9205 -0.9211 0.3081 -2.4201 1.7209 7.1508 4.0471 2.5096 +#> 3.2303 -2.8736 -1.5706 8.0534 -0.0957 6.4468 -11.9059 -13.3073 +#> -2.9459 -1.4798 0.3102 4.0339 3.9692 -4.6202 0.1579 4.8978 +#> -4.3827 0.0199 -13.2549 -13.5809 5.0189 -0.3153 -2.8449 -0.0908 +#> -2.7795 8.7480 2.1159 -1.6767 1.7355 -2.2025 3.2562 1.3900 +#> 11.7586 -3.5436 2.9524 0.3429 -4.0160 0.9875 2.4724 -0.0485 +#> -9.9436 5.4469 1.0031 -1.0694 4.8070 2.0749 14.3860 3.8473 +#> -2.5094 2.3865 6.0782 0.3042 -0.8091 -5.7312 -7.5272 2.5913 +#> +#> Columns 41 to 48 1.1286 1.9196 -8.0738 -3.3653 -4.8662 1.7636 1.9130 -6.4182 +#> 2.7282 0.1354 6.3312 5.3053 7.7725 4.0125 -6.5891 -6.6015 +#> 3.7504 9.1643 -2.9494 -6.7499 3.4279 -0.8597 -3.7706 0.4302 +#> 0.7339 2.0958 19.1199 -7.1387 5.1549 6.4956 0.8500 -9.3698 +#> -1.3320 10.0561 -9.1548 13.5957 0.0464 14.8015 -7.8637 6.5315 +#> 3.4790 10.4465 3.7532 -17.1198 5.0716 8.5595 2.9782 -14.3744 +#> 7.9216 7.3401 5.8683 -9.1900 -3.5437 4.5550 6.6951 -2.6652 +#> -2.8156 -2.9303 -8.3067 -10.7477 -1.2444 -5.9962 -6.4136 -0.3524 +#> 5.8353 0.0697 5.0888 -6.5700 11.8449 -14.0458 -10.0617 7.1165 +#> 9.5783 4.1892 -6.8969 -5.7255 -5.8262 10.0081 -3.0229 5.2248 +#> 4.8607 -10.2742 5.4418 -4.6935 -3.9924 7.6789 11.0688 -7.2299 +#> -1.8997 -2.5301 12.9405 -5.5245 -7.6899 -4.1013 -6.5683 -11.3616 +#> -8.9428 0.9376 -4.3687 3.1147 -2.8850 0.5377 1.6129 5.0085 +#> 7.5761 11.6445 4.5632 -7.4423 -5.8603 -0.9190 7.3045 0.3010 +#> -16.6320 -7.8229 -0.8025 -0.0247 -10.7229 4.1699 4.9983 -14.3914 +#> -4.2124 -2.1237 -2.7954 -2.3528 -8.9774 -1.7163 3.4692 6.9663 +#> -12.2076 -5.4039 1.0898 4.7608 2.7735 
-1.9635 10.1507 -1.2104 +#> -7.8948 -5.6078 -8.2388 10.2654 6.9660 3.3335 -9.6765 9.3083 +#> -4.1451 -5.9400 4.4090 -1.9872 -8.4899 3.7365 -2.9364 10.1282 +#> -0.2870 12.6552 11.8432 -4.8058 -2.5164 12.0275 -4.0982 -1.4433 +#> -1.3412 5.3159 -6.5643 2.7085 2.4023 -2.4453 -8.0315 -5.0729 +#> 9.8281 3.3709 -10.4745 -5.5053 10.6372 -9.6326 1.6871 0.0840 +#> 4.7918 -2.5304 0.4281 -0.6118 0.6592 -5.7371 5.9862 7.2743 +#> 0.9949 -8.8003 -3.4850 1.1671 -7.0995 -10.5485 5.8313 -1.8438 +#> -9.2603 9.2860 2.3040 2.3731 9.3122 -18.6149 -8.1081 5.1539 +#> 6.4143 7.6940 1.7792 0.3514 -3.0460 -10.5961 3.1049 4.9386 +#> 9.1956 -17.2597 -5.3138 4.9583 -17.3296 -1.2646 12.5870 22.1977 +#> 5.4353 4.3994 -4.5703 -6.5473 0.3110 -0.4991 3.8774 0.1557 +#> -5.3731 -9.0090 -22.4950 -2.0026 -6.2803 3.7204 2.6534 6.7629 +#> 0.4759 -2.5250 7.3919 -1.9523 0.1725 -1.8869 1.5488 -3.0972 +#> 1.5629 8.1136 1.0751 -1.6913 -2.8480 -4.7111 -1.3944 1.6796 +#> -7.1196 -2.5497 -3.5083 12.7240 -1.8137 -11.9769 -13.8184 3.3666 +#> 18.6126 -0.3100 11.7363 4.1982 5.2959 -9.3371 12.0391 -1.6972 +#> +#> (15,.,.) 
= +#> Columns 1 to 8 -3.1253 -2.4891 -2.5895 7.0758 -5.4317 -9.5299 3.6977 -0.1674 +#> -9.8030 -7.6711 -8.4548 -4.8719 -1.7880 -0.4484 0.5284 3.2480 +#> 3.6320 -1.0659 11.7798 5.2171 5.7348 -10.4198 -0.4218 13.1514 +#> 4.8075 -5.5062 6.3425 -7.3376 -4.3863 -4.5289 9.4405 13.9770 +#> 3.8806 -16.2398 3.5059 -6.1362 12.2265 -9.0854 -5.5039 13.4235 +#> -4.1663 6.0868 -3.4945 11.1570 -3.6488 -11.6105 11.0163 6.5336 +#> -3.8031 3.1703 -2.2212 3.1016 -1.5322 5.4706 4.5173 -3.3172 +#> 0.8993 3.7078 -6.8125 4.8599 8.6339 5.4187 15.7392 -5.3999 +#> 0.9756 -1.8817 -0.0176 1.0504 2.2567 -0.4623 -5.9230 5.6332 +#> 1.1827 4.1514 4.8478 3.5314 -4.7031 15.3405 -6.8330 -8.1256 +#> 2.4200 -3.3245 1.6314 11.2973 -6.1368 -2.5005 9.0485 -5.0253 +#> -10.0487 -4.6163 -2.9459 -8.1504 2.7669 5.7930 3.2534 -8.4073 +#> 8.2579 -4.6469 3.6854 -4.0573 8.4072 -7.8887 7.4235 0.3356 +#> 9.0672 -22.3137 8.2970 -1.2394 11.0511 -2.6435 6.9328 2.2990 +#> 4.3114 -13.1906 3.5634 -1.9399 -4.4585 11.5080 12.0772 -6.5503 +#> 5.4362 -5.8136 8.6792 -0.4724 4.3796 -4.4385 9.6176 0.6723 +#> 0.8973 -3.5184 5.9208 2.2987 -7.1067 -4.2047 11.8861 -10.2618 +#> -5.7227 -5.0894 4.4474 -10.8677 5.9358 2.5396 -0.2277 6.3298 +#> 4.3897 -6.0229 -0.0670 -2.5611 17.0762 3.3938 -1.7386 3.7448 +#> -9.4934 6.9583 -8.5542 -1.5938 3.3045 0.0415 -4.6655 3.9101 +#> -1.2996 -0.7492 -3.2458 -4.5743 -4.1373 8.8601 -1.2482 2.2418 +#> 8.7954 4.2748 -7.6379 9.9663 -7.5720 -5.1425 0.4114 8.5001 +#> 9.7972 2.1686 7.2911 -4.3874 9.2126 14.3686 -13.0085 -14.7067 +#> 6.9073 -0.1528 -0.5361 10.2091 3.5838 -1.9562 5.5775 -2.3897 +#> -2.0058 0.9751 -9.4071 -4.1356 10.8273 -7.2553 0.2022 2.7087 +#> 8.8742 -9.5533 6.2028 9.2313 -4.6031 -7.4926 5.2301 -4.4256 +#> 10.5665 -3.9474 6.7996 4.2394 -4.6927 -7.2510 -6.1554 -4.1069 +#> -0.2140 -8.0879 -1.9416 5.6772 3.9732 -12.6492 0.7084 4.7242 +#> -4.1366 0.9518 -0.7086 4.3093 -0.0127 2.9521 -8.6758 -15.1255 +#> 3.7458 2.0063 -5.2624 -2.4859 3.5777 4.8612 -3.6627 -4.5873 +#> 2.0283 -0.2667 -1.7340 
-0.3327 -6.9421 0.0196 -0.1779 3.4340 +#> -1.3583 0.7448 1.4340 -5.0278 1.0488 6.6714 3.4176 1.0589 +#> -1.8821 1.7644 -8.2928 3.4571 -11.9411 3.0883 -7.5486 5.7396 +#> +#> Columns 9 to 16 -1.3996 -0.2212 -2.9046 7.0362 7.4061 7.8913 -0.2837 -1.7778 +#> 8.4745 1.2357 -0.4998 0.1824 -3.7174 2.6377 12.4441 -1.5362 +#> -7.9753 13.8676 1.4375 1.5568 -1.7344 -7.5788 0.3910 -4.4924 +#> -0.2156 -3.7623 -8.0925 3.6557 16.9974 -2.7182 -4.0778 -6.0998 +#> -9.2072 -6.8382 0.9352 -6.8689 11.1395 11.9871 -5.7927 1.1290 +#> 0.5936 0.7895 -15.9258 -12.0001 -1.8077 -13.7929 12.6847 11.3575 +#> 7.7198 -0.4563 -1.6533 0.5332 1.7745 -7.2186 -13.4829 -6.3908 +#> -4.1707 -1.0304 6.4781 1.7351 -2.9058 -11.8763 -2.2188 7.6641 +#> -6.9670 -0.9226 2.9826 -2.0580 -2.8312 0.3905 0.6323 -7.8374 +#> 0.1781 -5.4987 5.2169 5.0412 -1.8663 -6.3837 10.3154 2.5862 +#> 7.0369 1.2855 1.4010 4.5597 -0.1614 -1.8498 2.9861 6.5672 +#> 25.6376 1.0768 11.5551 -14.0244 10.1413 1.9486 0.2683 7.8230 +#> 1.0909 4.2149 0.5620 0.7162 3.3697 -5.4716 -4.2234 5.7646 +#> -11.0719 -5.8712 11.0278 0.0785 -5.3192 -1.2503 21.8856 -2.2491 +#> 9.8843 2.4702 -11.4462 3.8217 10.2331 -0.7071 1.2817 -0.4589 +#> -11.1059 13.3002 4.8504 -0.0257 3.8627 -4.9640 -4.3204 -1.3751 +#> -4.6346 3.1975 9.8365 9.9026 -5.1183 1.7183 6.9603 8.1594 +#> -2.4647 2.7544 9.4306 4.5861 -0.2958 -0.9157 7.9395 4.6285 +#> 1.9963 2.2322 0.9410 -1.1384 7.8911 -0.7860 -8.6600 -0.1771 +#> 5.3851 -3.2531 3.0231 4.9294 -8.7050 -7.7352 -8.3001 -3.8381 +#> 0.4298 10.3184 -11.4358 -5.7089 -1.3504 6.2225 5.1214 -5.6078 +#> -8.9549 -6.2012 -4.3474 17.8379 -12.1752 -9.4749 -8.6853 -22.4279 +#> 5.2647 -1.9981 1.7805 -1.6629 -3.6457 -6.5940 18.4017 10.6445 +#> 7.9835 -7.1073 -1.1655 -11.3274 -6.1546 -4.1639 1.9358 3.5239 +#> -4.5755 12.9567 16.2086 -1.1543 -12.1783 6.4463 3.4809 3.4591 +#> -5.3961 -14.6684 0.0882 4.8349 12.0317 7.2562 -7.2496 5.0603 +#> 0.5083 -0.7885 2.6800 7.8811 7.4976 2.3588 -10.2068 9.5285 +#> -5.1508 0.6026 7.5339 0.4525 -12.0128 -1.3036 
2.9689 -6.5210 +#> -1.1429 -7.2266 1.6786 4.4667 -13.0362 10.3502 -5.1623 -2.0758 +#> 7.7377 0.3442 -2.4271 2.0120 -4.1017 -0.1575 -5.2286 4.5640 +#> -12.6262 1.9976 1.4561 5.7899 -7.9257 -1.2511 -3.9091 -17.3013 +#> 0.8465 4.1801 -2.0622 -0.0416 5.1187 -11.5897 2.2525 7.4036 +#> 13.7671 10.8874 -3.5693 8.3849 3.0566 2.8306 -1.7121 -8.8746 +#> +#> Columns 17 to 24 -2.7890 -2.6777 12.4100 -0.4173 -2.7351 -2.3237 -1.1893 -7.8487 +#> 2.8180 -0.6704 -1.1067 -5.3284 2.8287 -12.1802 0.6371 -7.8358 +#> 5.0840 -12.8286 -8.1259 0.3413 1.5893 4.7990 -1.0848 7.9284 +#> 0.7554 -2.6893 1.4951 -8.0301 -9.2258 9.2503 -4.3063 -2.9557 +#> -3.3535 -1.8424 2.6565 -14.7257 -0.4094 3.4802 -5.4106 0.2227 +#> -8.3111 0.9962 -13.7546 5.9317 7.1012 2.1884 0.1121 0.3053 +#> -12.9573 -4.9866 4.1322 -17.2575 8.8725 1.0928 0.7994 -1.9979 +#> -4.0501 3.7805 -5.9780 1.6578 6.6764 0.6974 -2.1380 1.1962 +#> 7.4870 -1.9593 7.8647 2.1362 7.6960 1.4970 6.4040 11.4150 +#> -8.6788 10.7922 3.7114 -6.4831 7.8223 -0.4050 2.9311 8.9054 +#> -7.1412 1.8338 -5.3095 -2.4958 -11.7982 0.1850 -1.6287 -5.4001 +#> -5.0454 2.4634 5.7743 -3.5566 -3.0905 -10.1670 -6.1730 3.2453 +#> 1.4025 -4.3798 3.3640 6.0507 -5.7250 7.3543 0.9277 -1.4232 +#> -12.3126 -7.0066 25.1666 -1.1720 -9.6095 15.6625 -5.4981 -6.9320 +#> -6.1307 2.9528 11.1720 3.3601 -14.1288 7.2203 -0.0050 -16.1877 +#> 8.2280 0.4377 1.7582 1.3171 -8.7026 -1.1113 -3.6698 -5.1849 +#> 6.9931 -7.4229 -8.2460 1.0635 -5.6589 17.8784 -4.5206 -4.7060 +#> 12.6966 8.8376 -4.6713 -5.9602 -3.9046 -6.6923 -6.4629 0.8691 +#> 9.7201 2.4825 -3.6458 5.0900 5.3519 -11.6224 -0.2309 -3.5843 +#> -4.8432 10.5841 -10.4051 0.8594 0.6298 -0.7780 -0.8480 11.4692 +#> 1.1214 5.4748 -2.0440 0.1484 7.7655 -12.8579 0.9495 11.1897 +#> 1.4060 -10.5693 -1.8325 10.9258 8.6519 7.6816 9.7604 1.3936 +#> -6.5766 3.0152 3.8706 2.6890 -4.8256 -3.7317 2.0358 2.5278 +#> 2.6886 -9.6577 4.5351 9.3324 -5.7782 14.5360 6.2228 -11.9422 +#> 12.6803 -4.0114 1.4691 7.7560 15.4859 -2.5307 -5.0918 -0.0953 +#> 
-0.4229 -15.7543 12.8897 10.0431 -14.1118 5.8578 8.0864 -8.7946 +#> 5.5382 6.8417 10.2201 6.2617 -9.7157 0.1867 2.5066 -4.6980 +#> 5.2566 -7.6473 13.5125 2.9201 4.6974 -0.1316 -0.3926 -3.7544 +#> 4.4954 14.5592 23.9246 -7.5780 -3.5055 4.0718 15.7131 -5.0255 +#> -4.2636 -0.7075 -3.2000 6.7224 -1.7066 -0.7913 5.1736 -1.7236 +#> 2.8973 -0.1939 2.0450 0.3506 14.3486 3.5940 -0.2480 -1.9980 +#> 10.1309 7.5637 -12.5340 2.9218 3.7240 3.3869 2.1361 -9.2092 +#> -4.2256 0.7727 1.1605 -13.3239 -10.2843 -8.2181 0.6975 5.7283 +#> +#> Columns 25 to 32 -7.3705 -4.5355 -4.6654 8.5918 7.8764 2.7982 6.0937 4.9096 +#> -3.2256 4.6105 13.4156 11.3793 13.9300 -2.6096 -0.2455 1.8582 +#> -4.1965 12.2526 1.0939 -8.2841 -0.7019 1.2590 8.1224 8.2583 +#> 10.3194 7.5178 1.8102 -2.6836 -0.2819 -6.5837 5.6283 -10.7460 +#> 2.7923 1.5692 17.0556 -7.5243 -0.6599 -6.3663 5.6177 6.9881 +#> -2.1981 7.7804 -13.8299 7.0072 18.8106 10.8715 3.5986 -3.3465 +#> 3.8715 1.4149 -1.5835 -15.3301 4.3764 3.0398 7.3337 -10.8537 +#> -0.5547 0.5229 -14.5510 4.3035 2.2922 5.5868 -0.3519 -6.5139 +#> 8.5781 8.8679 8.2551 -13.8532 -10.1748 -6.1047 13.1753 -5.9603 +#> 7.8875 -22.1076 -10.7262 -4.0644 -8.6112 -2.5865 4.4834 -8.3024 +#> -2.2896 3.7276 -8.8088 5.1738 7.0051 6.3038 1.5773 5.1471 +#> -3.9813 4.1158 16.0092 7.7633 18.0729 -10.1168 4.8557 -4.8928 +#> -1.5299 -0.7899 -1.8753 -10.5417 -3.6429 1.4227 -1.7932 -13.7711 +#> 9.3151 -4.0839 -9.7988 0.5396 -0.2788 5.7625 18.9792 4.5048 +#> 0.7024 8.3928 -2.2674 8.7297 6.4764 -7.2255 -11.1423 0.3319 +#> -8.2088 2.8323 -2.8998 -13.4854 -4.7871 -0.7063 -14.0605 9.1888 +#> 7.5993 0.3761 -12.7373 4.7726 -10.1780 14.2810 8.6891 -3.3541 +#> 1.7549 7.4356 8.0417 11.8169 1.0411 2.3458 -5.9812 3.5713 +#> 2.0372 10.7705 0.0037 -4.1848 9.4260 -1.5995 -9.3472 3.6611 +#> 2.9331 -10.5961 -4.6535 -6.9784 -16.6003 3.2779 1.2942 -13.4965 +#> 7.2361 8.7838 -3.5954 8.2293 3.4667 1.6373 -3.4771 -5.9106 +#> -1.5328 -3.6081 4.7678 4.0522 -15.1855 -5.3358 1.1090 -3.7671 +#> -0.9779 -8.5687 
-9.3836 3.6648 6.7702 -5.3110 5.8611 -7.0016 +#> -10.7062 0.9163 10.8128 10.8927 -6.5610 -2.6657 -3.5245 13.2871 +#> -6.6687 1.5307 -0.0698 -11.1103 -0.6281 12.6698 8.1904 -10.2510 +#> -2.4164 3.8756 -1.7883 2.0656 -9.5013 -5.1183 11.6215 7.6850 +#> 7.4401 -12.6784 -7.0437 1.4436 -3.1311 0.7063 4.0412 -9.8384 +#> -3.0490 1.9418 11.6326 1.2181 -1.4382 2.8223 5.7756 6.7115 +#> -9.8401 -7.9711 2.8821 4.0277 -14.2939 2.5826 -5.7853 -1.2387 +#> -0.8754 9.9109 6.4537 7.3832 6.1161 -2.5829 -0.1030 2.8234 +#> 4.1773 1.5992 -3.4398 -6.2480 -6.4559 6.2756 -4.9218 -1.6041 +#> -6.8433 5.2723 -0.3745 -7.0563 -9.0295 -1.5636 -1.9559 5.8586 +#> -5.0232 -4.6374 11.6069 11.3809 -2.0076 1.4440 -12.3505 2.9675 +#> +#> Columns 33 to 40 -5.6125 6.7731 8.4541 -2.1854 0.3412 2.5516 0.4211 6.6909 +#> 24.7267 11.4926 -0.7755 -3.1323 -3.8439 -15.0334 -5.0555 6.1352 +#> -6.5297 -3.1846 5.3956 -0.9324 -18.1899 2.5683 0.1014 -4.2333 +#> 2.6885 4.7499 -8.2193 -1.1181 2.0216 0.3198 -2.2306 -6.7962 +#> -7.3324 -17.0377 14.1088 -4.4187 -1.0388 3.1218 8.2675 6.9591 +#> 2.8057 14.4177 -0.1052 -5.3618 10.1704 12.5540 -4.7404 -1.0310 +#> 3.2645 4.1142 8.3417 -10.1458 2.0197 -1.9604 12.6541 -4.8555 +#> 10.4826 3.7072 12.4159 -0.9389 6.7590 6.6212 -4.2486 -1.4669 +#> 5.5648 -13.3981 -3.6252 3.9873 -13.3871 -6.9011 3.3795 -1.0039 +#> -8.5738 13.3355 -11.9097 -0.3358 8.6572 -4.5881 4.3793 5.3560 +#> 4.9882 4.6433 -5.6991 -6.7133 -2.3059 1.3128 3.2571 -4.7954 +#> 21.8481 2.4341 7.4053 -1.9258 -5.9918 -13.7795 9.5304 -8.2610 +#> -2.8034 0.1379 7.4667 -0.7562 -4.7330 -7.0494 -7.7486 -5.3417 +#> -0.2651 7.3359 10.2199 -12.9101 7.4875 18.9526 -0.3215 16.4356 +#> 6.4327 1.3308 -1.0451 3.6919 6.1151 -10.2877 -17.5006 3.3733 +#> -7.3841 1.6269 5.3620 0.1772 -4.7273 1.0363 -3.8303 7.5537 +#> -11.0693 -7.3817 -5.9603 -4.6691 -2.9732 4.7557 -2.1609 -6.5708 +#> 5.7738 3.7471 3.8845 -1.3242 -2.7638 0.9706 -13.0282 1.6208 +#> -4.4859 6.2482 -1.5561 12.2759 -0.3583 0.6379 0.3503 -2.8123 +#> 8.1079 7.7002 -13.2443 
-0.5972 1.9159 -2.3509 5.2845 -4.6995 +#> -12.6464 -2.1451 -2.0149 4.4907 1.3824 5.4930 -9.6339 -3.9409 +#> -12.5056 -3.6930 0.9160 12.3106 3.3540 3.8414 3.1190 -0.4137 +#> 11.1568 10.4567 1.9502 -0.4310 3.7549 8.4122 -1.2994 -0.0369 +#> 0.3792 0.1034 -0.2982 5.5100 9.8395 -8.4046 -8.9870 0.6755 +#> 3.2959 -6.3545 14.4600 -3.1475 -12.6413 0.4253 1.5095 1.5729 +#> 9.7122 -9.0172 10.6954 -2.7309 -10.5575 7.6955 7.7792 0.9114 +#> -16.1275 3.9434 4.8510 -4.9340 6.4149 9.6819 -0.8637 -12.9560 +#> 0.8454 3.9225 4.1566 1.8728 1.0603 -0.9924 1.6692 1.5093 +#> -4.1258 -5.1589 -5.7234 6.6922 13.5036 2.5792 -2.2433 8.0794 +#> 8.1036 -8.9591 -1.8927 6.3455 -4.2923 5.9356 5.4426 5.4370 +#> -9.6589 1.9065 -8.2652 3.1675 4.9384 3.5444 0.1585 1.6574 +#> 7.6873 -1.7784 10.2622 0.2564 -6.5389 -13.5866 -13.6336 13.6436 +#> 4.8743 3.9971 -4.7304 -2.6670 1.5260 -1.3921 -7.4538 4.8182 +#> +#> Columns 41 to 48 6.1353 3.7578 -0.0441 7.3223 4.9602 -2.1568 4.3757 0.8931 +#> 0.5662 18.6103 4.0292 9.4775 -6.7890 -10.5541 -0.4426 -10.3427 +#> -13.2749 -6.6240 4.1254 -13.0459 1.7052 -2.8550 2.5767 6.2758 +#> -2.3137 7.0136 10.9082 -11.1078 -15.2089 1.0354 -2.6229 -8.0987 +#> 6.4055 -14.9889 -6.5978 3.7650 -0.8987 8.4562 -1.4764 13.1080 +#> -4.4993 20.2693 17.5327 -14.8533 4.4216 -8.7556 12.3613 -5.5825 +#> -2.7250 -6.6039 -5.7230 3.6576 2.9245 9.2087 -3.5757 0.7079 +#> 1.5939 2.1982 6.9100 -3.3073 -1.1629 -0.2601 9.5024 -1.7469 +#> -0.3790 -10.1971 0.5872 4.1371 -3.9639 3.0745 -6.6441 -6.2489 +#> 2.3687 -3.5648 -7.2643 5.6019 -3.9771 12.3935 5.8484 -4.8982 +#> -2.8047 -1.5620 -2.2006 -1.3755 10.8788 -17.9937 15.4760 -3.8772 +#> 17.2307 3.3495 0.5677 14.9326 -2.6751 -16.8204 3.8257 -11.9529 +#> -1.4403 -1.1898 1.4185 -7.8768 2.4049 4.3673 -11.8423 1.5068 +#> 12.1322 1.2196 7.1309 0.2518 3.2303 4.1855 4.6652 13.1675 +#> -0.9723 13.1599 -8.4276 -5.5885 -1.6950 -17.1130 -6.1245 2.9842 +#> -11.4643 2.6667 -1.0499 -12.7139 -7.9057 -2.0608 1.6031 4.9724 +#> -4.7444 -2.6997 -0.7225 -7.9546 3.1268 
#> ... [ printed tensor output truncated: numeric values for slices (16,.,.) through (18,.,.), shown in column groups "Columns 1 to 48" ] ...
+#> 1.4544 -1.0651 -1.4330 6.9148 -1.1912 4.7612 -6.6168 -4.1147 +#> 1.5356 2.1107 -8.3980 -1.6127 10.8442 -4.5871 -4.3506 3.3017 +#> 3.8860 5.2318 -4.8024 -8.2091 3.6962 2.8229 7.1286 -7.4035 +#> -11.9108 2.0322 -1.0265 -10.2117 -13.0385 -10.3164 -3.1064 -8.6134 +#> -12.2115 0.2786 -3.1106 -5.7939 -2.2793 -8.2510 -7.1224 8.6989 +#> -1.0105 5.1231 6.0568 3.0091 -1.1277 0.4511 7.1284 -12.9398 +#> 7.0544 4.4285 5.9728 2.4493 1.8328 6.6938 -0.2673 11.6947 +#> -8.0670 0.9918 3.2459 -10.6189 11.4817 6.1879 4.2604 6.4099 +#> +#> Columns 25 to 32 -0.8525 -4.2466 8.1149 8.9604 2.6708 -4.4176 -1.6742 -1.0590 +#> 12.3265 -5.2732 -6.4582 3.1313 7.5117 -3.6575 -7.0726 -0.6175 +#> 4.3506 -7.8010 -2.3238 9.1139 6.7009 -1.6155 -5.2580 -4.8739 +#> 1.3846 4.9054 -1.5195 -15.9436 0.6421 2.2918 -3.9876 -14.4454 +#> -9.4614 -2.8986 -1.1720 3.6532 2.1531 -12.4642 9.8654 -13.5485 +#> -3.7789 2.6711 2.3726 1.3406 18.7266 1.4941 -17.1303 -9.9435 +#> -7.7549 6.6290 -8.2352 3.8351 -7.4765 -1.8254 5.3463 -0.9688 +#> 11.4318 2.1752 3.5475 -11.0165 3.4342 10.1854 11.2030 6.3557 +#> 7.8854 -12.9497 -10.9564 2.5554 -7.5959 4.8934 13.7491 2.9515 +#> -16.6380 8.1645 -7.0017 4.9986 2.3491 9.3661 12.6656 0.1412 +#> -10.5543 1.7138 2.8433 13.7616 5.4915 -18.3438 6.3564 -11.7630 +#> -1.1189 3.1494 -1.9130 12.0892 7.8868 -22.8812 13.7789 2.8608 +#> 5.7502 5.8820 -5.7827 -10.7467 12.4053 2.1625 9.2320 0.1326 +#> -26.4053 -10.0236 5.1681 -8.0470 -8.4688 5.7817 22.6678 -15.7947 +#> 14.9533 10.6294 -3.2660 0.0595 0.2008 -5.3989 0.6418 2.7737 +#> 7.3439 -4.1394 4.6748 -14.2296 13.9774 4.8629 -6.9513 0.6267 +#> -3.2179 6.5492 16.0983 -5.5892 4.1199 5.6976 5.9562 -4.3062 +#> 12.2855 -10.3834 5.0860 -14.8155 0.5852 2.1939 -2.0211 -5.9842 +#> 15.2099 -15.7808 -3.3093 -15.0326 -1.9072 4.7694 3.5471 -3.8703 +#> -13.5151 13.7489 -7.8604 -9.9652 -1.5774 10.2144 0.0495 3.6213 +#> 5.0646 -0.9133 -4.4108 6.7082 -13.5431 6.9828 -1.0883 2.8532 +#> 1.4100 3.7741 -0.6558 0.9271 -15.5753 -6.1758 3.0588 7.6803 +#> -12.6247 
-7.7815 -9.9065 10.6178 1.0637 2.7132 4.7961 5.1542 +#> 6.4375 3.6851 6.0630 2.5964 2.2891 -7.7238 2.8372 -2.4290 +#> 12.6501 -0.4962 -6.1870 -10.1276 -0.2421 4.6832 5.6642 2.8007 +#> -4.5591 -16.3988 20.4572 10.2898 0.5325 -6.2261 8.2257 -11.0654 +#> -14.5784 -8.0392 -2.5530 4.7269 3.0883 11.3417 6.5360 -17.0215 +#> -2.6200 -12.4883 -1.2617 -2.0253 7.9900 -4.3683 -1.7141 -4.1041 +#> 1.9636 -0.0181 7.0525 -1.0082 9.4880 16.3890 7.7812 5.2593 +#> -0.7927 -0.0714 -5.3875 -5.7970 -10.7919 -11.7366 2.1444 4.8084 +#> -0.2244 -2.5639 -1.7997 -14.9589 -8.3325 20.9482 -0.5846 -3.4125 +#> 28.0977 4.8458 12.7155 -10.4177 2.1977 -6.6505 -0.0407 3.5486 +#> -7.3188 -13.0014 -10.6939 4.8607 -7.7463 -5.1368 -19.2582 -5.9339 +#> +#> Columns 33 to 40 -0.1377 3.2506 -3.7871 -1.7106 5.6040 -3.9724 -3.5178 5.4732 +#> 4.7593 -1.1129 -2.8694 -5.2186 0.6332 -1.6388 8.8995 -1.3257 +#> 5.7330 -13.2376 -2.9613 -3.8430 -2.3290 5.7001 -7.2494 -3.2479 +#> 5.2694 -4.3552 -4.1546 5.8831 -12.7448 0.4376 -0.3571 -2.8330 +#> 10.3967 -1.6598 -11.2438 6.2785 3.1869 -1.2920 -1.5651 -13.4057 +#> 4.9391 7.1792 14.1085 -13.6301 -8.4327 -1.1758 -10.8934 3.0144 +#> -3.3718 3.2028 7.2947 8.6978 5.3906 0.6700 0.1483 -6.6201 +#> 1.6603 7.8946 9.3005 -11.7120 -11.4806 9.4585 -3.5366 7.3680 +#> -2.3639 -11.7350 -6.0241 13.2135 -7.9159 8.2889 2.2433 -9.5829 +#> 7.4441 -4.4825 -10.8704 11.8347 5.2982 -2.7713 8.4805 -4.3410 +#> 0.4683 0.3675 5.5605 -3.4918 13.9562 -10.8579 -3.2898 4.5480 +#> -1.2874 6.0127 -11.8734 12.2815 -2.5169 5.7772 14.3163 -14.1267 +#> 6.2015 -2.7839 -5.4442 3.8890 5.3374 10.8819 -1.7560 2.8851 +#> -3.3453 -6.0401 -0.8963 0.5534 -13.0673 8.6007 -6.2398 -6.9812 +#> 3.4145 22.1141 -0.1146 -1.6732 5.3828 -7.8777 -1.9819 3.0397 +#> 5.0049 -6.3872 2.6951 -11.7042 2.1571 -0.2808 0.5163 7.2347 +#> -5.4106 -0.7201 -2.5963 -4.6721 5.6510 11.6991 0.5185 12.9348 +#> 11.7139 -13.3170 -8.1163 -3.0640 -7.0170 1.7769 2.6703 4.6418 +#> -0.6666 -1.5284 3.7767 -3.0283 -4.3940 -1.2406 -0.5145 -10.3931 +#> 
-13.4537 2.4053 6.4562 -0.3350 0.6786 8.1114 5.2701 6.4626 +#> -3.6901 -3.4125 -0.1483 -1.0464 -0.2067 3.2730 -6.4035 1.1932 +#> 4.7079 7.1522 1.0095 -11.3026 13.4431 -9.8529 -12.1500 0.1062 +#> -8.7493 -7.3526 -2.5875 8.2180 -11.8997 -0.3301 1.0430 -4.4056 +#> 1.3124 17.5556 -2.8616 0.9150 10.7662 -3.2282 -5.5729 -12.4599 +#> -9.8113 -4.1152 10.6109 -15.3814 -0.4287 12.5030 5.3582 -1.4230 +#> -4.3885 3.8400 -16.4500 17.1638 1.1282 2.7942 -13.5514 -6.5250 +#> -1.5967 -3.5750 -20.1434 12.8107 5.2973 -7.3706 -5.8396 -3.9896 +#> 2.6274 -3.5200 -4.1072 -5.4231 2.5422 -5.8367 -3.2230 -6.7909 +#> -1.3164 5.3932 -1.0555 -2.4966 3.7360 -7.8412 8.6574 9.8278 +#> -4.4774 7.7216 5.6121 -11.1173 3.9849 -4.7291 -2.4899 3.5248 +#> -6.6304 -0.4859 7.2998 -9.1267 -1.0349 -0.8976 0.7707 0.2001 +#> 15.3553 4.8740 0.8778 2.2946 -2.4136 8.1221 3.2021 8.5878 +#> -3.5446 -14.0025 -1.2499 -6.4958 6.5118 -6.9399 -7.1943 3.8768 +#> +#> Columns 41 to 48 -3.5953 2.1697 -0.3344 -6.6644 -3.9017 -0.7598 -9.5340 6.0270 +#> -8.4097 -12.3432 -5.6333 -3.1630 -4.9298 -7.6671 -6.8117 -4.4287 +#> -2.6316 0.1110 8.7020 -1.0336 -4.1076 -3.1124 12.1268 10.2697 +#> -7.6029 -12.7724 -3.0518 9.1972 -1.7139 3.4533 3.1317 -6.5516 +#> 17.7244 -0.6909 1.6023 0.4245 -9.7306 -6.2988 -2.0096 4.8661 +#> -17.8505 -6.5809 2.6257 -8.1040 1.1353 4.0209 -0.6767 -15.1927 +#> -0.7163 -4.5878 -4.0440 -9.7823 7.1896 13.7092 5.9171 4.5902 +#> -9.6751 -12.4562 -0.0980 7.2054 5.0217 3.4857 3.0155 -8.7498 +#> 4.4006 -4.9509 -1.1345 8.0237 -4.5314 -3.9380 6.7367 5.3459 +#> 8.7066 13.6964 -6.8727 -2.7678 8.2594 1.2589 -10.8770 9.1227 +#> -5.7971 14.9028 -2.1528 -2.3899 5.8608 -6.0840 -1.4284 13.2431 +#> 2.4691 -1.6721 -16.9231 -7.3021 1.9050 -7.1693 -11.4543 7.3997 +#> 6.1480 -1.5046 4.2482 4.0662 3.6572 10.8297 7.2994 -8.9193 +#> -8.8439 -20.0561 -2.0420 4.6577 -12.5582 4.0400 -7.8463 -3.2685 +#> -7.7507 2.0564 5.9774 9.1360 7.6873 11.6704 0.8791 -16.1470 +#> -1.9377 -3.0228 11.4520 3.7974 0.5614 0.6581 7.8290 -4.3634 +#> 
-4.5778 11.0574 7.6594 4.9709 -3.9845 1.9616 7.1617 7.5041 +#> 4.8766 -0.6172 2.3325 5.5392 -4.7441 -5.1529 3.6519 -2.3442 +#> 5.2498 -9.0746 8.3399 17.3116 3.0493 6.6338 4.5687 5.0531 +#> -11.1321 -4.8567 -15.2620 -5.3239 14.1225 -2.4464 -8.0929 3.0455 +#> 3.6702 6.0884 -3.7725 -0.6496 6.3575 3.4182 4.6017 1.1628 +#> -4.4519 8.9508 -2.1166 1.6412 5.0754 -2.2529 -2.1845 -1.7842 +#> 1.9706 5.9680 -9.0638 5.2057 1.3752 2.9317 -2.7724 -7.6801 +#> 1.5172 6.3629 0.9934 6.1167 -5.9671 -6.4193 3.7734 2.2889 +#> -8.4000 -2.7407 0.8589 2.4671 -4.2783 -8.0097 1.4589 -9.7707 +#> -3.7666 0.7908 -9.2058 2.7238 -15.6450 -12.3411 -3.3576 14.9731 +#> 7.9356 17.2258 12.0983 3.0539 -4.0637 4.3706 -2.5933 19.9890 +#> 0.2995 3.2532 0.1612 1.3240 -3.8385 -5.4814 -3.7269 6.8137 +#> 1.4511 8.7210 -0.7404 14.8264 9.6801 -0.8539 -6.6956 -2.7426 +#> -8.3648 5.7144 -6.9272 7.4207 2.4334 -1.5141 -0.9600 -10.3752 +#> -4.5589 -6.6866 5.0427 5.3583 3.3784 9.3046 5.3795 0.8498 +#> 0.5552 -5.3834 -0.0234 -7.8309 -7.7514 -3.5336 -2.8804 -11.4011 +#> -7.8766 15.0146 -5.0480 -10.1385 0.5785 -5.2375 -1.8472 -1.3784 +#> +#> (19,.,.) 
= +#> Columns 1 to 8 -3.4611 -9.1404 1.7931 -3.0810 0.4750 -7.0759 -4.8471 4.2381 +#> -0.0352 -12.1933 -5.7169 -11.1747 -2.5628 -6.8346 -2.2831 4.9942 +#> 2.6442 -0.2799 7.6961 -2.9925 13.2339 -2.4387 -17.7280 -7.4640 +#> 5.7746 6.6950 -5.3038 -2.8800 -0.9693 2.5390 4.3223 -3.1995 +#> -6.1854 8.1444 7.4993 -17.7296 10.2539 11.1687 -10.5051 -5.8786 +#> 2.8332 5.7838 -6.2065 -4.2929 7.3672 -15.5023 -13.5422 -3.5879 +#> 8.9983 2.2253 3.9089 -3.8948 -3.0167 9.7603 5.8497 1.3103 +#> -1.8740 -10.7571 0.9592 9.4249 6.4506 -7.1429 -7.8860 -4.7393 +#> 8.8798 -7.6927 2.0819 3.1199 4.4651 10.9876 2.2112 -5.5021 +#> -3.3376 1.1291 2.8319 6.3287 10.1260 -7.8622 12.9377 -0.1655 +#> 4.4010 1.1070 -7.4041 2.4861 -3.1894 -11.1775 1.6313 -6.4075 +#> -4.3031 0.4363 -15.8812 -10.8058 -6.9135 -13.3837 14.2700 -1.7496 +#> -9.9263 8.3336 -15.6886 -0.5846 1.6988 1.5499 4.1669 9.1616 +#> -11.1320 -5.9512 4.1215 22.5764 -5.4150 -21.3222 -2.8292 -4.1807 +#> -2.0779 0.4513 -16.3865 -12.8173 -19.3269 -0.4818 23.1835 5.7976 +#> -11.8749 4.9889 9.5479 4.0342 2.1430 -1.6942 -4.7308 4.7222 +#> -3.8208 9.4871 1.4341 15.6271 -5.5153 -14.3294 -1.7244 -1.7959 +#> -5.8851 -10.5064 0.0830 -4.4312 0.6757 -5.7881 -10.0082 0.9816 +#> 12.8905 -9.3879 -12.7090 -20.1896 -2.0752 13.9645 0.4113 -4.8586 +#> 6.6401 8.4873 3.5662 8.7259 7.4143 4.1407 2.2664 -4.7658 +#> 3.5540 -5.3497 1.8646 -15.3814 -8.4128 -1.2179 2.8765 3.3504 +#> 5.7773 1.7763 3.1419 7.6839 7.3752 3.5489 0.2443 7.5274 +#> -6.2074 1.6690 -8.1071 3.1395 4.1027 -13.6262 11.1680 0.3504 +#> 5.4216 10.8193 -10.5752 0.7090 -14.1337 -7.2826 11.8666 0.0769 +#> 0.2185 -1.2183 5.5371 -5.9713 -2.5031 -1.1161 -17.1534 -7.7117 +#> 1.5651 1.4154 2.4670 1.2006 5.9063 -11.6006 -1.4069 -1.2071 +#> -6.3910 6.6812 0.9082 -8.4090 2.6307 -11.4943 0.8303 7.1434 +#> -1.1001 -0.0420 1.2677 -2.3396 -0.7564 -2.7638 -7.4476 4.5147 +#> -6.8185 -1.0605 16.0993 1.8552 -11.7301 10.6714 8.6809 15.1086 +#> 3.4357 -8.2866 -0.8393 1.7461 -2.2635 5.7537 6.1828 3.5534 +#> 6.6863 
-8.7180 13.1817 2.8627 -4.3145 7.4306 -2.6715 3.6868 +#> 6.8550 8.8745 -5.7014 -4.9334 8.2615 -5.9185 -1.6588 -7.0063 +#> -3.1026 3.0432 1.5501 2.6255 -6.8630 1.4004 -2.0602 2.0817 +#> +#> Columns 9 to 16 5.2902 2.7383 4.8146 -5.3328 -5.9478 1.1382 2.2200 -1.5181 +#> -2.7915 3.4025 3.4455 -3.6296 -5.0638 7.8659 3.7163 1.6103 +#> 8.8335 10.3378 10.0154 -0.9722 -4.3381 3.9938 -2.6561 -13.8199 +#> -8.0572 -1.6901 -5.5261 -0.2729 12.8816 9.8370 -4.1764 -15.8455 +#> -5.5871 -0.7823 9.1436 7.3097 -6.8795 5.3679 6.6581 12.6578 +#> 14.6644 1.8670 8.3263 -3.8820 0.5929 9.8740 1.6168 -15.2564 +#> -0.2044 10.7061 -9.6726 -7.2059 -4.5816 8.3205 3.3814 -1.3716 +#> 10.8472 -9.4861 9.4482 2.8386 -5.3880 -6.6471 6.8851 3.2060 +#> -5.0416 4.7347 -9.3375 2.8626 -1.3858 -7.3036 -3.2511 -6.0695 +#> -4.1657 1.4105 4.4514 -4.3708 -7.7883 -4.9683 -5.0988 -2.3729 +#> 5.0074 -2.6819 0.2308 -3.8807 5.7511 15.1418 4.6777 -2.6857 +#> 13.4803 9.9774 -7.9755 4.8428 -4.2181 1.4315 4.6671 9.2052 +#> -3.1098 5.3918 -2.5908 0.7142 0.2389 -4.2196 -0.5577 5.7543 +#> 5.3973 -5.1033 -1.8019 3.3361 -17.7087 3.1050 1.6701 -15.3451 +#> -8.5191 -0.6737 -7.4726 8.3262 5.8445 13.9084 6.1187 -6.4196 +#> -2.8843 -1.8931 11.2683 -2.0956 -7.7679 -5.3379 4.7186 -9.8928 +#> 3.8557 -4.4119 3.4556 -2.6384 11.2398 8.3716 1.6247 -1.3049 +#> 3.0877 -5.1738 15.3108 3.6818 0.3692 -6.3290 -2.0822 15.2589 +#> 0.3658 -4.6154 4.7221 7.3196 5.5705 12.4712 5.8117 6.3938 +#> -2.7410 -7.1528 -9.8978 -12.0244 5.8807 -0.8024 -7.3468 -3.7716 +#> 5.8393 -1.4356 7.4594 11.1117 5.8228 -0.2386 -2.7089 3.7404 +#> 2.3091 3.3260 1.5249 -4.2911 -4.1793 -3.8416 -10.7012 -10.4168 +#> 1.9383 7.3857 -9.7228 7.2830 -3.6504 -3.0505 2.8108 0.9772 +#> 2.5689 -0.5136 -1.8759 2.3639 -8.4020 -6.3512 8.6290 -0.0518 +#> 2.5915 -2.2131 -4.5490 5.2644 4.0784 -2.1929 -6.3137 3.1056 +#> 3.4506 0.3835 -6.8334 -9.9447 1.8448 2.3892 0.6988 -11.2923 +#> -2.8059 1.7178 8.3596 0.7915 5.9538 -1.8466 -9.0439 -4.7090 +#> 4.0136 7.1306 4.5880 -2.7101 -6.3237 -0.2910 
1.6246 -9.5799 +#> -4.8588 2.7241 -2.0466 -3.5706 2.3091 -0.0946 4.9093 5.8773 +#> 0.2450 -1.9763 -3.7768 2.7259 1.5898 0.2287 2.6923 10.9416 +#> 2.5144 -3.9102 4.8004 1.2450 1.5627 2.9760 -6.2275 -12.0590 +#> -5.9881 -9.1830 2.9870 -3.6907 -8.4951 -10.5845 -4.5769 6.8579 +#> -10.3085 6.1097 9.1956 9.3138 -0.6870 -20.9332 -0.4139 3.7591 +#> +#> Columns 17 to 24 1.2376 7.5606 12.5421 -3.0553 -0.9328 1.7316 0.6615 -2.5097 +#> 9.4143 14.5409 3.9029 -0.7644 0.0921 -0.1401 1.1963 2.8162 +#> -6.1888 -4.1944 -11.8042 -7.1860 -5.6268 17.0825 12.8673 4.7076 +#> 0.9620 5.0366 -8.5746 12.7652 -2.2362 -5.4341 -3.6892 -3.3061 +#> -9.0501 11.4634 -6.0280 -1.7396 3.4346 -5.4313 10.9943 5.4963 +#> -10.6759 6.0834 2.6129 5.1395 -6.9779 12.2807 4.5402 4.3892 +#> -7.7933 -5.6006 6.5909 -4.0752 -5.2934 -4.2789 -1.3369 -6.7048 +#> -5.6729 5.4822 0.3232 -1.6646 -9.2207 2.5597 1.4470 11.8789 +#> -1.7569 -17.5717 -2.9011 2.3722 -5.3322 -0.9906 -0.2374 -0.7140 +#> 10.2958 -8.0127 -0.2486 6.8947 -1.4969 -16.7986 -7.5833 -3.8531 +#> -5.4006 18.8477 -0.7930 -5.5661 2.8746 5.6923 -1.0670 11.8340 +#> 8.7225 13.2541 -0.0645 6.1721 -5.0030 7.3289 -12.8461 7.3463 +#> -1.2710 7.4224 0.0202 6.3414 -10.4809 -2.1036 -12.0762 -11.1617 +#> -1.3905 0.0062 6.2062 11.0585 -8.7239 0.8016 1.4885 3.2104 +#> 11.7447 19.2056 1.4388 -5.1179 -4.0166 -2.2951 -12.5351 6.4644 +#> -3.2974 11.3161 -8.8209 -11.7499 2.9362 2.3316 6.7193 -0.9621 +#> 12.9329 -0.1011 -7.3242 4.3339 1.0294 13.6734 -3.9086 -6.9845 +#> 10.8593 14.5181 -7.2419 0.2592 5.3456 -1.1686 4.3092 8.9532 +#> -8.1362 12.2634 -5.9849 -5.3753 -2.7588 -18.1391 -2.4050 0.9766 +#> -15.9462 -14.0508 4.6497 6.9329 -4.4538 2.4160 -1.1990 -9.0743 +#> 10.7729 -2.5267 3.3410 -1.3814 3.6126 -5.7625 -7.7225 -3.2399 +#> -0.4555 -18.3892 5.7101 -5.5647 3.8795 6.2622 -0.2600 -10.7557 +#> 8.0861 -6.9856 -7.6936 6.8099 -9.6428 -1.8709 -3.9135 11.6752 +#> 2.3393 -3.4244 1.7594 -1.7691 -1.8816 -1.3449 -3.1213 5.8446 +#> -17.0414 -12.5825 13.7659 3.2276 2.8819 9.7088 
-0.0395 -16.3680 +#> -9.0849 2.8703 -4.8908 0.0538 -4.0284 10.7334 1.8612 14.3455 +#> 2.7764 10.3519 -8.5610 1.2517 1.8367 -13.9965 -7.0438 -2.9352 +#> -7.8392 -0.7714 3.5866 -8.1745 -2.8952 2.8005 7.5204 -2.6269 +#> -3.5496 8.5775 8.3233 -22.9023 -6.5217 -6.1104 -1.7158 -5.2329 +#> -9.3595 4.4470 1.9405 -2.8629 6.0939 -0.0330 -5.4820 -0.4906 +#> -7.1914 -7.4423 5.7052 -8.5940 -1.9660 -9.3857 -3.4722 -16.1382 +#> -5.2057 -6.5660 -3.2930 6.4077 9.6433 10.5191 2.4833 6.9536 +#> 3.6243 -12.0534 8.2019 -4.6935 6.8123 -4.5821 -1.7005 -2.0976 +#> +#> Columns 25 to 32 -5.2512 1.8249 1.1596 1.4756 -0.1453 -5.6668 4.3688 3.8027 +#> 0.3378 -0.9915 3.1091 -8.2443 1.4756 11.2804 5.2564 4.3766 +#> -1.8313 -2.2099 12.7952 2.9058 -1.2613 -13.3544 3.3536 9.3037 +#> 4.9129 5.6348 -3.1549 0.1487 5.6897 -5.9294 12.3421 -4.6768 +#> 5.9648 7.6459 0.8358 5.6926 9.6688 -12.8750 -6.5378 10.4043 +#> -0.5012 1.3708 -1.9134 7.2386 -11.6159 -4.4659 1.1672 5.8006 +#> 7.5550 15.2160 3.9736 0.6838 4.0331 -0.1188 5.0034 -2.9129 +#> 4.0897 -0.8494 2.9359 13.4275 -6.4579 -3.2058 0.3327 -9.4161 +#> -1.2164 -10.9410 4.9078 3.7710 9.4618 -7.4789 -0.9974 -3.7066 +#> -2.3371 0.4635 -0.9240 -0.4963 -0.6820 0.7073 -0.4325 2.0849 +#> 2.0346 5.2058 -2.9409 0.6210 -4.5806 -0.9869 2.5354 -1.3443 +#> -3.2821 6.8711 -2.9636 -14.8750 18.0725 1.8790 1.5808 2.6172 +#> -0.0502 0.7007 -1.9748 -2.0885 0.7059 -5.6614 7.9234 2.7092 +#> -17.1384 3.1040 12.6798 8.4108 4.2531 -9.5270 12.8214 -14.1993 +#> -4.2209 -7.2070 9.4186 -6.9256 -1.7514 5.9064 4.9219 -13.0215 +#> 3.7964 -4.0864 16.0046 1.6829 -3.5307 -1.5526 -0.3633 3.1747 +#> -7.1834 6.0091 4.9318 -5.0777 -10.0517 -5.5497 4.9865 -3.2409 +#> 1.5353 5.2731 -4.6185 -2.0486 0.8635 -3.6302 3.3154 6.9697 +#> 9.7254 2.1400 -13.2873 -0.3283 2.1209 -7.9610 -1.0666 -3.1248 +#> 9.8743 9.8609 -7.9352 3.1834 3.7730 16.6648 -5.2800 -2.7646 +#> 0.1464 -6.8665 -3.2266 -7.7950 -5.8029 -2.8987 -2.3511 3.3740 +#> -0.1179 -1.6332 5.8092 0.0731 -2.9186 -1.1299 -1.0177 4.3289 +#> 
-6.9973 -2.5415 2.9499 -4.7806 -6.6248 1.9698 6.0161 -1.9432 +#> -7.5513 -8.0110 -1.7323 -0.9099 14.4731 -7.5086 -6.2971 -5.3039 +#> -2.6150 7.7648 -1.4753 0.4759 3.1144 1.4673 -5.3906 -0.5745 +#> 3.3908 2.6249 4.3091 7.0905 13.8049 -15.0081 13.2431 6.3141 +#> -2.2864 -0.6594 -0.4841 -1.7268 -13.0835 -10.8193 -3.2489 3.3462 +#> 2.7739 0.1309 6.8358 -4.1396 3.0261 -6.0489 0.8392 5.5245 +#> -9.4017 -2.3352 1.0868 -2.3716 -4.7258 -3.0849 -7.9274 -18.7819 +#> -2.2601 1.6484 -13.3630 5.3704 -2.1310 6.0107 -0.3032 -4.6949 +#> 1.1735 0.0361 0.5446 -1.3908 -6.9785 -0.6287 2.1474 -4.2094 +#> -4.6960 2.4893 3.2426 5.1052 14.4704 0.1834 -1.8464 6.2223 +#> -0.3800 -3.7501 1.3461 -3.6039 -1.4326 9.7886 -5.2614 5.1459 +#> +#> Columns 33 to 40 -11.5152 -0.9243 1.7177 5.0535 -8.1015 -2.6109 -3.8127 5.8398 +#> 4.1287 -9.3522 10.9055 -16.0773 -5.0454 -3.5528 13.8708 2.7029 +#> 11.6871 -1.9325 -0.4353 -7.9579 -11.7032 5.5666 3.7455 -9.6129 +#> 3.2296 13.6496 -11.7866 0.2916 -5.4664 7.4755 16.1655 9.7189 +#> 2.0674 10.3068 9.1034 8.7561 -10.0243 -3.6387 4.6554 2.6122 +#> 5.0319 -14.1564 11.3310 4.3536 -4.7910 -10.0291 4.5722 -3.0061 +#> 10.4745 -4.0082 -3.6633 -5.0737 -3.4279 -0.3299 -4.7581 -5.4095 +#> -6.5930 -4.4209 15.1655 6.3988 0.2990 -16.0692 0.7111 2.0711 +#> 5.5374 10.7280 -9.7765 -5.7725 -4.9483 7.8515 -2.0251 -9.8779 +#> -3.7189 2.4464 6.4076 -10.6928 2.0520 1.9248 4.6834 -10.2054 +#> 4.2027 -0.9405 0.8029 -2.3511 6.8259 -9.3003 9.1939 -2.4253 +#> 0.8544 -13.7303 0.4549 -12.2916 4.1789 1.0487 7.6946 -1.5183 +#> 2.1501 1.2057 -7.6369 -3.0940 -0.9013 5.7293 -4.5890 4.1786 +#> -8.5443 9.4214 8.3380 12.1051 -6.0888 -4.7884 2.3212 -8.0710 +#> -1.2801 9.6789 0.0649 -1.9204 9.9172 -2.8573 9.9711 12.2731 +#> -4.4635 2.3057 4.5296 0.9816 1.7360 1.8344 7.3084 12.0034 +#> 4.0429 6.1456 -2.1394 -6.9665 -5.5990 -5.1341 -3.4766 -10.4355 +#> 3.1496 5.1559 4.6521 -1.9684 -6.8253 4.0424 12.2552 10.9414 +#> -3.6489 9.7629 6.5666 7.4750 5.6747 -2.3111 2.2481 8.5219 +#> 3.8301 -5.4730 -6.6944 
-6.1863 11.0954 0.7285 -7.3015 -0.2664 +#> -2.1023 0.9463 6.2356 -4.1167 5.8918 4.5609 -1.8199 -1.1138 +#> -9.0089 3.7983 3.3152 -7.7969 3.4476 -1.7216 -3.0805 -4.2707 +#> 1.3314 4.4158 1.9641 -0.6657 10.2384 8.2146 -2.2257 -8.8144 +#> -9.1151 -0.4731 3.9524 7.4266 0.7322 -8.9711 -3.1905 -2.7870 +#> 6.1514 -6.4834 -4.7259 3.6780 6.6178 1.3456 -7.7370 1.2610 +#> -1.2889 14.6657 -7.0592 -7.8284 -1.6898 5.6680 6.8909 -6.2393 +#> 5.6675 15.7121 -6.1919 7.7758 -4.3231 4.1056 13.2871 -2.7665 +#> 2.7247 -1.6288 8.2588 -1.6087 -7.6883 1.1752 -0.2560 0.0173 +#> 1.0590 8.4544 2.4623 8.8080 0.8151 -0.4072 -19.2495 5.6410 +#> -14.9196 6.2735 -6.5225 6.6026 9.5773 -9.1956 -7.6193 12.3483 +#> 2.9590 -0.0770 0.5244 -0.1409 1.3758 -2.8964 -4.8030 5.5306 +#> -6.0014 -5.4352 -4.3040 -3.9256 9.0386 -5.5042 0.7756 2.6890 +#> 3.5337 -7.0212 -1.3884 5.6556 2.7715 8.9969 -1.3071 9.9345 +#> +#> Columns 41 to 48 -0.5824 1.4692 9.9275 3.6555 -0.8344 -3.8524 -1.7748 5.2643 +#> 0.3203 12.2354 -3.6763 6.3472 -2.8340 -1.8693 2.3417 2.2867 +#> -5.5468 -1.2549 -6.0051 5.1252 -6.7402 7.3577 3.4990 -1.8936 +#> 9.8346 5.3042 -2.1794 5.1964 -9.2770 -1.5883 6.6708 -0.7402 +#> -3.5805 -0.7470 1.2830 7.2845 -7.3766 3.7228 6.2811 0.2138 +#> 2.6054 -4.6318 -0.8484 8.3144 3.5253 -5.8594 2.0936 -7.2572 +#> -2.0082 3.8686 0.8358 -0.5959 4.6445 -1.6382 5.7570 9.4667 +#> -2.7015 -7.0737 0.3860 0.0786 14.4266 -5.7703 3.4809 -9.3841 +#> -2.2334 2.7636 -4.3436 4.6516 -3.6256 8.9462 0.9035 0.8084 +#> -4.3726 7.2027 -4.3678 3.9060 -1.8193 -4.5488 3.8191 5.7538 +#> -6.2466 6.2129 -0.7223 -7.6462 8.4903 -1.9344 8.9862 3.6523 +#> 7.0767 4.6762 -3.0891 -12.8673 13.1296 -1.3765 -2.7232 9.2269 +#> 0.4489 1.8570 3.5168 -2.8925 -7.6406 -0.0952 -1.3527 3.7495 +#> 1.5826 4.9068 2.2813 19.8114 5.6210 6.5733 -3.8362 9.5918 +#> -2.9860 10.9359 1.6390 -1.1453 -2.9503 2.8189 -8.0274 -2.1033 +#> -4.3303 0.2937 -1.2596 -4.8879 -6.8936 0.5022 -5.8503 -4.2147 +#> -1.3657 -5.2851 -0.5626 -2.0250 5.0959 0.7407 5.7787 6.5425 +#> 4.1264 
-2.3706 -5.5607 0.1691 -11.5336 -0.5482 14.1629 -1.3343 +#> -4.2715 7.0989 -3.2161 -3.4652 5.9091 -8.7639 -2.0256 -2.5622 +#> 3.5216 1.1792 -1.5078 -2.9335 14.8755 -3.4561 3.2270 -4.9966 +#> -1.3233 1.7755 -12.4446 -2.2966 2.4633 -0.1826 -5.5273 -7.7662 +#> -3.8034 0.5859 6.7605 12.2746 -4.7226 -4.1606 -12.1284 -9.8226 +#> 1.3134 7.2764 -9.0757 -4.2172 0.0349 7.8910 -2.4282 2.2683 +#> -6.9185 -10.8574 4.7544 2.8901 0.2314 -4.0512 -4.5323 12.0024 +#> 10.8790 3.4761 4.8979 6.8556 2.2733 1.9687 -7.3329 -0.3296 +#> -0.8850 -1.0989 3.1610 5.7198 1.3077 -2.6804 11.3774 9.7755 +#> -3.7612 6.2740 1.3685 -9.1816 -10.4648 -2.2668 -1.5501 2.4334 +#> -4.7406 3.3968 3.4882 2.6706 -4.0044 -1.0474 -2.7413 2.1758 +#> -9.4952 -0.0138 -0.0833 -4.8732 -4.4048 -9.8041 1.0938 -8.0169 +#> -3.8376 3.0544 1.6981 -2.6817 9.1293 -8.2263 -0.0040 2.8741 +#> 1.8922 0.5671 -3.0768 1.2282 -4.8935 -4.6556 -2.9761 -9.9547 +#> -0.0221 -11.1739 3.2565 5.4391 -6.3191 8.9794 1.6672 2.8193 +#> 8.3047 -2.6720 0.5503 -3.5542 -5.0851 0.9157 -5.0840 1.2687 +#> +#> (20,.,.) 
= +#> Columns 1 to 8 2.2842 -9.3007 -2.5022 -4.5254 0.8086 11.4189 4.4972 0.0225 +#> -5.4441 -8.8693 5.7317 -12.6132 -4.9310 -1.5905 10.4928 -5.4723 +#> 8.0980 2.5519 1.5367 -18.4978 -2.2771 5.7824 2.2589 -6.2605 +#> -10.7820 -1.2471 4.6014 5.3154 -15.6898 -5.4572 -3.4885 13.0038 +#> -1.3370 -4.8736 -6.5480 6.2534 8.5135 13.9670 -9.0490 3.2787 +#> 6.8220 -6.1084 -4.2379 -9.0561 -4.4813 -11.1193 9.8193 12.2705 +#> -1.9170 5.1773 6.8824 7.3334 5.0908 2.4932 -0.6944 2.1303 +#> 8.1684 3.4160 -6.2923 -2.1127 7.2822 -8.4506 2.5393 -4.3990 +#> -3.5237 17.5413 7.2349 -1.0963 1.3250 4.2543 -3.0632 -6.4548 +#> -2.1005 -3.6719 7.0955 -6.5324 19.2545 0.4258 -5.5677 -5.0242 +#> -9.1786 -10.5193 0.8223 0.1238 -6.2313 1.7179 8.5738 -8.5565 +#> -7.8284 1.7040 3.8873 -9.1008 -0.6783 2.9533 -1.6648 -14.6538 +#> 9.3043 1.0702 -11.2738 10.9230 -10.2916 0.5239 -7.3030 3.1418 +#> 2.2950 -6.2420 3.6485 3.5322 -0.9277 20.1958 0.5635 12.4370 +#> -7.1262 -8.2530 10.2255 -0.6592 -28.4965 -7.2944 21.0907 -1.4685 +#> 12.3869 -14.2600 3.5160 -5.1331 -1.2875 -2.0544 5.2436 -3.6207 +#> 1.0990 -1.7611 -4.9954 3.8427 1.6899 -1.0132 -8.4517 0.7244 +#> -1.9679 -5.4131 -5.4782 -3.1717 -1.4692 -1.9899 -16.9192 -15.8189 +#> -1.6072 3.0888 4.8682 -7.2044 -7.5279 -7.6817 -0.1098 -4.4967 +#> 1.0395 4.9443 5.7307 8.6428 15.3588 -1.5626 3.9243 9.2448 +#> -3.0733 1.7062 -0.4861 -11.9732 5.9804 -14.3937 -2.1416 -2.3518 +#> 15.2716 5.2276 3.2135 -0.4847 14.3549 -1.8188 2.1609 5.9673 +#> -8.9208 3.1274 -1.9101 -3.1453 -2.2634 1.1079 -12.2047 -1.1498 +#> 4.2715 0.1498 -1.0524 6.5481 -11.2053 -10.4551 5.7463 8.0795 +#> 6.6160 14.1188 -9.8353 -2.5755 14.6442 7.2921 -5.6188 11.5282 +#> -5.7045 2.6968 1.0832 13.1342 -10.1617 4.7363 1.5884 7.4962 +#> 1.6970 -1.6772 -4.2640 6.2300 -7.0329 -9.8650 -12.3766 -11.0625 +#> 8.7481 -3.2058 1.9772 -4.6278 -1.2608 5.2762 -0.4509 -2.0620 +#> -9.9497 -0.1287 0.6570 15.6611 -8.7315 5.2262 -7.7726 -11.2332 +#> 0.8742 7.2435 -4.7075 6.3476 0.0331 9.3009 -8.1382 0.9533 +#> 6.4679 
2.6813 11.9876 -3.0380 6.1175 -13.9871 1.2410 3.8011 +#> 10.9178 2.4582 5.3939 -0.3543 2.1917 -0.9866 13.4287 7.3312 +#> 2.7657 -3.6383 -9.5479 4.4432 -3.9823 14.3085 -3.2065 3.5106 +#> +#> Columns 9 to 16 -4.1815 -11.5099 -5.2858 2.1357 -0.6340 -8.8014 -1.6580 2.0258 +#> 12.6018 -6.4907 -1.7134 10.0354 5.7478 3.3890 -2.9349 -0.5419 +#> 8.3475 -0.4711 4.3299 5.7077 -9.1401 -2.3767 14.7719 10.5088 +#> 6.1800 -1.6572 8.5729 15.6553 12.7924 -8.0829 -8.9365 0.6729 +#> 0.0958 4.4101 1.4468 11.9980 -9.2241 7.8956 3.2662 3.9322 +#> 16.9663 5.6927 4.8464 -0.8725 12.3926 7.1392 -0.0534 15.9518 +#> -8.2486 -10.1814 -7.3882 -2.1658 -3.0324 -4.6042 3.3077 1.5265 +#> -0.4083 -12.7831 1.3060 -15.8912 5.3509 -4.3770 -0.0479 0.1638 +#> -0.8841 -14.2071 0.4834 -2.1798 0.5447 -0.2378 4.3046 -2.1199 +#> 4.4221 1.6682 -4.2027 -17.1329 -7.9319 1.7415 -12.3031 -8.0263 +#> 3.9206 -6.8473 5.9718 3.3969 14.0158 -20.7377 -5.5385 15.3233 +#> 8.6910 -4.5725 13.4395 -9.9043 -14.6248 -0.2776 -9.8324 7.8371 +#> -5.9905 -4.5636 -7.8709 -7.8936 -2.7318 8.6649 -11.8396 -3.5448 +#> 1.1642 -14.6849 -1.3807 -8.3010 4.8826 -3.0971 -7.9406 2.7303 +#> -10.4803 -13.4136 -2.9378 7.4990 9.2564 -12.6693 -6.4091 6.8663 +#> -1.5119 0.3176 -0.5733 1.2998 2.5392 0.0195 -0.1565 -4.6094 +#> -5.7074 -8.9894 17.5815 -1.2018 1.4961 -17.1923 -2.4444 0.8150 +#> 0.5595 8.2031 4.3901 3.8901 2.6667 1.6572 1.0731 -14.1939 +#> -12.3769 -1.4230 -3.4196 5.7495 -6.0557 8.6327 -1.0485 5.1459 +#> 1.3921 12.8169 -4.7885 -4.5789 2.4130 7.6160 -7.9005 -1.2045 +#> 7.1584 6.3071 9.4819 -4.2199 -0.9562 -0.2124 16.1204 3.1242 +#> 3.0037 -4.6943 -3.0364 -13.0083 -6.6571 6.2830 16.0910 12.3829 +#> 2.1058 4.3323 2.2193 -7.8173 -12.5410 13.6322 -11.2586 -6.8660 +#> 10.5044 -9.5241 -5.9055 -16.7476 0.0221 1.6616 3.0149 11.0270 +#> -0.4683 0.7242 8.5773 1.6223 -9.4850 3.5766 1.4956 -6.4105 +#> 0.7010 -16.4320 10.0723 -2.6672 -0.6204 -13.9131 2.1235 6.2535 +#> -4.9614 -8.3707 3.8164 3.4402 -7.0477 -8.3158 -3.2705 3.0646 +#> 3.6172 -8.7604 
#> ... (tensor values truncated)
#> [ CPUFloatType{20,33,48} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv2d.html b/static/docs/dev/reference/torch_conv2d.html new file mode 100644 index 0000000000000000000000000000000000000000..e73ebb768c0b29a33ae3dc187dbf8d9655b2841d --- /dev/null +++ b/static/docs/dev/reference/torch_conv2d.html @@ -0,0 +1,342 @@ + + + + + + + + +Conv2d — torch_conv2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv2d

    +
    + +
    torch_conv2d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  dilation = 1L,
    +  groups = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

    weight

    filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kH , kW)\)

    bias

    optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    + +

    conv2d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

    + + + + +

    Applies a 2D convolution over an input image composed of several input +planes.

    +

    See nn_conv2d() for details and output shape.

    + +

    Examples

    +
if (torch_is_installed()) {
+
+# With square kernels and equal stride
+filters = torch_randn(c(8, 4, 3, 3))
+inputs = torch_randn(c(1, 4, 5, 5))
+nnf_conv2d(inputs, filters, padding = 1)
+}
+
    #> torch_tensor +#> (1,1,.,.) = +#> 1.7751 1.8102 -5.3800 -3.0488 -5.4477 +#> -5.7712 2.6865 -0.0846 7.5752 6.9191 +#> 5.2896 3.0661 9.7657 -15.9121 -0.4981 +#> -9.0795 -1.5815 3.5357 9.3267 4.3401 +#> 2.0030 4.1984 -14.2355 0.2379 5.6598 +#> +#> (1,2,.,.) = +#> 3.7265 -0.2746 -1.2940 -1.8794 3.8898 +#> -1.3463 4.1611 11.7422 0.3257 -2.8260 +#> -3.1741 -9.4664 0.3786 -9.1628 2.2345 +#> -3.9877 8.6068 3.5048 4.0214 3.5298 +#> 1.4579 0.8858 0.2262 0.5585 1.9909 +#> +#> (1,3,.,.) = +#> 2.8495 7.5995 2.6102 4.4934 -4.1878 +#> -4.5630 3.2005 4.1013 -2.8311 2.1538 +#> 3.3536 5.8931 -4.5269 4.2960 -4.2912 +#> -10.8705 -0.3137 6.4640 0.5994 6.9229 +#> 7.0797 -0.3412 -1.9438 4.4663 -0.2123 +#> +#> (1,4,.,.) = +#> -1.0945 -0.2992 5.3942 -4.6463 -3.2643 +#> 4.5450 0.4313 2.9448 8.1186 6.1091 +#> -5.4938 8.1003 0.6197 1.0535 -8.3222 +#> -6.4377 -8.4517 6.1147 -1.4768 1.7856 +#> 0.4933 1.0665 3.6550 -1.2144 -6.8495 +#> +#> (1,5,.,.) = +#> 1.9912 6.3521 -6.3074 5.0286 1.5800 +#> 4.6302 -6.5545 1.2782 -9.8425 1.6922 +#> -1.9579 11.3642 -0.7866 2.6508 2.1056 +#> -4.8874 0.3786 3.4781 -8.8948 -6.2078 +#> 11.8482 3.8567 -0.3391 -0.4456 5.6384 +#> +#> (1,6,.,.) = +#> 3.2679 1.4055 -6.6733 9.1439 -0.1774 +#> -3.6459 1.0924 -3.4128 -9.4440 6.2553 +#> 1.4464 -5.3091 11.0492 -4.1211 -7.3963 +#> 7.6827 7.8154 -6.3949 2.2058 -7.1125 +#> 0.1458 9.6769 3.2690 1.7532 0.0174 +#> +#> (1,7,.,.) = +#> -5.2135 13.6151 -3.6709 2.1498 -3.5713 +#> -0.4493 -4.8700 3.7786 2.1324 8.0006 +#> 4.3950 -5.6337 7.7445 -8.1921 0.6682 +#> -12.5536 2.6553 -3.4464 6.5250 -5.0772 +#> 0.1851 6.6151 1.0536 -3.3529 1.2139 +#> +#> (1,8,.,.) = +#> 0.3621 2.2075 3.1951 1.3277 -2.4578 +#> 5.0692 2.1762 13.6334 -4.1285 10.3525 +#> 7.2175 -5.1844 -1.5165 7.2140 -6.2546 +#> -8.7801 -13.0705 -3.9751 14.0640 -9.8164 +#> -2.6648 1.6580 4.6547 0.6180 -0.3127 +#> [ CPUFloatType{1,8,5,5} ]
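The `{1,8,5,5}` output shape above follows the standard convolution size formula, `floor((i + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1` per spatial dimension. A minimal pure-Python sketch (the helper name `conv_out_size` is hypothetical, not part of the torch API):

```python
def conv_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    # Standard convolution output size along one spatial dimension.
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# 5x5 input, 3x3 kernel, padding 1 (as in the example) -> 5x5 output
print(conv_out_size(5, 3, padding=1))  # 5
```

With `padding = 0` the same shapes would give a 3x3 output, which is why the example pads by 1 to preserve the input size.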
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv3d.html b/static/docs/dev/reference/torch_conv3d.html new file mode 100644 index 0000000000000000000000000000000000000000..dae8838c855f67db8e56f75b1c74e8241722faee --- /dev/null +++ b/static/docs/dev/reference/torch_conv3d.html @@ -0,0 +1,285 @@ + + + + + + + + +Conv3d — torch_conv3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv3d

    +
    + +
    torch_conv3d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  dilation = 1L,
    +  groups = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

    weight

    filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kT , kH , kW)\)

    bias

    optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    + +

    conv3d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

    + + + + +

    Applies a 3D convolution over an input image composed of several input +planes.

    +

    See nn_conv3d() for details and output shape.

    + +

    Examples

    +
if (torch_is_installed()) {
+
+# filters = torch_randn(c(33, 16, 3, 3, 3))
+# inputs = torch_randn(c(20, 16, 50, 10, 20))
+# nnf_conv3d(inputs, filters)
+}
+
    #> NULL
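The example body is commented out (hence the `NULL`), presumably because the full-size call is slow. With the stated shapes, the same per-dimension size formula as for 2D convolution would give an output of shape `(20, 33, 48, 8, 18)`; a pure-Python sketch (the helper name is hypothetical):

```python
def conv_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    # Standard convolution output size along one spatial dimension.
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# Input volume (50, 10, 20), 3x3x3 kernel, defaults -> spatial (48, 8, 18)
print([conv_out_size(s, 3) for s in (50, 10, 20)])  # [48, 8, 18]
```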
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv_tbc.html b/static/docs/dev/reference/torch_conv_tbc.html new file mode 100644 index 0000000000000000000000000000000000000000..93f8f316a1cd806a190573fd4025a7f0fd9273e1 --- /dev/null +++ b/static/docs/dev/reference/torch_conv_tbc.html @@ -0,0 +1,256 @@ + + + + + + + + +Conv_tbc — torch_conv_tbc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_tbc

    +
    + +
    torch_conv_tbc(self, weight, bias, pad = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

input tensor of shape \((\mbox{sequence length} \times \mbox{batch} \times \mbox{in\_channels})\)

    weight

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} \times \mbox{out\_channels}\))

    bias

bias of shape (\(\mbox{out\_channels}\))

    pad

number of timesteps to pad. Default: 0

    + +

conv_tbc(self, weight, bias, pad=0) -> Tensor

    + + + + +

    Applies a 1-dimensional sequence convolution over an input sequence. +Input and output dimensions are (Time, Batch, Channels) - hence TBC.
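To relate the TBC layout to the (batch, channels, time) layout used by the other convolution functions, the axes simply need transposing. A minimal pure-Python sketch of that reordering (the helper name `tbc_to_bct` is hypothetical, for illustration only):

```python
def tbc_to_bct(x):
    # Nested-list transpose: (Time, Batch, Channels) -> (Batch, Channels, Time).
    T, B, C = len(x), len(x[0]), len(x[0][0])
    return [[[x[t][b][c] for t in range(T)] for c in range(C)] for b in range(B)]

# A (T=2, B=3, C=4) input becomes (B=3, C=4, T=2)
x = [[[t * 100 + b * 10 + c for c in range(4)] for b in range(3)] for t in range(2)]
y = tbc_to_bct(x)
print(len(y), len(y[0]), len(y[0][0]))  # 3 4 2
```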

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv_transpose1d.html b/static/docs/dev/reference/torch_conv_transpose1d.html new file mode 100644 index 0000000000000000000000000000000000000000..e844969a5f0c90db04bbd19414824098f26d9dd8 --- /dev/null +++ b/static/docs/dev/reference/torch_conv_transpose1d.html @@ -0,0 +1,5206 @@ + + + + + + + + +Conv_transpose1d — torch_conv_transpose1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_transpose1d

    +
    + +
    torch_conv_transpose1d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  output_padding = 0L,
    +  groups = 1L,
    +  dilation = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

    weight

    filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

    padding

    dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

    output_padding

    additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

    + +

    conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

    + + + + +

    Applies a 1D transposed convolution operator over an input signal +composed of several input planes, sometimes also called "deconvolution".

    +

    See nn_conv_transpose1d() for details and output shape.
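The output length of a transposed convolution inverts the forward formula: `(iW - 1)*stride - 2*padding + dilation*(kW - 1) + output_padding + 1`. With the shapes used in the Examples below (input length 50, kernel width 5, all defaults) this gives 54. A pure-Python sketch (the helper name is hypothetical):

```python
def conv_transpose_out_size(in_size, kernel, stride=1, padding=0,
                            output_padding=0, dilation=1):
    # Output length of a 1D transposed convolution along the time axis.
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel - 1) + output_padding + 1)

# Length-50 input, width-5 kernel, all defaults -> length 54
print(conv_transpose_out_size(50, 5))  # 54
```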

    + +

    Examples

    +
if (torch_is_installed()) {
+
+inputs = torch_randn(c(20, 16, 50))
+weights = torch_randn(c(16, 33, 5))
+nnf_conv_transpose1d(inputs, weights)
+}
+
#> torch_tensor
#> (1,.,.) =
#> Columns 1 to 8 7.5092 1.7658 -2.2897 11.3350 -0.0320 9.0184 1.6378 -8.5314
#> ... (remaining output truncated)
-5.5393 9.4006 -10.6406 +#> -0.0686 15.5756 -16.7051 0.7950 -11.9573 3.5651 -21.6793 0.0906 +#> 0.2562 -3.8317 4.3460 -8.2718 2.0697 -1.5980 10.5568 17.0559 +#> -1.0399 -0.3710 17.2835 -1.7320 5.8411 -12.8674 -11.0526 9.9299 +#> 3.8539 -15.1293 0.4751 6.1226 19.0899 -4.5517 -0.3517 11.6450 +#> 6.4945 -5.1039 -6.6794 -10.5553 -13.6099 -0.8634 -17.5812 6.6181 +#> -3.0515 1.0205 4.1219 -4.3147 2.5246 -7.1121 1.5620 14.6095 +#> 18.7603 11.6428 -2.3918 -6.4536 7.2049 16.7210 -0.2319 -3.9229 +#> 9.9841 7.8996 15.8140 11.8107 7.5064 10.9066 -4.1827 -10.6760 +#> -2.0325 1.6884 -0.6822 13.7890 1.1328 1.9457 5.2086 -4.3310 +#> -2.2291 -7.0635 4.8275 -1.9795 -11.7523 9.6408 14.6592 -18.0885 +#> -0.7313 -14.2401 2.8376 -6.9697 7.1563 1.0538 0.9391 -8.2351 +#> 9.7196 -0.1118 0.6643 9.5535 -28.1107 -8.3479 -0.4787 4.7379 +#> -8.2879 4.8443 -4.5237 4.2574 17.2704 -0.7534 -0.2627 3.1055 +#> -9.5763 -2.6406 -1.1393 -7.1322 -8.8842 -4.2991 1.9625 -1.4488 +#> -5.2910 7.7209 6.4626 -8.6814 -2.7850 -4.2663 9.5212 3.8776 +#> 15.9189 -3.6846 8.1071 -5.2958 1.2387 1.7237 7.1463 2.7731 +#> -8.0275 -3.3292 10.7864 -4.6377 7.3067 10.3989 -1.4576 -3.2657 +#> -0.7987 4.4697 10.1321 -4.5464 -7.7000 -9.2910 15.0833 1.8220 +#> 0.1179 -7.4323 -3.5473 -10.9446 -9.0101 8.4796 -6.5946 10.8900 +#> -8.9529 5.2127 5.1256 -8.2877 1.0793 -14.6291 -5.0903 3.9544 +#> -0.0143 -21.6235 0.5062 -11.7285 -5.6302 -6.3563 -2.7603 7.7661 +#> 2.9248 14.2674 -10.3140 10.7840 -0.0614 -22.1212 12.7547 -0.6719 +#> 2.9547 0.1362 6.0842 -21.9553 22.4724 -7.6297 0.8820 8.7482 +#> 12.7782 -0.8343 -1.8348 8.3165 0.6553 20.9635 0.0442 11.5359 +#> 0.3486 3.8193 8.0152 -1.3174 -12.5557 0.1765 9.3488 1.8027 +#> 11.4389 -2.4853 -9.1779 4.0988 8.2682 12.0934 0.5927 2.7791 +#> +#> Columns 33 to 40 -10.9232 9.9730 11.3402 -6.9985 -5.9817 12.0211 7.8166 6.7483 +#> 9.9061 -6.1972 8.9170 -12.2403 -3.2537 9.1765 1.9542 2.0039 +#> 18.2129 0.5420 15.0230 -4.1787 2.6993 -4.4323 3.1550 -7.6110 +#> 5.7301 -6.9517 9.9634 -18.3600 5.4696 
-0.6466 -0.9669 -9.7345 +#> -4.5438 1.7791 -16.2689 -7.2792 9.5836 15.4019 -3.9399 -3.3617 +#> -10.4459 -4.0898 2.6926 -4.3441 6.9440 11.0991 -6.8736 6.7650 +#> -2.2186 9.3846 -3.3784 1.2866 -18.9018 15.1361 8.9594 1.4436 +#> 0.6342 -11.3456 12.0230 -18.2563 1.8400 -11.3758 6.0929 -9.9955 +#> -7.0029 5.5635 -17.5657 -16.0448 14.7556 -9.7206 5.8468 -1.3272 +#> -5.2170 2.0634 2.8360 -11.4897 0.4731 5.9903 -2.9786 -3.9552 +#> 14.0826 -0.8618 9.7885 -23.5981 -4.6269 2.7839 18.1689 8.5669 +#> -4.7810 -1.5260 -0.6470 8.5800 10.0634 -10.7164 -20.5765 3.0483 +#> 2.9208 -8.7283 20.0642 -24.0890 10.4879 -6.6390 -18.9042 -4.5116 +#> -10.1837 16.6957 -15.0471 19.0028 0.6059 -18.3325 -6.6769 2.8096 +#> 4.3609 13.3973 -13.6800 -3.5755 -5.4699 0.7154 -3.8066 -5.3042 +#> 2.7760 -6.4505 5.1796 3.7856 -1.1897 -4.0430 -11.1327 0.8289 +#> 2.0836 3.1477 -9.6225 0.7284 -16.2915 6.1414 1.0982 7.2752 +#> -5.3480 -4.6832 9.4552 11.5688 -6.9190 -2.1506 -3.3013 -2.7810 +#> -20.8252 9.1013 7.4089 15.0868 -4.8587 11.5892 -0.7041 4.1646 +#> -1.9325 -4.6326 0.7821 1.1786 7.3145 8.5536 -0.0245 -9.3328 +#> 6.2325 11.8251 -0.1127 2.6230 4.0537 -0.0211 -14.9967 -4.6132 +#> -1.6787 -16.4564 2.5823 1.8390 4.6178 -0.5814 5.4933 -0.9679 +#> 8.7747 0.2013 11.2500 -0.7504 6.5001 -0.1652 5.5703 8.0887 +#> -2.6262 -12.8454 2.7855 -3.5945 -8.0783 9.8257 -12.3558 -9.6176 +#> -2.2488 -10.7800 11.4000 15.3984 11.2245 3.3056 1.3418 -12.2344 +#> 17.4834 -8.8691 2.4820 15.6661 8.7345 -7.0508 14.9689 -9.3668 +#> -0.6897 -6.5376 9.5029 -1.4304 7.1311 3.8007 8.2219 -3.1152 +#> 3.2937 -6.8141 3.4211 3.8443 -6.6478 1.1954 20.9570 -4.7195 +#> 8.8615 6.6159 1.3878 -1.7054 14.7872 -1.8724 -13.2738 -4.3344 +#> 1.1489 4.1738 3.6340 -4.3114 11.8795 14.0606 -8.6819 2.8621 +#> -1.2038 16.4240 9.4475 -8.9706 -16.1243 -11.0447 9.5779 -4.6219 +#> 6.1986 6.9468 -12.7478 6.4257 -2.7053 6.1554 -4.7764 -0.9324 +#> -11.4281 14.2899 -13.6856 -24.8442 -6.2173 4.1451 4.1338 -6.8419 +#> +#> Columns 41 to 48 5.6167 5.1628 -5.3210 -0.4502 
-2.5538 -2.1373 -9.5876 -1.2143 +#> 8.6899 -2.9253 -6.4096 3.3858 -0.5011 2.3444 -14.5082 7.1908 +#> 0.4306 4.3266 20.9429 1.8534 8.9586 -0.3715 -6.6717 -13.2715 +#> 12.1743 5.3164 7.8754 3.0096 -5.3800 12.6414 4.4029 13.5507 +#> -11.9514 0.4266 -5.6734 14.3629 1.5135 14.8825 -29.7157 3.3493 +#> 5.9194 -0.2439 -19.1017 -1.9195 1.1815 2.0446 6.3048 -10.8831 +#> -0.6203 -1.4123 1.0244 10.9377 -16.4396 -12.8790 -17.0253 -20.1500 +#> -12.0508 0.0932 13.4479 -16.1927 -6.1773 18.3777 2.4006 14.4940 +#> -4.6760 -9.2898 -1.0704 -4.9801 1.5868 9.6104 5.8968 3.7639 +#> -11.6862 0.4314 5.3906 2.6873 -4.3179 -1.1402 13.9832 -7.7592 +#> 21.9782 -1.6958 3.9049 1.5796 -4.9585 -2.4052 -2.6968 -1.2419 +#> 3.7931 -0.8146 4.0895 -6.1485 -0.6181 0.7770 10.5150 4.9762 +#> 7.7402 -4.6891 -3.0619 -13.1808 7.5418 16.2776 -3.7161 9.7330 +#> 5.3329 -1.7892 -14.2884 2.9087 7.3382 -1.2930 -5.4422 7.2745 +#> 5.0099 10.2449 -3.2923 -0.9518 -13.3891 -12.5068 -0.2344 -0.5110 +#> 5.9696 -13.7433 -3.7850 -1.8290 7.4685 -11.1664 3.2635 -0.9625 +#> -13.5953 -9.1116 -3.2895 -0.9582 2.0397 1.1494 -10.4343 -6.6435 +#> -8.7079 0.0132 5.1753 -3.4559 -21.4840 -4.6608 9.5996 6.8894 +#> 4.5736 -2.7679 -7.2934 8.6137 9.2659 2.0525 5.1626 -4.1929 +#> 11.8081 1.8777 -4.1571 3.6456 -4.3223 -1.5937 -14.4750 4.8976 +#> 4.4412 -1.9357 -6.1994 6.9305 -3.3421 -5.9407 7.1103 17.2743 +#> -2.3524 2.3204 6.9110 -1.3481 -3.3249 -5.5464 0.2175 5.9876 +#> 17.7796 9.1184 3.7129 17.5938 7.4084 5.2870 -4.9536 3.2263 +#> -12.8465 8.8358 -15.6219 7.6324 2.9546 16.7422 -7.0209 18.3724 +#> -4.4955 -0.8018 -15.7798 -0.8550 6.9244 2.1549 -1.5718 -11.6629 +#> -10.9435 -3.9419 14.7621 -0.4420 6.6204 -13.7846 10.7538 -0.4441 +#> -7.8047 2.1492 -7.8054 0.2371 -2.2775 4.9381 -0.1767 0.7649 +#> -17.0416 -6.5340 10.0033 -4.7815 -9.8913 -7.2958 10.0637 -14.3519 +#> 4.1466 -2.1522 -11.0059 2.9710 11.4202 5.4275 -3.6718 -3.6446 +#> -1.2420 12.3298 -2.0738 6.5719 -4.2960 6.7944 0.7106 -3.0943 +#> 10.3188 12.8648 5.4735 3.2250 -9.5462 1.5936 
-6.0500 1.1283 +#> 13.0340 6.3285 -20.2287 -3.2465 6.8826 0.3587 9.6409 -11.5264 +#> -6.4261 0.0833 -2.6842 -0.0688 -8.8890 0.4671 -5.4090 -20.9419 +#> +#> Columns 49 to 54 -0.7595 17.7225 0.3433 -1.6600 -3.5832 9.1820 +#> -8.7117 4.1735 2.5032 -17.7739 0.8178 7.2484 +#> -4.1945 -2.0523 7.8682 -3.6549 0.1049 2.7100 +#> -14.5688 4.8320 -23.8985 1.7606 5.9118 1.8369 +#> 1.6188 6.0889 10.2977 -0.9904 3.6440 -4.2862 +#> 9.5141 -24.1392 6.4523 -1.2006 1.7379 -8.0874 +#> 9.3583 3.8137 19.6174 1.6485 7.6951 -8.1034 +#> -5.4193 -0.9837 3.1700 4.9500 -9.2599 0.8894 +#> 4.3056 9.3035 -14.7993 7.2205 5.2470 -5.3278 +#> 15.3968 -12.8037 5.6605 0.1833 -1.2213 -3.1601 +#> 0.0873 4.3805 -1.4111 -4.5149 -4.4941 3.8518 +#> 9.0395 4.8149 -9.1265 -1.8190 6.3632 -6.1823 +#> -12.6536 19.3748 -14.2283 -7.0402 11.6094 -2.7043 +#> -11.3499 19.5086 -16.5464 8.1848 3.2863 0.7498 +#> 5.5220 -9.2326 3.3839 15.0107 -13.0285 6.1753 +#> -6.3849 -1.9659 11.4017 -7.0603 -0.7306 -1.1019 +#> 9.0291 5.8749 -7.3415 -2.4878 2.6143 -1.3518 +#> 8.1800 2.8767 8.7180 -11.3694 8.2303 -4.1627 +#> -10.2962 2.4211 -9.2412 7.2377 -1.2510 1.9823 +#> -8.6539 13.0832 7.9066 -3.5086 0.2032 1.5193 +#> -6.3620 9.8919 4.0757 -8.8798 5.1976 0.1869 +#> 3.3345 -9.2125 16.6036 -0.0995 -0.2828 -8.4475 +#> -2.1083 -1.8617 9.7966 -11.4863 7.3926 0.3694 +#> 19.2445 -11.1232 2.8487 -3.6586 -2.7781 -1.6589 +#> -2.4736 -10.0103 -8.4449 -0.9406 10.5947 4.5627 +#> -1.3361 -1.6820 6.5745 3.9665 5.0010 -1.3289 +#> 25.4012 -19.8178 1.6182 -3.6655 -3.4616 5.2035 +#> 10.4479 -1.4385 1.6253 -15.2111 8.9087 1.9456 +#> -2.0720 1.9840 11.0861 -10.8541 -3.9871 -3.0554 +#> 14.9716 -11.3589 7.2884 -19.3158 6.6518 -5.0357 +#> -16.5899 20.3291 -10.1480 0.7029 0.1713 1.4221 +#> -6.5276 -8.9623 -12.1283 -2.3452 -2.4032 0.3382 +#> -8.0900 21.0676 -7.2271 3.0907 0.4850 -0.7093 +#> +#> (3,.,.) 
= +#> Columns 1 to 8 -5.1676 4.3854 8.9417 13.7912 12.8967 -12.5368 -13.4475 -29.8693 +#> -2.9827 -1.8464 3.5695 -0.5098 4.7262 -4.9420 -5.0182 -0.1195 +#> -6.6998 3.1935 1.3652 7.9787 8.3085 5.1705 13.3275 -12.1181 +#> -5.1322 -9.0147 3.3953 -4.9266 11.1200 -8.4146 6.8602 -9.5063 +#> 2.0075 -7.4455 10.4439 -4.1741 -4.2386 0.7301 -0.8043 14.3971 +#> 3.1661 -8.7687 -0.3247 11.4966 -10.5124 6.8284 11.4774 16.3061 +#> -5.6473 12.9050 11.0416 -12.2355 -6.9002 22.7572 16.5203 -8.3692 +#> 0.4523 6.5059 -6.1685 17.3045 9.7165 -2.5745 -5.2398 -1.0340 +#> -5.5061 -3.3630 17.3200 -0.6499 -20.7999 -18.8599 -9.2682 0.6153 +#> -0.7136 3.3564 -8.2173 -3.9614 -13.1121 7.8562 13.0622 -2.5986 +#> 2.5756 -7.8965 4.7584 11.4308 11.0917 -2.3474 -14.7677 -21.7201 +#> 1.8753 1.5512 -3.0405 5.6328 -3.0582 4.0797 0.6847 -8.1307 +#> -4.0625 -3.4900 -2.0719 -3.6909 14.6355 -11.0580 -10.3131 -21.0268 +#> -2.2194 -10.6236 5.6066 -0.1072 1.3125 -10.6183 -14.6643 -8.7573 +#> 10.3914 -2.6885 -6.3333 -0.9993 10.1637 12.6405 -7.0527 4.6743 +#> -0.3993 8.3141 0.6243 -13.4408 6.9399 13.3666 8.9177 1.1276 +#> -6.5183 1.5285 3.5984 4.8716 -13.2214 11.5015 7.9882 -3.1666 +#> 4.9163 4.5215 -6.0134 -10.1000 -5.9447 1.8915 -1.2253 -5.6679 +#> 6.7536 2.8255 2.7324 -3.4355 -3.2745 -12.4479 -0.5024 1.5993 +#> -1.7533 -10.5775 -0.8278 -9.8134 8.6638 8.0763 -2.4028 0.0364 +#> 3.9839 0.3409 -10.4570 -13.2835 -3.3255 -25.1475 -4.1796 -1.4266 +#> 0.5803 6.6940 -12.4528 3.6662 11.8859 7.8554 -22.8835 -4.3774 +#> -3.1301 2.5668 7.0398 13.0751 13.3319 -18.9986 -13.1365 -6.1700 +#> -3.4769 8.3327 10.7065 -6.9407 6.6296 -5.1751 6.9217 6.4474 +#> 3.4457 2.2778 4.4909 12.8246 -1.8149 9.8114 2.1885 0.2819 +#> -5.9717 14.9626 8.9741 -12.3336 -9.5979 -9.1602 7.4589 -6.9046 +#> 0.1160 -10.8806 -3.6631 -0.7203 1.4317 -2.1587 0.5743 5.3527 +#> -0.6568 9.3763 1.3580 7.9313 -11.8559 -2.6212 -13.2784 3.7245 +#> -4.3391 -8.4758 -0.1667 11.3493 -13.0984 4.5487 -13.8899 0.4504 +#> 5.5732 3.5709 -0.4515 8.8835 1.6770 2.2673 -7.8797 
-5.2212 +#> 4.7617 -10.0378 -6.3020 1.3460 0.1302 -3.0625 5.0635 -9.6043 +#> -6.6632 6.7277 -11.9992 -7.4585 1.9142 -3.1063 12.8365 -9.4380 +#> 2.1475 -7.1687 -7.4764 17.2946 1.3755 -11.5783 4.2725 -5.1783 +#> +#> Columns 9 to 16 0.3036 -11.3024 -18.0805 4.6069 2.8373 -2.5326 5.8040 12.0689 +#> 10.6212 5.5497 -10.4617 5.0755 -3.4345 2.4996 14.7773 15.9369 +#> -15.8449 -9.3174 4.6682 -2.9860 -11.9860 1.1289 -16.7580 1.5834 +#> 5.6135 9.3310 -8.1332 -2.8832 1.1820 -3.1934 -5.7096 18.0656 +#> -11.3107 9.9932 6.4118 -0.2207 7.3681 8.6864 -19.0237 -14.6856 +#> 17.4867 -16.3963 -15.2217 5.0862 5.9360 -12.4534 2.1316 3.2917 +#> -2.0394 -3.1791 -7.6890 3.3832 7.2108 -10.2875 6.5990 -2.2819 +#> -3.1144 -1.4885 11.4863 2.6386 -1.2845 6.7801 -11.7524 6.4304 +#> -3.6454 3.3980 12.6641 15.3436 1.4798 16.5141 -2.4534 -24.2972 +#> 11.7442 -7.5043 -8.0800 3.9334 -0.1270 -0.8541 15.4260 7.7498 +#> -1.2218 -17.2297 -9.4993 -4.2767 9.1431 -10.7485 15.9689 -6.6429 +#> -4.1928 16.0113 6.0789 -3.2893 5.0841 6.7080 0.5600 0.0273 +#> 2.7119 9.5193 -3.4432 12.0904 -9.0164 -1.5356 -0.0627 -11.3350 +#> -2.9610 5.5429 -5.7038 3.2398 -5.6736 4.8195 -5.7769 -3.0289 +#> -12.3682 10.2531 4.4688 6.8857 -10.9107 -0.0482 -9.6106 9.1177 +#> 1.4048 -19.3090 3.9878 0.6812 2.1601 -9.6886 20.3901 -0.2893 +#> -0.7541 3.8006 1.8998 3.7767 2.7931 13.7318 -12.1923 -3.5357 +#> 13.0578 0.1626 -1.0595 12.7407 -17.9925 -1.1900 10.8697 1.0413 +#> -11.6378 -21.6557 4.4854 1.7284 4.5183 2.8117 13.8441 -4.5974 +#> 5.4963 -4.4388 -8.9355 -10.3428 -5.6252 -4.1750 4.9992 11.5611 +#> 0.0838 5.8764 -0.8031 -3.6421 3.3854 6.8787 -0.8951 2.3469 +#> 0.8371 -6.5001 -1.8387 17.2207 -3.2285 6.6468 11.7196 -0.3122 +#> -19.0688 -2.4172 -1.5870 3.2479 2.5332 12.7106 3.1121 1.0850 +#> 11.2773 0.2099 0.8492 8.3810 -1.3572 8.8184 -2.1157 12.7494 +#> 0.5370 -16.0230 -1.0461 -8.3726 0.9168 1.3402 -2.3441 5.3462 +#> -3.3933 7.5931 7.3473 0.5098 18.5612 -12.9361 -13.2702 -1.5486 +#> 15.3652 -16.7713 -5.8028 13.2746 8.0601 8.7573 14.7321 
-6.6338 +#> 5.0931 15.6257 2.4031 -9.0759 9.6109 1.2197 3.4899 -1.6881 +#> 3.7917 -1.0522 6.1791 -0.4308 -2.5520 12.2615 -2.9855 -6.7026 +#> 1.2122 16.0225 -9.8612 0.3502 0.6998 -7.9716 0.4053 -7.3333 +#> -0.9244 5.7186 6.0906 0.1273 -6.2419 -0.4498 4.3157 0.5761 +#> 15.6654 8.1636 1.7944 5.9871 -0.0451 2.0708 -14.1700 -1.3609 +#> 12.3706 8.1161 -2.2232 -6.1145 -9.7034 -13.3460 -5.0612 -5.0010 +#> +#> Columns 17 to 24 -5.4216 1.1401 4.8240 9.6962 3.8607 0.2286 1.8772 8.0972 +#> -12.9882 -2.5961 7.1409 5.0553 5.5103 -10.7665 -3.2226 9.5484 +#> 8.4563 13.2828 -9.9814 1.0268 0.7073 5.7946 5.9229 -9.5130 +#> -0.4320 1.1234 -1.4031 -5.9275 -14.8847 -16.2643 -1.8429 8.5939 +#> -4.8680 -7.1420 -0.0741 6.3314 -5.4704 12.8090 -1.6785 -6.5486 +#> -1.7604 -6.0015 4.2669 3.6654 3.6607 -10.7026 -2.7560 13.7193 +#> -8.0264 8.6330 -6.5216 -2.4336 -19.8746 -14.3504 23.0912 -8.1561 +#> -4.8400 -1.5763 9.6763 -2.3567 -10.1442 13.7499 -1.6231 1.4223 +#> -0.7328 -0.3130 -6.6008 -9.5737 -4.6598 17.0784 9.6055 -7.4617 +#> 2.0727 -0.0316 8.7479 10.5035 -0.3153 -9.0982 6.7437 0.4590 +#> -1.6304 -8.2152 -4.9396 4.1893 11.3071 -12.0622 5.2254 13.9001 +#> 13.8572 5.2728 3.8509 -0.7684 -3.8357 -5.6192 -1.7142 -19.1967 +#> -9.3842 -4.7837 4.4577 0.5659 -8.0542 5.2786 6.0536 2.1250 +#> -10.2943 10.5885 -1.9853 -0.4070 11.9270 3.5373 6.2308 -1.1091 +#> 10.6107 -13.3856 1.9552 6.5011 8.9395 -3.9631 -1.4105 19.9545 +#> 3.2959 9.3271 9.6623 10.6403 6.1253 -11.9934 -3.3633 4.3472 +#> -1.4674 10.2126 -21.2552 -0.7752 0.0276 9.0277 7.2645 -4.4475 +#> 4.0243 6.0111 -5.0659 5.3259 2.8305 -2.6396 8.4763 -10.3649 +#> 4.2005 6.9034 0.8247 0.4297 -7.4189 -13.6058 -6.0547 -13.2314 +#> -12.7094 3.2149 -4.2815 16.8455 -10.1513 -13.3428 7.7253 4.8071 +#> -15.2911 -13.0665 9.3437 -16.9386 2.8923 -4.6578 0.3354 -0.0428 +#> 5.4649 -3.7999 -10.9661 -12.9193 -6.2558 3.4290 5.6079 -2.3062 +#> 3.1578 -8.0265 7.2415 -13.8662 3.5183 -4.8634 3.4179 -0.4553 +#> -9.0685 7.4247 -13.0376 14.5329 5.5948 0.6212 9.0086 4.1470 
+#> -2.6753 5.3530 9.4994 5.8938 8.6655 -11.1075 -1.9245 1.6359 +#> 1.5458 5.2331 2.9425 -14.0977 6.1124 3.0792 -4.7941 -7.4190 +#> -7.6383 -23.8248 19.9109 -2.6570 -6.8308 -9.9353 0.3604 7.9929 +#> 0.0368 -9.2980 -3.0392 -7.3616 0.0033 -2.2695 13.9712 -8.1888 +#> -12.3978 7.4440 -8.1300 -8.2130 -3.6385 9.3517 4.1502 10.8084 +#> 1.0350 -7.0157 9.2695 -15.1145 12.5815 -7.6600 7.5487 -14.7018 +#> 4.6132 -2.3264 -19.3550 -2.4313 -2.9898 3.1919 -5.7208 11.5779 +#> -15.4273 5.1705 -1.6033 -13.2370 10.5671 -15.5728 3.8489 -2.4687 +#> -7.4581 2.8750 2.6568 1.0497 6.3311 6.8648 3.8989 18.4072 +#> +#> Columns 25 to 32 10.3351 -1.5769 3.4252 13.7836 3.9965 2.6405 16.7248 4.0101 +#> -6.0968 -15.4264 -9.0269 9.8713 -22.3232 8.2689 19.0507 -13.0055 +#> 2.3186 -11.1995 10.9495 4.4354 -4.7394 3.4724 0.1577 -2.1127 +#> -9.4724 8.8434 2.6938 6.3921 -1.5134 -12.8669 -22.9442 16.9483 +#> -1.1854 0.2564 -11.3047 15.9508 -6.3271 7.8759 -5.3361 5.8605 +#> -1.7057 -14.7213 13.2095 5.8321 3.7307 -3.8520 11.9686 -4.2490 +#> 5.9013 -11.1572 8.4920 6.8084 13.2017 8.7801 -2.5933 -7.4978 +#> -8.8815 0.0722 -16.2817 -0.4731 1.7651 -2.6391 -3.3044 1.3444 +#> -12.4770 5.3769 -1.0898 1.3739 -2.3164 7.6707 -16.3326 6.7719 +#> 2.3154 -6.3129 3.6013 -15.2266 17.4062 2.8900 -13.4940 -3.5560 +#> 2.1785 12.1000 -14.9234 24.7450 1.1970 -13.9475 14.0433 2.7353 +#> 0.8050 -6.3989 -2.8261 -10.9769 -2.1723 0.5281 -11.9126 10.0164 +#> 3.3175 -7.1518 2.9584 8.2955 -10.3765 8.6735 -4.7033 -1.7784 +#> 5.4076 11.4840 -11.7991 -19.7505 -0.3950 -6.6199 -10.0510 9.1154 +#> 3.0866 -5.5300 1.8951 5.5660 -8.8052 7.6707 -5.4482 -3.2680 +#> 5.4140 -7.9153 -13.7417 -1.4279 3.5988 -5.9935 15.3883 -0.0959 +#> 0.4607 -15.7977 3.4221 0.9693 -1.7245 -4.2609 -0.2560 -4.4233 +#> 19.1620 -1.7033 -4.7711 -4.8742 -1.8636 15.7761 -1.1533 1.9570 +#> -11.7664 3.9348 -8.1049 9.3780 2.9715 -5.1961 10.0908 -4.4165 +#> 11.1899 6.3989 2.3887 11.8247 -5.6958 4.8766 7.9577 -10.6484 +#> 7.0079 -18.0267 5.4670 -14.0870 -8.7145 17.8836 -15.9463 
-3.9792 +#> 7.3750 12.1773 17.4754 6.8414 -6.4836 -1.8049 -7.0344 -2.9170 +#> -10.5822 2.6888 -7.6235 14.0420 -0.3742 9.2761 14.3565 16.5268 +#> -5.1296 16.3504 -3.7858 -2.4817 -9.0940 -8.8171 -2.8296 -4.2678 +#> -9.6838 6.5828 0.2946 2.0167 -2.9256 -4.5033 7.1349 -1.3210 +#> -19.8294 3.4722 -1.1876 -8.8626 0.2013 -4.8618 0.4699 2.4278 +#> -3.8120 18.5868 -8.3621 -8.0789 5.7242 4.3538 -2.0483 -1.8576 +#> -6.2936 -3.3374 3.2529 9.8045 0.4049 -7.7762 2.2055 1.9092 +#> 3.5109 -6.2683 -9.1098 -6.1544 -17.4376 -10.8298 -6.6113 -10.7142 +#> 9.2603 7.5734 9.4958 11.1101 -3.3366 4.6612 -11.9323 3.0409 +#> -0.3238 -0.1114 -14.5615 3.2727 1.9813 1.1815 3.8492 11.6906 +#> -7.5225 0.2525 -0.1607 -0.4146 -0.2028 7.9316 0.2772 3.3927 +#> 1.3049 3.7183 4.6914 3.4221 -5.3761 1.5496 -13.5748 -0.1931 +#> +#> Columns 33 to 40 -15.0313 1.8031 -4.8586 -4.6742 -11.6179 3.4501 22.4728 -0.9740 +#> -0.2749 13.0984 -7.2302 1.5292 -1.8052 -9.8942 -6.9763 3.1699 +#> 24.6312 -2.3724 -0.8381 13.1975 -3.2445 0.5209 -4.4298 10.9253 +#> -0.0687 13.3333 2.0191 6.4038 9.7615 3.1002 19.1632 -9.0256 +#> -0.0968 0.5706 12.8796 -9.6310 -7.3837 -11.1546 10.7900 -16.2938 +#> -1.7818 -11.3226 2.4213 -10.2419 3.2077 12.7258 -10.1224 -4.8499 +#> -6.9263 4.4384 -8.2918 -11.3904 -3.8657 3.3859 15.3510 8.6328 +#> -1.2913 3.9457 2.4575 11.6668 14.8740 -16.8759 -3.3993 -13.0933 +#> -7.8915 1.3880 0.4816 2.1369 -4.9386 1.8787 14.2630 -15.2881 +#> -9.5671 2.3661 2.7132 6.8460 2.9096 -0.1649 -7.9070 -6.0393 +#> -1.1403 -1.8965 16.7791 -13.6186 -2.9396 -2.6223 7.6169 2.8150 +#> -0.1347 -5.5610 2.4079 3.5049 -2.6137 -0.2816 8.3561 -3.0247 +#> 1.2535 -0.1156 0.0462 5.2174 -10.3696 -10.7547 10.8972 1.7234 +#> -15.6739 -1.2947 -4.9985 4.1553 -12.7274 1.7979 1.6871 3.3341 +#> -6.1162 -2.7802 4.7909 0.9724 4.2063 -4.9608 -19.9319 -11.9932 +#> -6.8179 7.0258 -2.4303 -8.4642 3.9944 -5.5612 -6.3351 -8.7273 +#> -13.0319 5.8118 -7.3953 5.0442 -0.5523 -3.7281 -1.4637 2.2636 +#> -4.3980 -3.0759 -11.9548 7.8060 5.1759 -0.6060 
0.5467 2.6746 +#> -3.4059 -3.0204 6.8857 -4.4586 2.2255 -6.7481 9.3621 5.8424 +#> 4.5733 5.1204 5.8599 -10.3077 -2.6493 5.5195 -3.6791 5.6804 +#> -1.0876 6.6536 -6.7350 4.1673 9.5383 3.2740 7.0744 4.3193 +#> 3.8565 -1.2565 -1.0224 3.9740 6.5049 -0.5460 -3.1893 -4.9150 +#> 6.5424 6.4211 1.6067 -3.1930 -14.9358 -20.1992 -2.1942 2.8212 +#> 1.7486 -5.1895 5.1360 1.5839 6.3303 -2.8323 0.7757 -2.0265 +#> 0.0772 9.1729 2.0778 -7.1541 -0.9870 -7.1627 -4.6010 2.3400 +#> 5.9069 -4.0067 -6.4865 2.9032 13.8364 4.1096 0.3373 5.0010 +#> -9.4567 2.4729 7.3092 -7.2793 10.0568 7.1594 11.3578 7.3535 +#> -5.0468 -0.9335 -6.8382 -0.4090 8.6911 -6.4320 1.3180 7.6303 +#> -3.5101 10.4786 7.3228 11.3595 -0.7661 15.2906 7.2171 1.2259 +#> 10.3892 11.0709 -5.5317 3.5472 -11.1066 -5.2047 11.7419 -8.7145 +#> 1.8839 -8.3203 -3.0281 -4.2512 -2.3415 -7.5409 -0.8022 -2.5320 +#> -0.3910 -6.9799 2.3695 -4.2693 -1.4531 -1.0846 -2.1721 -4.3543 +#> -2.7668 5.8976 4.6049 1.6244 -6.1064 -1.8341 7.4666 -11.8902 +#> +#> Columns 41 to 48 7.7372 7.6612 4.9961 9.5088 -0.9377 8.1575 -5.2924 9.6978 +#> -3.3669 -13.6178 -13.7287 -4.2277 -1.7368 -15.7717 -5.6129 -2.9614 +#> -9.4431 10.1433 9.5629 -0.3275 3.0597 0.7192 12.6009 -5.1750 +#> -2.2547 18.4198 6.7477 -0.9649 12.8249 -2.1745 16.9646 5.1527 +#> -8.7985 -9.3572 -8.6903 -19.3328 -14.4594 -4.7990 -2.9063 9.2818 +#> 3.0428 4.9044 11.9585 -4.3834 12.0654 -1.3176 -0.4619 5.4780 +#> -17.4608 -0.6354 -6.8096 1.1482 16.4705 1.7212 6.8303 5.3250 +#> 2.4513 -3.8897 0.1238 -0.6568 -8.5003 5.6151 -5.1935 -3.0746 +#> -0.7103 5.7388 4.3880 6.0856 -8.4717 4.7485 7.2855 -5.0299 +#> -16.8739 9.3228 -8.2642 -6.7540 10.7359 -24.7143 -5.7663 1.2593 +#> -10.1084 -2.8553 -4.1009 0.0149 5.2116 -13.3501 -11.1080 1.6188 +#> 4.0397 0.4117 -13.2689 -4.3993 -10.6805 8.1024 1.7343 -2.2570 +#> 1.2193 11.8672 -16.9305 0.1263 5.1343 -17.0045 6.5744 -0.8092 +#> 6.5237 -9.3432 -13.9690 -1.1858 -3.0704 -0.7990 13.5370 -5.2877 +#> -2.0121 -16.2035 -5.6066 -15.6618 -0.7642 -11.4210 -10.8173 
18.3928 +#> 5.7853 -4.0129 -20.9615 -4.4775 4.0315 -15.3383 -3.8424 -10.4490 +#> -15.3268 -24.1635 -3.0229 3.2985 -8.1616 -4.1786 -7.2921 -14.2433 +#> -4.7535 -4.1444 6.8501 -5.0432 4.2619 -6.6299 -4.3127 -4.2475 +#> 23.2268 -0.9254 3.1896 11.0457 -16.4595 2.3668 -7.2917 -0.9706 +#> 7.0187 19.6362 -5.1443 1.1327 7.4589 6.7519 7.2424 -1.8203 +#> 8.0685 -4.9535 8.4029 12.5493 8.5755 5.8927 3.8632 12.0752 +#> 0.5056 3.4838 1.6995 4.6258 -1.5417 -4.6166 12.7847 -1.3103 +#> 8.5583 4.1503 7.7805 12.8082 -3.9444 -7.1749 11.4802 -6.1888 +#> -8.4713 10.3415 -0.3942 -12.9463 10.6149 -15.2584 18.6078 -6.8641 +#> 11.6664 14.0203 2.8554 1.4630 -6.2200 10.6229 6.2584 -2.0938 +#> -0.4953 -3.6202 -2.5227 -4.0894 0.5414 9.3649 9.8964 -9.8591 +#> -0.2743 17.6746 7.8169 -7.1310 3.9413 -3.3677 6.5510 -0.3007 +#> -1.3900 -9.2363 8.4504 19.6248 -5.4001 6.0143 -2.1323 5.8315 +#> 6.1095 1.2179 -12.3948 1.1394 -2.9768 7.9932 7.3940 -4.7661 +#> -5.6988 7.4068 0.7307 -0.3264 -5.5465 7.8606 -4.8046 7.7227 +#> -8.6420 -8.1665 -3.3280 2.2567 -6.4153 2.3591 -13.7708 -5.0655 +#> 0.8891 -10.3632 7.1763 -19.1660 -4.8587 -7.7581 -1.0388 3.0972 +#> -6.1099 -18.8754 1.4319 -15.8268 -7.0458 4.1923 -10.2896 8.3506 +#> +#> Columns 49 to 54 14.9748 -3.4864 17.0685 4.1284 5.1310 3.1262 +#> 5.6224 -12.1718 -3.8469 -0.5642 -4.5141 -0.7448 +#> 9.3552 -8.2074 6.0488 2.0903 2.6791 -1.8372 +#> -9.0054 -23.0041 3.7161 5.2067 1.8688 6.3038 +#> -11.7563 -6.0015 -7.3474 0.3525 -2.0993 -2.5835 +#> -13.5861 -6.8689 10.9663 13.7047 7.8267 2.6829 +#> 13.8121 -22.0552 -1.0479 5.6581 -1.5251 9.2322 +#> 1.3197 12.9777 -10.2431 12.2484 -3.5843 -10.8793 +#> -5.3845 -0.6163 8.8257 -5.9029 -5.1137 0.5133 +#> 17.6449 -2.9233 -4.6607 -2.4684 0.8963 -1.1989 +#> -14.1206 -6.1064 -15.7182 2.0309 0.2568 1.0332 +#> 0.0812 -8.6971 -1.7887 -10.9466 -5.6519 -3.4051 +#> 2.2405 -8.5929 9.6886 3.0710 -6.6256 0.9887 +#> 8.6771 3.4362 8.6513 -4.0919 0.3634 2.0812 +#> -13.8877 6.0659 -6.1025 0.3090 0.0363 -12.5456 +#> -4.0079 5.4196 -5.7510 
3.8144 5.6892 6.7359 +#> 8.7777 -8.2431 -18.0547 1.4337 -5.8826 -0.7521 +#> 14.7119 -4.1838 -8.4559 1.5906 1.3187 1.6980 +#> 6.8145 5.7901 4.4138 -11.7686 -6.8325 -1.9794 +#> 8.1843 -15.1178 12.3738 11.1231 8.9858 5.7360 +#> 3.6753 -7.6156 6.1509 -3.7130 -0.6571 6.1083 +#> -17.1941 8.6179 -6.8330 10.0978 2.7350 0.4185 +#> 0.4965 0.3054 4.7808 3.6973 -11.2568 -0.1029 +#> 10.2335 -5.3348 2.6063 6.3540 -3.5606 -5.8432 +#> 16.7542 6.7847 4.2078 15.8172 -0.1679 3.4273 +#> 7.2342 -2.5870 0.9174 -6.4394 -6.4908 -2.7477 +#> 4.9985 5.3420 -10.3066 2.6977 -1.9458 -1.4674 +#> 14.4275 -7.8555 -3.6470 8.3568 -7.4040 -1.1206 +#> 8.1549 -6.2982 6.0323 3.3352 -4.9192 2.2477 +#> 3.1596 -26.3176 14.0512 1.3184 -0.6189 5.3501 +#> 7.4058 -9.5246 -8.5619 -0.7603 -0.3428 -0.0791 +#> -13.0259 -3.9025 -2.3515 -4.5802 0.1049 1.7601 +#> -2.3563 -8.6606 -0.9520 3.5444 4.4992 1.4307 +#> +#> (4,.,.) = +#> Columns 1 to 8 5.2575 -0.7766 -4.7114 -2.1789 1.4369 -2.6180 -14.3979 -7.7102 +#> -0.6840 3.8540 4.0819 -7.0903 0.5486 3.2613 0.5156 -2.7583 +#> 1.5078 -1.5715 -0.6514 1.7072 -4.7081 11.5225 12.4029 -13.0238 +#> 2.1011 3.4168 -3.8830 -10.5282 3.7449 -4.7472 -5.5063 15.4334 +#> -2.2274 -4.8909 -3.5595 3.3106 -2.7585 18.6218 -5.5809 2.8852 +#> 1.0694 1.6039 -0.5700 -0.2159 -5.5109 -6.7220 23.5573 11.3252 +#> 3.7584 -7.4698 -0.4780 1.7294 0.4013 -19.3640 1.8215 7.7201 +#> -2.5843 4.0710 -2.1787 6.5675 -10.0709 0.0916 5.4518 3.4391 +#> 5.3328 -3.5622 -6.7090 1.1574 0.9005 2.7114 -5.1990 8.0508 +#> 4.2813 -0.1993 0.7410 -0.6316 2.6139 -18.8781 -0.0484 11.1997 +#> -0.0241 1.8775 -6.0636 5.3528 -6.4943 -1.7180 7.9061 4.0923 +#> -1.1490 -1.5504 1.7649 8.4804 0.9798 -8.3027 -7.5732 -2.0555 +#> 0.0750 0.7099 7.6419 0.5240 0.7933 -8.1904 -3.3562 1.2358 +#> 1.0598 -6.0068 9.1573 -12.4051 11.3743 -3.3522 -12.1646 -2.2511 +#> -6.2586 -1.8536 3.6691 2.2931 -6.2947 -3.0303 -2.5080 1.3602 +#> 0.0500 0.5524 0.8717 16.9023 -14.3267 5.7065 0.2353 -7.9640 +#> 1.4073 -3.8425 1.7744 -0.1246 -8.9244 1.8492 7.0214 
2.7224 +#> -0.8062 -8.8097 6.1076 14.5658 -1.1873 -3.1778 -6.6876 1.8218 +#> 3.0836 1.6713 4.6119 -8.0687 -0.9293 5.9532 10.6747 -6.8836 +#> 0.0772 -0.6474 1.2649 -6.3609 2.8105 -2.3691 1.9532 2.3342 +#> -4.4295 3.4954 11.7487 -16.2479 21.5013 -21.5518 -6.8721 2.5280 +#> -0.2906 -1.9987 -5.5446 -1.7154 -21.7141 7.0420 9.9933 1.3947 +#> 0.6380 -2.0186 0.0800 -14.2780 2.1709 7.8208 -2.9369 -7.1446 +#> 0.7143 -7.0579 1.3845 -10.0082 2.6686 7.1179 12.3550 0.2197 +#> 3.8913 -3.4731 7.8313 -5.9778 7.2661 11.2723 9.3754 -4.7707 +#> -1.2250 -0.4714 -0.2368 -6.6252 15.8559 10.3191 -16.4309 9.5409 +#> 2.7018 -8.1056 -3.2256 5.8232 -3.3702 6.1084 -6.3966 2.1613 +#> 2.6144 -0.9473 1.5922 -9.9597 0.8925 23.8863 -8.0366 -0.6395 +#> 1.6135 -5.0597 -2.7115 6.5976 1.5883 5.4543 -7.8440 12.3300 +#> 0.8075 -2.5191 0.7634 1.1340 18.0483 -0.3822 4.6083 -0.3061 +#> -2.8758 1.6708 3.0058 0.2336 -11.2261 1.7554 -8.1278 -1.7567 +#> 3.0654 -1.4678 6.2884 -0.8781 12.7105 -6.6499 -5.2371 11.4991 +#> 1.9314 -2.7722 2.2711 -5.6332 10.9087 -1.3416 -11.0572 24.7921 +#> +#> Columns 9 to 16 -1.2083 -15.3246 8.7146 -1.2374 5.4216 6.7218 10.7859 10.5819 +#> 1.5094 -7.1882 0.4673 -3.3958 -2.1308 3.8891 -7.6609 15.6788 +#> -0.2640 -0.7133 -1.0597 -9.3713 11.0734 5.6356 8.4794 -0.5972 +#> 0.3838 2.0368 -7.6437 5.4887 -5.0681 -0.0557 -18.9789 -17.2678 +#> -4.2130 0.5865 -11.2981 10.6303 17.1320 14.3611 2.8967 5.3941 +#> -9.3434 12.1248 5.2709 -20.6078 -7.1341 4.6923 -2.7827 0.0394 +#> -7.0147 3.1226 -0.5927 -3.7602 -9.7529 -12.3875 10.0586 -3.4637 +#> -1.0193 1.8488 7.5552 -7.5836 13.5377 -5.2304 12.5765 -8.4428 +#> -7.7684 9.5295 1.4658 13.8395 6.0057 -5.9827 6.0963 15.2381 +#> 8.2388 -5.2734 3.3562 -16.8448 -0.3477 15.4448 14.2227 -13.4883 +#> -9.6210 -3.2715 -15.0036 -9.4995 -5.0580 2.9196 -0.3870 7.2528 +#> -0.3533 -8.1353 5.5265 6.4311 4.6806 -10.3631 -6.2019 -5.8668 +#> 1.9106 -1.0105 16.0779 15.1296 1.2079 10.8193 -8.7599 13.7758 +#> -4.2827 7.8034 -1.5330 10.3991 5.2379 11.6170 1.1458 6.5294 +#> 
-11.0512 8.3186 2.9383 0.0428 -4.6336 13.5094 -6.4548 3.0002 +#> 1.3194 2.2313 -14.1495 -3.7014 -5.5203 6.9198 -6.8851 5.7840 +#> 8.2647 -2.6567 -2.6202 -7.8955 -7.6242 0.0974 8.8808 16.0055 +#> -2.0806 6.1532 0.6778 1.4626 -5.6735 -2.1429 0.1467 -1.4459 +#> -13.8766 -15.7014 2.0942 -7.7894 -0.0281 6.3920 6.9762 3.7968 +#> -5.8858 3.6765 -13.4358 15.3547 10.2177 -8.6941 -5.6539 -0.1847 +#> -1.8102 7.2655 11.5626 6.9123 5.0585 4.9868 1.3962 3.2381 +#> -9.8563 -2.3639 3.7108 0.6650 6.4558 7.4911 -3.1464 -3.8051 +#> 1.6512 -4.5645 -10.0018 0.6565 8.4675 3.2615 -2.7642 12.2847 +#> 6.9485 -2.4238 -1.2727 -4.8454 -5.9742 -5.2209 -10.5011 0.8381 +#> -17.4365 -16.8198 9.4141 -8.2449 2.7744 -8.3985 3.4191 6.1630 +#> 5.1889 -18.6333 1.6147 8.9228 6.0449 -13.3716 -4.9473 -0.2034 +#> -4.4799 -13.8313 6.1731 6.8246 -6.9032 6.1586 -3.5178 7.4175 +#> 18.1998 -9.0402 -8.3661 -9.5611 14.3823 -16.7345 7.7613 -8.1148 +#> 1.3935 3.7030 -9.5104 3.7962 3.8622 -2.2526 -26.7995 -12.9311 +#> 13.7098 -10.0344 9.1681 -5.0523 11.3861 -5.4115 5.5574 -14.7097 +#> -4.9616 13.2860 11.9651 -7.7347 2.0093 -8.0394 8.9447 6.1681 +#> 5.5470 -2.6187 5.7762 3.6974 -0.5290 -0.3782 -2.2819 -3.0252 +#> -13.9729 5.5229 9.1752 -11.8947 3.1524 -5.4284 -1.3380 -13.3338 +#> +#> Columns 17 to 24 -4.0013 -5.1752 -6.0625 4.8196 5.2421 -28.3150 -9.7699 3.5622 +#> 1.6674 4.6116 13.3719 7.9474 2.6414 -14.0860 -6.6946 0.5103 +#> -2.2187 5.7972 12.1092 -7.5571 11.7682 11.0406 -1.4348 5.1614 +#> -4.8356 15.1994 -7.3824 -3.7967 8.6452 6.7505 21.6267 13.5475 +#> -3.7089 8.5722 -1.8550 -2.7300 -2.5289 -2.5775 -15.8576 6.0162 +#> -3.8329 10.1273 -1.0425 -0.3029 -2.3056 10.0746 16.1152 -3.9638 +#> 1.7464 -19.6774 -19.1648 -13.7747 -1.4367 8.1978 -6.9395 0.5813 +#> -6.8336 -4.5565 2.3554 25.8708 -14.1392 1.6892 -4.4730 15.2685 +#> 2.1956 2.0937 17.7277 5.4085 -3.9359 -3.5907 2.6210 -5.9122 +#> -5.0316 1.8280 16.5148 13.7234 -8.1850 -4.9112 4.7328 -1.5492 +#> -10.0506 -5.0259 -1.6797 12.7264 18.3157 -0.3624 -15.3497 6.4820 +#> 
+#> [... lengthy tensor print output omitted: values for Columns 25 to 54 and slices (5,.,.) and (6,.,.) of the printed torch tensor ...]
+#> 11.5055 -4.6496 11.1935 13.8994 19.3569 -7.2374 3.1911 -3.5341 +#> -14.6882 15.9491 12.0762 -4.8073 -7.7098 8.6749 13.5211 11.0884 +#> 15.1053 -6.8504 5.3475 1.4031 -5.4007 5.1388 -9.6504 -5.8465 +#> -6.6915 12.9552 -3.9263 3.6180 -0.2964 11.9318 -4.7163 2.9965 +#> 0.8383 0.7919 -0.3834 -0.2002 -0.3010 0.7126 4.5338 3.6352 +#> 2.5618 10.4896 -9.2390 -6.3396 10.5246 4.8557 -6.6497 5.2051 +#> -5.4984 0.9804 -3.3242 6.7121 7.0364 7.6747 -8.8115 -1.3950 +#> 15.2412 -5.7309 11.5133 7.5772 6.1621 1.3538 3.9427 13.9237 +#> -16.7468 3.0447 -10.9687 -7.2660 0.8922 -6.3033 -12.3895 -5.5437 +#> +#> Columns 25 to 32 -12.0697 -3.5384 3.1297 -4.6535 -5.2481 15.8287 5.4154 -0.3778 +#> -2.1089 9.5972 0.8430 -4.0199 -6.0881 2.9838 6.0103 4.5010 +#> -2.6543 4.3419 -0.0420 1.9371 -3.5836 0.8798 7.6040 15.6894 +#> 12.1703 0.8527 -16.7525 -14.1144 -2.8514 -1.8223 13.7236 -2.8442 +#> -2.6466 -4.5494 6.9735 14.2975 20.0328 1.3083 5.4931 -2.0729 +#> -8.6758 -8.7606 -3.5248 -6.7319 -23.9848 11.8449 2.1175 9.2487 +#> -9.3546 2.4001 11.0860 1.9609 4.1101 1.5524 -9.9594 -4.8627 +#> 13.1873 -2.1725 -13.8422 12.5630 12.0236 -5.8243 16.2306 -3.0538 +#> -4.1563 -21.6641 -3.8594 2.7444 2.9823 -0.3435 3.1660 -7.2944 +#> -12.8167 -0.6253 -13.2272 10.3824 3.1855 -15.4092 0.4140 4.1161 +#> 8.6379 -7.7792 -1.4905 -5.8643 -7.0079 11.6172 5.6935 8.8235 +#> 14.8754 5.8787 -5.5924 13.7661 -5.5452 -24.9628 5.7362 -10.2188 +#> 17.7307 -1.6486 -12.9553 5.3010 -0.9417 -3.4197 16.9908 -3.1881 +#> 7.1434 6.3656 -10.4272 -0.7966 -6.9522 1.6485 7.7171 -13.7625 +#> -7.7999 14.8574 4.1914 -2.3317 10.0932 7.8385 1.0473 7.9744 +#> 5.8419 -5.9148 1.6019 -17.8381 -2.7671 3.6393 -1.7876 -0.0644 +#> -7.3340 -5.3850 11.1706 -4.6075 -17.8987 -2.9685 -1.0708 -4.2589 +#> -0.5210 0.4319 1.5871 10.8863 -3.6306 1.6850 0.7711 -0.9064 +#> -2.9105 -1.4267 2.8435 -1.2618 3.9577 -4.1292 7.1147 -3.8027 +#> -4.2346 0.3152 -2.0866 -6.2390 5.7696 8.0283 -10.6504 12.0618 +#> 1.4405 10.5334 -20.7627 19.0846 3.8392 -21.0015 9.1211 
-6.8417 +#> -12.8519 -14.0754 9.5872 6.3090 -5.6876 5.9102 -9.6384 10.8206 +#> 1.0209 3.4411 -5.6529 8.2674 17.3244 5.4750 2.9686 -3.5951 +#> -14.2953 16.2404 -8.5252 1.0102 4.7611 -0.9951 -17.7755 -9.6286 +#> 12.8574 11.9751 3.7719 -13.3267 -1.1982 9.6215 5.9018 -0.5553 +#> -1.7271 20.9145 3.3571 9.7421 9.9440 -18.9676 -2.7275 8.6002 +#> -6.0209 12.4593 -10.9827 -10.5646 10.0390 -10.0646 -22.2542 17.5519 +#> -5.0658 4.7174 8.2967 -4.6677 15.4137 -6.6179 -5.3330 -11.1588 +#> 2.1353 -13.7841 12.9964 4.7600 -5.2383 6.7317 -15.1759 -7.0365 +#> 1.7096 11.8668 -0.9163 12.2730 1.5656 2.7579 18.2381 -12.5987 +#> 4.2124 0.8708 2.3994 -9.3146 -2.9083 -7.0406 0.4310 3.3430 +#> -8.3238 10.6261 -8.8687 5.3676 -11.4817 6.0798 6.5560 -3.2692 +#> -1.8217 6.9275 -9.0067 1.5917 4.7243 15.1044 19.6905 -14.1816 +#> +#> Columns 33 to 40 18.0137 0.1437 1.0359 8.1105 -4.9806 16.9608 1.7034 0.7581 +#> -4.1803 -0.5965 0.0856 2.3563 -10.2289 -3.9149 -9.4304 1.1640 +#> 10.3162 9.0257 0.7877 -10.3640 -9.9317 2.3473 -2.1994 -0.8416 +#> -6.4261 -0.0093 -10.1654 14.0540 -7.8450 6.5073 -2.0542 -8.0125 +#> 12.0965 10.8590 -7.7790 -12.6560 0.0736 4.8304 1.2262 7.6016 +#> -2.1623 0.5889 18.8545 -0.2921 -7.2104 -1.2669 7.4378 -7.8752 +#> 4.7243 -4.5159 11.9878 -18.0457 8.2826 13.9053 13.4480 4.5478 +#> -13.6581 2.4030 -0.3871 9.3807 3.9741 -5.1651 0.3349 5.1812 +#> 11.1939 -2.9157 -11.7361 -1.6043 -22.8290 3.8562 -3.6200 -1.6497 +#> -13.1913 0.1082 -5.5334 -8.3986 -6.5263 5.5662 -11.0957 7.5131 +#> 16.4945 -0.9001 -0.7629 2.2027 -5.6176 4.2583 19.4694 6.7701 +#> -9.8157 3.9220 -2.5901 3.8787 3.5791 -1.2144 -5.8429 5.8580 +#> -16.6308 5.3659 -9.2491 7.4077 -15.5380 -13.3558 -5.7481 5.4371 +#> 6.2558 -3.2801 -7.3593 4.4177 -0.7377 -5.6666 -1.8812 19.0885 +#> -5.3019 -14.4732 -2.1184 9.8816 3.1071 4.0681 -9.8392 1.2120 +#> -9.6794 3.6849 -1.7766 2.8213 -5.8930 -3.7362 -5.6313 2.3740 +#> 5.1394 3.5812 7.7915 -21.3930 -4.9200 -6.9383 8.8521 -1.8699 +#> 1.5971 -3.1726 -2.3342 -0.0188 5.8653 -5.2357 
-5.4085 11.3280 +#> 3.3370 3.1580 3.7043 7.4843 -5.3311 3.5652 7.1179 7.5347 +#> 4.0595 -6.2943 2.0057 -0.4389 9.6982 2.4842 -5.7146 -0.3769 +#> -6.7431 2.8492 -5.3043 -0.8100 -0.0629 -12.7998 -9.2995 9.5977 +#> 11.1320 -3.0477 -1.9959 -5.0161 7.0474 -3.0932 5.2943 -1.7621 +#> 4.1791 14.7743 8.0079 7.9273 -12.0689 -1.5847 9.2181 9.6022 +#> -1.2992 7.4819 -6.6553 -14.3183 -1.4715 3.8195 -13.1888 -2.3540 +#> -11.7587 -6.7343 8.6164 7.4476 -2.3062 -12.9534 -1.0827 -11.3341 +#> 6.0703 -2.0472 -8.3528 -2.0073 1.6976 -6.1322 -12.9710 -12.4216 +#> 11.1874 -8.0109 4.0227 10.4308 3.0215 4.3172 -17.6065 -5.2152 +#> 4.0862 0.7025 2.0104 -12.1847 -5.8455 -1.7024 7.5525 -3.8789 +#> -12.4878 0.4462 10.2940 -9.2408 3.7506 -8.0383 -7.6609 -13.3497 +#> 16.3533 3.2812 -9.3564 -2.9163 -4.2649 6.4444 -13.4228 -7.0914 +#> 4.9011 -1.0207 -0.0596 9.3285 -10.9899 -9.2491 -0.2204 0.8386 +#> 6.6736 0.9916 -1.4295 -8.5807 14.1271 -4.3453 -5.9411 -10.3486 +#> 0.0912 -14.7209 -3.3929 -3.3242 1.1695 -0.8085 13.9671 9.9344 +#> +#> Columns 41 to 48 -2.8734 -8.4924 -17.5193 10.2680 2.1600 2.0287 6.5707 -2.2567 +#> 5.2224 -13.3770 -2.5050 0.0954 0.4293 -13.4584 9.6534 12.0570 +#> -16.2988 2.3852 10.8625 0.2537 -10.8285 -2.8360 -2.1878 -15.1237 +#> -4.4878 7.1922 4.9887 1.6470 13.0142 4.5364 -7.0848 -1.1903 +#> 1.4093 10.3102 18.3857 2.2670 10.7662 -1.6782 -8.2206 1.4979 +#> 5.1478 -15.8922 -9.0215 1.8039 -6.8724 3.2032 -2.2787 4.8853 +#> 2.8464 19.2271 -9.3307 -4.0319 -1.8629 23.8544 9.6340 -12.7128 +#> 2.8826 -8.5982 -4.0705 -1.1010 9.4100 5.6208 -13.4208 -2.3558 +#> -15.4890 3.0345 18.6851 -0.2153 -6.2293 -2.9044 8.5082 -0.9174 +#> -3.8986 2.3894 2.2295 -18.0275 1.1315 7.5655 2.3939 -7.4495 +#> 9.3855 -12.9117 -7.1190 -0.2013 4.1409 1.2911 -14.5928 9.9418 +#> -3.2735 6.9314 -3.9730 -8.8119 6.7769 6.2036 -4.9626 -4.8748 +#> 8.6534 -8.0155 -9.7671 -5.3605 18.3674 1.1821 -7.3533 -3.2661 +#> -8.1285 -10.3746 -16.3919 0.2280 11.6997 -5.6910 -15.6176 -8.3501 +#> 10.5599 0.8822 6.8635 -10.8261 14.3865 
1.1993 0.5539 7.1540 +#> 12.9604 -3.8049 -15.1851 -4.8909 9.1719 8.8168 -3.5436 13.8130 +#> -5.4091 13.2977 1.2928 10.5356 -6.0310 -0.1603 2.7894 -6.7051 +#> 8.8039 17.0807 3.7103 -2.3510 4.5326 10.2308 0.0510 -8.5260 +#> -7.9950 -8.3507 -1.1604 12.6094 -5.5550 -11.0050 21.6340 13.5570 +#> 7.4298 -0.0157 -5.8632 0.2383 6.6793 0.3072 -5.5691 12.1531 +#> 1.8551 -2.8246 -5.6893 -1.9213 -2.7554 -3.0900 4.5301 -5.5418 +#> 3.9559 1.3747 4.6716 9.8295 13.7761 0.1630 -0.0806 9.6606 +#> -4.8118 -14.7208 0.1570 3.6495 4.6368 -15.8935 4.4471 -6.7278 +#> -8.2398 8.1630 16.1575 -1.0816 12.1064 -14.7024 -9.8358 -6.6119 +#> -1.1686 -1.8082 1.2105 -4.8058 -6.6663 -1.0301 12.7177 2.1250 +#> -12.0589 8.9554 11.2514 1.9952 -10.0227 4.4163 5.1555 -2.2884 +#> 2.8321 -14.5843 5.3931 6.7814 -1.2274 5.3709 7.6621 6.8095 +#> -7.6772 4.0721 3.4299 21.4104 -2.8104 1.2902 4.7023 -16.2162 +#> -1.0247 -5.2604 -3.9623 2.1954 16.4670 0.6985 -18.0196 5.2822 +#> -2.3482 2.1633 -10.7999 -6.9703 7.4139 3.7502 -2.8416 -26.7818 +#> -0.3373 2.4728 0.3189 2.5142 -15.0152 2.2724 1.5376 -10.4134 +#> -0.0889 5.8051 -13.2319 -9.9406 2.4245 12.6232 -8.6694 2.0255 +#> 10.1628 13.3009 5.6029 17.3776 -4.9576 12.5435 -3.0724 -13.9567 +#> +#> Columns 49 to 54 15.6849 -5.2490 9.2349 -1.5794 -2.9792 4.6405 +#> 6.9880 0.7062 -14.9495 0.4172 -3.1521 4.1999 +#> -3.7239 21.9186 -1.1966 -5.0637 -0.4306 -2.2426 +#> 4.2418 -2.8615 2.6557 2.5540 -8.3940 3.1412 +#> 3.4456 -4.1329 -12.0184 3.5324 4.0986 -1.4316 +#> 6.0576 9.7856 -11.7726 -9.1875 -1.7548 -4.7044 +#> -7.1676 -14.7830 16.3865 -4.1484 7.2425 -0.3355 +#> -14.5263 6.9334 -5.9519 -0.1055 2.2809 -2.0225 +#> -5.8340 -8.7762 1.7046 5.7863 -7.9306 2.9383 +#> 5.0428 -3.5437 -15.8749 -7.9344 -2.9875 -1.0581 +#> 13.7009 1.3866 -6.1902 -4.8131 -5.1588 5.3303 +#> 5.9989 2.7255 1.7380 0.7751 2.5019 0.3435 +#> 2.8318 2.5967 4.5077 14.8610 3.3433 2.8488 +#> 23.4690 -10.7648 16.1782 4.3626 -1.5103 4.0486 +#> -9.9930 15.5036 -14.7680 5.5554 11.9162 0.5810 +#> 5.2166 -8.9599 
-1.9074 3.7155 -8.1369 -4.3223 +#> 6.4710 2.6931 3.9900 -18.1333 8.4946 4.3847 +#> 0.7701 -3.1075 -7.9428 -3.0856 -6.7230 -2.4987 +#> -0.0960 -14.2224 10.6409 -5.5347 4.6661 7.2279 +#> 3.7741 -17.6026 3.7981 3.4200 -8.1906 2.2729 +#> 1.7229 -7.1844 7.6353 13.6129 6.1083 -2.6823 +#> -22.5709 -4.7772 -10.3082 10.4800 -1.8415 -3.4303 +#> 1.6789 1.2973 -5.5922 17.5860 -8.7833 4.6301 +#> 15.7483 -13.0379 -13.9133 -0.0562 -0.4216 3.0191 +#> -14.7516 -12.8231 -2.0598 4.1340 2.3340 4.7072 +#> 7.2764 3.7421 -3.6283 7.6889 -4.8453 -0.1343 +#> -7.9484 -0.0244 -16.3767 8.4560 4.7999 0.5011 +#> -4.0384 -1.9578 -6.8265 -7.5644 -0.4945 4.0614 +#> -7.6236 9.2067 9.0229 11.9141 -2.1053 1.3307 +#> 4.5215 -4.0815 0.0132 -1.5415 -0.0943 -4.1174 +#> 10.0483 9.1725 16.6767 1.9959 -0.3576 1.9711 +#> -1.3797 -5.5918 5.3150 1.8638 -0.1105 -1.3726 +#> -2.3779 2.5244 5.0120 -13.8557 -0.1838 -2.6333 +#> +#> (7,.,.) = +#> Columns 1 to 8 -0.6127 0.8602 4.0707 -17.2720 9.1873 10.7466 4.8171 15.5534 +#> -1.6149 2.8619 -3.2587 7.1475 5.7965 7.3948 -7.9632 1.1804 +#> 6.5168 6.6588 -4.2214 -4.8462 0.4125 2.0058 19.6637 2.1694 +#> 0.4760 -11.3199 3.9707 1.2572 -8.4510 3.7066 -8.8305 -13.8750 +#> 0.3363 1.7052 1.7680 5.5880 -1.5957 -5.1208 5.5055 -1.0847 +#> -4.0448 5.9806 7.0820 5.8893 -11.4771 -1.3977 -10.4268 -7.2437 +#> -1.6654 7.3950 12.6163 -3.9669 -3.9656 -6.1800 16.0702 -0.1868 +#> -0.0455 -4.7763 -7.0575 11.4144 6.8467 5.7396 2.8729 -6.6305 +#> 1.3588 -2.1503 9.3749 -10.7310 -4.1430 0.4640 17.0588 2.8468 +#> 1.7788 -8.8820 -10.5011 3.4414 2.7228 -11.7916 3.1457 3.6890 +#> -4.0497 2.2994 -7.7931 0.0908 2.9860 2.5071 -7.8457 -4.4900 +#> -2.0260 -4.7833 7.8291 7.1111 6.9172 -11.1081 -5.9024 -1.5690 +#> -0.8372 -8.0708 0.9861 10.7736 -2.1552 9.1345 -6.6132 -1.6625 +#> 1.9936 7.4995 4.1519 -0.0550 7.0429 -8.7814 12.7491 16.7252 +#> 4.3270 3.5657 -3.9918 4.0760 4.0217 6.6901 11.2858 11.1260 +#> -3.8955 -2.0160 1.4473 1.9261 12.7488 -12.7384 -14.8865 -7.3731 +#> -4.3515 6.3135 4.9883 -7.3654 5.7990 
-1.3878 7.5930 -12.0186 +#> -1.3299 -7.6369 2.5160 6.8659 -6.8668 -12.6240 -0.9304 -3.3401 +#> -5.9669 1.4915 3.7437 2.7153 -3.4913 -1.2524 -19.4998 -14.2078 +#> 1.8816 1.4189 -9.9366 -2.6553 -1.7600 -1.7476 -1.5857 3.8363 +#> 5.3438 0.0244 -1.8589 3.5840 -5.2578 2.8645 -2.3510 11.4651 +#> -5.0214 -2.8464 -0.3529 -5.0528 -23.8036 7.4167 10.9668 -16.8754 +#> -6.5692 -3.0138 7.3143 2.2489 -2.2314 24.1479 2.3553 -4.3058 +#> -0.2328 9.7560 2.0405 2.7370 -4.6171 -2.4686 15.4630 0.7934 +#> -0.7612 1.0114 3.1264 13.3547 9.4779 -5.0772 -16.1384 0.3394 +#> 8.0910 17.3947 10.9892 -0.2604 8.4808 1.5966 -9.6748 -0.7410 +#> -0.4330 -9.5881 -8.5733 0.4734 -1.7358 4.1283 -13.9772 -7.4636 +#> 0.4772 2.9322 5.6349 2.3088 -2.3083 -5.6474 -2.5709 -4.5265 +#> -2.6507 -1.6445 3.0145 -8.5407 -0.7977 0.1677 -6.6109 2.0266 +#> 3.4436 -5.2601 7.8358 2.6887 -0.9017 -2.7233 -9.6875 8.9759 +#> 4.0120 1.9026 4.4819 -6.5699 -0.4820 7.8510 15.4935 -0.3251 +#> -7.5299 -1.9291 2.5879 -0.5706 -1.2503 0.8443 -11.5242 9.2105 +#> 1.9418 -3.2870 -3.3924 -9.6777 1.4283 -2.1436 3.8129 5.1940 +#> +#> Columns 9 to 16 10.9571 -6.5737 -0.2077 12.5503 4.1865 -6.1791 -7.0763 0.1965 +#> -8.8695 10.7537 0.3532 20.5753 5.4241 -2.1564 6.3179 -8.9029 +#> -0.6702 16.9306 -7.0601 0.4260 3.8519 2.1401 0.1246 -5.6079 +#> 8.3267 -1.0636 9.2678 1.8702 -0.2401 5.5926 13.1306 4.9335 +#> -6.5986 3.7130 -7.2328 -6.0084 -4.6307 -18.2156 -3.4626 0.3911 +#> -2.3652 1.7909 -7.1237 5.5230 -10.2107 3.4266 1.4408 -8.3406 +#> 1.3943 -11.3927 -7.6842 -9.6408 -1.2621 1.1670 -5.3837 -17.8851 +#> 16.2027 -5.7363 -0.7484 2.8857 1.1050 -4.5606 -5.7095 3.3818 +#> 16.2908 -6.0660 5.7284 4.4618 2.3688 4.1868 -5.4237 1.1845 +#> -8.7040 8.2184 11.7077 5.5538 0.3863 10.2449 2.7922 3.8356 +#> 2.8524 0.4918 -8.0683 8.1671 2.6062 7.1684 -0.7367 2.2054 +#> -0.8745 5.4079 5.7910 -2.2340 -0.7039 0.7330 0.0945 2.3333 +#> 7.5403 11.2966 -4.5131 7.8712 -5.7957 -9.7941 -5.2514 -13.1442 +#> 12.2324 13.0981 15.7952 1.3508 -8.5745 -14.5636 -18.6429 4.7431 
+#> -21.2673 27.8406 -4.0932 5.3727 5.8858 -1.2683 3.9223 -1.1203 +#> 15.6746 -7.5262 8.9094 9.7716 12.2952 6.5154 11.0744 -9.2273 +#> 0.6147 -5.4507 14.8522 -5.6633 -4.0854 -2.5155 -2.8841 9.4550 +#> -6.7478 -4.5933 -4.9118 -1.4062 10.6929 3.7418 -3.6940 -4.5576 +#> -4.7530 2.7798 1.5448 -15.6911 4.4059 -4.8929 -1.6179 6.0765 +#> -1.1021 -9.2705 -1.2855 13.9661 -4.3380 -3.6592 4.4865 -19.0169 +#> 3.8989 15.4263 -1.7052 7.0711 -3.0013 -1.4601 16.3679 1.8411 +#> -1.4202 9.8199 -18.3763 -4.5312 0.1934 3.7991 13.9990 0.1671 +#> 4.7478 -1.3143 -16.8650 7.8815 -7.3830 6.6322 -4.8448 3.0777 +#> -8.3913 9.6030 7.5135 12.3141 -2.6286 0.3046 -13.1053 12.2796 +#> 11.2419 12.2097 16.2024 2.6510 3.9965 -1.1477 -12.0345 -13.1411 +#> 7.9201 5.8420 -2.7981 -9.0971 6.4303 -0.2831 -0.3092 -5.2020 +#> -10.4398 -1.1489 4.0544 2.0237 -0.5546 4.2036 -12.2154 2.8440 +#> 4.2917 -13.6569 3.7762 -8.5169 0.1295 2.3418 -20.9835 -1.0567 +#> -3.6594 -13.4791 14.9113 1.0772 -1.4652 -0.1085 15.8382 1.3571 +#> -5.5677 -6.4323 10.3060 2.9423 0.3724 9.3563 -3.3977 5.6300 +#> 1.9829 -4.8128 -1.3179 5.2454 11.3372 11.9656 -7.9866 1.6916 +#> -24.7450 -5.2551 3.8997 -6.1162 5.0284 -8.5934 -0.9088 2.3862 +#> -0.4460 -14.0491 -0.3221 -1.3527 -0.0903 -7.3134 -5.4660 -12.6041 +#> +#> Columns 17 to 24 -6.3862 0.6770 -2.8913 2.6446 -2.0113 2.0948 -3.9316 -10.3205 +#> -3.3042 2.7250 3.8922 12.5669 -8.8322 6.0081 -1.6474 -1.3496 +#> 8.0881 -1.4549 3.3471 4.8254 -12.2232 4.5284 -10.5167 -7.2270 +#> 0.8639 16.4537 6.0937 -8.1169 -13.8927 -14.5495 -6.0751 0.8588 +#> -9.5304 -5.5361 -5.8441 10.9372 -3.3628 3.0612 -0.8589 0.9682 +#> 1.9300 6.7491 5.6737 -5.7373 0.1888 -2.2724 -0.9412 7.9022 +#> 1.3358 12.5313 -13.9733 0.7424 -5.9798 -4.1006 16.4825 -4.5572 +#> -6.0400 3.8897 -5.4041 -4.4170 0.2472 -4.0526 -2.3943 13.3473 +#> -0.9131 -3.8272 -10.3153 9.9783 2.1318 12.0297 -3.2124 0.3001 +#> 1.6257 -6.6324 14.7083 8.7235 13.7729 -7.8260 -14.8461 12.1033 +#> 4.7265 -9.3282 -8.3582 -1.6361 4.0490 -7.2239 6.4747 -5.1505 
+#> 18.6856 1.8156 -3.3326 -2.6225 -6.8087 6.1158 4.0315 7.7480 +#> 16.2540 -6.3447 -0.7650 -5.4297 -7.5572 6.6932 3.5214 -11.0778 +#> -1.4396 2.9347 -14.9760 5.6475 -0.2878 3.0462 -3.3582 -1.0588 +#> 6.9936 5.8066 11.7568 -2.2827 -10.0926 7.5195 -8.8348 22.8042 +#> 8.9934 0.0923 6.6329 10.4184 3.2270 -1.7795 11.4010 -7.4289 +#> 12.6000 -1.9638 4.8100 -6.8164 -0.0098 1.4891 3.9932 -10.4896 +#> 15.8137 0.5698 0.8974 4.4534 -8.9241 0.7093 -3.4582 -3.6150 +#> 2.7426 -3.3546 -8.9759 -0.8950 3.8870 0.7215 4.1513 -12.5556 +#> -13.8795 7.3449 1.0272 12.8563 -9.1313 -11.8193 -5.6979 1.2040 +#> -4.0437 -5.4842 -6.8933 -4.2900 -4.3514 3.2097 2.6541 0.8546 +#> -3.6259 -7.0295 6.5473 -3.4971 4.0426 10.0794 -0.2139 1.7212 +#> -1.5114 4.2256 -9.2188 2.0336 -5.7193 8.5824 3.7002 6.1402 +#> 4.7120 3.7413 7.9710 2.3441 2.4947 19.4809 -6.4370 12.9674 +#> -7.1538 6.5032 1.1843 -11.0555 4.7123 -6.6621 12.5625 1.8256 +#> -17.3850 -5.5194 -9.6443 6.7363 7.7781 11.0695 7.0998 8.3377 +#> -1.3867 11.3660 -4.2410 -3.2328 13.8050 -9.5354 -4.0114 14.7591 +#> -3.6691 1.7124 -3.8861 -0.6895 -1.2039 -1.8457 13.1434 -8.7010 +#> -1.3713 -1.4235 1.8947 2.0566 2.3588 0.9970 12.8145 17.3779 +#> -6.1097 6.0223 -11.1578 -0.4343 -8.3625 5.8353 8.0600 9.2822 +#> -0.2852 14.5481 -8.3150 11.4958 -0.0643 0.7529 6.3325 -15.5322 +#> 0.0462 0.7017 2.5437 -5.4227 -6.8571 11.7224 -3.2721 -16.5096 +#> -16.1163 12.1808 1.3101 9.5219 1.4452 -7.4029 -15.1918 -6.0166 +#> +#> Columns 25 to 32 16.7771 2.5286 6.9042 6.2303 -15.9087 -9.4931 14.2393 -9.3278 +#> 10.2568 6.5342 -17.2927 -4.2694 13.6054 -5.7999 4.6742 -1.9928 +#> 0.9143 12.3806 -0.1864 -0.7706 -2.3867 -3.2355 -5.7401 16.6064 +#> 8.9669 -7.7847 16.1993 4.0804 9.3619 -1.9008 9.2652 -1.0932 +#> 2.3558 0.7323 -8.9165 -11.6120 -5.3710 -12.8421 -3.8472 15.6658 +#> -2.7332 -1.9911 4.0549 6.7536 4.8172 9.6795 -20.7399 -6.8654 +#> 6.6537 4.6724 8.3192 2.9000 -11.0492 3.9040 -9.4077 15.2481 +#> -12.1571 -3.8718 13.3725 -11.5947 0.8376 7.8997 3.6024 2.2890 +#> -9.2799 
-2.2657 12.7767 -3.7067 7.7755 -5.1627 19.9460 3.5988 +#> -9.9184 -7.2907 2.9753 -11.7184 2.7403 14.5138 -9.1724 -3.7580 +#> 13.8878 -1.7623 -2.0449 2.2927 9.5829 -14.8202 -1.0320 7.1218 +#> -3.7194 -10.7002 -4.5157 1.2188 -1.3918 2.0307 -4.8687 5.4336 +#> 0.7384 -15.3376 -12.5756 -1.6164 11.3294 -6.6587 4.7188 -1.1753 +#> 0.6555 2.9704 3.3755 6.6369 3.8169 5.7535 -10.3770 -2.6155 +#> 16.2458 15.0967 7.0097 9.3637 16.9596 3.9738 -2.3135 11.5663 +#> 7.1835 0.2678 -7.9150 1.8311 24.1035 -8.0588 -1.2880 4.6781 +#> 3.9928 2.4614 -14.0668 -3.9464 1.3520 6.5379 -17.3260 2.2953 +#> -9.4879 -6.3483 -6.9556 5.9862 -3.4417 4.5690 8.5713 10.0653 +#> -7.9826 -0.2856 -7.7034 1.6021 -5.2799 -7.1478 -4.6322 -11.1482 +#> 15.5197 -4.0462 1.5270 -1.5469 -8.3823 -10.8340 6.9581 2.7991 +#> 2.5861 -5.2750 -12.3159 12.5381 2.4441 19.5723 5.2797 4.2267 +#> 0.4503 5.0235 -2.7517 5.1426 -1.4245 -18.3980 3.7672 8.4228 +#> -4.7789 10.2163 -11.9028 12.3721 5.4755 -17.3363 10.2470 4.8958 +#> 6.0804 12.9226 7.1657 8.4844 6.1737 -5.6766 8.2505 9.2802 +#> 2.6966 -8.2131 -1.6640 -6.8382 3.1647 -6.9614 -4.6510 -10.8113 +#> 3.2803 0.5916 -4.3080 -3.4198 -1.7825 -1.4891 10.8797 -4.6491 +#> -3.5887 -6.4568 6.7569 -21.2021 -6.3740 -3.8290 -2.0275 -3.1784 +#> -9.4296 1.6017 -16.2227 -0.2107 -12.9060 1.9015 -5.7891 -2.1861 +#> 15.2954 16.3788 7.4437 -9.8675 7.5977 -3.1748 16.1293 6.4717 +#> 2.9967 4.0513 -4.7117 7.0075 -4.5454 -0.2602 4.6099 -0.3121 +#> 9.2938 4.1807 11.1087 -2.3951 2.9823 19.5268 3.2484 8.7916 +#> -2.8504 -6.3786 -15.5947 12.2028 -11.1596 6.8691 -3.8037 -5.6914 +#> 9.5168 -7.6326 10.2155 -3.2639 -18.6851 16.3554 2.1796 0.7864 +#> +#> Columns 33 to 40 -1.8424 13.4965 5.9359 -3.1670 -3.1803 -3.7731 -7.7101 -3.3640 +#> -3.0546 1.1266 4.9155 -9.1120 4.3837 1.0432 0.2036 3.4830 +#> 2.7782 5.8812 11.9892 -2.7495 7.1515 9.4099 4.2019 -3.2517 +#> 7.1420 6.0410 -1.8747 8.6460 -2.9767 10.0526 -11.9293 4.5738 +#> -4.2437 6.8362 -1.5379 -5.3885 -6.5491 -2.5902 -1.4827 -3.2206 +#> -6.4982 9.8482 
16.1417 -2.2945 -3.6906 -18.0744 9.1594 -2.6025 +#> -16.6491 -10.8893 -2.6142 2.4836 2.5255 -8.5350 -7.0168 -8.9131 +#> 1.7256 -2.5309 10.2332 3.2002 -5.4432 -16.5663 -2.3239 -8.0805 +#> 1.0808 6.8372 10.3936 16.6316 2.6614 5.4583 -6.1073 11.2989 +#> -7.6508 10.4915 -2.4019 -8.2223 9.1519 -9.5589 15.4694 -0.7167 +#> 2.1963 4.1809 10.1309 18.6083 -6.5981 10.4373 -5.6071 7.2492 +#> -2.0518 -14.4212 -1.8670 -14.9595 -3.0426 -6.4525 -3.3262 -2.4451 +#> 4.3743 6.7467 6.6219 10.7107 -5.0255 -7.3092 -18.7796 7.1447 +#> 18.3567 15.6001 6.0591 -5.3036 -2.1265 5.3208 -12.8160 17.3172 +#> -3.1986 -2.0748 8.1779 -2.4382 -8.0289 -3.9662 -6.9810 -14.4200 +#> 11.4986 -8.6713 -1.0075 8.1491 -16.4517 3.4094 -16.0637 0.4579 +#> 1.7948 -7.4603 -5.5256 8.1267 4.2255 -5.1880 6.3938 -6.9577 +#> -7.8557 7.8170 -7.7228 2.2092 20.7377 -13.5920 3.6758 -18.9333 +#> 6.2087 1.6671 1.6227 -16.8905 -14.6488 -2.7445 -0.7253 11.5320 +#> -17.2548 14.1190 8.0960 2.3876 1.5339 2.4185 -10.6004 4.8182 +#> 7.0715 -1.5290 -3.4204 -10.6872 12.8909 -1.1878 1.6240 5.4122 +#> -2.2597 9.0978 -3.9608 -5.0600 8.3489 -0.2848 10.0797 -10.7936 +#> 17.7356 -1.1240 1.2717 -8.4553 -0.7845 6.9360 0.1280 0.4396 +#> -0.4524 23.6251 2.3271 -12.1734 -10.2318 -5.3682 11.4550 0.2321 +#> 4.8857 -1.9313 10.8503 13.9392 -3.1913 -3.9294 8.0877 -6.1294 +#> 6.5706 -16.9693 -1.9114 7.4077 10.2159 9.7724 -6.6862 20.9263 +#> -8.2047 7.2422 -1.7496 -6.9020 6.0776 -0.2092 13.3831 -4.0678 +#> 6.0206 -22.3505 5.8035 3.2204 15.3749 10.4475 9.3134 -0.1864 +#> 12.6119 6.7182 -16.6868 -7.9640 -5.2994 -6.0246 7.3100 0.1317 +#> 4.6046 -3.8573 21.3451 -5.5336 16.4337 -4.1470 20.8281 -1.5446 +#> -2.4232 -8.9493 -9.6627 6.3923 1.3328 7.5283 -11.4277 -8.5131 +#> -5.0047 -3.0523 9.3086 -6.4050 18.0022 -12.5359 12.8243 5.6132 +#> 6.9146 5.2188 7.4482 11.7864 10.6915 -15.4290 -10.6112 -4.0470 +#> +#> Columns 41 to 48 -5.5767 -3.8123 -8.3427 -5.2574 5.0344 0.8432 13.6247 16.0588 +#> 11.9970 2.5541 8.5713 7.8814 5.7165 -7.7433 -3.3369 -1.9118 +#> 
-2.0087 1.8799 -5.0319 -9.0380 -5.0690 -8.6191 -2.0421 -7.6519 +#> -11.4042 -13.2615 -7.3124 1.6295 7.9630 0.0894 -2.9275 7.8191 +#> 10.5512 -2.0984 -3.4992 -14.1445 -0.7259 -1.2510 -12.3144 0.3860 +#> 17.1771 8.8067 14.6089 10.8677 -8.2794 -6.8339 0.8230 -7.2536 +#> 7.1541 6.7248 -5.9598 -7.5477 5.3594 2.2800 -4.8214 0.0527 +#> 7.2494 -2.9123 -0.8093 -11.4293 -9.4994 -5.3124 14.6816 -12.0252 +#> -4.0650 -3.5543 -4.3100 -15.2891 -9.2865 -5.5602 5.1895 -5.0372 +#> 13.4037 10.5660 -0.0899 6.1516 9.5618 11.7577 -16.0881 -2.2386 +#> -1.2400 -0.7587 -7.2687 -2.7092 6.6366 0.8120 1.8416 10.2779 +#> -0.8043 -4.0606 2.9531 6.1974 5.7183 8.4783 2.1110 -3.3513 +#> -3.1867 -14.9174 -9.7651 -2.5045 -0.9003 2.0940 2.3022 15.0925 +#> -7.6922 21.5073 13.1315 1.7297 6.5255 -1.2053 -12.3053 -4.2204 +#> 2.9455 7.5772 -3.4221 1.2700 0.1425 -2.5533 -20.6513 5.0561 +#> -2.5448 -4.2967 0.6008 -1.1660 7.9227 8.2935 5.2803 3.9738 +#> -2.4051 5.4875 11.9692 -0.4424 -0.4812 -0.7993 -12.5396 -14.4680 +#> 4.4766 -1.7018 0.2177 -0.6928 -3.7054 11.1766 -0.9645 -3.0890 +#> -6.6712 4.6193 -8.4913 10.4303 -5.2905 12.2603 10.5912 11.5043 +#> 1.1754 5.0128 -6.6766 3.6303 10.5086 -14.8414 -3.1029 9.7789 +#> 7.3242 -15.6055 -2.2896 6.5068 0.9513 -6.9077 -7.8493 22.6716 +#> -3.7519 -14.4534 8.0234 9.4587 -7.6534 -16.7834 -2.9745 7.0931 +#> 7.3508 -16.2754 -0.2349 -5.1106 -5.1751 -5.9914 17.0252 10.8574 +#> -0.5602 15.3284 -5.8678 -11.0183 12.5782 -7.7635 -13.9929 -6.6450 +#> -2.5427 -1.9955 13.5255 -9.9100 3.9117 -5.7953 4.7139 2.5794 +#> 0.6945 12.8239 12.1433 6.4143 11.0303 -12.6060 -8.4101 -1.9576 +#> 7.0843 -3.7882 7.6630 -10.5038 -15.1195 1.5714 1.0156 -3.3395 +#> 4.4371 -1.8333 4.4307 5.6060 -1.4728 -9.9524 -7.3490 -8.8954 +#> 21.9396 -0.1028 11.4260 5.6503 18.2624 4.1884 -1.6542 -13.5589 +#> 16.3157 -5.6074 -0.5211 -4.8651 3.1517 1.1494 0.6412 -4.8549 +#> -7.7746 6.8673 -0.1777 3.0733 5.7553 2.9049 2.6433 -7.2542 +#> 3.5829 -1.1483 2.2033 3.3943 3.8698 15.5526 -9.6710 10.7033 +#> -6.7823 -3.8312 
-8.0106 -10.4973 -0.2336 9.3450 5.3332 -10.5924 +#> +#> Columns 49 to 54 -1.8343 17.7541 6.7127 -3.5064 2.3426 7.0252 +#> -3.5401 -4.8931 -10.3428 -2.6911 0.5330 0.8503 +#> 15.2936 -9.6012 2.2967 1.3161 -1.4428 7.3306 +#> -21.8066 -3.0973 -1.0753 -6.1871 5.4726 1.5535 +#> 17.2229 -8.8055 1.1184 -6.3699 -5.1258 0.7082 +#> -1.6950 20.6292 -4.3222 4.9447 3.9206 0.8077 +#> 6.2088 8.8589 8.2650 1.5856 7.3887 -7.0106 +#> -2.7026 -5.6998 0.3055 0.6344 -12.7283 3.0237 +#> 13.8986 -2.2224 1.9932 0.6777 -0.0134 -4.8253 +#> -11.7346 -3.0460 -12.0943 2.0108 -3.2044 -3.4012 +#> 13.3435 7.0371 -1.1698 -0.6680 10.0373 -3.2892 +#> -8.3936 -8.6182 -0.0468 1.8136 -6.7346 1.1122 +#> -4.9654 -0.2158 -1.8528 4.9590 -1.4644 -3.4088 +#> -8.9411 8.1443 2.5521 -3.6202 -0.3143 4.8254 +#> -7.1834 -14.4674 -2.6967 8.8684 -4.1868 -1.7125 +#> 1.3816 -11.9041 -8.1992 -2.8824 1.1431 -6.2098 +#> 6.9124 9.4540 -11.5077 5.2963 -4.4932 -2.9735 +#> 0.0991 -5.4287 -2.4734 -3.8005 1.9282 -2.7588 +#> -1.5505 11.6649 0.1514 -6.5469 3.0994 2.7394 +#> 0.0905 -2.3823 8.1655 -3.4520 4.8792 3.2606 +#> -2.1946 1.3596 -0.3702 1.6147 1.9985 1.6740 +#> 1.7902 -7.2148 12.3227 -1.3156 -6.2056 -5.1462 +#> 12.8373 0.2384 4.3437 -7.0630 3.6899 4.9104 +#> -2.4433 -12.2084 -8.1028 -1.1702 -16.2865 5.9017 +#> -8.2896 -1.5226 -3.3016 -1.4798 1.1837 0.9620 +#> -0.2347 -23.1194 4.4594 -5.4188 2.2736 -7.9268 +#> 14.0461 -2.9727 -4.4063 0.3881 1.1977 -1.7749 +#> 0.9196 -1.8711 -0.8332 -5.8261 9.0320 -7.7057 +#> 8.3060 5.3831 -4.5026 -6.6984 -10.2018 1.0121 +#> 2.3768 3.4562 -1.0686 4.1457 -4.1291 1.1535 +#> 4.4473 2.4399 0.3042 -7.1260 10.3929 0.5017 +#> -6.7038 -2.0523 9.1131 -0.5452 1.7847 -4.1554 +#> -2.2229 12.9701 -5.2224 -16.5133 9.5451 -5.2632 +#> +#> (8,.,.) 
= +#> Columns 1 to 8 -1.2386 -6.6557 -5.7359 8.7007 -1.4756 -22.2371 -9.8361 0.2464 +#> -0.2773 4.0665 8.3574 6.4503 11.8810 -6.1403 -2.8884 13.2044 +#> -0.6591 2.2262 -0.0887 15.2786 14.7186 11.5726 6.0648 -6.0864 +#> 6.1982 2.9446 12.6473 -12.9558 2.9889 5.5596 -2.0329 -1.9073 +#> -4.8988 0.0376 -2.5519 11.5645 -10.0178 14.2919 12.6015 3.1491 +#> -2.3991 4.7100 -4.8162 -4.8229 7.4487 4.5662 -3.1959 4.7877 +#> 0.7086 -8.6799 -9.7750 -7.1017 5.6339 -3.0132 1.6500 1.5601 +#> -5.0250 4.1333 7.9076 -1.2778 -6.0876 -1.1585 0.6839 5.3396 +#> -2.2762 4.9162 2.9937 6.2812 -6.1639 -7.0113 7.5574 -7.0249 +#> 6.1409 2.9378 8.8781 -3.3653 13.8300 -3.6334 9.2394 -2.6541 +#> 7.3229 0.6074 5.5275 3.7079 10.0168 -1.9350 -8.6505 3.0828 +#> 2.5946 7.8824 -1.6077 1.3345 -7.2813 0.2058 6.0130 -2.6293 +#> 3.1692 4.5650 10.0818 -5.4949 9.0942 -13.9313 0.3521 1.2611 +#> 4.0248 -10.5574 10.4274 -10.8648 -14.1220 -14.5702 1.2047 -11.5819 +#> -4.4736 1.8697 -0.6305 11.1447 -5.4738 6.5114 -2.9410 -3.5729 +#> 6.8356 10.4735 -4.9814 10.4058 7.2451 5.3009 2.6563 3.6116 +#> 3.5384 1.9147 -3.3813 3.9148 4.9032 0.2468 8.6457 -2.7713 +#> 4.1245 6.9455 -8.9467 7.2937 11.2478 -6.0324 7.5209 2.6760 +#> 8.0597 1.4393 -4.9737 -6.3850 -4.3215 -13.1023 9.0532 0.6358 +#> -7.2548 -5.3086 8.0534 1.4170 1.6036 -11.1632 -2.8319 -4.0257 +#> -4.1577 -1.9521 1.0066 -14.9572 0.5408 2.0758 1.0115 -3.3268 +#> -5.4344 -0.0741 -7.8543 7.2897 10.7040 10.4217 -5.9500 -0.4322 +#> 1.1035 -0.0304 3.1303 6.2802 -0.1111 7.2508 0.8758 13.8683 +#> 6.7262 1.3074 7.9453 10.6012 0.5703 -13.6811 0.3449 1.0457 +#> 3.2287 6.3162 8.3626 4.4921 1.0661 -6.6015 2.3204 7.0992 +#> -2.5365 -7.2800 -1.5505 -1.2421 0.7988 27.5398 -8.2030 6.6027 +#> -17.1901 -1.4530 0.8999 0.8717 -15.8282 -9.4943 -1.3518 21.1274 +#> 6.6538 -6.2006 2.2470 2.2221 1.6375 12.5634 -13.6554 1.3106 +#> -9.9256 2.3837 -12.9811 3.4591 -5.6458 6.0142 4.4481 -3.8653 +#> 1.2003 6.0700 8.0300 5.6897 10.6546 8.2393 7.9712 3.3288 +#> 0.4206 -7.7476 2.0656 3.9185 13.3095 
0.8433 3.4833 3.5886 +#> 1.0991 -2.1457 -5.2532 0.1567 5.3771 10.0281 6.8851 23.7975 +#> 0.4489 -3.3574 13.1555 4.9923 -2.8310 3.7748 -9.6098 4.9965 +#> +#> Columns 9 to 16 6.7832 -13.7821 4.4662 1.0761 3.8056 11.4756 10.8140 4.1145 +#> 0.2604 5.7517 -10.9681 -4.4028 6.7957 -3.9583 14.8481 0.7897 +#> -1.1360 3.4995 -3.6651 -8.1714 -6.5169 -11.3286 -0.8255 12.1935 +#> 7.9599 -2.6114 11.3199 -7.4855 -4.3361 -4.5887 -10.4610 -11.8132 +#> 0.3610 2.7278 3.2638 -11.1727 -10.9999 0.5326 3.8333 -6.8610 +#> -9.0624 10.5482 -0.0374 1.7146 7.7158 -12.9160 -5.5823 -4.2765 +#> -5.8257 -20.9969 -5.1232 -9.3122 5.3073 6.8794 2.7560 0.7989 +#> -14.3975 14.6538 -8.6528 -2.1857 -1.2501 9.6379 -0.7634 0.5584 +#> 0.3095 2.2708 -0.9672 20.4971 5.6079 14.8120 -1.6854 -8.5086 +#> -5.3860 0.3313 9.9838 12.0481 -3.9272 -3.0648 -13.0644 -6.4053 +#> 2.2785 6.3809 -9.3108 2.6389 -14.2079 -8.5372 5.4338 -3.5765 +#> 3.2861 -3.4737 -5.4502 4.8764 -4.8236 14.5205 13.1233 -2.7353 +#> 2.3916 -5.9340 -6.6059 0.5346 7.0030 6.8850 9.6263 -9.5210 +#> -7.8399 -5.2241 12.6349 5.2605 -2.4723 9.4437 2.7421 -16.5902 +#> 3.9001 -0.1205 18.7478 -0.5916 -10.3781 -4.7193 3.8084 10.6507 +#> 13.2789 -2.6716 -8.8196 10.9410 -0.6431 6.3290 11.9756 2.2585 +#> 3.6011 -4.4671 8.7177 17.1741 4.6441 -4.6869 3.0142 -6.9013 +#> 0.8539 2.7775 -2.7376 3.6020 4.7656 -2.1004 -5.5815 -5.4277 +#> 4.9741 -7.7645 0.3187 3.8744 -4.4129 0.7936 4.2426 1.3779 +#> -4.8911 0.5781 -0.4731 -12.0659 -7.7598 -1.6553 11.0999 -0.4283 +#> -3.3174 -2.4584 -2.6069 -3.0589 7.4794 10.2512 5.5712 -6.3497 +#> -16.9110 -0.9935 -3.6336 4.1780 8.7888 3.9658 -9.9355 -1.5366 +#> 0.3330 9.3097 -19.7546 -12.5324 -13.5795 -3.2058 4.9282 -1.4063 +#> -6.1823 -8.8298 14.8820 10.5941 -6.1332 -10.5407 3.7271 1.2386 +#> 7.5949 -10.4864 -13.9984 -10.9754 -6.1487 0.2152 3.8827 11.2900 +#> -2.1246 -8.4616 -14.0754 -3.6663 7.9090 9.4109 -2.5075 10.8019 +#> -7.6572 6.1259 -3.9265 -8.3555 -1.2643 9.8219 4.2961 4.2423 +#> 10.1329 -11.0471 -14.4955 6.5691 10.6779 7.3595 
#> ... [output truncated: remaining rows and column blocks of the printed tensor omitted] ...
3.3833e+00 -5.7464e+00 +#> 6.3637e+00 -1.4091e+01 -6.6739e+00 3.4672e+00 2.8298e+00 4.3459e+00 +#> -1.4757e+01 6.2524e+00 5.7120e+00 -8.2872e-01 2.9873e+00 -1.1124e+00 +#> -4.5514e+00 -4.1434e+00 -6.8436e+00 -1.9198e+00 1.0507e+01 6.1106e+00 +#> 4.9061e+00 -8.2704e+00 -1.0759e+01 -1.0625e+01 1.3115e+01 -1.7348e+01 +#> 1.9217e+01 -1.1588e+01 6.3912e-01 -1.8243e+01 1.2293e+01 -9.5986e+00 +#> -8.5925e-01 -3.4281e+00 -1.5110e+01 -5.4068e+00 8.1443e+00 -1.1779e+01 +#> 1.1450e+01 -5.6268e+00 1.5817e+01 1.0199e+01 1.4240e+01 -9.1857e+00 +#> 1.2182e+00 -2.1214e+01 1.1216e+00 4.1931e-01 -7.8672e-01 9.4782e-01 +#> 5.4024e+00 -3.0095e+00 1.8014e+00 1.5928e+00 9.5287e+00 -2.3651e+00 +#> -6.2293e+00 1.4186e+01 1.1173e+01 -5.3153e-01 5.2723e+00 2.2811e+00 +#> -6.3319e+00 -5.1972e+00 2.6973e+01 -1.4934e+01 -5.2616e+00 -1.1713e+01 +#> 2.1004e+01 1.9954e+00 -9.3554e+00 -8.2138e-01 -4.1082e+00 3.9037e-01 +#> 8.3272e+00 3.9975e+00 4.2137e+00 -7.9366e+00 1.6947e-02 -1.6094e+01 +#> 1.1774e+01 9.9972e-01 5.9695e+00 -4.2308e+00 1.3514e+00 8.0235e-02 +#> +#> Columns 43 to 48 9.8296e-01 -1.1794e+01 4.6783e+00 -1.1284e+01 -6.5485e-02 2.4002e+00 +#> 9.3494e+00 2.6465e-01 1.0985e+01 1.1484e+01 5.3779e+00 -6.5260e+00 +#> 1.3920e+01 -5.9769e+00 8.4406e+00 -6.1062e+00 -2.4462e+00 -1.0576e+00 +#> 8.5015e+00 -3.4099e+00 1.3045e+01 -2.4871e+00 -1.1143e+01 1.8200e+00 +#> -5.6810e+00 -4.6638e+00 -1.6928e+00 7.9357e+00 -9.2787e+00 1.2768e+01 +#> -2.0173e+01 1.2707e+00 1.7612e+01 -7.0284e-01 -7.2805e+00 -1.8927e+01 +#> 1.3834e+01 -1.2518e+01 -1.1550e+01 -1.5144e+01 -2.0212e+01 -4.5403e-01 +#> -8.4138e+00 -9.1491e+00 -1.5514e+00 6.5946e+00 -5.7358e+00 -2.8624e+01 +#> -5.4824e+00 -1.2562e+01 1.8653e+00 9.9120e-03 -6.2146e+00 -2.3214e+01 +#> 6.3677e+00 -2.5788e+00 8.7075e+00 1.0912e+01 4.8855e+00 -1.0308e+01 +#> -9.0831e+00 -2.0813e+01 -7.1917e+00 2.9565e+00 7.4505e+00 -4.5852e-01 +#> 1.0393e+01 1.5779e+00 1.2805e+01 8.2504e+00 -1.0377e+01 -1.2014e+00 +#> 1.6074e+00 -1.5799e+01 1.8749e+01 -2.1596e+00 
-1.0772e+00 -8.7998e+00 +#> 1.8281e+01 1.2651e+01 1.1182e+00 -1.0956e+01 -6.5179e+00 3.6558e+00 +#> -1.5627e+01 7.2080e+00 -1.0683e+01 1.7225e-01 8.0603e+00 2.5588e+00 +#> 2.6387e+00 1.7141e+01 -3.0660e-01 1.2546e+01 -1.0772e+01 -2.4877e+00 +#> 5.5422e+00 -7.0834e+00 8.2290e+00 5.2221e+00 1.4808e+01 -6.4113e+00 +#> 1.8658e+01 4.4167e+00 -7.8011e+00 -3.7508e-01 2.1720e+00 2.3224e+00 +#> 1.1844e+01 5.7979e+00 5.4032e+00 9.5333e+00 -1.9390e+01 -1.8636e+00 +#> 1.1507e+01 -6.2315e+00 6.5892e+00 -8.4342e-01 -9.3099e+00 2.1889e+00 +#> 8.2866e+00 -5.2548e+00 -7.6570e+00 2.7553e+00 -2.9758e+00 -3.7415e+00 +#> -1.2918e+01 -3.3581e+00 -1.6786e+01 -3.7029e+00 6.6405e+00 -1.8627e+01 +#> 1.2714e-01 -5.9943e+00 -1.0005e+01 1.0083e+00 -1.5790e+01 -8.2264e+00 +#> 1.2429e+01 -5.3445e+00 1.0344e+01 -2.0266e+01 1.7524e+00 6.4923e+00 +#> 8.3809e+00 1.0603e+01 2.7851e+01 9.0080e+00 -2.4086e+00 -2.2249e+01 +#> 1.9947e+01 9.2557e+00 1.0491e+01 1.5285e+01 4.0656e-02 -2.8611e+00 +#> -7.9679e+00 6.0484e+00 -1.7679e+01 2.1741e+01 2.0318e+01 -2.8798e+00 +#> 1.7513e+01 4.3717e-01 -3.5861e+00 9.6215e-01 3.4398e+00 8.3210e+00 +#> -1.3057e+01 -6.2074e+00 3.6011e+00 3.1037e+00 1.8853e+01 -5.6660e+00 +#> -1.4325e+00 -1.2691e+01 1.4120e+01 -1.5470e+01 -1.8924e+00 1.4935e+01 +#> 1.3133e+01 -7.3707e+00 -2.5290e+01 2.8716e+00 2.6644e+00 1.7006e+01 +#> -7.8089e+00 -6.7755e+00 2.9553e+00 -5.1522e+00 1.0546e+01 1.5615e+01 +#> -2.2052e+00 -1.7775e+01 -2.4910e+00 -7.6424e+00 4.0206e+00 1.4578e+00 +#> +#> Columns 49 to 54 -1.1930e+01 1.0408e+01 1.9932e+00 1.1867e+01 -1.5796e+00 4.7469e+00 +#> -2.5033e+01 -2.9055e+00 7.0222e+00 6.1650e+00 -7.0954e+00 2.8061e+00 +#> 3.8242e+00 -1.9144e+01 -8.6292e+00 -7.0170e+00 4.1554e+00 3.1093e+00 +#> -4.9245e+00 8.9887e+00 -2.6099e+00 -1.3385e+01 -2.2275e-01 -6.9555e-01 +#> -7.4531e+00 -9.0411e+00 2.1363e+00 1.6619e+01 -9.6453e-01 -1.6231e+00 +#> 2.2502e+01 5.5189e+00 4.7697e-01 -1.0418e+01 4.2871e+00 -1.4408e+00 +#> 1.5951e+01 1.3212e+01 -7.5329e+00 1.0959e+00 2.7447e+00 
-4.6562e+00 +#> 1.1344e+01 -2.2125e+01 1.7901e+01 -4.6182e+00 -2.8106e-01 -2.3148e+00 +#> 7.6376e+00 1.9672e+01 3.7193e+00 4.9079e+00 2.0155e+00 -4.4264e+00 +#> -1.6625e+01 1.1072e+01 -3.8426e+00 -1.2944e+01 5.5449e+00 -6.7104e+00 +#> -1.8381e+01 -9.1512e+00 -5.1241e+00 1.2869e+01 3.3699e+00 -5.3465e+00 +#> -3.5731e+00 -1.7432e+00 -4.5762e+00 3.4288e+00 5.0277e+00 -2.1140e+00 +#> -7.9599e+00 1.5342e+01 1.4389e+00 8.1760e+00 5.6928e-01 -3.1954e-01 +#> -1.7447e+01 -5.0924e+00 -1.7623e+00 -1.9096e+00 3.1523e+00 -2.2438e+00 +#> 1.3763e+00 2.2958e-01 -8.2987e+00 8.1516e+00 -1.1614e+01 -7.8641e-02 +#> -1.3026e+00 -1.1605e+01 1.7032e+00 6.7978e+00 2.0229e+00 3.4178e+00 +#> 1.3882e+01 -1.7011e+01 -1.5101e+00 2.0343e+00 3.7246e+00 -4.3124e+00 +#> 6.1257e+00 3.9545e+00 2.9905e+00 2.9953e+00 -3.4676e+00 -3.5509e+00 +#> -1.2858e+00 1.5554e+01 -1.3385e+01 3.0229e+00 1.3257e+01 -1.7151e+00 +#> -1.2664e+01 7.1216e+00 2.8620e+00 5.7296e+00 -2.6791e+00 -6.0639e-01 +#> -1.8125e-01 -1.1158e+01 1.5228e+01 2.1234e+00 -5.7803e+00 1.1344e+00 +#> 7.7264e+00 7.7212e+00 -3.0359e-01 6.7482e-01 -3.0877e+00 -1.5337e+00 +#> 1.1560e+01 -2.1762e+00 1.0641e+01 1.1824e+01 3.9550e+00 4.4653e+00 +#> 9.3274e+00 6.0953e+00 -9.0194e+00 -1.1214e+01 4.1548e+00 -2.2084e+00 +#> -9.6038e+00 2.2374e+00 -8.2138e-01 -1.1406e+01 5.8240e+00 5.5075e+00 +#> -2.2071e+01 -1.3338e-01 1.1620e+01 1.5719e+00 1.4193e+00 -3.2624e+00 +#> -1.2207e+01 4.2834e+00 2.3571e+00 8.4039e-01 6.5941e+00 -4.5320e+00 +#> 1.4071e-02 -4.2638e+00 1.4311e+01 -3.6200e+00 -3.4290e+00 -3.9109e+00 +#> 8.4256e+00 -8.9969e+00 -1.4901e+01 -6.7363e+00 6.0295e+00 3.4049e+00 +#> -5.4866e+00 7.0723e+00 -9.0642e+00 -2.3394e+00 -6.0919e+00 1.9601e+00 +#> 1.2050e+00 5.8304e+00 -5.3384e+00 2.3990e+00 5.0838e+00 5.8618e-02 +#> 1.0580e+01 -4.8685e+00 1.4148e-01 -3.3725e+00 -1.8417e-01 -5.7808e-01 +#> 2.0337e-01 -5.1599e-01 -2.3296e-01 -5.5305e+00 -2.6131e+00 -2.7271e+00 +#> +#> (11,.,.) 
= +#> Columns 1 to 8 -5.4055 -12.8212 -11.4962 -11.2653 -9.8770 -7.0898 -14.1908 -16.6895 +#> -0.9588 -6.7719 -6.7740 13.1522 2.5239 -4.9201 -9.0637 -2.6464 +#> 6.7203 4.0261 -3.5288 7.3960 -22.5836 2.1697 -9.1963 -16.7508 +#> -0.0230 7.3559 7.0085 0.8274 6.6417 -0.5680 -3.3451 9.1296 +#> 9.2374 4.0972 1.3119 2.0917 10.2749 5.4807 -13.5052 13.4374 +#> -7.2157 -6.5169 -1.0032 8.4717 0.7223 9.3963 -8.1653 3.7773 +#> -0.0921 -1.5552 -1.2707 -7.8980 -29.2803 -13.3941 6.4451 10.5154 +#> 0.6800 6.8448 -10.4146 -8.3573 16.7202 17.9789 -7.9937 -7.7566 +#> -2.7608 -7.5636 4.3386 10.7266 12.7765 2.9435 -3.0109 -2.5703 +#> -2.4705 -8.8155 15.8010 18.7013 -7.6651 4.8273 -15.2839 -22.3401 +#> -4.2251 -7.1305 -12.5061 -21.0559 -1.1047 -3.7846 -10.1375 5.1159 +#> -0.4302 -2.1175 2.2277 -0.0405 -5.3440 3.8683 18.2067 6.4389 +#> 2.3190 -0.9290 2.4649 6.1844 4.2881 5.6679 5.9470 10.5908 +#> -1.1352 4.7306 -8.0197 -17.6189 4.0034 -3.6601 7.7812 -7.2113 +#> 6.7854 -1.5334 -0.4881 -6.5860 -9.2010 6.2877 -3.2831 2.2807 +#> -7.1120 1.0091 -1.8908 -0.8197 8.8857 -4.6941 -2.6334 10.5442 +#> 0.6511 8.3130 5.9025 -2.2319 1.8958 2.6976 16.8064 -3.7850 +#> -1.1480 -5.0506 7.3444 -0.0132 -18.8620 20.5486 -1.9485 -4.9859 +#> -9.2978 -7.1933 -0.3520 5.1499 2.6228 -13.3133 -3.5121 -7.1834 +#> 5.1911 -3.7486 -10.2273 6.6342 0.1885 -15.2781 -19.7429 3.2060 +#> -1.3515 -4.5879 -3.5312 6.6493 3.3874 8.0278 9.6604 2.2411 +#> 0.6587 1.2721 9.1621 -6.2055 -9.5126 9.2114 -7.2505 -4.4917 +#> -4.9576 2.3409 -3.7596 -8.3144 0.1786 1.8665 -7.6558 -1.1847 +#> -1.9648 0.0582 6.4989 16.6409 -1.1180 26.6440 -2.2385 -3.8422 +#> -1.2133 -1.6047 -3.4843 3.5029 -7.3982 -22.1989 3.1332 6.5466 +#> -1.0831 6.7521 -8.0247 3.2630 3.3820 8.8123 20.4866 -2.4224 +#> -0.4180 -5.5231 -4.5282 8.0060 4.2186 9.3785 -8.6098 -0.3633 +#> 1.0353 11.5919 -1.4118 -4.8064 -1.7891 -5.2798 21.8117 -6.8129 +#> 1.5290 -1.1109 -1.6289 -2.0407 3.1045 -6.7241 7.7408 32.9108 +#> 4.5344 3.8848 -1.6082 0.0696 -16.1280 -5.5542 10.8213 8.0411 +#> 
3.7386 1.5182 0.7064 -5.4933 0.4963 7.0519 8.9854 0.6076 +#> -4.3463 3.0614 2.6529 11.4385 -10.4198 4.7534 8.7773 3.2090 +#> 3.9197 6.6092 -1.9214 -7.9200 -3.8039 -8.5101 -8.0896 4.3386 +#> +#> Columns 9 to 16 -8.0963 -5.0893 11.2595 -21.7700 -0.4435 9.4434 -1.6422 -15.6053 +#> -5.4985 7.1083 16.8098 -5.3270 -2.6601 4.7787 -4.2944 -8.5968 +#> -12.8671 -21.2995 -3.0081 1.4710 -2.5656 -0.7884 6.1528 -6.6789 +#> -9.2201 -1.9824 -14.0278 0.3191 5.0258 1.0274 -13.7023 17.3186 +#> 19.6774 -1.2202 9.0924 7.5304 -3.3807 -7.3085 7.3692 7.5759 +#> -9.4689 0.1457 9.1901 -8.9213 -13.8453 -13.0196 -10.9186 -13.1116 +#> 17.7428 26.0072 8.6514 -14.4933 -5.2205 2.6055 -0.3344 -9.3308 +#> 24.7822 -6.2983 -7.4286 18.6491 13.0411 -11.3752 7.8269 15.9172 +#> -5.1160 -1.6017 -0.5521 4.4829 4.6389 2.8735 10.8363 7.8346 +#> -11.6842 13.8607 -2.7583 -7.2021 16.9271 10.5532 -3.4277 -5.2636 +#> -14.7114 13.8395 -2.7297 14.8692 -15.5743 3.5719 -8.1930 7.6920 +#> 12.9891 -0.7218 1.8757 6.5407 11.9038 -5.6843 1.6255 6.8036 +#> 4.7963 -0.9979 -7.9144 2.8503 -0.1705 3.4232 -4.7482 15.9476 +#> -1.2452 -6.6892 -1.8842 3.2330 14.9785 -7.7715 3.4890 5.1187 +#> 14.1618 20.5121 10.5584 -6.5404 -3.9107 -1.4618 -6.7874 -13.0442 +#> -7.0971 9.0339 -5.5109 -1.0006 6.7793 1.3089 -7.3950 4.5605 +#> 4.2148 5.6966 11.1746 4.5634 4.2315 -11.5886 -0.1598 -4.9882 +#> -5.5178 -1.6864 12.7013 -0.8401 0.2114 3.2006 7.0705 -3.7734 +#> 8.0506 4.4067 -2.7286 -2.3772 1.5381 3.8432 -1.5158 1.5586 +#> -2.7677 -5.3447 -0.3715 -0.4163 -7.7631 -2.1778 -4.1122 -0.5362 +#> -1.2315 -25.3957 0.4581 -13.7141 15.5662 10.6567 2.1952 5.4874 +#> -0.6417 -1.5647 0.1031 13.0846 -1.6135 -4.7646 8.8332 7.9774 +#> -9.5805 -5.6133 2.9529 16.4223 3.3296 2.4445 5.4741 2.5839 +#> 6.0440 17.0827 -5.4099 -8.0291 -1.3871 7.8044 13.4143 -6.8301 +#> -2.2846 -9.5562 -11.1248 -11.6881 -12.4201 0.3416 1.3621 1.6872 +#> -6.9161 7.4641 -4.6135 -1.7430 -5.8045 0.5027 -0.4220 -4.2977 +#> 11.4381 0.6050 -6.4387 -0.7690 -8.0419 -4.8361 9.1172 -19.6965 +#> 
-13.2769 2.1120 2.5309 5.2099 -8.7817 0.8572 -0.2561 7.0573 +#> 10.1496 -4.4357 -9.8793 -6.0450 -1.9153 -5.2773 -2.9340 -4.7760 +#> 14.4536 -13.1284 -1.1670 -12.4540 -18.7169 -5.2540 -2.2472 2.5345 +#> -3.1225 -5.1695 12.4998 1.8068 5.2393 9.0779 -5.4050 -11.2571 +#> 19.0003 0.4353 4.1467 -12.5603 0.0177 -10.2766 -10.0077 -10.1492 +#> 17.5557 -4.6641 0.6727 -1.7144 9.5760 -6.3881 -0.9979 17.6790 +#> +#> Columns 17 to 24 5.5389 9.7153 -0.8149 -9.5693 -3.4722 -2.8820 10.2517 4.9646 +#> 10.1253 -7.4265 -1.9225 3.8907 5.0756 -1.5778 -9.6211 8.5616 +#> 9.1293 9.3994 7.3130 -10.8699 2.4559 -6.2287 5.1936 -3.3511 +#> 13.6538 8.4455 -6.5015 7.5070 3.4916 -4.4760 -3.8086 9.2538 +#> 8.0192 0.4894 0.1876 -2.4902 -0.1445 -7.0434 11.4712 -12.6585 +#> 1.7324 -13.9589 0.8041 13.5750 -2.5177 -6.1032 -14.0721 9.3925 +#> -8.8323 -4.9451 -11.8284 3.4365 8.6094 -7.0845 -2.0510 -26.0251 +#> 3.0504 13.3794 -3.6231 7.0293 6.3882 -6.6549 -4.5141 0.6877 +#> -4.5420 10.6653 6.1909 0.1608 -12.9508 16.0592 23.2264 -10.8292 +#> 12.5975 11.6704 -6.9887 14.9949 18.1805 5.4730 -2.9134 -4.9841 +#> -10.5332 12.6488 1.7304 11.6070 -0.8718 -0.6584 -6.1346 20.9687 +#> -3.5676 -0.7987 -4.9272 -5.4014 3.3703 -0.4896 -0.4138 2.9948 +#> 16.2486 4.7883 1.6822 15.7260 1.9915 -16.1390 -5.9753 10.2376 +#> 3.8683 5.5006 4.3829 -11.0420 -11.3496 -11.4821 5.6880 -3.6688 +#> 11.9727 0.0843 -1.3996 3.0607 -0.6485 5.3883 -16.4463 8.4056 +#> -6.6803 -3.3068 1.5495 11.6893 8.4106 0.3437 -9.5375 4.6183 +#> -1.4904 5.8014 2.8704 -5.6100 20.2162 -2.6406 2.4729 -15.4521 +#> -5.3582 -5.6083 4.3181 -1.3172 -3.4599 -6.6939 1.7393 -14.3633 +#> -12.8406 2.4952 9.9817 -3.9607 -6.5257 -2.5706 0.6663 4.8252 +#> 7.8893 -2.8912 -0.5334 3.5568 6.0494 -5.4785 1.5054 -10.0505 +#> 9.2342 -8.0807 -2.8325 4.7177 -6.1211 2.3771 5.2189 12.9520 +#> 10.0793 0.3359 8.1422 3.8272 -13.6446 11.1437 20.9021 6.6736 +#> -0.4169 -4.6307 10.1826 -13.3094 -25.1051 -1.2309 -5.1323 6.9775 +#> 4.6990 -7.5735 11.3742 -18.8726 -7.4936 -14.2206 16.5621 
0.0137 +#> 3.8652 0.0141 2.7227 6.6565 -3.2034 -14.6458 -13.0527 -9.5219 +#> -6.1301 -5.3862 -11.2960 -16.6052 -11.7143 4.1288 -10.9636 -18.1907 +#> -1.3663 8.9263 -0.4715 11.6974 -10.0834 9.7843 20.9122 4.4779 +#> -11.5308 2.9462 -6.4476 -1.0752 -6.5345 -13.5413 -0.9235 7.4826 +#> 13.1922 -17.1404 -10.6783 -1.9518 1.6250 9.2958 7.0275 -6.1738 +#> 5.3413 -4.3852 -8.6940 12.9464 -8.2948 -6.5683 -0.4623 12.6793 +#> -16.6802 7.5168 0.9948 -6.2193 7.4433 1.5622 4.1176 -1.9193 +#> 4.0622 -9.2806 12.1699 -0.1856 6.8562 -7.4302 0.8241 -10.5828 +#> 13.5447 1.1691 -10.1185 13.4078 15.7959 -11.0607 -4.6914 -11.4070 +#> +#> Columns 25 to 32 8.6906 -4.4819 -16.0977 2.4697 3.1499 0.9833 12.7757 6.6100 +#> 4.5444 -8.1830 13.3868 11.6473 -10.6073 9.0276 0.5097 -5.6914 +#> 5.9988 8.6714 21.7937 -4.5504 -6.1166 6.4743 6.9227 6.7058 +#> 3.8787 8.7077 -1.6324 -1.2688 -10.0218 -17.8990 6.8641 -7.0852 +#> -17.6675 -3.9856 3.3549 6.8274 9.7155 -12.3437 10.5366 -3.4925 +#> 3.3215 -9.0382 16.6955 -6.3661 -7.1147 -3.3227 -9.7662 10.1416 +#> 26.5915 -7.7979 -9.0020 3.4738 7.1377 -6.9711 3.3329 -25.6101 +#> -6.4348 -0.7443 2.2093 5.6551 -12.5932 12.2770 -1.8114 -0.0719 +#> -12.7800 18.4545 -14.7236 -17.5124 9.1215 -5.5512 8.3363 1.4436 +#> -4.1571 6.9203 2.7244 -13.6063 14.2651 -8.2992 -0.8229 9.0277 +#> -13.7235 -8.2734 -5.6812 -0.8279 -20.6463 -16.2376 4.5752 23.4529 +#> 6.9439 14.8364 -6.2147 -4.7256 5.3776 -5.0525 -3.9604 -11.5019 +#> -1.8704 2.3293 0.2194 -5.7172 -3.9069 -0.8479 -4.6177 -16.1231 +#> 6.6336 2.7141 -20.3401 10.7361 9.1753 2.2894 13.2451 7.7904 +#> 6.1828 -3.8053 -0.5917 7.3192 6.3272 -9.3474 23.2893 1.4797 +#> 14.5297 -5.6312 2.8111 3.6819 -8.0714 -10.8505 -8.7319 -2.6226 +#> 11.4411 -7.7391 -2.6136 5.7803 -1.6828 -17.1462 -11.5075 -8.5545 +#> 7.0145 5.4355 5.7301 -22.4279 0.0704 3.8067 -6.4166 -15.4869 +#> 4.8561 0.9614 -1.1349 5.8904 -6.4565 -9.5521 -12.3810 8.3897 +#> 1.1373 -6.8023 2.3514 14.6949 -5.0576 3.3793 17.4717 -12.4308 +#> 13.4749 -21.1074 8.8062 1.9577 
-16.8838 5.5613 7.6369 -5.1571 +#> -8.7234 -9.4534 4.8240 -3.3674 1.4043 -0.5429 4.8862 0.3845 +#> -0.1187 2.7708 3.0856 8.0535 -0.2191 -2.3842 11.1446 10.1312 +#> 10.9804 8.7844 -10.2858 -1.8880 17.5858 -16.4390 -5.3838 4.6293 +#> -0.9628 15.2486 5.7033 -2.6914 10.7375 2.7955 -5.2543 -2.6170 +#> 6.6361 22.1522 9.1405 9.1258 -3.2185 4.8683 9.2677 14.2053 +#> -24.4219 -1.0824 14.5277 -2.1023 17.2105 2.2666 -3.9664 0.9500 +#> -9.1430 2.0245 -4.2121 -3.8316 3.7740 -0.7031 3.1172 4.9890 +#> 8.1631 -6.8762 -4.9908 9.3535 21.4092 -8.6858 1.6469 -2.8641 +#> -10.3839 8.7087 -8.9921 -4.4137 12.4108 2.1464 -1.1056 1.1637 +#> 6.8284 10.7424 -5.6882 4.4955 1.3618 -1.4492 -7.8652 3.9328 +#> 11.9747 -23.6127 -7.0290 4.9756 9.3470 9.7060 -9.9396 -14.5454 +#> -8.7183 -20.1927 4.2785 -0.6520 -12.1457 1.9948 -2.8725 -1.0328 +#> +#> Columns 33 to 40 -3.4957 -12.9007 -6.7540 5.1744 -7.6554 -20.8943 3.3641 12.3797 +#> -5.3283 2.1093 4.5289 16.1537 6.7174 2.9742 -2.6721 0.6115 +#> 3.9822 -17.6992 -24.2052 -3.4368 2.3902 12.8162 1.9887 2.0391 +#> 2.5299 2.6777 5.2940 -7.9351 -4.3510 -2.1907 11.8613 -8.4697 +#> 5.7702 -2.1336 0.2546 10.1146 13.6231 2.1547 6.8078 -2.0359 +#> -4.9064 10.9651 0.5256 -2.2876 -21.8221 3.4748 14.1588 1.6659 +#> 2.2628 15.3465 5.3890 -3.9440 5.9947 -2.6753 19.6240 9.5830 +#> 2.5197 7.1930 -12.9271 3.6338 3.2426 -5.0879 -5.2884 -3.3596 +#> -0.0810 1.4495 -3.6879 2.8248 -2.8201 3.4760 -7.0414 7.5413 +#> -15.0129 1.5153 3.2359 15.2691 -7.9809 0.5467 -4.8724 2.3613 +#> -17.9237 9.4862 0.9706 5.9850 17.0496 13.8400 5.2478 -1.0264 +#> -10.5961 12.3432 5.4140 -0.9479 -8.5752 -11.7603 -0.3170 -0.0237 +#> -4.8983 7.8893 -8.8595 -5.2496 -13.3228 -11.5002 -4.2536 -11.4074 +#> 18.0229 10.9709 7.3556 -4.8270 -20.2337 -4.2284 -10.4610 10.1696 +#> -11.9457 5.4662 12.8361 11.1042 1.1041 11.3119 6.4178 -6.0587 +#> -7.8864 9.9089 3.3984 -2.2397 3.6824 -5.8370 8.5485 -13.3280 +#> 8.0090 4.0433 8.1660 -11.0333 11.1999 9.0627 -1.6642 -2.3552 +#> 7.7223 2.9116 13.1093 13.0952 5.2031 
-9.0280 6.6690 4.8306 +#> 7.3031 0.5444 -0.7612 -12.9485 5.4158 0.0919 4.3915 0.8169 +#> -2.0863 0.4553 0.7929 6.1557 7.6528 -7.3692 -8.0175 5.4582 +#> 11.1042 7.7931 -8.7395 -7.5686 -4.3529 -15.2286 4.3901 -14.2214 +#> 6.0429 -8.2275 -21.4503 7.2640 20.0072 12.2605 -1.5984 -6.5557 +#> -4.1199 -8.7502 -12.5962 -1.9513 -2.4513 11.3904 1.8514 7.2773 +#> 5.0082 1.4738 3.8043 2.4984 -6.3557 11.0754 -13.6473 18.8892 +#> 3.2076 5.6957 8.5382 -12.0987 -7.0255 -5.0878 -10.5054 0.0105 +#> -2.7257 2.1655 1.7951 -1.9168 8.3308 3.9440 -6.5315 8.8702 +#> -15.6634 3.8071 -2.8284 2.8686 2.6537 -9.9190 -6.4964 8.5380 +#> -5.5322 1.5889 2.1419 -0.9648 14.0815 3.4972 -3.2159 -1.7197 +#> -4.4879 -5.7155 12.5518 -11.9951 -6.4850 0.5837 6.0140 9.7379 +#> -7.8803 3.2550 0.9981 9.3336 -2.2737 -5.6261 8.0397 11.7345 +#> -1.1274 -9.1740 -4.2339 -3.3501 1.5657 10.3702 1.9456 -2.4436 +#> -5.4884 -3.4300 12.7837 -15.8801 0.5969 -7.2572 14.8341 6.3409 +#> -4.1985 -6.2091 19.6562 8.1881 4.9050 10.5610 19.5348 -11.3853 +#> +#> Columns 41 to 48 -2.7137 -4.6608 -7.8639 -18.1423 -1.5267 -3.2773 -5.4016 -8.6433 +#> 6.2280 1.0183 -1.4649 -4.1771 -10.7345 -2.6907 -8.6749 2.7132 +#> 6.1075 5.7429 13.1550 13.0300 -4.9505 -5.0011 -11.9035 3.2295 +#> 0.1574 0.5322 17.0278 10.7063 6.7545 12.1497 22.9725 -7.7652 +#> -1.1801 -10.1937 9.9222 4.5274 6.9176 -4.9439 -3.5044 -12.8462 +#> -2.3856 8.4841 -7.7070 -9.5067 0.0728 12.6317 -13.1261 3.8723 +#> -6.3127 -1.7740 6.6936 -0.2792 -0.6567 -3.0826 2.8149 2.7094 +#> 7.1113 4.0884 6.6075 -0.5773 -0.0683 10.5518 -0.1571 7.2865 +#> -5.1004 -8.1917 -5.5682 -2.5908 0.2107 -2.4291 2.2285 8.9865 +#> -11.2097 -10.6410 10.7745 -2.4273 -21.2566 1.6016 -9.4105 -12.2471 +#> 3.1427 5.7268 9.7448 -2.6100 -13.4216 -2.8819 -5.9152 3.9733 +#> -2.9482 -8.1471 12.7303 -3.7645 -2.3743 17.9188 3.1142 -8.6335 +#> -0.8742 -3.7746 8.0121 -6.9740 -11.5285 8.5837 4.2794 -8.0273 +#> 6.7697 -10.8690 -4.4483 -2.4258 5.9669 0.2114 -7.8955 -11.7361 +#> -3.9469 -1.4720 -2.5445 -5.8754 -3.0648 
-2.9488 10.4857 -7.1903 +#> 0.6893 0.9485 -1.6990 2.3992 -11.1795 -6.2548 0.4749 5.5308 +#> -15.4657 -9.3748 0.4303 1.4809 16.5658 -0.9612 -6.9927 5.7430 +#> -2.1066 -2.3958 -3.9747 -4.9547 -4.1973 7.2521 1.0492 -3.7327 +#> 8.2104 -0.2188 -2.3428 -3.5882 17.7026 1.7196 -0.1771 0.0817 +#> -1.0454 -4.1729 13.4594 -11.0697 -7.7331 -8.8572 6.0746 -0.2361 +#> 0.3822 -4.0953 5.8121 20.4561 -7.5034 5.6746 10.3273 -10.2685 +#> 1.3891 5.1878 -2.8041 -15.3951 1.7782 3.0298 -9.8381 -2.7373 +#> 14.2780 15.8650 -13.7873 10.2324 -15.2655 -11.4930 -12.6654 -12.1412 +#> -7.0948 -9.5416 -18.0632 -4.7630 10.5022 10.3967 7.4148 -1.2079 +#> 7.6191 7.2975 2.7397 -3.0366 13.8674 -3.8837 2.3324 5.9046 +#> 17.4417 14.5925 10.1028 20.0136 -3.7869 -11.1534 2.0239 3.1739 +#> -1.7296 -20.4732 -15.1557 2.0850 6.5636 18.1938 0.9150 6.5254 +#> 15.9038 9.1991 -13.8631 2.8205 4.4692 -5.3159 15.2506 9.2716 +#> -19.8675 -3.1060 16.0112 4.6046 26.2780 2.9539 -8.7354 7.3370 +#> -3.8077 7.6845 1.5365 5.6504 -12.1643 5.4406 6.5761 -3.1648 +#> 4.3883 -2.6695 -3.8233 0.3331 -2.4336 -14.9341 -8.7418 -4.4109 +#> 9.4210 -2.6092 -9.9504 1.8270 -2.1828 -3.9466 6.8509 -10.3546 +#> -4.9575 7.1540 1.8171 3.5707 -0.1473 -11.0221 3.3263 3.0011 +#> +#> Columns 49 to 54 7.7686 -19.2088 -4.3984 -6.6009 1.0931 -0.0593 +#> -0.1722 -1.0048 5.8329 -4.6133 2.5930 -4.0542 +#> -0.7772 -8.3398 -8.7232 2.4307 0.6651 -3.0323 +#> 2.3375 2.2467 7.1897 4.5060 1.5466 2.6489 +#> -4.0726 -5.1446 -9.9577 -0.7306 -2.1495 -5.3092 +#> 5.8799 16.1226 14.1187 -6.3258 -12.7240 5.7837 +#> 17.9912 1.2103 -14.4796 5.2234 7.1612 10.9232 +#> 3.1071 -8.8086 6.2567 3.1653 -4.1226 -6.4468 +#> 17.6475 1.4954 -3.9971 -1.0792 4.5360 -3.3024 +#> -5.2560 -8.0136 7.9477 6.7578 -10.5123 4.1404 +#> -13.2961 12.3127 0.6827 -1.0238 8.6313 3.1152 +#> -1.6527 14.1260 -8.2222 11.8463 -0.6558 -0.6822 +#> -2.9386 -5.6103 -12.1391 -1.8200 -0.1392 3.7910 +#> 2.9182 2.2694 5.4313 10.4099 4.6388 -4.1158 +#> 1.2167 3.6775 -2.8355 -8.2874 0.8455 1.7383 +#> -12.7928 
16.2392 -0.9791 -11.0838 1.6811 6.0138 +#> -2.2261 -2.1926 -9.0140 13.0908 8.7340 1.4901 +#> 8.9703 18.2571 8.0074 8.7897 -0.9565 -3.0671 +#> 2.2582 9.9103 -11.1425 -3.9471 4.5475 1.0640 +#> -1.2556 -2.3475 0.2533 -5.3723 -3.7134 -3.2432 +#> 6.2705 9.3926 6.0208 4.9499 -3.7650 0.8310 +#> -4.8548 -6.7536 -6.7908 -0.0707 2.7131 4.7570 +#> -7.3806 2.5334 -10.3324 -17.7143 10.4208 -5.3948 +#> 3.9395 5.7795 7.5972 -10.3655 1.4875 -2.3655 +#> -4.5489 -2.9826 4.1896 -6.3864 0.5334 3.7579 +#> -1.6332 6.7876 8.8370 11.5702 -5.1311 -2.9854 +#> -1.9556 -5.7515 0.5072 -5.4332 -13.8866 0.0639 +#> -13.4486 16.0679 5.5380 14.5054 2.6706 0.6628 +#> -7.0647 -4.9027 12.0031 0.4694 9.6411 3.3881 +#> -12.9776 -2.0363 3.5486 6.7200 -1.2885 2.0454 +#> 2.2942 -9.3349 -3.9989 -6.3987 5.6489 -1.1669 +#> 2.3960 12.7751 -4.8202 -5.7435 6.8233 1.0905 +#> 12.0144 -14.9577 3.7293 14.0925 6.1287 1.6907 +#> +#> (12,.,.) = +#> Columns 1 to 8 0.6229 0.7915 -1.1087 7.8374 10.0051 -7.7150 4.9062 7.8908 +#> -0.0694 -2.8023 -4.7687 14.2162 10.5257 -10.7560 -21.2938 -9.7516 +#> 1.8988 9.5241 -2.5876 0.9445 -5.8729 5.6424 4.7821 4.6663 +#> 2.2769 0.1422 7.5566 -2.3968 -15.5029 -5.7022 10.8948 18.1715 +#> 4.6877 5.5713 -3.5079 -18.5530 14.5777 13.3651 -1.9050 -8.8703 +#> -2.2608 -10.8032 -8.4165 12.6980 -8.4522 12.0540 3.4561 -14.1412 +#> 1.4388 5.8246 -10.8205 -10.4470 -10.0608 -3.0611 0.3587 3.1356 +#> -2.9395 1.1474 -1.9937 7.4385 11.0530 11.3669 2.6307 -11.3260 +#> -0.5357 4.5709 14.3846 6.7678 4.4890 8.4158 3.3504 -8.6785 +#> -1.6078 2.4123 3.6483 17.4909 -4.5587 -9.6758 -9.2399 -9.8593 +#> 1.0561 -6.8999 -6.4034 0.1626 11.3164 22.6796 5.4696 2.3235 +#> 1.4055 -0.7043 -3.1966 -12.8185 -2.8923 -1.5228 3.1578 -9.0939 +#> 2.8777 -2.4606 0.7928 3.4029 1.5469 -2.4470 -8.3945 -14.2637 +#> -0.6468 5.2892 -0.3504 -8.7743 0.0913 -4.0508 -11.2090 13.5111 +#> 4.0296 -2.7171 -12.6160 -9.6473 1.6610 -8.9166 -1.2568 2.2622 +#> -4.3760 -3.5491 4.4934 -0.9193 17.4047 -6.2420 -19.9250 -8.4942 +#> 2.4884 3.3511 
-11.0656 -33.2959 -5.7327 4.2863 1.9573 -9.1797 +#> 1.4906 1.8312 -0.8521 -0.3516 -6.5020 7.9463 -4.5182 -4.6547 +#> -6.8905 -4.7727 17.1723 7.4306 3.7605 -5.8834 -0.0043 3.6442 +#> 5.4320 -3.3874 2.2405 10.1437 7.1875 -7.3165 -13.8895 6.6712 +#> -1.3703 0.7421 -2.6668 17.4942 6.1591 2.4394 0.1591 12.4642 +#> 5.0569 0.6890 -0.6120 8.6410 5.1975 1.9946 2.6576 -0.1100 +#> -3.8598 11.1419 10.2535 17.9036 -4.7257 7.1292 2.5413 6.2846 +#> -3.0385 7.9449 -0.3607 -7.3119 -9.2193 -9.0367 -15.9032 2.4966 +#> -3.8747 -4.3002 9.7375 11.5059 -1.2934 -10.2958 -12.3196 -3.9663 +#> -2.0728 0.9591 -13.1327 1.5240 12.8185 0.8781 -4.9501 -7.9751 +#> 0.2162 2.5040 -0.0823 5.0865 -10.6410 10.1654 4.6406 -3.2647 +#> -3.4860 8.0917 4.1999 6.0495 -17.4762 -15.6921 4.2501 7.8961 +#> 4.7829 -8.0504 -5.5278 -8.0957 3.9923 5.9154 5.7076 -2.8182 +#> 0.6827 2.1102 -6.3540 14.5539 1.0061 10.3586 8.4657 5.0889 +#> 3.5737 3.4540 3.5922 -4.7371 -15.5827 -4.0762 4.4211 8.8044 +#> -3.0682 4.6046 -2.8153 -7.7425 -0.8732 21.5143 -1.8952 -6.1452 +#> 2.0572 4.3809 -5.3978 7.2423 -10.0628 -3.8495 11.1717 -9.7059 +#> +#> Columns 9 to 16 13.1054 18.0277 -6.0260 0.2420 8.3980 -6.9372 6.2657 -11.9200 +#> -6.0747 7.9303 7.5229 -4.3968 2.4170 -5.7601 5.4535 -0.0501 +#> 3.4723 -3.0820 -7.2816 19.9661 34.0843 -2.0608 5.7500 -3.6835 +#> -21.8542 -2.1709 14.0783 -0.0227 10.1440 -0.1063 1.5406 15.0271 +#> -10.4748 -2.7479 1.4695 0.0121 -3.5012 14.2570 4.9082 -0.4205 +#> 1.2982 14.7621 1.7995 -11.6836 -11.5936 4.4736 14.8308 -10.7383 +#> 13.6573 37.3630 -1.9853 -28.1744 0.2730 -9.9434 -2.2139 5.3577 +#> 3.9378 2.6297 10.4705 8.9094 -15.8067 12.2414 6.9211 9.9428 +#> 1.3213 11.1123 -11.5620 -11.6397 5.2031 7.7923 -11.9422 0.9900 +#> -1.8480 1.3055 3.3730 -1.1550 6.7097 -10.0045 6.1946 9.9348 +#> -9.3842 13.3287 6.3690 -11.0413 -3.1044 -6.3424 0.7110 1.8190 +#> -11.2253 7.1318 7.6301 1.5611 -2.2177 2.7896 -5.8078 10.0294 +#> -15.6639 7.0832 7.2002 -9.6245 2.4862 -0.8140 21.6638 4.0479 +#> -1.4398 -1.9538 1.5081 4.6188 
5.3399 20.8294 -15.5621 5.3192 +#> -6.8321 -7.8058 -1.1027 -4.5126 -5.9816 -12.7773 -1.9567 8.2813 +#> -5.8907 2.2779 11.0943 -7.5164 0.7313 1.8247 0.0621 -8.3329 +#> -1.6861 -1.0769 -9.0422 -11.4377 -6.1250 -2.3405 -1.3732 11.3301 +#> 0.2473 -4.9452 6.3063 -4.2525 -2.2493 -6.9930 0.6344 -10.9879 +#> 12.7692 15.0232 4.0342 -11.8127 -12.5489 7.0898 -10.3255 8.5702 +#> -6.3658 8.5593 3.8469 8.3642 3.3234 -13.5362 5.9985 1.0681 +#> 0.9440 -10.2386 9.9136 -9.4521 -3.1750 15.9229 0.0124 2.3349 +#> 13.8990 -3.6652 -10.2097 2.9960 3.0371 -5.6915 18.5167 -22.6270 +#> 1.1244 2.9051 13.6522 4.2208 4.2418 16.7871 -3.9917 -8.3738 +#> 3.3449 -0.2137 -6.7876 9.3774 11.8340 -5.9345 5.0493 0.4011 +#> -4.8042 -1.6463 16.4893 -4.5129 3.2499 9.9832 6.8517 4.0192 +#> -8.6577 -1.8142 5.4381 2.4526 9.2979 6.0551 -6.0966 4.9886 +#> 2.1910 -16.4252 -3.5771 -4.1214 -16.3212 -7.2827 19.1999 -2.1758 +#> 1.8902 -2.4865 -5.9268 -3.3036 -3.1475 6.4938 -13.0891 3.0017 +#> -8.2386 -6.6439 -14.7280 -6.3897 3.1754 5.0881 7.0446 -17.3720 +#> -5.1558 -1.9583 -12.0438 -5.6105 14.7233 11.9309 4.5504 -0.2746 +#> -3.9558 -14.8110 0.2810 7.0282 -7.0585 4.4361 -15.3473 7.6275 +#> -11.7893 -20.1127 3.4704 -17.3109 -3.8947 3.1418 6.7855 6.6511 +#> -12.1893 -0.4085 6.9987 0.3152 -6.5864 8.6177 -15.5359 4.0214 +#> +#> Columns 17 to 24 -11.6615 10.5632 -2.4200 -10.2524 5.1441 2.3397 -3.9033 12.2461 +#> 2.1998 6.5962 -0.9250 -0.0817 6.9914 0.3577 -10.4958 1.6117 +#> -1.8944 -3.1994 -3.6397 6.5397 2.7144 8.2007 -1.7855 -10.5797 +#> -7.7227 -15.8568 11.0396 5.2943 4.5943 10.8854 -7.5387 6.5586 +#> -14.3565 2.0211 12.9089 -8.2543 -1.8719 -2.9142 4.3158 0.0391 +#> 0.3928 5.0229 -8.5872 -7.5235 3.5602 -4.4647 0.0271 -8.5625 +#> -8.8302 -3.2055 10.0607 6.3659 -5.6354 -11.9080 -4.0432 11.1333 +#> -9.8045 1.5131 11.8736 8.6078 -17.1371 -11.8324 10.9882 11.9569 +#> 1.1724 4.6120 1.1067 -14.7848 6.7956 0.3228 7.3402 18.5039 +#> 12.2876 1.2404 -8.5821 -2.5870 8.5460 0.5873 -6.3000 6.6839 +#> -10.5291 -5.9038 8.6065 1.9188 
+#> [ tensor output truncated: remaining values (Columns 25 to 54, slices (13,.,.) through (15,.,.)) elided for brevity ]
+#> -1.0182 -7.9810 0.7640 9.7088 6.9861 -2.4327 -1.9906 -11.7918 +#> -3.2752 4.9139 14.3108 0.0637 2.2092 -8.4217 8.9753 -1.8305 +#> -2.2435 5.7426 2.2143 2.2549 7.5511 -19.5528 5.8062 -2.4257 +#> +#> Columns 9 to 16 1.7784 -10.6116 18.8916 11.7382 12.8056 1.5768 3.2779 14.9432 +#> -0.3574 -5.0433 20.9862 -3.2176 4.7611 -4.2310 0.6011 22.4073 +#> 0.5763 5.0978 -3.7787 10.0012 9.2696 12.6251 3.1758 -9.6194 +#> 18.8368 1.2991 2.2419 -10.4459 -19.3857 5.2887 -3.4947 -2.6045 +#> -8.9263 -1.4035 3.6362 9.1014 17.1883 -7.3270 -11.6537 -4.9035 +#> -2.6145 -9.2262 8.3823 0.8489 3.1518 2.6679 -1.2571 0.4608 +#> 4.9115 3.8220 7.5343 19.4945 -11.7214 -3.4566 5.6069 -14.0102 +#> 3.9337 1.5707 9.7323 -9.5611 -6.0542 1.7548 -8.7745 -0.8593 +#> -2.5963 3.3522 1.1015 -11.6833 9.1430 -1.4195 -2.1441 5.3349 +#> 0.9231 -11.3075 -0.3966 -1.1279 -8.6557 -6.7683 -6.6452 8.2509 +#> -9.1589 -1.5744 6.8670 13.6930 14.8558 -12.5839 -3.4082 -3.4675 +#> 2.1363 9.0875 -2.7776 5.6743 -8.4215 -1.8386 -1.2358 4.5936 +#> -2.3384 -4.6296 -0.8728 6.9245 -4.3669 2.1953 -0.1426 -0.2619 +#> -2.9725 -2.5162 -2.4488 1.5525 10.9119 1.2826 -2.0064 3.9925 +#> 1.7356 4.2003 15.6750 8.2792 -1.8962 5.0725 12.8496 4.5140 +#> -11.8334 -3.4519 2.7947 14.1890 -2.7292 -12.2159 0.8604 1.2984 +#> -2.8984 3.9930 -10.9699 5.5955 -4.4798 -5.7629 -2.2829 -7.4244 +#> -0.4230 3.7897 -0.3485 -3.5940 -5.9810 -12.7383 11.2135 3.9844 +#> 1.0031 1.5830 2.1796 -15.4505 -0.0103 -1.5564 -2.1125 -4.8409 +#> -7.0784 -5.9262 1.5270 3.7862 -3.0427 -5.6054 1.9523 11.0924 +#> 6.2898 -9.2303 1.3076 -11.7546 -4.2247 -6.3058 8.3821 8.8879 +#> -2.4704 10.8537 1.3857 -2.9049 -14.8050 -1.2383 5.0545 -14.2544 +#> -0.7749 7.1873 19.4923 2.8166 15.6717 5.1621 7.2092 4.2178 +#> 5.9891 13.2824 6.7528 1.0629 3.5650 -5.6318 7.6846 -4.9949 +#> 6.3304 -9.5514 -5.3183 -4.4670 -4.6939 8.8137 7.1928 1.4077 +#> 1.8776 4.4449 0.4581 16.0566 6.6165 2.0124 3.8151 13.4180 +#> -1.4359 -13.0809 4.7537 -9.6230 -0.2089 -0.8786 -3.1419 14.1345 +#> 7.4082 3.5545 
-11.8935 -7.2739 -17.5130 -2.1501 -1.1007 -4.4789 +#> -6.2560 4.3406 1.7099 4.4764 5.2993 -2.5860 -11.3778 10.5995 +#> 8.7505 -0.2493 13.1371 4.6845 8.3199 7.9976 0.5115 2.5394 +#> 24.2112 -4.6620 4.5530 -4.3868 7.0348 17.4049 -0.5706 3.9495 +#> -5.2599 -6.5792 1.3911 7.3624 7.7792 -3.5900 7.5302 -16.5536 +#> 4.5886 -9.5352 7.9796 -7.5769 3.3832 0.0814 -21.4389 -18.7231 +#> +#> Columns 17 to 24 9.2787 0.1982 1.5504 11.3399 1.2111 14.6673 18.7854 6.3625 +#> -0.9752 -7.3254 -2.1699 -3.3196 2.0386 -11.0122 -5.8307 4.4358 +#> 5.4092 6.9548 4.1863 -5.8647 -6.8616 -0.6640 10.1477 1.6399 +#> 5.0527 0.3074 11.8740 -0.2983 -1.7315 8.2427 8.3338 4.6236 +#> -3.4113 -10.7663 -14.8191 -1.3755 -12.7231 -12.0317 -18.3439 -12.3815 +#> -5.6351 -7.0939 6.8910 -3.4747 3.9538 8.7547 2.5367 12.3079 +#> -4.6032 -23.8156 12.8856 21.8855 -2.4887 4.4766 11.6141 -9.9143 +#> -12.9748 11.3994 -6.9040 -0.3635 -1.1072 -2.9807 -3.8582 12.4080 +#> -14.7694 4.5977 -2.3043 1.8589 -6.9520 7.2736 17.1332 -2.7067 +#> -4.8280 -1.0101 8.8881 7.0700 3.3483 6.6308 -14.4445 9.7119 +#> 6.3701 -3.4438 0.4236 18.1360 4.6127 -6.6543 9.0146 12.5606 +#> 1.7462 -0.4416 -10.2707 -3.3285 -3.2200 -4.6096 0.3415 -5.3734 +#> -4.3973 -19.4633 1.0211 -7.1705 -8.5736 -6.6903 -2.4908 4.6847 +#> 12.3077 -2.1865 -2.0518 -5.8797 -3.8338 0.0074 24.1812 5.9451 +#> 1.8784 -4.9494 -8.1918 -12.5618 -3.1634 -13.8377 -19.1328 -6.2817 +#> -0.0792 2.2887 -3.3162 8.2582 -11.9628 0.4759 8.1665 -9.7636 +#> 2.4773 -12.4838 -14.7030 2.7567 0.0552 -16.9608 -10.3523 -8.7655 +#> 0.5989 -5.3416 -10.7168 0.0302 -5.5756 -10.4459 -0.5413 -3.3216 +#> -2.9938 12.8954 -2.1675 18.0611 10.9736 15.1048 -5.6207 7.0488 +#> 8.7778 -6.6210 9.0462 6.4141 0.8643 2.4889 14.0606 -10.2601 +#> 2.4058 -10.6825 9.5511 10.2986 8.7942 -1.7521 -4.2356 28.5379 +#> -11.8222 0.0615 10.6875 12.5234 12.5557 4.9664 -0.2542 0.5580 +#> 6.1884 9.7078 -7.8526 1.0346 10.0836 8.7839 8.3358 1.7056 +#> 3.4453 5.6992 -21.9889 -13.8549 3.0866 6.2960 -14.1334 -2.9023 +#> 0.8797 
1.2786 9.0333 -5.1301 -10.4399 -2.3050 -6.3290 17.2964 +#> 5.8477 -0.5743 -2.7718 -12.4145 -14.6122 -14.8226 -10.7020 -1.3154 +#> 7.8803 -6.6331 8.2859 8.1343 7.1675 10.8932 -20.0518 0.5796 +#> -3.8860 -1.0948 0.3296 -4.5012 -10.3061 6.1479 -7.2847 -9.4322 +#> 10.6928 9.4802 15.4376 4.3896 17.0611 -13.1377 4.1840 -11.3105 +#> -9.6850 -0.8327 3.5414 -2.6485 -11.2932 -6.9164 -2.7452 9.1715 +#> 3.8276 3.4628 -6.0453 -2.5660 -2.6078 -5.2282 4.0372 -8.0659 +#> 7.0145 -7.9692 -12.5756 -1.9500 4.4520 -7.0035 -13.7542 -1.6138 +#> -5.0200 -7.9610 6.8528 -3.0502 -5.0376 -11.2978 -10.0984 7.5152 +#> +#> Columns 25 to 32 -4.0404 -1.4430 0.3607 3.3906 -2.3900 10.1742 5.5135 25.0277 +#> 3.5052 -1.4220 -8.9032 2.5179 -2.9835 -3.5487 9.3304 1.5378 +#> 6.6237 -5.5332 8.3167 -0.7853 -3.9757 -19.1286 -0.8038 -3.4982 +#> -3.5841 19.2283 4.6492 -5.5086 -23.7843 -8.8377 2.3797 2.3257 +#> -16.0309 -2.5382 -0.5452 -1.9870 11.8249 -5.2754 -9.2668 14.0315 +#> 5.3959 -3.6825 2.3861 -15.0529 -27.1974 4.1309 0.8618 -13.8103 +#> -6.2427 11.3191 10.2869 19.2107 7.5176 -1.7367 -8.1956 -1.5833 +#> -6.7464 2.4417 4.1475 -6.0524 0.8928 -7.8421 3.1080 15.0934 +#> -14.8396 -18.4550 -4.8427 -20.0457 1.0476 -9.0142 0.0407 5.3435 +#> -7.7599 -8.8913 25.1247 -3.6969 -7.0626 13.1309 -10.4188 0.6810 +#> 11.4706 3.3378 -10.5876 -8.8050 -9.6707 -12.6866 -11.1414 14.9977 +#> 5.0862 9.7291 8.7763 3.7778 -1.1664 13.5353 6.8519 12.9135 +#> -3.3324 5.6444 -3.9762 -3.5315 -10.8808 1.7029 8.5269 12.2964 +#> 8.2909 14.5977 -7.5689 -5.2819 13.6479 -1.1449 6.1938 11.2496 +#> 12.5643 -8.4485 13.2345 6.2915 5.4052 -13.3637 -21.9951 3.2774 +#> -2.0334 7.6593 -13.9139 -11.0922 4.6272 5.0187 -9.6147 6.7881 +#> -7.2256 10.9059 -0.0457 12.3354 10.9111 -5.3706 -7.8584 -9.1651 +#> -17.2781 5.7023 7.1502 -6.2450 -1.9618 -2.8856 -20.4779 -8.7634 +#> 0.5739 12.3096 -20.4771 4.7135 3.8365 30.0124 5.4050 4.8705 +#> 11.3993 6.0502 -5.1046 2.5172 -2.3657 -18.9375 5.3492 7.0289 +#> -3.5367 3.0282 -16.2368 13.8093 -6.2384 4.6840 0.7767 
0.3309 +#> 6.4043 -1.9743 -6.5194 -12.9549 3.8220 3.7224 -10.7915 -6.2787 +#> -6.5494 -8.4102 -21.8307 -23.1294 2.5929 -10.0229 10.7691 8.3459 +#> -21.3921 -1.3958 -1.6219 -39.8597 2.5127 -4.2193 -2.6451 -4.0861 +#> 2.0386 8.3716 -8.0288 -4.9455 -5.6660 4.3270 7.3855 0.0502 +#> 7.3334 -5.0550 -1.1481 4.3261 4.3941 -2.4642 1.9597 -13.8305 +#> 0.0530 -17.9078 2.2163 1.3784 -7.7722 21.9329 -5.6291 -5.8106 +#> -0.2091 -1.8925 -6.6149 1.6091 0.5615 3.1125 2.0563 -12.6482 +#> -1.7814 -1.8443 4.3231 -10.0351 -6.6395 -11.3759 13.0064 9.0888 +#> -9.4602 2.8972 3.3182 -9.6499 -27.7990 -12.7712 -8.8141 -6.8202 +#> -4.4265 0.1465 -1.8701 9.6701 3.4133 -7.3095 -11.8970 1.9456 +#> -14.8678 11.2420 -0.5675 2.2864 -15.3030 6.6797 -7.3339 -2.2994 +#> -4.6842 -2.7975 5.7364 13.4220 -10.1075 -16.4725 -10.5969 6.2391 +#> +#> Columns 33 to 40 -13.6806 0.0062 2.7024 0.1656 17.1923 10.9281 7.4626 11.8520 +#> 9.3688 -18.2322 7.8515 -6.6231 9.8467 -2.5199 -8.2668 -3.7490 +#> -1.2868 -11.7083 1.7489 -4.1703 -11.6923 3.7311 10.0102 7.9855 +#> -11.4488 -10.5165 -2.1662 -20.7559 -7.5795 -11.6906 -15.0227 5.8678 +#> 0.9158 1.0827 6.6132 10.6499 -4.5283 -5.1345 -9.6351 7.3192 +#> 0.0054 -0.5887 10.9239 5.1057 2.9807 -4.8446 4.4155 0.9054 +#> -9.0131 19.0571 6.2585 8.1577 3.4832 -1.4605 -10.9881 1.1069 +#> 6.0856 -0.1204 -19.5616 17.0739 1.2550 -7.4597 7.1107 -7.6856 +#> -14.5574 -7.5907 6.1311 2.5264 6.0238 5.5392 -4.5329 -1.7663 +#> 0.1306 2.3647 8.1794 3.8633 1.4221 -15.7879 -5.0071 -9.0464 +#> 11.7398 -15.2417 6.7309 -6.2415 14.7766 0.5277 -15.6454 -2.6727 +#> -3.5572 -5.6365 -2.0427 2.3924 -11.7509 -1.9761 -4.6798 -16.5507 +#> 7.4429 -22.6910 5.0999 -7.8902 13.8731 4.2361 -7.2994 4.6956 +#> 6.8706 8.3718 8.0796 -3.4826 2.1241 16.3710 -24.5550 6.4867 +#> 8.3631 3.0281 -4.2836 -5.7600 4.3311 3.8684 3.4780 -3.1230 +#> 9.3325 -3.9323 2.3066 -3.1985 1.0924 -3.1014 -7.3856 -8.0613 +#> -5.1521 -7.0339 4.8373 8.2819 -5.5091 0.0910 -13.6147 -5.9339 +#> -0.3512 -7.7056 -2.1338 -8.1271 -1.3670 4.7521 
6.2824 0.4692 +#> -0.0775 7.7405 -1.4074 10.9758 9.1070 1.8745 -16.8018 -17.3561 +#> -1.3179 11.8846 -0.2873 -20.0243 14.6354 -4.1182 1.5336 9.4046 +#> 10.9130 3.9092 -13.3798 17.8067 -6.8405 -5.6669 -1.3556 8.0783 +#> -1.5835 7.0228 9.7448 -9.0786 0.5883 -16.2685 3.6657 7.2246 +#> 4.0283 -12.8571 -11.8117 -5.7793 12.8297 5.8284 -1.6986 -3.5658 +#> -5.2016 1.1295 3.1815 -29.1516 15.5328 1.6484 -6.0560 4.8481 +#> -11.6748 6.3841 0.1354 -12.1163 2.2596 10.0080 -7.1320 -4.4598 +#> -6.7141 -7.8233 8.7967 -9.3728 -8.1989 -3.9306 -1.3272 10.5867 +#> 17.8738 -6.6065 -3.1639 -12.6698 14.8907 -2.1204 -20.8855 -1.2224 +#> -24.5303 -0.5188 -1.6418 6.9470 -12.8942 2.1682 -2.4463 -3.8339 +#> 4.0891 4.7385 13.1690 -3.4484 0.2950 -26.1274 5.2645 -10.7306 +#> -7.0131 -1.0200 7.1370 9.9407 -16.1058 -1.9401 8.9331 11.7031 +#> 7.1123 2.6941 -6.9144 -7.9629 -0.4457 23.3039 -3.6105 -9.2034 +#> -3.7213 -0.0295 -2.0509 0.4072 7.5788 1.3633 -11.1139 6.5488 +#> 1.5411 9.3479 -3.3330 9.7347 -10.2704 6.7816 2.0851 2.7570 +#> +#> Columns 41 to 48 2.8567 -1.2509 0.7831 0.8194 2.7188 8.6329 -9.1868 -8.1310 +#> -10.5539 0.3902 -5.4374 3.0626 10.5527 -3.2135 -8.0609 5.6467 +#> 17.3678 4.4211 -13.4232 11.7988 12.3921 7.7966 -2.1892 0.1657 +#> -14.3177 18.2527 8.0008 5.4604 -0.2969 -3.3893 9.6372 6.0779 +#> 17.2199 -11.9583 -3.8224 -4.7957 -2.8627 2.7558 -0.9646 -19.3567 +#> 0.5162 8.6835 -0.2533 -19.4177 5.4450 13.9350 1.8988 2.7825 +#> 6.9616 3.6158 -22.6991 -0.9634 5.5848 1.2298 -11.3165 0.6343 +#> -6.6978 12.4381 5.6340 1.4762 -9.8032 11.8252 19.2159 -9.1805 +#> 3.5207 -8.3466 -5.4355 -1.2235 -2.9561 2.4640 3.4395 2.7065 +#> -12.9215 -10.0668 0.5002 14.1226 6.6489 -13.0293 15.9507 25.0442 +#> -10.7093 -8.4703 -9.3923 -7.2155 -9.0077 8.0308 -16.2839 6.2839 +#> 3.2378 -5.2749 -5.6435 2.7889 -13.7544 -2.0767 15.1048 2.6578 +#> 7.3178 9.6251 -1.1023 -1.0402 -5.3451 -8.2557 14.3928 7.5724 +#> 3.8133 -1.8312 -8.7755 4.6773 6.2228 14.8890 4.6655 11.3970 +#> -2.3716 -9.3234 13.5430 -8.2350 12.4191 
-9.2580 -4.3517 -10.8570 +#> -3.6950 -7.4019 4.1029 -6.4903 -4.2991 5.4390 6.6195 25.1670 +#> 6.7634 0.2952 -15.1368 -8.9548 8.9557 1.9366 2.4054 -6.7388 +#> 3.7285 -1.9482 -3.5839 4.9337 -2.0387 -12.1164 8.1546 5.6810 +#> -1.9431 7.5485 -1.7385 -5.2008 -4.8101 10.8890 4.2470 -6.5374 +#> -4.1856 10.4930 10.8537 13.1265 1.2434 10.1904 -12.1159 -1.9328 +#> -1.4070 2.7280 4.4063 -2.0554 -4.2765 -5.4015 -0.9147 7.3790 +#> -0.6859 -18.6484 4.5034 -4.0111 5.4810 -5.7982 22.5001 -1.1958 +#> 7.4635 10.7947 -11.7759 -13.4256 -14.8836 -8.6435 4.0845 -16.3680 +#> -16.4932 1.9518 -8.3171 3.2632 5.2895 -7.0985 7.4600 -14.5847 +#> -2.4035 22.0945 17.4568 -1.2950 7.9909 -0.7241 15.2463 -0.7937 +#> -6.5223 -4.7757 -4.5651 -5.2934 -7.2794 0.2040 -2.0507 2.6962 +#> 1.7143 6.3054 13.2956 6.2607 3.7199 -29.8309 2.6897 -2.4614 +#> 3.4529 4.6949 -7.0161 7.2513 8.2711 3.9631 -0.0563 7.0867 +#> -12.2865 -5.8011 4.6900 -1.8933 -1.1135 -6.4920 5.6744 -16.6353 +#> -5.8497 0.6882 -14.7212 10.2791 -5.5395 -6.1162 -1.8134 -5.8064 +#> 3.7475 3.9480 -15.1052 0.0570 -11.1260 3.7976 -13.5154 -0.3392 +#> 3.8720 -1.8354 -1.9172 -14.5821 -1.8462 -5.6563 1.7059 0.3672 +#> -16.0087 0.7300 1.3261 3.0401 8.6503 15.3800 -11.1643 -4.5400 +#> +#> Columns 49 to 54 6.5568 -9.7026 -7.1953 6.3538 -7.4264 -0.9708 +#> 30.9744 -6.4547 12.6697 3.6079 0.0469 -0.7767 +#> 9.4024 -16.0238 8.9534 -0.2179 5.3167 1.5216 +#> 1.2299 -3.9890 -6.1065 -1.6290 0.9819 1.2173 +#> 2.6867 15.8257 -2.8357 -2.2088 -2.5515 8.3031 +#> -6.9879 -6.5174 -16.0876 -5.6866 5.9698 -3.6638 +#> -1.0844 -14.0817 -9.1045 11.2855 -6.7743 -1.6082 +#> -2.5941 8.7188 6.2116 -7.3284 0.8581 5.6757 +#> -4.8610 16.6501 -9.9816 -4.9624 3.7696 -0.2728 +#> 1.7523 11.3379 1.6448 13.4161 1.0309 2.6940 +#> 4.0764 2.5950 4.9161 2.0570 7.1652 -0.3929 +#> 2.9331 6.7424 -0.0478 8.2542 1.1336 -2.3051 +#> -1.6343 -7.1790 6.0740 -15.0653 2.8794 -1.6636 +#> 3.9545 5.8668 1.1831 1.5058 -8.0106 6.4086 +#> -3.7696 5.5248 -7.8983 7.8554 3.2396 -3.2321 +#> 10.0122 6.1376 
7.4421 10.4932 2.8359 3.4674 +#> 18.6560 -3.6614 0.6000 4.3321 3.1300 0.5418 +#> -8.0462 -2.4664 6.4827 2.6765 -7.8352 1.4117 +#> -7.2277 8.3355 -2.8573 2.8742 3.5099 2.4450 +#> 3.2092 -14.4512 8.0362 -14.1675 -9.4150 -0.8199 +#> 8.4226 13.4240 -4.0751 1.4271 -1.3003 -2.6394 +#> -20.1595 -0.7043 20.5986 -1.4484 -1.9009 4.3301 +#> -0.9838 0.8683 -8.6424 0.5942 5.8201 1.5375 +#> -4.6741 -4.9432 11.1699 -1.8994 -1.3315 5.7283 +#> 15.8932 -5.5368 6.4147 -10.6884 2.2649 -5.7028 +#> 4.7088 -8.3024 2.7415 0.9052 8.9913 -0.3258 +#> -22.7436 15.3307 1.5748 -10.1957 -14.2503 3.2781 +#> -4.2070 -8.0638 2.4944 4.2893 -3.7130 -11.7386 +#> 16.2969 2.3844 9.4157 4.1409 15.5271 -0.2734 +#> 0.3184 2.1784 -1.5733 9.0323 0.8671 -2.6642 +#> 2.9546 7.9019 2.1216 2.3385 -1.4722 2.2566 +#> 7.2053 -3.8532 3.4720 -1.0322 -2.0542 -0.4322 +#> 1.9949 14.8634 0.7372 -6.6369 1.6521 3.1908 +#> +#> (16,.,.) = +#> Columns 1 to 8 1.7697 7.8231 8.6120 5.2963 -1.2343 -0.9777 -4.0528 -7.1771 +#> 2.3795 -3.7493 11.5089 12.0890 -9.3660 7.1136 -12.0514 4.4451 +#> -1.8789 -1.2572 -0.6798 -13.7703 24.1168 5.1803 -3.9050 0.9457 +#> 1.9866 -5.0244 -22.3673 -3.2447 -34.5358 -21.4472 -0.6213 -8.1068 +#> 2.6026 4.2723 1.5155 0.1620 9.4152 12.7744 3.3296 12.9299 +#> -0.3071 -4.6883 -0.1982 -2.4439 1.0588 8.3292 8.6379 0.2811 +#> -2.8710 4.8084 -2.9594 -28.5942 -4.3785 -3.7145 1.0101 -3.0829 +#> -0.3498 -8.4707 1.7360 9.8888 -6.2180 3.2786 2.4448 -1.2863 +#> -4.0379 5.9845 6.6338 -5.8816 4.7582 14.4587 -8.1834 -6.8522 +#> -8.8423 8.8415 14.1958 8.3647 12.2898 -8.2332 3.3496 -2.9632 +#> 2.0142 -1.0245 -7.4184 16.0781 -3.5417 0.5323 -8.1766 0.3534 +#> -2.4849 4.1325 -2.9216 -17.0117 3.2954 -12.2175 -9.8271 -2.0682 +#> 0.9173 4.2991 6.8886 11.4014 -2.7774 -1.0993 -6.8784 -2.8886 +#> 13.4337 11.5280 -9.8062 15.3525 13.9176 0.1049 -3.4438 -5.9171 +#> -1.0894 -7.1077 -9.6651 -9.9595 2.7730 -16.9990 -22.5391 5.3471 +#> -4.5623 -7.1814 5.0093 0.2806 -5.7331 -14.1492 -6.9815 -8.1291 +#> 1.2719 1.1986 -14.1326 -14.5914 
8.7864 5.2089 -0.7267 -10.3378 +#> -7.0493 -2.1834 4.9225 -16.6646 12.1307 -15.3796 -12.8936 -4.1587 +#> -6.9274 0.5319 13.3562 0.6256 -5.4126 2.4206 -0.0722 -7.8487 +#> -1.1814 -2.7907 7.6144 6.9151 -9.7289 8.8656 7.7155 7.1346 +#> -6.4354 -4.7063 3.8487 -4.7140 4.7482 -13.2276 10.2124 -2.0034 +#> -0.5469 -4.1568 2.5225 -10.3752 -3.8306 2.1754 14.3554 8.1533 +#> 3.1296 4.0774 10.4607 8.8223 11.7213 5.8240 -3.9445 11.0766 +#> -2.0549 -1.4588 -7.5864 -6.6973 -6.0404 -0.5402 -4.5653 4.3114 +#> -2.2802 -1.8898 -5.0546 5.5354 -11.9697 10.7120 14.6425 -12.6019 +#> 5.8921 -4.1813 -14.2185 2.9851 9.3341 2.8864 7.5513 0.0464 +#> 0.6838 -0.4889 4.8951 4.5487 -10.2015 2.1203 11.6653 4.5425 +#> 4.1227 1.1392 -9.5394 -2.0351 2.2566 -7.8231 11.2073 6.9540 +#> 2.3322 -5.5290 -1.9318 -17.9380 -17.8237 16.7737 0.6159 4.8332 +#> 4.7504 8.8325 0.5344 -3.4972 -0.5738 13.3032 -4.2240 8.0981 +#> 0.2960 2.7964 -8.7055 2.4322 2.3106 4.6955 -8.5344 4.2836 +#> 6.4717 8.9236 14.2096 -9.3419 7.2685 -7.3669 16.0820 2.5748 +#> 3.4450 5.1718 -7.1637 -7.9680 -3.8327 -0.6678 -1.3635 -7.9201 +#> +#> Columns 9 to 16 9.9000 -1.9064 13.9795 15.4671 -7.0296 4.4961 1.0969 4.8526 +#> -10.1480 6.5909 -9.2743 6.3157 -1.2375 5.5422 -2.0735 -7.6841 +#> -1.6440 -20.5329 -8.7478 13.6704 5.3678 -5.4746 0.1420 -10.4349 +#> -4.6557 -5.9320 -7.3280 -2.6079 -5.7878 9.3410 -24.6938 17.9438 +#> -7.7938 8.5064 -3.1349 5.0849 0.6603 7.5690 18.2352 -13.2141 +#> 16.4708 3.4304 1.5029 -13.6260 6.8327 12.9616 4.4406 -11.4976 +#> 37.2328 -3.8313 -6.8786 -3.6172 6.9452 -2.2848 9.2737 -0.4529 +#> -8.7894 3.5348 6.9157 -2.5667 -13.7301 -8.0534 1.5875 -2.3443 +#> -8.4161 3.5718 -1.2136 12.7011 -5.9564 -4.3919 -1.4307 17.0527 +#> 9.6242 3.7935 -0.7158 -14.5228 0.2585 20.8162 -6.8776 -16.3709 +#> -1.3885 -4.4480 -11.0857 12.8871 8.8989 8.2894 3.1028 15.7340 +#> -7.8462 3.0626 -5.0693 -1.3555 -4.7493 -18.0875 -5.3732 -8.5544 +#> -12.8403 2.9894 -4.3974 17.3711 -6.7865 11.2650 -7.1493 19.7333 +#> -4.5397 23.2233 13.6957 11.3891 
1.4535 -0.5644 0.8427 10.4301 +#> -9.1539 4.0060 3.8074 -1.1349 -0.0091 -2.1499 0.4519 -3.8254 +#> 8.5675 0.6465 -17.8590 -6.1994 14.4108 11.8678 -13.6367 -8.6303 +#> -0.3827 1.0060 -11.8104 12.2828 -1.2360 1.9369 0.4711 3.2054 +#> 7.9029 4.0724 -6.1550 7.3966 -8.7446 -16.2476 8.1946 -2.3873 +#> 3.5166 4.5060 -9.2269 -4.2032 6.4045 -7.9896 -4.8831 5.1163 +#> 6.5437 -4.3654 0.4464 13.0827 -0.4307 -0.9888 3.0257 7.4889 +#> 3.5640 14.2778 0.9009 -16.2148 3.9189 0.6202 10.9532 -6.6318 +#> -4.4477 -14.7847 -6.1820 14.1875 5.7793 -1.8452 -3.8695 -0.0626 +#> -6.1742 0.6028 5.8470 9.2874 8.1991 -11.7732 7.5792 14.1381 +#> -7.0286 -3.3844 -2.7282 1.9353 -10.3101 4.4140 -6.2162 7.4030 +#> -2.8200 -5.8922 5.0646 3.0160 3.3355 -3.6240 -19.4316 16.2253 +#> -8.2071 -10.2448 4.0068 -5.9674 -0.0127 -8.8853 -10.3026 1.2651 +#> -3.6192 -9.8168 6.5678 -5.7630 -16.5598 9.7662 10.2760 0.4595 +#> 0.3560 -17.6844 8.4908 0.4143 2.9337 -25.1112 -3.2149 11.9771 +#> 6.3447 1.0973 -5.5224 -16.2662 6.2528 12.0775 -12.4180 -1.4815 +#> -0.0916 -4.3156 -1.3490 -16.8514 -0.0790 -6.3456 7.8934 -4.1172 +#> -2.3289 -6.3002 -4.8253 14.1059 5.3633 -1.3675 3.6250 19.4321 +#> 7.8740 1.9631 1.2724 -21.1095 12.1956 -3.0687 15.0578 7.6955 +#> -1.9299 -1.7296 14.4124 -1.6739 12.2284 0.4329 4.1399 8.2577 +#> +#> Columns 17 to 24 -11.6009 22.0213 6.6196 2.7686 -6.7935 6.6405 -1.8731 1.5987 +#> -4.8475 1.4497 -6.1142 -9.1211 2.8824 -14.6762 4.0170 0.5676 +#> 17.7661 -10.7091 -12.5137 3.4202 4.1326 2.1232 -7.6068 -7.5023 +#> -7.5991 15.5823 -1.7418 -1.3845 8.1397 12.8041 9.0708 -10.6008 +#> -9.1976 -2.8274 8.0926 -30.0434 -4.1656 -9.8114 3.7626 -14.7293 +#> 3.9396 8.3693 -2.1834 -13.3237 6.9885 3.0003 -5.2317 -0.4947 +#> 9.4872 10.7169 5.8791 -25.3369 -1.0772 -4.1301 3.7120 -0.2123 +#> 3.6405 -4.1067 -3.4298 -12.0492 -7.5760 3.1582 -6.9279 -4.5208 +#> -6.1974 15.0876 -1.0535 -4.2255 5.1222 10.2420 2.5771 10.4022 +#> -8.4490 -0.8669 -1.6400 7.2100 -10.0771 10.1309 -4.0657 -3.3534 +#> -8.3783 -0.1408 18.0774 
-11.6504 -4.8912 2.9820 9.5248 -12.6447 +#> 2.4244 -12.6826 5.0148 1.4782 -8.0102 -2.8640 1.2667 4.6894 +#> -6.8496 7.6288 3.1449 -1.7850 -7.3471 0.8432 8.6055 -3.6856 +#> -18.5910 -6.8910 9.3741 10.0406 6.3631 -4.0381 7.5384 12.6679 +#> 6.9114 10.3067 -8.0363 1.8523 -9.8395 -6.1725 -10.5238 -4.5921 +#> -5.7217 0.4141 -5.1321 1.9550 -6.6907 -4.6277 2.3827 -7.2143 +#> 1.7261 -6.2427 5.9918 -0.0032 -9.7840 -1.1118 -1.0200 -4.4638 +#> -12.6195 -2.7205 18.5832 2.2899 -3.9882 -4.4527 -0.1273 -6.8444 +#> -6.8270 -7.2222 1.3957 -3.1436 -0.5386 -2.2124 12.1577 8.2202 +#> -8.0268 15.0382 7.8973 -13.2213 17.1631 -0.8632 0.3795 9.1084 +#> -8.8132 5.8229 4.5916 -5.5550 22.7646 -0.7858 16.6488 2.4779 +#> 4.4191 -4.5787 -1.5911 3.2636 2.8622 9.6346 0.7014 -13.4927 +#> 0.3287 -13.3972 3.8340 -10.3846 -5.9126 -6.8613 -0.2202 2.4956 +#> -7.6616 -7.8870 3.6699 -0.7547 -11.8913 -1.4331 1.9404 -3.7107 +#> 9.2290 -8.7490 0.2571 -7.0671 -5.8319 3.5638 -4.6722 6.5871 +#> 4.9291 -17.3877 -14.8163 -6.3332 3.4669 -3.2231 -2.3084 1.8338 +#> -7.1738 -0.2332 31.5866 -2.3907 -1.3109 1.2148 10.8057 -8.3530 +#> 14.8927 -31.3649 9.4766 -0.0843 7.7952 7.6698 -2.7628 0.2231 +#> 6.4920 15.8902 -7.7696 -0.1625 -9.2543 4.4658 -4.4855 13.4814 +#> 5.9749 -7.7058 4.5334 -22.2708 7.1025 4.3723 2.0585 -1.6211 +#> 1.4198 -14.7659 9.4616 -2.1611 0.1507 -4.8649 -4.4704 -3.7856 +#> -2.4949 0.9292 17.9165 -4.6141 -15.8654 -16.5336 9.6175 -8.8211 +#> 10.4338 4.0037 6.6796 -19.1662 -4.6070 7.0008 -3.3524 -6.1062 +#> +#> Columns 25 to 32 9.6566 -9.1663 10.9344 11.7395 4.9549 -7.5276 -1.2756 18.8765 +#> 7.4470 -8.4346 0.2678 5.7281 -5.2281 -9.0784 -12.0160 8.6164 +#> -6.7503 6.9754 -8.3045 18.7376 3.1970 4.1477 8.0606 -6.6611 +#> 5.8657 -14.5795 3.7240 1.0550 -1.9824 12.4239 -9.2815 -6.5168 +#> 7.8076 0.5792 -5.2520 7.5196 3.5677 -12.2524 7.8021 -7.0846 +#> -0.0678 -7.5979 8.7863 -5.6260 -11.0602 -6.0723 -0.7457 3.0615 +#> -8.5058 -9.4856 -1.8653 2.1144 17.7533 -2.5741 -5.2134 7.3999 +#> 13.6945 5.5516 -14.6413 
-4.7243 -4.3727 -4.4692 -5.5395 2.4296 +#> 3.5659 1.1249 5.7436 -0.4159 -13.8032 -4.7807 -7.3343 6.7135 +#> 12.9533 -0.0736 -4.9114 -10.2170 -3.7267 -5.5711 7.0701 6.4046 +#> 3.2389 15.2308 1.3515 8.2620 -2.8910 -9.5465 -19.5893 1.4798 +#> 1.1973 1.3824 -8.8070 3.6789 -2.8311 -9.6534 10.1234 -7.3253 +#> 12.5387 -10.9293 -11.7844 15.2538 -11.6515 -9.2029 -10.9647 1.2111 +#> -5.8873 0.6263 13.7267 -3.3958 -10.3882 11.6040 10.8817 6.1875 +#> 17.4590 -0.6513 -9.4424 7.7948 2.0112 -3.4352 9.7776 -17.2679 +#> -0.0230 3.7101 -14.3744 -18.1747 2.7047 -7.3458 -16.5721 11.0432 +#> -6.6831 2.1461 5.1203 3.1521 2.2618 -8.8729 -7.5402 -10.5199 +#> 19.7619 6.8662 1.3187 -12.0551 14.0766 1.9216 2.7810 5.9483 +#> -6.1166 -5.5280 6.5886 -12.0131 -1.1430 -5.3102 12.8459 18.9011 +#> -0.1965 -6.7035 1.8048 9.9410 15.8474 1.5910 -11.0978 12.7264 +#> -3.9838 3.8904 1.1062 -1.3114 -0.2945 0.6846 -4.8257 -6.9986 +#> 10.7371 -5.5756 2.1448 7.2605 2.5549 3.2154 -9.1792 -8.2455 +#> -5.9145 -12.5473 8.1026 -2.6770 -0.5885 7.0260 -6.5885 -1.2770 +#> 14.0670 -12.5548 0.9392 -2.0448 -4.4855 -4.5305 -3.8521 -1.4823 +#> -8.0152 -11.9022 -3.7449 -0.0058 -7.5993 2.7174 6.7479 10.2567 +#> 0.0129 12.9285 6.7327 -2.9286 -6.9996 19.0981 0.8962 -0.3266 +#> 3.8237 0.1796 6.7072 7.8696 -0.5336 3.4412 -1.9806 7.1151 +#> -5.1513 0.2888 1.4149 1.7265 5.0009 6.4119 11.0597 4.7295 +#> -9.3078 -17.7354 1.2000 -5.0176 5.0675 -7.7543 -4.4655 -12.6498 +#> 1.6458 -2.4406 -4.0469 9.3308 -3.2241 -12.6094 5.7571 5.8206 +#> -6.1657 -9.3138 1.2429 2.0299 4.8295 9.0674 -14.2895 -15.3708 +#> -0.8646 -10.1943 1.4619 -20.2956 9.6826 -5.8096 -6.9881 0.2323 +#> 2.6498 -9.5218 2.3828 -1.3531 -0.6577 -3.0825 0.4045 -1.4742 +#> +#> Columns 33 to 40 -10.6229 8.6789 2.2178 -6.7040 -0.6326 1.5334 9.2213 -3.3000 +#> 14.4014 -11.2622 23.1654 -12.8932 -8.0920 -7.5030 -5.5508 5.0168 +#> -11.0422 -2.7758 -2.8666 4.5146 1.9365 0.2622 -8.9088 1.2884 +#> -5.3691 -15.7819 1.7578 0.9424 13.8666 3.3970 5.4180 -18.4514 +#> 12.3635 -1.1468 
11.3830 -7.2679 1.0156 -11.4125 -4.8787 -7.9729 +#> 12.3441 4.6422 5.6238 -7.8218 -10.3204 3.0823 -5.5093 13.1061 +#> -8.0159 8.0071 -5.8348 -8.1469 9.9367 11.5474 6.5032 -0.6570 +#> 16.8167 -11.1660 -9.0735 13.9219 -9.8278 1.4944 22.7669 -12.5160 +#> -14.1331 17.7856 3.6391 -12.4763 5.1527 -6.3651 2.3149 -0.9128 +#> -11.7482 18.8823 -3.9689 4.8419 -9.8544 0.2651 12.1470 -5.7802 +#> 5.8595 -2.0600 23.0567 -6.0460 4.7197 -6.5538 -12.0873 -0.9905 +#> -8.3227 7.2505 -13.9033 0.2445 6.9149 -1.6440 2.3829 -5.6416 +#> -0.1194 -12.8388 -5.2210 -14.3280 11.0063 -2.5582 -2.0446 -4.6459 +#> -1.6576 -1.8032 -2.9412 -2.9725 -0.7047 -12.1007 -11.6922 -1.1728 +#> -0.5854 -6.4267 -13.1453 -11.3883 -5.0637 -4.0518 -6.3786 1.9079 +#> -2.5605 5.2524 -0.3209 -11.5121 -6.4272 3.4591 8.9157 -4.7580 +#> -4.8789 5.7178 -0.9700 16.5166 1.4804 -9.6791 0.1136 1.4191 +#> -6.7359 17.0078 4.4160 -5.6875 6.0253 -11.5817 -11.0020 0.6016 +#> 5.3858 1.8812 -4.8278 -7.9088 5.7006 7.6922 6.1906 -2.4290 +#> 13.8951 -2.6366 13.2971 -22.1817 3.2779 11.0970 -5.5572 -4.3547 +#> -7.1838 -8.4953 -0.3669 0.8697 0.1441 -0.4647 -0.1208 -1.3417 +#> -2.5223 1.0208 -1.9705 -1.4863 6.8423 3.6362 1.7609 4.1067 +#> 1.7234 -0.5239 2.8915 -12.4322 9.1356 -5.0871 -7.3245 -5.7226 +#> 1.5437 -9.9038 6.0135 -14.2809 -2.2871 -10.2505 5.6736 -1.9195 +#> 27.0936 -7.8168 -6.7540 3.1978 -13.1858 9.4741 3.8702 5.3423 +#> 10.7769 8.8273 6.0504 5.1611 -13.6625 -21.3920 -7.8566 -6.6555 +#> 9.3609 -9.2878 -9.9304 -4.2579 7.2767 4.6373 10.4448 24.5626 +#> 3.6608 11.5316 -0.4860 17.9653 2.0688 -7.4523 -3.3428 -0.0631 +#> -19.8172 -10.8198 5.3163 -8.2135 -5.1607 18.4298 1.3843 -3.0208 +#> -6.4828 -8.7025 5.3616 -7.4629 -3.3489 7.0791 -8.1782 -11.8469 +#> -17.0357 -9.2986 -3.1121 -0.6257 9.9659 -12.5118 -10.7899 -11.4824 +#> 0.7937 -2.6290 -2.4406 -1.4290 -1.6814 1.5998 -6.5598 -6.3476 +#> -0.0380 5.8284 2.9693 6.8694 3.4494 -0.5668 -10.5894 -10.2823 +#> +#> Columns 41 to 48 6.2774 -18.1864 -3.4018 -11.2180 -3.5322 2.7174 -10.5623 
-10.6231 +#> -12.7863 18.0507 10.6248 -0.3488 -6.7826 3.7352 11.3845 -5.4283 +#> -0.1122 9.5195 -6.2941 -7.7556 4.4018 -4.6933 10.8892 1.8100 +#> 10.9189 5.0866 7.9907 18.7885 -5.5178 -6.2315 8.2512 -14.7725 +#> -8.2085 -16.0672 -7.0914 -2.8034 -7.5503 -17.5078 0.2551 17.3002 +#> -8.2185 27.4550 26.0083 8.5154 -17.4318 -0.4562 -6.6915 -8.3391 +#> -6.8927 -0.3459 7.1382 -11.8649 31.5663 22.8106 9.1958 -2.2901 +#> 14.8486 -15.4098 -0.3975 6.1668 5.2021 -15.0225 5.3419 8.6112 +#> 5.8912 -4.2721 1.5269 -18.8994 -4.8116 -13.1604 -11.2970 -6.8411 +#> -3.7090 6.0151 15.8139 0.2310 10.5546 3.2455 -5.8534 3.2205 +#> -8.9412 5.9837 -4.2832 10.5690 -9.1906 21.0550 -10.0617 -5.7039 +#> 8.3613 -2.2182 -9.7109 3.6916 15.4845 7.2722 9.6880 3.2922 +#> -12.5781 1.7351 -29.7383 -1.7174 -17.1644 -4.0509 0.9297 -28.8874 +#> 3.8587 -6.1355 10.8727 3.1289 13.5693 -12.7812 -10.7436 -2.7971 +#> -15.7318 5.4135 -1.1823 3.3775 0.0919 15.9588 -5.8201 7.5614 +#> -2.0112 15.7379 5.4613 -6.2413 -1.9344 10.4081 10.6265 14.4889 +#> -10.7885 -11.0621 3.3423 -2.7723 4.6190 5.9750 4.7556 -9.2367 +#> -2.8487 6.3066 -1.2910 1.2042 6.7052 -5.2055 -3.4929 -15.4715 +#> 13.9751 -8.3139 -9.1452 -6.8789 10.8781 5.6326 12.5587 -0.6621 +#> 1.7425 25.6232 3.9080 -5.4210 -3.4463 -2.2130 -8.6763 -4.5002 +#> -3.9253 2.7579 -6.7523 3.3445 -3.5237 -19.9601 2.2407 -15.6260 +#> -3.2751 14.7127 -2.1673 8.7985 4.7725 11.2860 -6.2253 1.1196 +#> 2.0084 1.3282 -24.1468 -17.8029 -15.3072 -0.8156 22.5892 0.8694 +#> 12.2622 29.0844 0.7350 -10.3071 -4.3900 -9.1008 -5.3611 -5.9090 +#> 6.2074 8.0331 10.4998 -5.2353 -4.3860 12.3646 20.1491 7.3873 +#> -17.4049 -6.4294 13.3101 3.0213 12.9921 -0.2051 11.9756 8.7426 +#> 0.6374 4.9446 -6.3065 -7.0719 -2.7146 -2.9173 -19.1332 -1.9017 +#> 10.5367 -12.9605 14.5342 0.4513 14.8723 1.2068 14.6907 -1.3696 +#> -10.9197 18.3182 14.4147 7.8424 -5.6563 6.5287 -7.8349 20.1241 +#> -1.4181 -6.9705 10.2070 2.5901 13.3057 -3.2291 9.3065 -12.6322 +#> 1.2362 -22.9517 3.5291 -4.7567 5.9149 -3.3526 
5.4651 -8.5231 +#> -3.6754 -11.2693 -1.1005 11.3768 -3.2981 -3.3885 0.6195 -25.9140 +#> 0.4240 -21.1537 29.8423 9.7752 -2.7528 -16.3139 8.5193 -2.7063 +#> +#> Columns 49 to 54 18.1912 -15.7907 -3.9558 -2.0804 -5.8474 -2.1283 +#> 8.8668 3.5290 12.7925 0.1398 2.6789 -1.2222 +#> 3.2999 0.0110 -12.6241 -16.9354 10.1889 -2.9761 +#> -2.1456 -8.0910 15.6478 3.4534 -9.4753 1.2913 +#> 5.5167 12.1132 -19.4272 -2.5755 3.7057 3.4676 +#> 8.5158 18.9967 17.4010 -14.7757 -5.0401 -5.8380 +#> -1.0111 2.4076 7.3687 -0.7984 -4.9909 6.1201 +#> -0.6987 2.1589 18.6585 -1.3616 2.5712 2.5530 +#> 4.8704 11.3662 -10.0319 1.9212 8.7550 1.0719 +#> 6.1440 -16.3357 22.2800 0.1487 8.5740 2.0471 +#> -1.8990 12.0370 9.7612 -6.0730 -9.3985 10.1700 +#> -6.6664 2.8795 1.7469 -0.2079 8.2883 -4.1957 +#> -18.5809 -8.5180 2.4861 -19.8535 -1.6639 -0.7353 +#> 6.4558 -3.1731 -3.2707 18.1044 -1.8837 0.5426 +#> -21.2803 11.4748 5.1035 7.9882 4.7134 4.2060 +#> 2.1991 -5.9666 11.6436 0.5247 -2.6853 3.9402 +#> 2.3018 -1.2973 0.7850 -8.8660 1.2988 3.6674 +#> -2.0062 -11.9106 11.7197 13.1610 2.9937 0.8289 +#> 11.3765 0.3339 -11.8986 -17.0903 -5.6438 5.4151 +#> -7.2570 -10.3062 -0.8249 9.2584 -5.8272 -8.5365 +#> -5.0818 -12.0503 3.3193 5.4731 -1.8446 -0.5912 +#> -2.4373 6.1826 -4.4394 -2.8937 5.4018 -0.3870 +#> 2.6183 16.8474 -4.9494 3.4744 -5.2207 -4.3346 +#> 38.2951 8.5367 6.6771 0.6938 3.2265 1.0787 +#> 2.4264 -6.7126 0.3540 -19.4928 -0.4715 -5.2072 +#> 5.8695 -15.1944 -6.8595 13.3478 0.6535 -1.8124 +#> 13.5172 -4.1587 -6.7230 -9.0951 -11.7814 0.6975 +#> -17.8280 8.9426 2.4428 -5.3610 -5.7832 -4.6514 +#> 2.4178 -4.2723 2.7156 -7.5423 7.7660 -9.2237 +#> 8.1569 2.9548 3.8302 -5.8128 3.1560 -6.0940 +#> -14.6825 -0.2284 -0.3681 9.4919 -3.2106 3.3711 +#> 25.3513 -7.2872 -0.9155 -0.2299 -12.7788 6.3386 +#> -8.1845 -6.2757 14.2294 5.2468 -3.6520 14.6932 +#> +#> (17,.,.) 
#> ... [lengthy printed tensor output omitted: numeric values for columns 1 to 54 across slices (17,.,.) through (19,.,.)] ...
3.0721 -6.3714 +#> 0.4850 -5.7412 9.6595 0.3586 2.6632 -0.3682 14.7224 -7.9278 +#> -0.5118 -11.5803 -8.1301 2.5937 -4.3635 3.2610 -15.2348 9.8542 +#> 7.7489 15.6425 -5.5420 -5.6919 2.3951 -13.3311 -11.1449 -9.0787 +#> 2.9989 -4.4263 -6.8323 -5.8438 4.0403 9.1415 -6.7349 -13.6461 +#> -1.7004 -3.3287 -21.2722 -2.5126 -7.3674 0.3515 2.0060 2.1176 +#> -5.3685 -5.4785 -6.8406 -0.1505 -9.5887 -10.1640 8.3817 -5.8131 +#> -6.4718 -12.4564 6.9947 -6.8727 10.5639 0.6349 1.7956 14.2546 +#> 1.1731 3.1685 2.2902 -6.1751 4.7103 8.4293 -5.0037 -1.2424 +#> -2.4388 14.7543 -3.9616 -8.4939 -0.0702 14.5129 -8.7235 6.5275 +#> -10.0653 -12.1855 9.9079 -1.9125 13.2866 4.6751 14.6080 -10.8142 +#> 10.3276 -0.5984 13.2652 1.9736 2.5442 -6.9664 -11.3229 5.9947 +#> -4.5489 4.9185 -3.0494 9.0023 9.6750 -27.3298 5.2614 -5.3131 +#> -2.3094 13.9642 3.0953 0.2910 -5.1558 10.8093 1.1577 -2.9486 +#> 21.2301 4.4138 4.4121 2.4246 1.7106 4.2483 18.2669 8.2993 +#> -6.8699 2.8283 7.2359 -1.4976 3.9263 0.8536 -11.3219 -6.3654 +#> +#> Columns 41 to 48 6.3987 5.3726 -15.1102 -19.9172 12.5064 6.9965 -5.7315 -17.1862 +#> 3.4244 -8.5284 -10.4238 -9.4293 2.1006 -5.2414 -12.7033 11.0285 +#> -6.0304 8.8814 0.9639 -2.4080 -5.6089 12.8117 -3.3538 -6.5833 +#> -9.0917 -10.3516 7.4486 2.0141 18.1599 3.7454 -1.9926 -8.6860 +#> 7.3703 11.7580 -1.4917 -13.5822 8.8879 9.4748 -4.8432 -1.1621 +#> 4.4972 -13.0873 9.3680 -7.3819 11.5981 -2.5588 -11.3400 6.7135 +#> -11.8895 -35.7564 22.0851 -3.2676 0.4339 9.9938 16.2517 -3.3763 +#> 6.5308 18.2512 -0.8322 10.9775 -5.0022 12.3768 -0.0811 8.4797 +#> -4.3165 11.7174 3.9955 -3.6485 2.5013 2.1386 -0.2413 -10.2440 +#> 1.9551 -6.7131 6.7738 -14.6548 -0.1942 -12.0407 -0.7043 -5.1429 +#> -16.1036 15.5812 5.1011 -16.5777 -1.5173 0.9820 3.3362 3.3592 +#> -4.8309 -0.3282 0.6207 7.7402 -2.8567 6.1089 4.6141 7.5731 +#> -11.8528 7.9254 9.5656 -1.0380 3.1958 -0.3382 2.8357 -2.7504 +#> 7.3604 -4.7194 -5.1374 8.3832 1.6657 -0.5481 16.1025 -8.5649 +#> -17.7274 -5.9748 -10.2013 -0.8172 -4.6582 
0.6032 -5.7292 14.5913 +#> -4.3346 -13.7389 -1.5869 -10.6060 7.0140 -15.5251 1.0579 7.8580 +#> -3.9522 -11.9852 12.5604 -0.1477 -7.7052 -7.5955 12.1044 0.1048 +#> -2.9733 -22.9579 14.3785 -6.0695 -6.9459 -3.2409 9.0416 2.9197 +#> 1.1180 6.9788 -3.5675 -10.0713 -11.9052 7.0531 5.3110 8.9956 +#> 2.3332 -3.0973 -14.2154 0.4811 9.7985 2.5278 -10.1085 -8.1604 +#> 19.4557 -18.9490 19.7723 15.0608 9.4433 3.3345 -3.1479 6.6671 +#> -15.0291 14.3806 7.9204 -0.8433 -3.0652 -4.9490 6.3862 2.0887 +#> 3.3946 8.2103 -2.2330 -1.1686 -6.4077 -1.8844 -0.6360 1.9359 +#> -7.2017 10.6838 9.3675 -10.5559 14.7724 -3.2499 6.3603 -1.1350 +#> 5.2416 -14.1291 -5.9369 6.3695 -8.0808 4.9498 -1.0099 4.0686 +#> 7.8008 -3.9301 -14.1962 16.6963 -2.1643 2.7470 7.2891 7.1268 +#> 9.7406 -13.0360 3.2398 -7.9581 4.9944 3.2405 -3.2256 -5.0415 +#> 0.3682 -7.9787 -5.2600 17.1641 -10.2795 1.7377 3.9875 0.1306 +#> -9.7142 8.1789 -4.8007 6.2437 17.5034 1.1674 -3.3046 -7.7050 +#> 7.5847 -7.4089 14.3233 -1.2958 7.2360 9.0604 -10.8416 1.0312 +#> -6.2819 -8.5516 1.6586 4.2253 -17.5011 9.3042 3.1412 -14.8686 +#> 8.2997 -8.1064 4.9181 -1.0426 -7.9551 4.1494 -9.7840 3.8240 +#> 18.1633 -14.6371 -4.4313 1.8154 -3.6519 5.9923 -5.1153 -9.8723 +#> +#> Columns 49 to 54 -8.4769 -5.9151 -6.4813 -7.0201 6.2623 -8.4037 +#> 0.3882 15.9138 5.6105 -3.8608 -2.2757 -5.6603 +#> -14.5014 2.0358 6.1023 2.8811 -5.4190 -6.7584 +#> 1.1881 -2.5143 -3.3320 -10.2318 -4.7125 -1.7250 +#> 2.3052 13.8123 5.7191 -0.2945 9.1147 4.2297 +#> 2.8621 -0.2156 -4.8806 6.0030 -1.1765 7.7670 +#> 20.1996 -7.7344 -6.2379 -7.5948 1.6836 15.4909 +#> -9.3062 3.6242 8.6981 -1.2253 0.9231 -2.3114 +#> -1.9740 -8.3923 -2.0443 -3.2118 12.0007 3.0823 +#> 14.1085 3.3791 6.9403 9.6675 2.2356 -2.4772 +#> -14.2471 6.0709 0.3183 11.1711 -1.1398 11.7890 +#> 7.6216 3.8623 -3.1473 -12.5835 -7.7868 2.6106 +#> -1.7931 6.3723 -3.8766 -7.6933 -2.7478 8.7902 +#> 6.1837 -12.2718 -0.4701 -4.2918 0.0584 -4.6319 +#> -1.9340 -6.0971 -3.5143 1.3975 -3.3757 -7.9841 +#> -1.1598 9.0673 
6.4370 -2.2928 -5.7345 2.5671 +#> 9.1795 3.8481 3.3573 -3.7878 4.0010 6.8017 +#> 0.2499 15.8674 8.1792 -10.0088 -4.9935 0.4390 +#> 4.9046 -3.8502 -4.8795 -3.0779 -2.4735 -2.3404 +#> 1.5899 0.0293 0.9283 -1.2674 -3.9225 0.4987 +#> -9.3770 -15.8860 -5.1442 -6.6900 -3.4330 0.5659 +#> -16.0483 -9.8292 -3.6364 16.3431 8.5265 7.9262 +#> -9.0845 3.6114 10.0347 2.8926 -1.9010 -1.4543 +#> 8.0890 3.4206 -10.5650 -0.3379 5.8540 -4.7952 +#> 16.3089 9.6797 -6.8557 -5.6673 -7.9154 1.0721 +#> 2.1187 -0.1974 -12.1989 0.4059 -10.0289 -3.6241 +#> -6.3076 0.5004 -1.2712 2.3650 -2.7705 0.4133 +#> 6.9401 10.5418 2.2367 -1.7524 -5.1332 9.8441 +#> -2.2532 -6.0632 1.6978 8.2168 3.2946 -4.0279 +#> -2.4332 1.6308 -0.5258 -3.4941 11.7292 3.0047 +#> -1.8863 -7.4249 10.6301 -0.2827 -3.7962 -0.5907 +#> 4.2639 13.4959 4.5675 7.7005 6.4981 6.6331 +#> 8.1390 8.6622 13.0797 11.7492 6.6612 4.1266 +#> +#> (20,.,.) = +#> Columns 1 to 8 -2.2653 -0.2992 -8.0840 -13.1240 2.8674 6.6295 6.5126 6.3446 +#> -1.8972 0.3906 11.1849 -3.2554 -8.5344 -9.8407 -0.4684 -9.0555 +#> 3.3541 0.8391 6.3411 6.5816 -18.8454 2.7702 -11.9616 -4.1944 +#> 1.4189 1.1001 5.9307 10.4461 20.3536 1.1633 4.4568 -6.3664 +#> 7.0339 1.1404 -16.9503 -9.7155 -2.3246 -8.8742 5.7853 4.3189 +#> -3.0903 -0.8244 7.8981 10.2077 10.2137 -8.9271 -8.1074 -3.7657 +#> 5.5382 -0.7625 -15.4976 1.2459 -8.3574 7.7494 -10.5614 9.5826 +#> 0.2973 -1.9852 6.5622 3.2223 7.0269 -2.3723 5.8789 -12.0903 +#> 9.5609 10.7393 -5.2497 -8.8528 7.8993 -0.9968 10.9202 -4.5327 +#> 3.0976 5.1308 3.9697 7.3310 11.5743 -14.8499 -11.5704 16.6631 +#> -0.0480 -1.9001 -2.3730 -3.6587 4.0397 -9.3742 -1.6229 10.8680 +#> -0.2145 -9.9279 -3.6742 0.0782 10.3799 12.2620 -3.7961 -0.7784 +#> 0.8431 -3.5641 3.2876 -1.1760 8.3219 5.5639 11.7674 1.0064 +#> -5.2871 -0.5497 -7.7351 -10.3605 2.8745 6.0945 -14.8624 18.0661 +#> -0.7404 -3.3939 -7.2218 4.7577 -4.2778 -16.8384 17.6664 3.1392 +#> 0.5205 -5.2389 7.6788 9.0223 4.5100 -5.2292 -2.7248 -7.0030 +#> 0.5769 -3.3444 10.3431 -8.5299 
-8.7415 8.7767 -4.7396 1.2269 +#> -5.8526 -7.4778 -10.2137 2.3334 5.3831 -1.5980 -8.9971 -7.5469 +#> -0.8442 0.2740 2.3468 -2.5717 -0.1457 -2.9135 -0.8418 5.4692 +#> 2.7965 5.7155 -3.7094 -8.7235 -4.5975 5.8828 -9.3944 -0.2782 +#> -2.6803 -4.8377 4.3721 17.9544 4.5883 9.8608 -1.4583 3.1014 +#> -2.8660 4.3645 18.4897 4.4416 -9.6468 -6.2421 20.4134 -8.2255 +#> 4.7237 5.2169 -1.3295 3.3900 -6.6240 5.5033 7.2892 -21.3770 +#> 7.1605 3.9767 -3.0323 -3.4185 0.6512 0.6174 0.5952 -3.9720 +#> 1.5655 6.6524 3.6685 -19.1833 -4.1078 -0.0181 -5.7368 -7.9393 +#> -5.0347 -8.6797 0.3316 -4.0256 -9.5080 -6.6997 -16.6795 -8.4676 +#> 2.8142 6.4528 -7.4576 -16.9667 21.4351 1.9185 -1.7165 14.0511 +#> -3.0426 2.9307 3.2615 -4.5143 -14.9627 -9.9714 -17.2782 0.8737 +#> 0.1046 5.6565 4.9881 -0.9736 12.6404 7.8123 14.4654 -15.1957 +#> 1.5976 2.0808 -10.7211 -5.2283 8.0706 -1.1106 -9.6745 -2.7221 +#> -2.1869 4.1961 8.8017 1.3681 -0.4700 9.7436 -4.8551 11.5580 +#> -3.5422 -2.7089 0.2502 9.7823 0.3797 16.4799 -4.6434 -8.0544 +#> -0.3372 7.9501 -8.4571 -9.9149 -5.7148 5.2510 -0.9710 9.2152 +#> +#> Columns 9 to 16 22.7046 8.8656 -1.8227 8.9593 14.7723 8.0486 -1.0943 0.0352 +#> -1.1169 2.5402 -12.4899 21.1558 -9.4494 -7.9042 -2.7939 28.6017 +#> 4.3768 -11.2553 -4.1071 18.4578 14.5702 12.0320 5.6829 -0.9867 +#> -28.0625 2.9737 4.5000 -0.2179 5.5182 -3.9749 -0.8898 4.2535 +#> -6.4355 1.6308 1.7177 -4.6168 9.9663 4.1929 -15.4372 -12.3801 +#> 8.3310 1.3950 5.3045 -9.5887 -12.1274 1.9325 -1.2257 4.4975 +#> 11.3182 32.6717 0.3343 -12.0492 8.5118 -7.2801 -9.5235 -11.3916 +#> 4.5921 -5.1121 0.4439 -18.4692 -2.2870 8.2574 -5.6966 -0.1372 +#> -3.4784 2.0897 -0.4041 -9.9791 9.7519 3.8283 14.5008 -10.6872 +#> -16.7427 -5.4824 10.4802 -18.8204 -8.0625 -12.5081 4.8563 1.9332 +#> -1.7032 12.5403 5.8618 6.3167 -9.1933 -3.0000 -11.1713 -2.9417 +#> -2.2482 -0.7298 7.2539 -7.5992 6.2598 -1.0015 4.3402 -5.9344 +#> -3.7315 20.7157 -14.7432 13.1493 -4.9645 3.1082 10.1389 5.6530 +#> 2.2658 -8.5603 -12.8248 -1.9801 
7.9876 -10.0319 -6.5442 -6.7262 +#> 5.8323 -6.0048 9.5594 6.6087 4.1530 -4.4811 -4.9229 -2.6157 +#> 0.2485 9.5105 4.0412 -8.6230 1.1599 3.0516 8.0232 -1.2285 +#> -1.9085 -0.6954 -15.3894 11.5232 -2.8657 -1.1888 -5.0973 -6.7306 +#> -7.8041 2.1794 -1.9183 -6.6390 9.9993 8.2957 11.5401 -0.0086 +#> 6.1306 12.6493 3.2182 -6.5494 3.9135 1.9680 3.3683 2.6004 +#> 1.6508 11.0516 -3.1102 1.1376 -0.2661 -7.1742 -13.7552 14.2504 +#> 0.6556 0.7736 7.4219 -3.2018 -6.1690 -7.4612 7.6728 24.6016 +#> 13.6035 11.7464 -13.2721 -14.8095 -16.9787 -2.3197 0.8223 -4.9328 +#> 12.9162 8.4816 4.4873 2.0722 6.2033 12.1520 6.3933 17.6712 +#> -16.2263 -7.0064 -7.0218 -5.2966 4.6059 16.4592 -12.8893 -4.5726 +#> -13.9331 4.3118 5.4921 8.3261 -11.1891 19.0333 10.3206 19.8337 +#> -6.0739 -7.3151 8.1623 1.1320 -2.0046 -1.4451 8.7414 20.3453 +#> 5.2604 -10.8343 -6.7325 -5.9068 -16.6203 3.7952 2.1398 2.1272 +#> -3.0736 -13.2078 1.0946 -11.4924 -0.7304 4.2453 8.1450 16.6810 +#> -3.9020 -5.9777 -14.1405 1.5603 -17.3142 -2.6642 -16.0191 -2.4234 +#> -4.2214 -0.4177 -3.9181 1.2628 6.8582 5.5985 3.2752 1.6840 +#> 7.0367 -0.9933 8.5887 9.4574 14.0041 -0.1052 -2.3672 6.7918 +#> -0.5503 -0.7571 3.0058 -5.6141 0.0150 11.2386 -7.4648 -2.9740 +#> -10.6669 -13.3201 6.4182 7.0915 9.1428 -11.3971 -16.9469 2.8021 +#> +#> Columns 17 to 24 17.6642 -0.6387 7.9046 15.2093 -7.7815 7.4469 4.1875 -1.5077 +#> -0.9758 -0.9738 -12.1521 2.7148 -15.3814 -10.5035 3.7223 -1.2998 +#> 5.9875 2.2168 6.1767 11.0110 14.6629 -17.7826 -4.9001 0.8982 +#> -5.6393 5.3944 5.7311 5.5054 10.0760 15.5363 5.9889 7.3000 +#> 8.7352 6.1695 -3.2813 -14.6480 -19.0425 3.1577 -2.7995 3.5504 +#> 2.0529 21.4071 -18.3231 -10.7602 18.3895 0.0733 -9.8752 -6.8944 +#> -6.2217 3.2915 13.7572 -13.1856 7.4382 20.4022 -8.1393 -5.9567 +#> -6.5613 9.1675 8.0228 -19.1505 -3.6138 -6.6123 -0.6475 5.2600 +#> -2.3261 3.1672 -1.2693 -19.5586 -17.0771 4.5576 5.1650 5.3382 +#> 5.9462 -1.1929 -3.4518 -9.9776 -20.9776 5.0989 -2.3550 -15.8299 +#> 1.6283 5.6687 0.4092 10.9363 
-10.4706 -13.9946 -16.0593 -8.0922 +#> 0.7369 5.6484 6.7144 -0.5455 -8.0493 13.8250 14.5871 5.5704 +#> -19.5043 0.7672 7.2171 -8.3151 -8.2818 15.7777 15.2274 15.9179 +#> -9.1341 -10.4634 -5.9890 -6.3053 -18.1182 0.9232 -7.3188 -7.4505 +#> 1.3812 -2.8491 -8.0542 -2.8909 1.9746 -10.2501 -13.2893 15.5367 +#> 15.0537 -0.8085 -4.4413 13.9893 11.6323 -9.4141 -8.2112 1.9273 +#> 3.3577 -2.5148 12.3265 -10.9512 -9.5564 8.2188 4.5326 -1.3718 +#> -6.1836 -1.9051 0.1954 -4.0796 -14.3666 17.7900 1.1285 -1.3318 +#> 13.3311 -10.3902 -10.0682 19.8027 7.1195 5.7822 -5.8048 -7.8914 +#> 6.4710 -7.7272 -3.0857 -3.4119 -13.4485 -7.9478 1.1953 8.5252 +#> -25.9261 8.1285 9.1884 -4.5096 12.1478 7.2600 -5.9244 3.5302 +#> 6.2438 5.0811 -10.1497 -11.7140 0.6813 -5.9037 -10.0189 3.9940 +#> -12.0951 4.6454 2.8733 18.5339 1.2435 -5.9668 4.1898 11.5805 +#> 8.4386 -12.2192 -22.5463 -6.2082 -2.5395 17.9824 6.2090 -5.6919 +#> 1.7823 -14.1368 2.3466 7.0545 2.1329 -6.9889 0.1930 -0.9751 +#> -1.1099 -1.4668 10.2118 -9.3422 -9.2612 -10.6328 4.2368 -21.1498 +#> -8.8242 5.1813 -1.1785 5.1899 -12.8420 -7.4009 13.8923 -0.1505 +#> -11.2040 -23.8364 10.4214 5.6040 -19.1460 -7.8605 7.8769 -7.5557 +#> 6.6239 1.6977 4.0559 1.4558 1.4975 2.3042 -0.5022 -10.7446 +#> -9.0102 18.5619 -8.8101 -10.7224 -7.9344 12.2323 -3.2101 -10.8233 +#> -3.2937 -8.6577 9.1005 26.2489 3.7980 -4.6835 -0.8302 2.7777 +#> -9.8487 16.4674 6.7647 -3.2271 -4.0097 12.5370 -4.5381 -8.1880 +#> -20.0576 -6.8400 10.2333 -15.7089 -9.9337 3.1535 -6.7635 -3.4640 +#> +#> Columns 25 to 32 5.9104 4.8221 -8.1394 -9.6691 9.3055 -5.1001 -12.2185 2.6398 +#> 3.4581 4.4536 -8.3125 2.5345 -6.5927 -0.0154 -3.3810 4.1154 +#> -6.6569 8.5043 11.9527 -9.7875 0.0155 4.9674 5.8879 -4.6753 +#> -3.0395 0.5927 -5.4095 11.2886 0.4358 4.0293 -17.5789 13.0740 +#> 2.7918 -2.7854 -7.4394 1.4016 -5.1818 10.5699 4.5737 -3.1707 +#> -7.4011 0.1835 -3.2074 10.7163 -3.9187 -10.3619 7.1848 0.9578 +#> -15.1819 -12.0398 -3.1670 6.0243 9.0537 -7.4634 -4.8999 9.0450 +#> -5.5812 
6.5045 -5.0183 8.6197 2.2698 1.6468 -10.2231 5.6208 +#> 7.5617 -10.9167 1.3932 1.0610 -17.3475 3.6534 3.8007 10.5845 +#> 7.8356 5.9945 -17.7030 1.9506 -11.7852 0.6386 -1.0682 14.8160 +#> -5.4559 -11.3433 -1.6827 1.9712 -3.0699 0.2285 -10.0801 2.3636 +#> 2.3210 -6.3367 8.0976 1.6282 4.3854 6.5529 -7.7830 -0.8202 +#> 12.3213 2.0599 1.5219 11.5187 -4.3801 -4.2886 -6.0968 13.6755 +#> 2.3666 -12.0247 -12.8151 -4.8653 -2.4643 -1.0182 -13.3706 -1.2856 +#> 10.3363 -5.3233 1.4460 4.1572 -6.3407 -2.9956 -10.2585 -32.6818 +#> 4.8838 -3.7452 4.0146 7.0686 -1.9888 -4.5866 -4.6609 -1.3247 +#> -8.7365 4.3888 -0.0985 -21.9485 -1.5032 13.5189 3.0245 11.4382 +#> -2.0524 -11.0958 -10.1485 13.3686 1.0175 -2.8466 -5.0082 1.0060 +#> 8.1681 -9.2563 9.5008 8.1686 -1.9951 1.2165 10.2728 11.6361 +#> 6.4259 0.9784 -6.1052 0.4424 6.7311 -19.1240 -2.6960 6.5045 +#> -2.6689 1.0665 0.9865 20.0187 -18.4775 -4.7755 -0.6665 -4.9734 +#> 4.6048 1.9818 0.0268 -8.6312 -12.6262 -3.2561 8.2110 6.7139 +#> -2.6962 -15.7499 0.1256 13.0784 -8.2836 10.8653 -0.3269 5.2771 +#> 9.0601 0.5579 -16.0441 -2.1294 -3.0186 -0.6771 -7.0197 3.3308 +#> 6.5242 4.0100 -5.3352 2.7399 9.5822 -3.0179 -1.2517 7.2918 +#> 6.9063 5.6368 6.3425 -8.1189 11.9978 1.3311 -0.7957 -8.6222 +#> 14.0419 19.9841 -19.2937 15.6144 6.7469 -10.1073 -3.2629 20.3081 +#> -12.2120 1.5340 3.9908 -11.4030 5.8836 5.1645 1.5510 14.7949 +#> -4.5906 -0.6133 2.1705 5.4995 4.3333 0.2632 7.6452 -1.3775 +#> -3.9061 0.9274 -1.6277 6.9139 7.2659 7.1171 -7.6749 -6.3526 +#> 4.8205 0.5059 3.3250 7.3435 -0.2742 -10.0117 -5.7437 3.8513 +#> -11.4412 1.5989 7.2549 12.2832 6.6201 4.0207 18.6231 -2.6978 +#> -3.2150 -10.5552 -14.6349 1.0026 2.9579 -3.5328 1.9846 -3.9082 +#> +#> Columns 33 to 40 -13.2848 -0.5988 -1.2989 5.8212 11.2562 -6.6643 5.2632 -3.5389 +#> -8.8260 -10.5065 -0.1619 -0.9349 -14.6161 -1.1251 1.5410 0.6819 +#> -6.9181 -12.1762 -15.0838 -13.6559 13.2063 2.2151 -5.2201 5.0383 +#> -8.2575 -2.5460 -8.3200 11.6442 0.8071 -4.8722 17.0862 3.3237 +#> 1.2646 
-15.8315 -5.4005 -8.6578 2.9465 0.6985 -7.5534 5.4819 +#> 11.0173 5.3423 -3.4031 21.6489 -10.3451 0.3455 -1.3004 8.8321 +#> -8.0202 2.6883 13.6055 -13.8843 14.5995 19.4407 8.8667 7.2221 +#> 7.3830 5.3822 -10.6590 26.2000 -5.7342 2.1643 -0.0664 -12.4417 +#> -10.7809 5.5687 7.8680 -3.7356 8.9632 8.1361 -6.8870 0.5630 +#> 6.5460 5.0822 5.1806 -2.6952 -8.1342 2.4732 -2.8841 1.8029 +#> -3.0723 -19.3079 -19.4181 -3.8867 2.4883 -0.6227 -5.4244 -0.7217 +#> 6.1192 6.0294 9.8505 -2.7279 8.4087 -0.0448 -10.4724 7.0981 +#> -21.8484 -6.8548 1.1474 -17.4633 10.1407 -5.7662 1.6476 -0.5942 +#> -18.0175 1.7014 -2.3008 -11.6641 3.7682 2.2228 7.7583 -7.2064 +#> -7.5446 -16.4989 -20.7692 -6.7358 -0.1978 -10.4529 -4.2468 -3.2102 +#> 4.4813 3.3473 5.4832 -0.1446 6.8409 -1.3990 -4.6832 10.3867 +#> -2.8606 2.7559 -14.8896 -19.1378 -5.5776 7.3396 -5.2749 -8.6807 +#> -1.9968 -0.7495 -0.2332 -8.4584 5.3272 3.6040 -5.3820 14.3607 +#> 10.6130 6.5070 2.7207 -4.3841 -2.1949 -0.3277 -1.9849 -0.0598 +#> -8.6159 -6.3805 17.1809 3.6410 -2.8060 -6.9909 2.4218 16.9963 +#> -9.5896 14.2145 6.6624 2.5391 7.3797 5.5521 10.5887 -3.8019 +#> -0.7171 4.9104 9.8122 4.7194 12.8769 -1.7959 -4.3849 1.9492 +#> -6.2699 -4.2470 -3.6704 11.3357 3.8254 -1.7273 4.2892 -0.9200 +#> 1.2836 11.5547 6.2253 12.6112 -4.6704 1.1384 -7.0601 -1.2174 +#> -6.0212 2.1315 -2.0683 10.7270 -4.7864 -5.4257 4.4478 -0.7735 +#> -6.9576 -8.7218 0.2272 -5.9443 2.5234 6.0720 -1.6406 -1.9135 +#> 9.6933 9.6212 9.9820 -1.4911 -18.6658 1.1614 1.1155 -2.1856 +#> 3.0013 -2.0622 3.7523 10.9132 -7.4350 -6.8037 6.1280 -5.5437 +#> 0.1733 2.5096 14.0835 6.6632 -10.4960 20.6756 -5.3487 -2.0246 +#> -14.5584 1.0116 -0.9558 3.5640 -12.4739 4.6679 -6.5922 -4.0250 +#> -1.1797 -8.4165 -13.5125 -15.1991 5.7454 -2.8517 2.0499 -4.1946 +#> -5.8201 5.6230 -12.1709 -12.6365 -7.3619 -1.7180 15.8541 -5.4316 +#> -8.1388 -15.1476 -14.7358 5.4903 -1.3170 0.5399 13.2027 -5.8415 +#> +#> Columns 41 to 48 -5.8575 -1.1832 -1.1470 0.8476 5.0894 -7.0232 -5.8470 0.0351 +#> 
0.7060 -6.4848 4.5100 -12.3964 -7.9843 -2.6963 2.8206 -0.0505 +#> -1.3362 13.0885 13.4296 -2.2460 10.5508 10.7067 -12.4676 -11.7519 +#> 19.4782 2.0082 7.1231 -6.6430 10.8793 5.2294 0.3712 10.6722 +#> -12.4520 -7.5845 -8.5223 -2.7500 -9.8489 0.0859 -7.9954 6.8262 +#> 1.4447 -11.1030 -12.5502 12.8197 7.5714 3.8164 14.2554 6.3675 +#> -14.6389 -15.3754 10.2145 -14.5279 -10.8230 8.4901 15.4390 -2.5918 +#> 0.1149 -12.9130 7.2866 8.6505 0.4887 5.5779 -9.7180 -0.8259 +#> -6.8633 8.7274 3.1915 -1.6969 2.7831 -8.8030 4.1106 -8.4376 +#> 22.3276 -3.8989 2.7296 -11.7009 -3.6822 -7.7497 10.0564 2.5126 +#> 4.9275 -16.9212 -16.3241 -19.8715 -5.4912 1.6888 -5.5892 -2.8319 +#> -1.2967 3.0875 -5.0835 7.5323 -2.7911 -12.0536 -2.9674 5.9111 +#> 0.2800 4.3716 9.9010 -11.1197 14.2852 -4.9287 -8.5188 3.6311 +#> -13.7508 -13.0850 -1.6860 -3.5667 -6.7858 -9.7164 1.1143 -4.1267 +#> -13.2925 -8.6209 -12.0692 -3.1955 -2.8567 2.6562 4.9640 -6.7264 +#> -4.2125 4.9015 -9.4357 9.5524 3.9941 0.3523 -3.1834 -0.7954 +#> 0.3652 2.4108 5.1524 -1.3808 -15.2275 4.4895 4.9816 -14.0834 +#> 1.5387 -1.4155 15.2860 -1.2077 -16.7041 -10.3103 16.2057 -7.0775 +#> 18.3921 10.9614 -7.6098 6.7147 -8.5978 -4.9138 -15.3389 -5.3784 +#> 1.4735 -8.5954 -0.7906 1.1221 8.3126 -5.6129 -1.0612 21.9490 +#> 7.4943 1.1357 9.9221 -5.0446 -0.3586 -8.5198 0.8268 0.6990 +#> -10.2649 9.8347 1.0324 -5.3509 8.6235 11.2996 -2.2346 -1.9970 +#> -9.4363 4.5089 9.2187 5.0147 -3.4025 -8.3976 -7.9586 -11.1828 +#> 1.3212 0.6170 5.6923 9.9970 -6.6220 13.6253 -7.1009 -0.0962 +#> 23.5576 -2.1115 2.2221 27.7146 22.0444 2.3116 -4.4454 4.0914 +#> -7.8340 6.7754 -4.6155 3.6454 -8.7927 12.3246 -5.2933 -3.2191 +#> 6.2377 3.3940 -1.4850 10.7535 7.7656 -7.3435 11.8374 7.3073 +#> 0.3927 5.3734 21.5026 10.2683 -4.4226 6.6118 4.6513 -5.3856 +#> -9.4217 1.2088 -3.5455 11.8381 8.1124 2.5137 10.4823 4.9352 +#> -2.2640 -10.7916 10.9737 -6.8278 8.6452 1.7453 10.1867 2.6673 +#> 5.7317 -0.8226 2.8983 -13.4839 -3.5434 -12.2212 -1.9737 0.7393 +#> 0.4864 -7.3695 
-7.6554 6.3468 -11.5907 9.2358 8.0288 4.5832 +#> 6.4283 -12.5552 -8.4994 -3.4785 -3.2836 2.6417 6.6195 11.7416 +#> +#> Columns 49 to 54 9.1007 -3.0177 -1.8838 7.2480 0.1200 3.8837 +#> -0.5773 -1.0676 -11.7548 4.0879 -0.9830 1.8452 +#> 6.2033 -3.8315 -7.3474 1.4246 -4.2984 -1.4961 +#> -4.4075 -11.2964 4.7264 -0.3367 2.2216 -2.9164 +#> 12.0234 -2.5118 -5.9832 -6.1021 1.3594 -0.4622 +#> 1.9878 7.5738 8.7366 2.0166 -6.1659 -2.0650 +#> -3.9390 11.0900 10.9211 -1.2368 6.4134 0.9693 +#> -8.5893 -4.2376 -12.2887 5.3077 -0.9279 -4.4723 +#> -0.8214 3.1198 -1.0422 7.3185 -0.3208 -1.2252 +#> -10.6750 -4.2415 -9.7995 -6.6033 -2.4468 -3.4383 +#> 12.7180 -0.8569 -3.2433 -3.1250 3.1652 4.3897 +#> -16.0551 -1.9551 2.3425 -2.2168 -1.2255 -1.9735 +#> -12.2473 -3.5930 -0.2202 5.3557 -1.5723 2.4245 +#> -12.9533 12.3988 -0.7214 -0.0127 0.2706 0.4035 +#> 10.6038 -2.0731 1.2670 -7.3337 9.1796 4.5962 +#> 0.9354 -14.4622 -1.1519 -7.8692 -1.0576 -0.0993 +#> -12.8159 20.0985 -5.1880 -4.4215 -4.0017 4.3007 +#> 1.0518 4.3241 0.5598 -0.7126 -1.0072 -2.8076 +#> -1.9028 16.1889 0.6727 -5.7127 -2.5740 1.3590 +#> 6.2794 -16.1370 3.9592 7.2018 -0.9700 -2.1768 +#> -1.6357 0.3465 -2.6915 -0.4033 2.5698 -2.8284 +#> 15.0365 -9.8127 -5.0431 -6.2516 0.8636 -1.0819 +#> 1.7100 5.7835 10.4537 5.7782 7.6487 3.6805 +#> -0.0082 9.7222 -5.5707 0.4330 -10.3284 4.2040 +#> -17.7295 -8.6692 3.4524 4.9592 -0.3906 2.4489 +#> -1.5549 8.2899 1.8337 0.1230 5.6002 -5.3337 +#> 20.2444 -7.2078 -2.7447 -4.5550 -1.5866 0.3140 +#> -15.8512 9.4708 10.1017 -3.2339 6.9926 -2.8168 +#> -4.4492 -17.7566 3.1178 1.5984 -5.2506 1.2193 +#> -4.2897 -1.4693 -6.3019 8.7731 -5.6519 0.7179 +#> -6.3805 13.5619 3.4258 1.9243 2.3275 4.2455 +#> 1.6612 4.9800 5.4945 -6.7546 4.0067 0.8389 +#> -13.7101 -0.5703 -8.4804 3.0112 7.8513 -3.5254 +#> [ CPUFloatType{20,33,54} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv_transpose2d.html b/static/docs/dev/reference/torch_conv_transpose2d.html new file mode 100644 index 0000000000000000000000000000000000000000..34e643b75939bb87c1c87ba3e48958b438cad89d --- /dev/null +++ b/static/docs/dev/reference/torch_conv_transpose2d.html @@ -0,0 +1,347 @@ + + + + + + + + +Conv_transpose2d — torch_conv_transpose2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_transpose2d

    +
    + +
    torch_conv_transpose2d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  output_padding = 0L,
    +  groups = 1L,
    +  dilation = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

    weight

    filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kH , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

    padding

    dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0

    output_padding

    additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

    + +

    conv_transpose2d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

    + + + + +

    Applies a 2D transposed convolution operator over an input image +composed of several input planes, sometimes also called "deconvolution".

    +

    See nn_conv_transpose2d() for details and output shape.

    + +

    Examples

    +
    if (torch_is_installed()) { + +# With square kernels and equal stride +inputs = torch_randn(c(1, 4, 5, 5)) +weights = torch_randn(c(4, 8, 3, 3)) +nnf_conv_transpose2d(inputs, weights, padding=1) +} +
    #> torch_tensor +#> (1,1,.,.) = +#> -2.9663 -1.2733 -3.9435 -0.5276 2.6750 +#> -1.2722 1.0069 -5.7263 2.7787 -0.1404 +#> -1.7860 -3.8682 4.8713 1.1316 3.8511 +#> -0.0010 5.3181 -0.6754 2.5730 -2.4711 +#> 0.5885 4.9659 -3.5166 4.5335 -3.7058 +#> +#> (1,2,.,.) = +#> -5.2250 0.2726 8.2119 -6.4099 5.9053 +#> 4.6700 5.1411 -1.5878 3.8120 0.1009 +#> -3.5450 -5.0390 9.2086 -2.9578 -0.5509 +#> 8.0991 -14.7839 -3.2309 -12.1295 2.1448 +#> 0.1715 2.9083 -1.4596 -1.0306 -2.7095 +#> +#> (1,3,.,.) = +#> 3.1749 2.3139 -8.2462 -6.3721 0.0100 +#> 0.8951 -3.3390 0.3912 -8.3079 1.3706 +#> 3.5525 3.0387 6.1074 6.3115 -2.0431 +#> -3.2812 2.3111 2.1700 3.3244 4.8020 +#> 5.2995 -1.5489 -1.4167 -6.5637 -0.9624 +#> +#> (1,4,.,.) = +#> -10.0481 7.0235 10.5055 -2.0748 -3.7513 +#> 8.1387 -0.4205 -3.0138 -10.8628 5.5135 +#> 2.9024 -5.1426 -1.6595 4.3741 -5.7961 +#> 4.4890 -14.6718 5.5729 4.6866 -0.7137 +#> 1.1518 3.0874 0.7841 0.3746 0.9620 +#> +#> (1,5,.,.) = +#> 0.9123 -8.0814 1.0808 7.4471 0.6151 +#> 2.1226 2.7764 11.3909 4.8650 -4.3402 +#> -11.2861 4.5080 -7.8865 5.5343 -9.9356 +#> -1.9028 -1.0153 -3.6493 -7.0338 -8.3456 +#> -3.5633 -8.6804 -7.8108 6.3765 2.8930 +#> +#> (1,6,.,.) = +#> -0.5542 1.5379 3.6290 -4.7070 -6.1971 +#> 15.2333 1.0645 0.0866 -3.6305 0.7299 +#> -0.0833 -10.8658 -6.3612 0.5463 2.2777 +#> -4.8246 2.8323 -2.6610 -1.3371 -0.6676 +#> -5.5904 6.6461 2.8359 5.6594 -0.5057 +#> +#> (1,7,.,.) = +#> 2.0289 -0.2122 4.9581 0.2454 1.7928 +#> -4.9431 -0.3061 5.8549 0.7861 -2.7473 +#> -3.8568 -3.2374 -1.5201 -1.2617 -7.8991 +#> 1.0549 1.0301 3.8253 1.0461 -6.5575 +#> -3.1756 2.8057 -2.0838 7.5251 3.8319 +#> +#> (1,8,.,.) = +#> 7.6274 4.6864 -5.2624 0.0808 -3.7774 +#> -0.0753 -18.2773 -7.6814 -6.3614 1.1808 +#> -0.9235 9.3308 -1.1801 -6.3923 2.6811 +#> -0.3068 9.7232 2.0825 5.2834 8.0793 +#> 5.7741 8.8292 2.7251 -3.5073 0.5215 +#> [ CPUFloatType{1,8,5,5} ]
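The spatial size printed above follows the standard transposed-convolution formula, `out = (in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1`. A minimal base-R sketch (the helper name `conv_transpose_out_size` is illustrative, not part of the torch API) checks this against the example, where a 5x5 input with a 3x3 kernel and `padding = 1` stays 5x5:

```r
# Illustrative helper (not part of torch): expected output size of a
# transposed convolution along one spatial dimension.
conv_transpose_out_size <- function(in_size, kernel_size, stride = 1,
                                    padding = 0, output_padding = 0,
                                    dilation = 1) {
  (in_size - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

# Matches the example above: 5x5 input, 3x3 kernel, padding = 1 -> 5x5 output.
conv_transpose_out_size(5, 3, padding = 1)
# With stride = 2 the same settings upsample the input to 9x9.
conv_transpose_out_size(5, 3, stride = 2, padding = 1)
```

The same formula applies independently to each spatial dimension, which is why the example returns a tensor of shape `(1, 8, 5, 5)`.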
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_conv_transpose3d.html b/static/docs/dev/reference/torch_conv_transpose3d.html new file mode 100644 index 0000000000000000000000000000000000000000..a42abc0a05a816bee34467989d5df0f3c2b2be1c --- /dev/null +++ b/static/docs/dev/reference/torch_conv_transpose3d.html @@ -0,0 +1,291 @@ + + + + + + + + +Conv_transpose3d — torch_conv_transpose3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_transpose3d

    +
    + +
    torch_conv_transpose3d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  output_padding = 0L,
    +  groups = 1L,
    +  dilation = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

    weight

    filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kT , kH , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

    padding

    dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

    output_padding

    additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

    + +

    conv_transpose3d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

    + + + + +

Applies a 3D transposed convolution operator over an input image +composed of several input planes, sometimes also called "deconvolution".

    +

    See nn_conv_transpose3d() for details and output shape.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +inputs = torch_randn(c(20, 16, 50, 10, 20)) +weights = torch_randn(c(16, 33, 3, 3, 3)) +nnf_conv_transpose3d(inputs, weights) +} +} +
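The example above is guarded by `if (FALSE)` because running it is expensive, but its expected output shape can be worked out by hand: with the default `stride = 1`, `padding = 0`, and `dilation = 1`, each spatial dimension grows by `kernel_size - 1`. A base-R sketch of that arithmetic:

```r
# Expected shape of nnf_conv_transpose3d(inputs, weights) in the example
# above, from (in - 1) * stride - 2 * padding + (k - 1) + 1 with defaults:
out_spatial <- c(50, 10, 20) + (3 - 1)  # each dimension grows by k - 1 = 2
out_spatial
#> [1] 52 12 22
# Full shape: minibatch 20, out_channels 33 -> (20, 33, 52, 12, 22)
```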
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cos.html b/static/docs/dev/reference/torch_cos.html new file mode 100644 index 0000000000000000000000000000000000000000..e707edadd23c570851ce7dab9bb1dfdc09aec09f --- /dev/null +++ b/static/docs/dev/reference/torch_cos.html @@ -0,0 +1,259 @@ + + + + + + + + +Cos — torch_cos • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cos

    +
    + +
    torch_cos(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    cos(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the cosine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \cos(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_cos(a) +} +
    #> torch_tensor +#> -0.7828 +#> 0.9242 +#> 0.9376 +#> 0.9922 +#> [ CPUFloatType{4} ]
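The operation is elementwise, so for a CPU tensor it agrees with base R's `cos()` applied to the same values; a quick torch-independent sketch of the elementwise rule:

```r
# Elementwise cosine in base R for comparison; cos(pi/3) is 0.5 and
# the result always lies in [-1, 1], as in the tensor output above.
x <- c(0, pi / 3, pi / 2, pi)
cos(x)
```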
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cosh.html b/static/docs/dev/reference/torch_cosh.html new file mode 100644 index 0000000000000000000000000000000000000000..071ccb36dbd7d55d1b86e6f6dcb34e29045ccf50 --- /dev/null +++ b/static/docs/dev/reference/torch_cosh.html @@ -0,0 +1,260 @@ + + + + + + + + +Cosh — torch_cosh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cosh

    +
    + +
    torch_cosh(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    cosh(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the hyperbolic cosine of the elements of +input.

    +

    $$ + \mbox{out}_{i} = \cosh(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_cosh(a) +} +
    #> torch_tensor +#> 2.4203 +#> 1.7086 +#> 1.1125 +#> 1.1147 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cosine_similarity.html b/static/docs/dev/reference/torch_cosine_similarity.html new file mode 100644 index 0000000000000000000000000000000000000000..294ef6ba26742a8f4fa8502c2ce5b527438059c9 --- /dev/null +++ b/static/docs/dev/reference/torch_cosine_similarity.html @@ -0,0 +1,368 @@ + + + + + + + + +Cosine_similarity — torch_cosine_similarity • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cosine_similarity

    +
    + +
    torch_cosine_similarity(x1, x2, dim = 2L, eps = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) First input.

    x2

    (Tensor) Second input (of size matching x1).

    dim

(int, optional) Dimension along which cosine similarity is computed. Default: 2

    eps

    (float, optional) Small value to avoid division by zero. Default: 1e-8

    + +

    cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor

    + + + + +

    Returns cosine similarity between x1 and x2, computed along dim.

    +

    $$ + \mbox{similarity} = \frac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +input1 = torch_randn(c(100, 128)) +input2 = torch_randn(c(100, 128)) +output = torch_cosine_similarity(input1, input2) +output +} +
    #> torch_tensor +#> -0.0857 +#> 0.0101 +#> 0.1649 +#> 0.0623 +#> 0.0738 +#> -0.0599 +#> -0.0418 +#> 0.0788 +#> 0.0598 +#> 0.0059 +#> -0.0343 +#> -0.0889 +#> 0.0976 +#> 0.1223 +#> 0.1000 +#> -0.0479 +#> -0.1086 +#> 0.2057 +#> 0.1539 +#> 0.1357 +#> 0.0878 +#> 0.0818 +#> 0.0544 +#> -0.0295 +#> 0.0689 +#> -0.0774 +#> 0.0074 +#> -0.0211 +#> -0.0021 +#> 0.0976 +#> 0.0913 +#> -0.0176 +#> 0.2034 +#> -0.0049 +#> -0.0285 +#> 0.1157 +#> -0.1759 +#> 0.0079 +#> 0.0375 +#> -0.0873 +#> 0.0755 +#> -0.0989 +#> 0.0888 +#> 0.0649 +#> -0.0739 +#> 0.0348 +#> -0.0271 +#> 0.0272 +#> -0.1020 +#> -0.0533 +#> -0.1703 +#> -0.0898 +#> 0.1356 +#> -0.0408 +#> 0.0232 +#> 0.1387 +#> -0.0054 +#> -0.0062 +#> -0.0278 +#> -0.0370 +#> -0.1010 +#> -0.0419 +#> -0.1620 +#> 0.0823 +#> 0.0396 +#> 0.1143 +#> -0.0720 +#> -0.0441 +#> 0.0365 +#> 0.1570 +#> 0.0785 +#> -0.1786 +#> 0.0428 +#> 0.1420 +#> 0.0014 +#> 0.0639 +#> -0.0082 +#> -0.0694 +#> 0.0665 +#> -0.0191 +#> 0.0376 +#> 0.0624 +#> 0.1519 +#> 0.0230 +#> 0.0474 +#> 0.0567 +#> -0.0727 +#> 0.0950 +#> 0.1371 +#> -0.0143 +#> -0.0568 +#> 0.0634 +#> 0.1218 +#> -0.1276 +#> 0.0446 +#> -0.0189 +#> -0.0732 +#> -0.0915 +#> -0.1336 +#> -0.0561 +#> [ CPUFloatType{100} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cross.html b/static/docs/dev/reference/torch_cross.html new file mode 100644 index 0000000000000000000000000000000000000000..a1aceb77277e10dbcb2d43d1f1dfe1f9a4c3b4bb --- /dev/null +++ b/static/docs/dev/reference/torch_cross.html @@ -0,0 +1,272 @@ + + + + + + + + +Cross — torch_cross • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cross

    +
    + +
    torch_cross(self, other, dim = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the second input tensor

    dim

    (int, optional) the dimension to take the cross-product in.

    + +

    cross(input, other, dim=-1, out=NULL) -> Tensor

    + + + + +

    Returns the cross product of vectors in dimension dim of input +and other.

    +

    input and other must have the same size, and the size of their +dim dimension should be 3.

    +

    If dim is not given, it defaults to the first dimension found with the +size 3.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4, 3)) +a +b = torch_randn(c(4, 3)) +b +torch_cross(a, b, dim=2) +torch_cross(a, b) +} +
    #> torch_tensor +#> 0.1652 0.3135 0.0509 +#> -0.4376 0.1250 -0.1171 +#> 1.4616 -1.2569 1.0052 +#> 0.1030 0.3297 -0.5130 +#> [ CPUFloatType{4,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cummax.html b/static/docs/dev/reference/torch_cummax.html new file mode 100644 index 0000000000000000000000000000000000000000..3d6aaab3a138fce8eb96c6e9dcb1d884f436fc9f --- /dev/null +++ b/static/docs/dev/reference/torch_cummax.html @@ -0,0 +1,287 @@ + + + + + + + + +Cummax — torch_cummax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cummax

    +
    + +
    torch_cummax(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    + +

    cummax(input, dim) -> (Tensor, LongTensor)

    + + + + +

Returns a list of two tensors (values, indices), where values is the cumulative maximum of the elements of input in the dimension dim, and indices is the index location of each maximum value found in the dimension dim.

    +

    $$ + y_i = max(x_1, x_2, x_3, \dots, x_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cummax(a, dim=1) +} +
    #> [[1]] +#> torch_tensor +#> -1.8418 +#> -1.2347 +#> -0.2324 +#> 0.8768 +#> 0.8768 +#> 0.8768 +#> 0.8768 +#> 0.8768 +#> 0.8768 +#> 1.0924 +#> [ CPUFloatType{10} ] +#> +#> [[2]] +#> torch_tensor +#> 0 +#> 1 +#> 2 +#> 3 +#> 3 +#> 3 +#> 3 +#> 3 +#> 3 +#> 9 +#> [ CPULongType{10} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cummin.html b/static/docs/dev/reference/torch_cummin.html new file mode 100644 index 0000000000000000000000000000000000000000..584e1db0a6bdc3fa0834b39975559684550d4d68 --- /dev/null +++ b/static/docs/dev/reference/torch_cummin.html @@ -0,0 +1,287 @@ + + + + + + + + +Cummin — torch_cummin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cummin

    +
    + +
    torch_cummin(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    + +

    cummin(input, dim) -> (Tensor, LongTensor)

    + + + + +

Returns a list of two tensors (values, indices), where values is the cumulative minimum of the elements of input in the dimension dim, and indices is the index location of each minimum value found in the dimension dim.

    +

    $$ + y_i = min(x_1, x_2, x_3, \dots, x_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cummin(a, dim=1) +} +
    #> [[1]] +#> torch_tensor +#> 1.6268 +#> 1.0973 +#> 1.0858 +#> -0.5395 +#> -0.5395 +#> -1.0003 +#> -1.0003 +#> -1.0003 +#> -1.8734 +#> -1.8734 +#> [ CPUFloatType{10} ] +#> +#> [[2]] +#> torch_tensor +#> 0 +#> 1 +#> 2 +#> 3 +#> 3 +#> 5 +#> 5 +#> 5 +#> 8 +#> 8 +#> [ CPULongType{10} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cumprod.html b/static/docs/dev/reference/torch_cumprod.html new file mode 100644 index 0000000000000000000000000000000000000000..8c95789b939a015e071d8995ffb9fb1520d1a75b --- /dev/null +++ b/static/docs/dev/reference/torch_cumprod.html @@ -0,0 +1,276 @@ + + + + + + + + +Cumprod — torch_cumprod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cumprod

    +
    + +
    torch_cumprod(self, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    cumprod(input, dim, out=NULL, dtype=NULL) -> Tensor

    + + + + +

    Returns the cumulative product of elements of input in the dimension +dim.

    +

For example, if input is a vector of size N, the result will also be a vector of size N, with elements:

    +

    $$ + y_i = x_1 \times x_2\times x_3\times \dots \times x_i +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cumprod(a, dim=1) +} +
    #> torch_tensor +#> 1.5149e+00 +#> -6.5782e-01 +#> -2.5308e-02 +#> 9.9330e-03 +#> -9.4889e-03 +#> -8.0739e-03 +#> 4.0003e-05 +#> -2.7845e-05 +#> -3.9059e-05 +#> -1.1986e-05 +#> [ CPUFloatType{10} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_cumsum.html b/static/docs/dev/reference/torch_cumsum.html new file mode 100644 index 0000000000000000000000000000000000000000..c19712d14aa63d5f13eb59efd052bd723569d8b2 --- /dev/null +++ b/static/docs/dev/reference/torch_cumsum.html @@ -0,0 +1,276 @@ + + + + + + + + +Cumsum — torch_cumsum • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cumsum

    +
    + +
    torch_cumsum(self, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    cumsum(input, dim, out=NULL, dtype=NULL) -> Tensor

    + + + + +

    Returns the cumulative sum of elements of input in the dimension +dim.

    +

For example, if input is a vector of size N, the result will also be a vector of size N, with elements:

    +

    $$ + y_i = x_1 + x_2 + x_3 + \dots + x_i +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cumsum(a, dim=1) +} +
    #> torch_tensor +#> -0.6645 +#> -1.4518 +#> -2.8377 +#> -3.4089 +#> -2.3082 +#> -4.4183 +#> -3.9439 +#> -2.9651 +#> -2.3419 +#> -3.3175 +#> [ CPUFloatType{10} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_det.html b/static/docs/dev/reference/torch_det.html new file mode 100644 index 0000000000000000000000000000000000000000..ea609ca9f1765c025bfc36fbe04ef37afd9955c7 --- /dev/null +++ b/static/docs/dev/reference/torch_det.html @@ -0,0 +1,266 @@ + + + + + + + + +Det — torch_det • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Det

    +
    + +
    torch_det(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

    + +

    Note

    + + +
Backward through `det` internally uses SVD results when `input` is
not invertible. In this case, double backward through `det` will be
unstable when `input` doesn't have distinct singular values. See
`torch_svd` for details.
    +
    + +

    det(input) -> Tensor

    + + + + +

Calculates the determinant of a square matrix or of batches of square matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(3, 3)) +torch_det(A) +A = torch_randn(c(3, 2, 2)) +A +A$det() +} +
    #> torch_tensor +#> 0.6546 +#> -0.7505 +#> -0.6785 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_device.html b/static/docs/dev/reference/torch_device.html new file mode 100644 index 0000000000000000000000000000000000000000..f90d74c82732020bc3be69caf5c5e80dff3dda46 --- /dev/null +++ b/static/docs/dev/reference/torch_device.html @@ -0,0 +1,262 @@ + + + + + + + + +Create a Device object — torch_device • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A torch_device is an object representing the device on which a torch_tensor +is or will be allocated.

    +
    + +
    torch_device(type, index = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    type

(character) a device type, either "cuda" or "cpu"

    index

    (integer) optional device ordinal for the device type. If the device ordinal +is not present, this object will always represent the current device for the device +type, even after torch_cuda_set_device() is called; e.g., a torch_tensor constructed +with device 'cuda' is equivalent to 'cuda:X' where X is the result of +torch_cuda_current_device().

    +

A torch_device can be constructed via a string, or via a string and a device ordinal.

    + + +

    Examples

    +
    if (torch_is_installed()) { + +# Via string +torch_device("cuda:1") +torch_device("cpu") +torch_device("cuda") # current cuda device + +# Via string and device ordinal +torch_device("cuda", 0) +torch_device("cpu", 0) + +} +
    #> torch_device(type='cpu', index=0)
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_diag.html b/static/docs/dev/reference/torch_diag.html new file mode 100644 index 0000000000000000000000000000000000000000..76a8d10c48b457212ace03ca851021dad6fc4b60 --- /dev/null +++ b/static/docs/dev/reference/torch_diag.html @@ -0,0 +1,258 @@ + + + + + + + + +Diag — torch_diag • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diag

    +
    + +
    torch_diag(self, diagonal = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    diagonal

    (int, optional) the diagonal to consider

    + +

    diag(input, diagonal=0, out=NULL) -> Tensor

    + + + +
      +
    • If input is a vector (1-D tensor), then returns a 2-D square tensor +with the elements of input as the diagonal.

    • +
    • If input is a matrix (2-D tensor), then returns a 1-D tensor with +the diagonal elements of input.

    • +
    + +

    The argument diagonal controls which diagonal to consider:

      +
    • If diagonal = 0, it is the main diagonal.

    • +
    • If diagonal > 0, it is above the main diagonal.

    • +
    • If diagonal < 0, it is below the main diagonal.

    • +
    + + +
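This page ships without an Examples section; the following sketch shows both directions of diag (vector to matrix, matrix to vector), mirroring the `torch_is_installed()` guard used by the other reference pages:

```r
library(torch)

if (torch_is_installed()) {
  # vector input: build a square matrix with `a` on the main diagonal
  a <- torch_tensor(c(1, 2, 3))
  torch_diag(a)

  # matrix input: extract the main diagonal as a 1-D tensor
  m <- torch_randn(c(3, 3))
  torch_diag(m)

  # diagonal = 1 selects the diagonal just above the main one
  torch_diag(m, diagonal = 1)
}
```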
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_diag_embed.html b/static/docs/dev/reference/torch_diag_embed.html new file mode 100644 index 0000000000000000000000000000000000000000..16d9f6ad16b4c21772a3e1430316e63432af0afb --- /dev/null +++ b/static/docs/dev/reference/torch_diag_embed.html @@ -0,0 +1,297 @@ + + + + + + + + +Diag_embed — torch_diag_embed • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diag_embed

    +
    + +
    torch_diag_embed(self, offset = 0L, dim1 = -2L, dim2 = -1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor. Must be at least 1-dimensional.

    offset

    (int, optional) which diagonal to consider. Default: 0 (main diagonal).

    dim1

    (int, optional) first dimension with respect to which to take diagonal. Default: -2.

    dim2

    (int, optional) second dimension with respect to which to take diagonal. Default: -1.

    + +

    diag_embed(input, offset=0, dim1=-2, dim2=-1) -> Tensor

    + + + + +

    Creates a tensor whose diagonals of certain 2D planes (specified by +dim1 and dim2) are filled by input. +To facilitate creating batched diagonal matrices, the 2D planes formed by +the last two dimensions of the returned tensor are chosen by default.

    +

    The argument offset controls which diagonal to consider:

      +
    • If offset = 0, it is the main diagonal.

    • +
    • If offset > 0, it is above the main diagonal.

    • +
    • If offset < 0, it is below the main diagonal.

    • +
    + +

The size of the new matrix is chosen so that the specified diagonal has the size of the last input dimension. Note that for offset other than \(0\), the order of dim1 and dim2 matters. Exchanging them is equivalent to changing the sign of offset.

    +

    Applying torch_diagonal to the output of this function with +the same arguments yields a matrix identical to input. However, +torch_diagonal has different default dimensions, so those +need to be explicitly specified.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(2, 3)) +torch_diag_embed(a) +torch_diag_embed(a, offset=1, dim1=1, dim2=3) +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.0000 0.2012 0.0000 0.0000 +#> 0.0000 -1.0725 0.0000 0.0000 +#> +#> (2,.,.) = +#> 0.0000 0.0000 -0.1869 0.0000 +#> 0.0000 0.0000 -0.9137 0.0000 +#> +#> (3,.,.) = +#> 0.0000 0.0000 0.0000 -0.1624 +#> 0.0000 0.0000 0.0000 0.7809 +#> +#> (4,.,.) = +#> 0 0 0 0 +#> 0 0 0 0 +#> [ CPUFloatType{4,2,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_diagflat.html b/static/docs/dev/reference/torch_diagflat.html new file mode 100644 index 0000000000000000000000000000000000000000..81229e7b7b266d0c77e23b50662d2c732d1108e4 --- /dev/null +++ b/static/docs/dev/reference/torch_diagflat.html @@ -0,0 +1,275 @@ + + + + + + + + +Diagflat — torch_diagflat • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diagflat

    +
    + +
    torch_diagflat(self, offset = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    offset

    (int, optional) the diagonal to consider. Default: 0 (main diagonal).

    + +

    diagflat(input, offset=0) -> Tensor

    + + + +
      +
    • If input is a vector (1-D tensor), then returns a 2-D square tensor +with the elements of input as the diagonal.

    • +
    • If input is a tensor with more than one dimension, then returns a +2-D tensor with diagonal elements equal to a flattened input.

    • +
    + +

    The argument offset controls which diagonal to consider:

      +
    • If offset = 0, it is the main diagonal.

    • +
    • If offset > 0, it is above the main diagonal.

    • +
    • If offset < 0, it is below the main diagonal.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3)) +a +torch_diagflat(a) +torch_diagflat(a, 1) +a = torch_randn(c(2, 2)) +a +torch_diagflat(a) +} +
    #> torch_tensor +#> 0.1689 0.0000 0.0000 0.0000 +#> 0.0000 -0.1269 0.0000 0.0000 +#> 0.0000 0.0000 0.7075 0.0000 +#> 0.0000 0.0000 0.0000 0.7293 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_diagonal.html b/static/docs/dev/reference/torch_diagonal.html new file mode 100644 index 0000000000000000000000000000000000000000..8fa49e04f16947cc8eac876058c481893902f6a7 --- /dev/null +++ b/static/docs/dev/reference/torch_diagonal.html @@ -0,0 +1,298 @@ + + + + + + + + +Diagonal — torch_diagonal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diagonal

    +
    + +
    torch_diagonal(self, outdim, dim1 = 1L, dim2 = 2L, offset = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor. Must be at least 2-dimensional.

    outdim

    dimension name if self is a named tensor.

    dim1

(int, optional) first dimension with respect to which to take diagonal. Default: 1.

    dim2

(int, optional) second dimension with respect to which to take diagonal. Default: 2.

    offset

    (int, optional) which diagonal to consider. Default: 0 (main diagonal).

    + +

    diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor

    + + + + +

Returns a partial view of input with its diagonal elements with respect to dim1 and dim2 appended as a dimension at the end of the shape.

    +

    The argument offset controls which diagonal to consider:

      +
    • If offset = 0, it is the main diagonal.

    • +
    • If offset > 0, it is above the main diagonal.

    • +
    • If offset < 0, it is below the main diagonal.

    • +
    + +

    Applying torch_diag_embed to the output of this function with +the same arguments yields a diagonal matrix with the diagonal entries +of the input. However, torch_diag_embed has different default +dimensions, so those need to be explicitly specified.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a +torch_diagonal(a, offset = 0) +torch_diagonal(a, offset = 1) +x = torch_randn(c(2, 5, 4, 2)) +torch_diagonal(x, offset=-1, dim1=1, dim2=2) +} +
    #> torch_tensor +#> (1,.,.) = +#> 1.7990 +#> -0.4819 +#> +#> (2,.,.) = +#> -0.8694 +#> -1.2228 +#> +#> (3,.,.) = +#> -0.2355 +#> 1.2403 +#> +#> (4,.,.) = +#> -1.7326 +#> -0.4145 +#> [ CPUFloatType{4,2,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_digamma.html b/static/docs/dev/reference/torch_digamma.html new file mode 100644 index 0000000000000000000000000000000000000000..d98758dd1e49246b0c7cd894f021231300849ed8 --- /dev/null +++ b/static/docs/dev/reference/torch_digamma.html @@ -0,0 +1,256 @@ + + + + + + + + +Digamma — torch_digamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Digamma

    +
    + +
    torch_digamma(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the tensor to compute the digamma function on

    + +

    digamma(input, out=NULL) -> Tensor

    + + + + +

    Computes the logarithmic derivative of the gamma function on input.

    +

    $$ + \psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(c(1, 0.5)) +torch_digamma(a) +} +
    #> torch_tensor +#> -0.5772 +#> -1.9635 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_dist.html b/static/docs/dev/reference/torch_dist.html new file mode 100644 index 0000000000000000000000000000000000000000..6aca13c162a3f97396b647384b98fe9826d9cca7 --- /dev/null +++ b/static/docs/dev/reference/torch_dist.html @@ -0,0 +1,268 @@ + + + + + + + + +Dist — torch_dist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Dist

    +
    + +
    torch_dist(self, other, p = 2L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the Right-hand-side input tensor

    p

    (float, optional) the norm to be computed

    + +

    dist(input, other, p=2) -> Tensor

    + + + + +

    Returns the p-norm of (input - other)

    +

The shapes of input and other must be broadcastable.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(4)) +x +y = torch_randn(c(4)) +y +torch_dist(x, y, 3.5) +torch_dist(x, y, 3) +torch_dist(x, y, 0) +torch_dist(x, y, 1) +} +
    #> torch_tensor +#> 3.8911 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_div.html b/static/docs/dev/reference/torch_div.html new file mode 100644 index 0000000000000000000000000000000000000000..23e9879a004fe4297e27e9044b16e2e94098cd3b --- /dev/null +++ b/static/docs/dev/reference/torch_div.html @@ -0,0 +1,299 @@ + + + + + + + + +Div — torch_div • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Div

    +
    + +
    torch_div(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

(Number) the scalar by which to divide each element of input

    + +

    div(input, other, out=NULL) -> Tensor

    + + + + +

Divides each element of input by the scalar other and +returns a new resulting tensor.

    + + +

    Each element of the tensor input is divided by each element of the tensor +other. The resulting tensor is returned.

    +

    $$ + \mbox{out}_i = \frac{\mbox{input}_i}{\mbox{other}_i} +$$ +The shapes of input and other must be broadcastable +. If the torch_dtype of input and +other differ, the torch_dtype of the result tensor is determined +following rules described in the type promotion documentation +. If out is specified, the result must be +castable to the torch_dtype of the +specified output tensor. Integral division by zero leads to undefined behavior.

    +

    Warning

    + + + +

    Integer division using div is deprecated, and in a future release div will +perform true division like torch_true_divide(). +Use torch_floor_divide() to perform integer division, +instead.

    +

    $$ + \mbox{out}_i = \frac{\mbox{input}_i}{\mbox{other}} +$$ +If the torch_dtype of input and other differ, the +torch_dtype of the result tensor is determined following rules +described in the type promotion documentation . If +out is specified, the result must be castable +to the torch_dtype of the specified output tensor. Integral division +by zero leads to undefined behavior.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_div(a, 0.5) + + +a = torch_randn(c(4, 4)) +a +b = torch_randn(c(4)) +b +torch_div(a, b) +} +
    #> torch_tensor +#> -2.0561 -18.6148 0.3729 -0.3667 +#> -2.1486 5.5361 0.6578 -0.3014 +#> 4.5507 -138.1689 -1.8987 -0.0247 +#> -0.9767 101.7390 -0.3603 -0.0525 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_dot.html b/static/docs/dev/reference/torch_dot.html new file mode 100644 index 0000000000000000000000000000000000000000..a8e72430be7a23232ec02128b6022643f96cca42 --- /dev/null +++ b/static/docs/dev/reference/torch_dot.html @@ -0,0 +1,258 @@ + + + + + + + + +Dot — torch_dot • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Dot

    +
    + +
    torch_dot(self, tensor)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) the input tensor.

    tensor

(Tensor) the other input tensor.

    + +

    Note

    + +

This function does not broadcast.

    +

    dot(input, tensor) -> Tensor

    + + + + +

    Computes the dot product (inner product) of two tensors.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_dot(torch_tensor(c(2, 3)), torch_tensor(c(2, 1))) +} +
    #> torch_tensor +#> 7 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_dtype.html b/static/docs/dev/reference/torch_dtype.html new file mode 100644 index 0000000000000000000000000000000000000000..58ebe776fc73b98a73971357dfa853fc310b36b0 --- /dev/null +++ b/static/docs/dev/reference/torch_dtype.html @@ -0,0 +1,263 @@ + + + + + + + + +Torch data types — torch_dtype • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Returns the corresponding data type.

    +
    + +
    torch_float32()
    +
    +torch_float()
    +
    +torch_float64()
    +
    +torch_double()
    +
    +torch_float16()
    +
    +torch_half()
    +
    +torch_uint8()
    +
    +torch_int8()
    +
    +torch_int16()
    +
    +torch_short()
    +
    +torch_int32()
    +
    +torch_int()
    +
    +torch_int64()
    +
    +torch_long()
    +
    +torch_bool()
    +
    +torch_quint8()
    +
    +torch_qint8()
    +
    +torch_qint32()
    + + + +
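These constructors are typically passed as the dtype argument of tensor-creating functions, or to a tensor's $to() method; a brief sketch (not part of the shipped reference examples):

```r
library(torch)

if (torch_is_installed()) {
  # request a specific dtype at creation time
  x <- torch_tensor(c(1, 2, 3), dtype = torch_float64())
  x$dtype

  # convert an existing tensor to another dtype
  y <- x$to(dtype = torch_int32())
  y$dtype
}
```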
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_eig.html b/static/docs/dev/reference/torch_eig.html new file mode 100644 index 0000000000000000000000000000000000000000..be669d1c98dd6fb1091df02c6042254a361fd947 --- /dev/null +++ b/static/docs/dev/reference/torch_eig.html @@ -0,0 +1,254 @@ + + + + + + + + +Eig — torch_eig • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Eig

    +
    + +
    torch_eig(self, eigenvectors = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the square matrix of shape \((n \times n)\) for which the eigenvalues and eigenvectors will be computed

    eigenvectors

    (bool) TRUE to compute both eigenvalues and eigenvectors; otherwise, only eigenvalues will be computed

    + +

    Note

    + + +
Since eigenvalues and eigenvectors might be complex, the backward pass is supported only
for [`torch_symeig`].
    +
    + +

    eig(input, eigenvectors=False, out=NULL) -> (Tensor, Tensor)

    + + + + +

    Computes the eigenvalues and eigenvectors of a real square matrix.

    + +
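This page ships no Examples section; a minimal sketch follows. The returned object is assumed here to be a list of eigenvalue and eigenvector tensors, with eigenvalues reported as real/imaginary pairs:

```r
library(torch)

if (torch_is_installed()) {
  # a symmetric matrix, so the eigenvalues are real
  a <- torch_tensor(matrix(c(2, 1, 1, 2), nrow = 2))
  torch_eig(a, eigenvectors = TRUE)
}
```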
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_einsum.html b/static/docs/dev/reference/torch_einsum.html new file mode 100644 index 0000000000000000000000000000000000000000..50e1c12ed8fbb6cc7715ff1ec9564eb40eb860e7 --- /dev/null +++ b/static/docs/dev/reference/torch_einsum.html @@ -0,0 +1,273 @@ + + + + + + + + +Einsum — torch_einsum • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Einsum

    +
    + +
    torch_einsum(equation, tensors)
    + +

    Arguments

    + + + + + + + + + + +
    equation

(string) The equation is given in terms of lower-case letters (indices) to be associated with each dimension of the operands and result. The left-hand side lists the operands' dimensions, separated by commas. There should be one index letter per tensor dimension. The right-hand side follows after -> and gives the indices for the output. If the -> and right-hand side are omitted, the output is implicitly defined as the alphabetically sorted list of all indices appearing exactly once in the left-hand side. The indices not appearing in the output are summed over after multiplying the operands' entries. If an index appears several times for the same operand, a diagonal is taken. Ellipses ... represent a fixed number of dimensions. If the right-hand side is inferred, the ellipsis dimensions are at the beginning of the output.

    tensors

    (Tensor) The operands to compute the Einstein sum of.

    + +

    einsum(equation, *operands) -> Tensor

    + + + + +

    This function provides a way of computing multilinear expressions (i.e. sums of products) using the +Einstein summation convention.

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { + +x = torch_randn(c(5)) +y = torch_randn(c(4)) +torch_einsum('i,j->ij', list(x, y)) # outer product +A = torch_randn(c(3,5,4)) +l = torch_randn(c(2,5)) +r = torch_randn(c(2,4)) +torch_einsum('bn,anm,bm->ba', list(l, A, r)) # compare torch_nn$functional$bilinear +As = torch_randn(c(3,2,5)) +Bs = torch_randn(c(3,5,4)) +torch_einsum('bij,bjk->bik', list(As, Bs)) # batch matrix multiplication +A = torch_randn(c(3, 3)) +torch_einsum('ii->i', list(A)) # diagonal +A = torch_randn(c(4, 3, 3)) +torch_einsum('...ii->...i', list(A)) # batch diagonal +A = torch_randn(c(2, 3, 4, 5)) +torch_einsum('...ij->...ji', list(A))$shape # batch permute + +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_empty.html b/static/docs/dev/reference/torch_empty.html new file mode 100644 index 0000000000000000000000000000000000000000..39c7b8218a11d5a8731e5c5914d79bb0254ec3b1 --- /dev/null +++ b/static/docs/dev/reference/torch_empty.html @@ -0,0 +1,280 @@ + + + + + + + + +Empty — torch_empty • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Empty

    +
    + +
    torch_empty(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

    a sequence of integers defining the shape of the output tensor.

    names

    optional character vector naming each dimension.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    empty(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

    + + + + +

    Returns a tensor filled with uninitialized data. The shape of the tensor is +defined by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_empty(c(2, 3)) +} +
    #> torch_tensor +#> 1.9205e+31 1.8891e+31 6.3375e-10 +#> 1.8169e+31 4.4726e+21 8.4843e+26 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_empty_like.html b/static/docs/dev/reference/torch_empty_like.html new file mode 100644 index 0000000000000000000000000000000000000000..6b392dad71c400fe43228ffabfe0cd5ec55366dd --- /dev/null +++ b/static/docs/dev/reference/torch_empty_like.html @@ -0,0 +1,281 @@ + + + + + + + + +Empty_like — torch_empty_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Empty_like

    +
    + +
    torch_empty_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    empty_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

Returns an uninitialized tensor with the same size as input. +torch_empty_like(input) is equivalent to +torch_empty(input$size(), dtype = input$dtype, layout = input$layout, device = input$device).

    + +

    Examples

    +
if (torch_is_installed()) { + +a = torch_empty(list(2,3), dtype = torch_int64()) +torch_empty_like(a) +} +
    #> torch_tensor +#> 1.2885e+10 0.0000e+00 0.0000e+00 +#> 0.0000e+00 1.7180e+10 1.3700e+02 +#> [ CPULongType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_empty_strided.html b/static/docs/dev/reference/torch_empty_strided.html new file mode 100644 index 0000000000000000000000000000000000000000..91aeecf0ab71c5542e90c959bdfbc319e344e60e --- /dev/null +++ b/static/docs/dev/reference/torch_empty_strided.html @@ -0,0 +1,295 @@ + + + + + + + + +Empty_strided — torch_empty_strided • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Empty_strided

    +
    + +
    torch_empty_strided(
    +  size,
    +  stride,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  pin_memory = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    size

    (tuple of ints) the shape of the output tensor

    stride

    (tuple of ints) the strides of the output tensor

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    pin_memory

(bool, optional) If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: FALSE.

    + +

    empty_strided(size, stride, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

    + + + + +

Returns a tensor filled with uninitialized data. The shape and strides of the tensor are +defined by the variable arguments size and stride respectively. +torch_empty_strided(size, stride) is equivalent to +torch_empty(size)$as_strided(size, stride).

    +

    Warning

    + + + +

    More than one element of the created tensor may refer to a single memory +location. As a result, in-place operations (especially ones that are +vectorized) may result in incorrect behavior. If you need to write to +the tensors, please clone them first.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_empty_strided(list(2, 3), list(1, 2)) +a +a$stride(1) +a$size(1) +} +
    #> [1] 2
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_eq.html b/static/docs/dev/reference/torch_eq.html new file mode 100644 index 0000000000000000000000000000000000000000..a49104c211185aacd76c4ac738e8a3b94bb3252f --- /dev/null +++ b/static/docs/dev/reference/torch_eq.html @@ -0,0 +1,261 @@ + + + + + + + + +Eq — torch_eq • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Eq

    +
    + +
    torch_eq(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

(Tensor or float) the tensor or value to compare

    + +

    eq(input, other, out=NULL) -> Tensor

    + + + + +

    Computes element-wise equality

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_eq(torch_tensor(c(1,2,3,4)), torch_tensor(c(1, 3, 2, 4))) +} +
    #> torch_tensor +#> 1 +#> 0 +#> 0 +#> 1 +#> [ CPUBoolType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_equal.html b/static/docs/dev/reference/torch_equal.html new file mode 100644 index 0000000000000000000000000000000000000000..fc1534f8e06711a769bc1a507ba4901fb80ab596 --- /dev/null +++ b/static/docs/dev/reference/torch_equal.html @@ -0,0 +1,253 @@ + + + + + + + + +Equal — torch_equal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Equal

    +
    + +
    torch_equal(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    other

    the other input tensor

    + +

    equal(input, other) -> bool

    + + + + +

    TRUE if two tensors have the same size and elements, FALSE otherwise.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_equal(torch_tensor(c(1, 2)), torch_tensor(c(1, 2))) +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_erf.html b/static/docs/dev/reference/torch_erf.html new file mode 100644 index 0000000000000000000000000000000000000000..c5e39ac4e250119b07bcfdf8d40621bc528e8884 --- /dev/null +++ b/static/docs/dev/reference/torch_erf.html @@ -0,0 +1,256 @@ + + + + + + + + +Erf — torch_erf • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Erf

    +
    + +
    torch_erf(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    erf(input, out=NULL) -> Tensor

    + + + + +

    Computes the error function of each element. The error function is defined as follows:

    +

    $$ + \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_erf(torch_tensor(c(0, -1., 10.))) +} +
    #> torch_tensor +#> 0.0000 +#> -0.8427 +#> 1.0000 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_erfc.html b/static/docs/dev/reference/torch_erfc.html new file mode 100644 index 0000000000000000000000000000000000000000..8f6ca85ad6ff0b6a5de9426f963e7f34278b5ca7 --- /dev/null +++ b/static/docs/dev/reference/torch_erfc.html @@ -0,0 +1,257 @@ + + + + + + + + +Erfc — torch_erfc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Erfc

    +
    + +
    torch_erfc(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    erfc(input, out=NULL) -> Tensor

    + + + + +

    Computes the complementary error function of each element of input. +The complementary error function is defined as follows:

    +

    $$ + \mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_erfc(torch_tensor(c(0, -1., 10.))) +} +
    #> torch_tensor +#> 1.0000e+00 +#> 1.8427e+00 +#> 1.4013e-45 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_erfinv.html b/static/docs/dev/reference/torch_erfinv.html new file mode 100644 index 0000000000000000000000000000000000000000..6f44aea3d5ceac8ba3c42471797b6bda8639f06d --- /dev/null +++ b/static/docs/dev/reference/torch_erfinv.html @@ -0,0 +1,257 @@ + + + + + + + + +Erfinv — torch_erfinv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Erfinv

    +
    + +
    torch_erfinv(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    erfinv(input, out=NULL) -> Tensor

    + + + + +

    Computes the inverse error function of each element of input. +The inverse error function is defined in the range \((-1, 1)\) as:

    +

    $$ + \mathrm{erfinv}(\mathrm{erf}(x)) = x +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_erfinv(torch_tensor(c(0, 0.5, -1.))) +} +
    #> torch_tensor +#> 0.0000 +#> 0.4769 +#> -inf +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_exp.html b/static/docs/dev/reference/torch_exp.html new file mode 100644 index 0000000000000000000000000000000000000000..ee92ddefb41532e91a7975619b18002ba3f30a64 --- /dev/null +++ b/static/docs/dev/reference/torch_exp.html @@ -0,0 +1,256 @@ + + + + + + + + +Exp — torch_exp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Exp

    +
    + +
    torch_exp(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    exp(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the exponential of the elements +of the input tensor input.

    +

    $$ + y_{i} = e^{x_{i}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_exp(torch_tensor(c(0, log(2)))) +} +
    #> torch_tensor +#> 1 +#> 2 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_expm1.html b/static/docs/dev/reference/torch_expm1.html new file mode 100644 index 0000000000000000000000000000000000000000..80c7ca562825bf292a3e3c5ed629b94b2d3eef1a --- /dev/null +++ b/static/docs/dev/reference/torch_expm1.html @@ -0,0 +1,256 @@ + + + + + + + + +Expm1 — torch_expm1 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Expm1

    +
    + +
    torch_expm1(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    expm1(input, out=NULL) -> Tensor

    + + + + +

Returns a new tensor with the exponential of the elements of +input, minus 1.

    +

    $$ + y_{i} = e^{x_{i}} - 1 +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_expm1(torch_tensor(c(0, log(2)))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_eye.html b/static/docs/dev/reference/torch_eye.html new file mode 100644 index 0000000000000000000000000000000000000000..073b697bcd3d53c45e953d5694516835ebdc424d --- /dev/null +++ b/static/docs/dev/reference/torch_eye.html @@ -0,0 +1,280 @@ + + + + + + + + +Eye — torch_eye • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Eye

    +
    + +
    torch_eye(
    +  n,
    +  m = n,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    n

    (int) the number of rows

    m

    (int, optional) the number of columns with default being n

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    eye(n, m=NULL, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_eye(3) +} +
    #> torch_tensor +#> 1 0 0 +#> 0 1 0 +#> 0 0 1 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_fft.html b/static/docs/dev/reference/torch_fft.html new file mode 100644 index 0000000000000000000000000000000000000000..109ab06163582de8ba0d8a09f5ed2575cb673622 --- /dev/null +++ b/static/docs/dev/reference/torch_fft.html @@ -0,0 +1,614 @@ + + + + + + + + +Fft — torch_fft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fft

    +
    + +
    torch_fft(self, signal_ndim, normalized = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim + 1 dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    + +

    Note

    + + +
    For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
+repeatedly running FFT methods on tensors of the same geometry with the same
    +configuration. See cufft-plan-cache for more details on how to
    +monitor and control the cache.
    +
    + +

    fft(input, signal_ndim, normalized=False) -> Tensor

    + + + + +

    Complex-to-complex Discrete Fourier Transform

    +

    This method computes the complex-to-complex discrete Fourier transform. +Ignoring the batch dimensions, it computes the following expression:

    +

    $$ + X[\omega_1, \dots, \omega_d] = + \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] + e^{-j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}}, +$$ +where \(d\) = signal_ndim is number of dimensions for the +signal, and \(N_i\) is the size of signal dimension \(i\).

    +

This method supports 1D, 2D and 3D complex-to-complex transforms, indicated +by signal_ndim. input must be a tensor with last dimension +of size 2, representing the real and imaginary components of complex +numbers, and should have at least signal_ndim + 1 dimensions, optionally with an +arbitrary number of leading batch dimensions. If normalized is set to +TRUE, this normalizes the result by dividing it with +\(\sqrt{\prod_{i=1}^d N_i}\) so that the operator is unitary.

    +

Returns the real and the imaginary parts together as one tensor of the same +shape as input.

    +

    The inverse of this function is torch_ifft.

    +

    Warning

    + + + +

For CPU tensors, this method is currently only available with MKL. Use +backends_mkl_is_available() to check if MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +# unbatched 2D FFT +x = torch_randn(c(4, 3, 2)) +torch_fft(x, 2) +# batched 1D FFT +torch_fft(x, 1) +# arbitrary number of batch dimensions, 2D FFT +x = torch_randn(c(3, 3, 5, 5, 2)) +torch_fft(x, 2) + +} +
    #> torch_tensor +#> (1,1,1,.,.) = +#> -3.8906 -0.2521 +#> -0.6301 7.3887 +#> -0.1569 -3.4600 +#> 6.8864 4.7697 +#> -4.4483 -4.8385 +#> +#> (2,1,1,.,.) = +#> 8.1809 -9.1632 +#> 13.5965 10.6084 +#> 7.5879 1.5757 +#> -2.9251 9.8662 +#> -3.5436 -0.0408 +#> +#> (3,1,1,.,.) = +#> -6.8657 0.6179 +#> -2.1867 -0.9080 +#> 3.0976 1.3844 +#> -4.6698 4.5027 +#> -4.1710 -1.5937 +#> +#> (1,2,1,.,.) = +#> -4.5069 -1.0340 +#> -1.7006 5.2808 +#> -1.1689 -2.7179 +#> 8.6645 5.0827 +#> -4.1420 2.3250 +#> +#> (2,2,1,.,.) = +#> 7.5556 -1.3444 +#> 2.5145 -2.9687 +#> 0.8374 1.2748 +#> 7.4609 1.1490 +#> -2.5103 6.5001 +#> +#> (3,2,1,.,.) = +#> -3.1516 10.3080 +#> 5.4505 7.7295 +#> 1.9037 0.0331 +#> -2.4931 2.9686 +#> 3.7238 -2.8382 +#> +#> (1,3,1,.,.) = +#> -1.8638 4.4898 +#> 6.2693 -3.0226 +#> -3.7461 -0.5782 +#> -4.6917 4.8592 +#> -7.3360 7.1371 +#> +#> (2,3,1,.,.) = +#> -6.3858 2.2408 +#> -5.9384 -2.9792 +#> 2.7604 1.3434 +#> -7.1528 0.3610 +#> -1.0356 3.4105 +#> +#> (3,3,1,.,.) = +#> 8.0636 4.3733 +#> -6.8172 4.5569 +#> -8.2686 9.6735 +#> -6.1795 8.9975 +#> 3.4867 1.4089 +#> +#> (1,1,2,.,.) = +#> 4.0458 1.1211 +#> 6.7274 0.8219 +#> 3.2647 -8.1180 +#> -4.3900 8.7062 +#> -6.9060 12.5835 +#> +#> (2,1,2,.,.) = +#> -14.9166 -0.4563 +#> -8.9484 3.7034 +#> 7.5586 0.5212 +#> -6.2767 -7.2894 +#> -2.6380 0.9689 +#> +#> (3,1,2,.,.) = +#> -5.0481 4.8095 +#> -0.1282 -0.0783 +#> -3.2317 -2.6416 +#> -0.9148 3.6559 +#> -5.0307 3.9920 +#> +#> (1,2,2,.,.) = +#> -1.9028 2.8111 +#> -8.5834 -3.5393 +#> 6.6530 5.8489 +#> -4.2728 8.3334 +#> 0.6024 -1.7717 +#> +#> (2,2,2,.,.) = +#> 5.4114 -5.5515 +#> 4.1821 5.1946 +#> 1.0588 8.8242 +#> -0.0587 2.2180 +#> -11.7108 2.6790 +#> +#> (3,2,2,.,.) = +#> 2.8153 8.5202 +#> 0.0949 1.5582 +#> -0.9742 -6.8452 +#> -11.5933 1.6363 +#> -2.0684 5.5530 +#> +#> (1,3,2,.,.) = +#> 9.8351 0.5795 +#> -1.6079 -6.4322 +#> -2.0161 -7.3104 +#> 1.4246 -2.9077 +#> 4.1010 -1.4020 +#> +#> (2,3,2,.,.) 
= +#> -4.0648 -2.0372 +#> 6.0770 6.2051 +#> -7.0857 3.3361 +#> -3.6015 -0.8916 +#> 8.3811 3.9670 +#> +#> (3,3,2,.,.) = +#> 0.1586 2.9111 +#> -10.1217 -0.9805 +#> 5.0078 0.2680 +#> 0.1776 2.0732 +#> -1.6860 0.8195 +#> +#> (1,1,3,.,.) = +#> -9.8876 -1.3592 +#> 3.7233 -8.8646 +#> -0.5858 2.5747 +#> -0.6867 -0.9947 +#> -7.1288 -6.7674 +#> +#> (2,1,3,.,.) = +#> -0.2221 -2.7402 +#> -4.0179 0.0893 +#> 7.6763 -6.2895 +#> 3.8782 6.3572 +#> 0.5241 -0.7963 +#> +#> (3,1,3,.,.) = +#> -1.7983 1.1807 +#> 2.8927 2.6928 +#> -2.2799 0.1756 +#> 6.7964 -8.9340 +#> 0.0358 1.1838 +#> +#> (1,2,3,.,.) = +#> 2.4838 -2.3196 +#> -0.8169 0.6886 +#> -5.2591 -5.7622 +#> 0.9913 4.7963 +#> -1.9588 5.2229 +#> +#> (2,2,3,.,.) = +#> -1.7353 5.1988 +#> 2.5770 -3.1907 +#> -1.6854 -3.0060 +#> 1.8460 -0.4476 +#> 3.2443 -3.0184 +#> +#> (3,2,3,.,.) = +#> -3.8917 -4.1796 +#> 4.3798 -2.0535 +#> -5.1279 3.5968 +#> 9.4164 3.9474 +#> -1.9752 5.4251 +#> +#> (1,3,3,.,.) = +#> 0.4361 -5.6941 +#> 1.8463 -3.5674 +#> 9.1693 1.7494 +#> -6.2192 1.9202 +#> 4.6439 -1.7834 +#> +#> (2,3,3,.,.) = +#> -10.0798 -1.5540 +#> 3.2683 6.1536 +#> 5.1260 7.6848 +#> -1.8577 -3.3287 +#> 6.9663 1.9440 +#> +#> (3,3,3,.,.) = +#> -3.6094 3.3084 +#> 4.7373 -5.0252 +#> -3.1775 3.2782 +#> -1.7631 0.7487 +#> 5.1359 -5.6054 +#> +#> (1,1,4,.,.) = +#> 6.2109 -5.7290 +#> -3.2070 -3.6565 +#> 10.3394 2.8397 +#> -5.9428 3.2633 +#> -2.0450 -2.4186 +#> +#> (2,1,4,.,.) = +#> 4.7595 4.1469 +#> 0.7921 4.8637 +#> -1.0122 -4.4003 +#> -11.7296 1.0495 +#> -3.6038 -9.4859 +#> +#> (3,1,4,.,.) = +#> 2.5091 0.5782 +#> 1.8351 -4.7400 +#> -4.6407 -2.1848 +#> -2.8872 -3.2071 +#> 3.2865 -3.3006 +#> +#> (1,2,4,.,.) = +#> -0.7310 6.7275 +#> 1.1613 -2.5606 +#> 3.1072 -3.6625 +#> 0.1855 3.8225 +#> 0.3006 1.0869 +#> +#> (2,2,4,.,.) = +#> 3.9528 3.7990 +#> 1.7951 -3.5298 +#> -7.1744 -0.6763 +#> 2.3892 4.9950 +#> 8.5449 -6.1794 +#> +#> (3,2,4,.,.) = +#> 5.5085 2.2022 +#> -4.3377 3.1459 +#> 7.6732 -3.6306 +#> 1.8619 7.0566 +#> 7.4051 -7.7637 +#> +#> (1,3,4,.,.) 
= +#> -1.2423 1.1820 +#> -0.4600 1.5733 +#> -8.5312 -2.3545 +#> 0.4854 -0.7314 +#> -2.3858 -4.7297 +#> +#> (2,3,4,.,.) = +#> -3.8325 -0.5313 +#> 0.6302 5.9889 +#> -6.4304 8.7342 +#> -5.5349 -1.8498 +#> -2.5480 -0.2350 +#> +#> (3,3,4,.,.) = +#> -5.6014 -5.7887 +#> -5.6342 -0.1735 +#> -8.3357 -0.1609 +#> -3.4009 4.3665 +#> -1.2554 -2.9821 +#> +#> (1,1,5,.,.) = +#> -4.1281 1.7054 +#> -4.5965 1.2471 +#> 8.7289 12.5479 +#> -1.2868 9.5942 +#> 1.1662 0.0020 +#> +#> (2,1,5,.,.) = +#> 4.1019 3.2576 +#> 3.4793 0.8775 +#> 2.5433 4.6935 +#> 5.2767 2.7942 +#> -0.5224 -2.5172 +#> +#> (3,1,5,.,.) = +#> -3.0225 0.2177 +#> 6.4263 0.8066 +#> -3.0760 1.7617 +#> 6.6881 -2.1407 +#> -8.4150 3.4503 +#> +#> (1,2,5,.,.) = +#> 0.3347 -9.9952 +#> 1.0171 -1.1604 +#> 1.1945 3.9782 +#> -1.1903 3.4292 +#> 3.8634 1.9562 +#> +#> (2,2,5,.,.) = +#> 6.2246 5.1168 +#> 1.6311 -0.1134 +#> 3.6844 -3.9802 +#> -4.8940 -4.1660 +#> 5.7609 4.0242 +#> +#> (3,2,5,.,.) = +#> 4.8045 -1.2544 +#> 0.6540 -6.0830 +#> -6.5560 0.9239 +#> -7.5844 -4.3400 +#> 2.4527 -0.6081 +#> +#> (1,3,5,.,.) = +#> 0.0012 1.4379 +#> 2.3320 -0.5475 +#> 3.0302 -1.6159 +#> -8.4547 -7.8288 +#> -0.9196 -3.0474 +#> +#> (2,3,5,.,.) = +#> -1.0915 -0.7531 +#> 7.7590 2.9822 +#> 1.4004 2.6508 +#> 5.4837 -4.4701 +#> 0.8624 5.0485 +#> +#> (3,3,5,.,.) = +#> 1.2067 2.4785 +#> -3.7972 2.4715 +#> 1.6085 4.8943 +#> 1.5530 -0.9489 +#> -0.9469 -2.9451 +#> [ CPUFloatType{3,3,5,5,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_finfo.html b/static/docs/dev/reference/torch_finfo.html new file mode 100644 index 0000000000000000000000000000000000000000..374ddbbc022a608d03f789ac2d5a50a841b9d522 --- /dev/null +++ b/static/docs/dev/reference/torch_finfo.html @@ -0,0 +1,239 @@ + + + + + + + + +Floating point type info — torch_finfo • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A list that represents the numerical properties of a +floating point torch.dtype

    +
    + +
    torch_finfo(dtype)
    + +

    Arguments

    + + + + + + +
    dtype

    dtype to check information

    + + +
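This page has no Examples section; a minimal sketch, assuming the returned list exposes `eps`, `max`, and `min` fields as in current versions of the package:

```r
library(torch)

fi <- torch_finfo(torch_float32())
fi$eps  # machine epsilon: smallest representable step from 1.0
fi$max  # largest finite representable value
fi$min  # most negative finite representable value
```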
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_flatten.html b/static/docs/dev/reference/torch_flatten.html new file mode 100644 index 0000000000000000000000000000000000000000..388b4cb230bb8eb47b22fd8f2459e5c90e3de334 --- /dev/null +++ b/static/docs/dev/reference/torch_flatten.html @@ -0,0 +1,270 @@ + + + + + + + + +Flatten — torch_flatten • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Flatten

    +
    + +
    torch_flatten(self, dims, start_dim = 1L, end_dim = -1L, out_dim)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dims

if the tensor is named, you can pass the names of the dimensions to +flatten

    start_dim

    (int) the first dim to flatten

    end_dim

    (int) the last dim to flatten

    out_dim

the name of the resulting dimension when flattening a named tensor.

    + +

    flatten(input, start_dim=0, end_dim=-1) -> Tensor

    + + + + +

    Flattens a contiguous range of dims in a tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_tensor(matrix(c(1, 2), ncol = 2)) +torch_flatten(t) +torch_flatten(t, start_dim=2) +} +
    #> torch_tensor +#> 1 2 +#> [ CPUFloatType{1,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_flip.html b/static/docs/dev/reference/torch_flip.html new file mode 100644 index 0000000000000000000000000000000000000000..24f82d0c095da7e4dad7ecf0aa3eff5ec9eb8939 --- /dev/null +++ b/static/docs/dev/reference/torch_flip.html @@ -0,0 +1,263 @@ + + + + + + + + +Flip — torch_flip • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Flip

    +
    + +
    torch_flip(self, dims)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dims

    (a list or tuple) axis to flip on

    + +

    flip(input, dims) -> Tensor

    + + + + +

Reverse the order of an n-D tensor along the given axes in dims.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(0, 8)$view(c(2, 2, 2)) +x +torch_flip(x, c(1, 2)) +} +
    #> torch_tensor +#> (1,.,.) = +#> 6 7 +#> 4 5 +#> +#> (2,.,.) = +#> 2 3 +#> 0 1 +#> [ CPUFloatType{2,2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_floor.html b/static/docs/dev/reference/torch_floor.html new file mode 100644 index 0000000000000000000000000000000000000000..dfcf6f3f8b44f2b8286ef746d1bb8df6bc69f088 --- /dev/null +++ b/static/docs/dev/reference/torch_floor.html @@ -0,0 +1,260 @@ + + + + + + + + +Floor — torch_floor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Floor

    +
    + +
    torch_floor(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    floor(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the floor of the elements of input, +the largest integer less than or equal to each element.

    +

    $$ + \mbox{out}_{i} = \left\lfloor \mbox{input}_{i} \right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_floor(a) +} +
    #> torch_tensor +#> -1 +#> 0 +#> -1 +#> -3 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_floor_divide.html b/static/docs/dev/reference/torch_floor_divide.html new file mode 100644 index 0000000000000000000000000000000000000000..fae2c3a2995b17636292a7de5fc93498a50eb143 --- /dev/null +++ b/static/docs/dev/reference/torch_floor_divide.html @@ -0,0 +1,263 @@ + + + + + + + + +Floor_divide — torch_floor_divide • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Floor_divide

    +
    + +
    torch_floor_divide(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the numerator tensor

    other

    (Tensor or Scalar) the denominator

    + +

    floor_divide(input, other, out=NULL) -> Tensor

    + + + + +

Returns the division of the inputs rounded down to the nearest integer. See torch_div +for type promotion and broadcasting rules.

    +

    $$ + \mbox{{out}}_i = \left\lfloor \frac{{\mbox{{input}}_i}}{{\mbox{{other}}_i}} \right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(c(4.0, 3.0)) +b = torch_tensor(c(2.0, 2.0)) +torch_floor_divide(a, b) +torch_floor_divide(a, 1.4) +} +
    #> torch_tensor +#> 2 +#> 2 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_fmod.html b/static/docs/dev/reference/torch_fmod.html new file mode 100644 index 0000000000000000000000000000000000000000..65f67450a5d43aaa1e0d853af1556906bd37da70 --- /dev/null +++ b/static/docs/dev/reference/torch_fmod.html @@ -0,0 +1,264 @@ + + + + + + + + +Fmod — torch_fmod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fmod

    +
    + +
    torch_fmod(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the dividend

    other

    (Tensor or float) the divisor, which may be either a number or a tensor of the same shape as the dividend

    + +

    fmod(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise remainder of division.

    +

The dividend and divisor may contain both integer and floating point +numbers. The remainder has the same sign as the dividend input.

    +

    When other is a tensor, the shapes of input and +other must be broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_fmod(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2) +torch_fmod(torch_tensor(c(1., 2, 3, 4, 5)), 1.5) +} +
    #> torch_tensor +#> 1.0000 +#> 0.5000 +#> 0.0000 +#> 1.0000 +#> 0.5000 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_frac.html b/static/docs/dev/reference/torch_frac.html new file mode 100644 index 0000000000000000000000000000000000000000..a20592f6578d964e2edd3a3e9bf993cfe433d28a --- /dev/null +++ b/static/docs/dev/reference/torch_frac.html @@ -0,0 +1,256 @@ + + + + + + + + +Frac — torch_frac • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Frac

    +
    + +
    torch_frac(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor.

    + +

    frac(input, out=NULL) -> Tensor

    + + + + +

    Computes the fractional portion of each element in input.

    +

    $$ + \mbox{out}_{i} = \mbox{input}_{i} - \left\lfloor |\mbox{input}_{i}| \right\rfloor * \mbox{sgn}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_frac(torch_tensor(c(1, 2.5, -3.2))) +} +
    #> torch_tensor +#> 0.0000 +#> 0.5000 +#> -0.2000 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_full.html b/static/docs/dev/reference/torch_full.html new file mode 100644 index 0000000000000000000000000000000000000000..0e7775a3a99360bd062354dca642e0d6c9b31f43 --- /dev/null +++ b/static/docs/dev/reference/torch_full.html @@ -0,0 +1,293 @@ + + + + + + + + +Full — torch_full • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Full

    +
    + +
    torch_full(
    +  size,
    +  fill_value,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    size

    (int...) a list, tuple, or torch_Size of integers defining the shape of the output tensor.

    fill_value

the number to fill the output tensor with.

    names

    optional names of the dimensions

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    full(size, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a tensor of size size filled with fill_value.

    +

    Warning

    + + + +

    In PyTorch 1.5 a bool or integral fill_value will produce a warning if +dtype or out are not set. +In a future PyTorch release, when dtype and out are not set +a bool fill_value will return a tensor of torch.bool dtype, +and an integral fill_value will return a tensor of torch.long dtype.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_full(list(2, 3), 3.141592) +} +
    #> torch_tensor +#> 3.1416 3.1416 3.1416 +#> 3.1416 3.1416 3.1416 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_full_like.html b/static/docs/dev/reference/torch_full_like.html new file mode 100644 index 0000000000000000000000000000000000000000..b57b08382d796abe16e45de1acc66b5b6f7a6933 --- /dev/null +++ b/static/docs/dev/reference/torch_full_like.html @@ -0,0 +1,278 @@ + + + + + + + + +Full_like — torch_full_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Full_like

    +
    + +
    torch_full_like(
    +  input,
    +  fill_value,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    fill_value

    the number to fill the output tensor with.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    full_like(input, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

    + + + + +

    memory_format=torch.preserve_format) -> Tensor

    +

    Returns a tensor with the same size as input filled with fill_value. +torch_full_like(input, fill_value) is equivalent to +torch_full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device).
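For illustration, a minimal usage sketch (assuming torch is installed; the random values are illustrative only):

```r
if (torch_is_installed()) {
  # a 2x3 float tensor whose shape, dtype and device the result will copy
  x <- torch_randn(c(2, 3))
  # a same-shaped, same-dtype tensor filled with 5
  torch_full_like(x, 5)
}
```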

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_gather.html b/static/docs/dev/reference/torch_gather.html new file mode 100644 index 0000000000000000000000000000000000000000..1aa8cd7f54c80c0390f238f9ea0761612d1680ce --- /dev/null +++ b/static/docs/dev/reference/torch_gather.html @@ -0,0 +1,275 @@ + + + + + + + + +Gather — torch_gather • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gather

    +
    + +
    torch_gather(self, dim, index, sparse_grad = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the source tensor

    dim

    (int) the axis along which to index

    index

    (LongTensor) the indices of elements to gather

    sparse_grad

    (bool,optional) If TRUE, gradient w.r.t. input will be a sparse tensor.

    + +

    gather(input, dim, index, sparse_grad=FALSE) -> Tensor

    + + + + +

    Gathers values along an axis specified by dim.

    +

For a 3-D tensor, the output is specified by:

    out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
    +out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
    +out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2
    +
    + +

    If input is an n-dimensional tensor with size +\((x_0, x_1..., x_{i-1}, x_i, x_{i+1}, ..., x_{n-1})\) +and dim = i, then index must be an \(n\)-dimensional tensor with +size \((x_0, x_1, ..., x_{i-1}, y, x_{i+1}, ..., x_{n-1})\) where \(y \geq 1\) +and out will have the same size as index.

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_tensor(matrix(c(1,2,3,4), ncol = 2, byrow = TRUE)) +torch_gather(t, 2, torch_tensor(matrix(c(1,1,2,1), ncol = 2, byrow=TRUE), dtype = torch_int64())) +} +
    #> torch_tensor +#> 1 1 +#> 4 3 +#> [ CPUFloatType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ge.html b/static/docs/dev/reference/torch_ge.html new file mode 100644 index 0000000000000000000000000000000000000000..9566f00f26f39bb94f2c2c6c3f55c02914cbf10f --- /dev/null +++ b/static/docs/dev/reference/torch_ge.html @@ -0,0 +1,259 @@ + + + + + + + + +Ge — torch_ge • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ge

    +
    + +
    torch_ge(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    ge(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} \geq \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_ge(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 1 1 +#> 0 1 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_generator.html b/static/docs/dev/reference/torch_generator.html new file mode 100644 index 0000000000000000000000000000000000000000..762fc54184dbc4493c88446fb76781b0c1f89e0f --- /dev/null +++ b/static/docs/dev/reference/torch_generator.html @@ -0,0 +1,246 @@ + + + + + + + + +Create a Generator object — torch_generator • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

A torch_generator is an object which manages the state of the algorithm +that produces pseudo-random numbers. It is used as a keyword argument in many +in-place random sampling functions.

    +
    + +
    torch_generator()
    + + + +

    Examples

    +
if (torch_is_installed()) { + +# create a generator, then query and set its seed +generator <- torch_generator() +generator$current_seed() +generator$set_current_seed(1234567L) +generator$current_seed() + + +} +
    #> integer64 +#> [1] 1234567
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_geqrf.html b/static/docs/dev/reference/torch_geqrf.html new file mode 100644 index 0000000000000000000000000000000000000000..3fddfc1f786f26a20fccf98cde79eaa1201e7681 --- /dev/null +++ b/static/docs/dev/reference/torch_geqrf.html @@ -0,0 +1,250 @@ + + + + + + + + +Geqrf — torch_geqrf • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Geqrf

    +
    + +
    torch_geqrf(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input matrix

    + +

    geqrf(input, out=NULL) -> (Tensor, Tensor)

    + + + + +

This is a low-level function for calling LAPACK directly. This function +returns a namedtuple (a, tau) as defined in the LAPACK documentation for geqrf.

    +

    You'll generally want to use torch_qr instead.

    +

    Computes a QR decomposition of input, but without constructing +\(Q\) and \(R\) as explicit separate matrices.

    +

    Rather, this directly calls the underlying LAPACK function ?geqrf +which produces a sequence of 'elementary reflectors'.

    +

See the LAPACK documentation for geqrf for further details.
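A short hedged sketch of the call (assuming the returned value holds the two LAPACK outputs, the packed reflectors and tau, as described above):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(4, 4))
  # low-level QR factorization: packed elementary reflectors plus tau
  torch_geqrf(a)
  # for most uses, prefer the higher-level torch_qr()
  torch_qr(a)
}
```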

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ger.html b/static/docs/dev/reference/torch_ger.html new file mode 100644 index 0000000000000000000000000000000000000000..ef5e342a9476811ee84885f19c9349ada1a85c3a --- /dev/null +++ b/static/docs/dev/reference/torch_ger.html @@ -0,0 +1,265 @@ + + + + + + + + +Ger — torch_ger • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ger

    +
    + +
    torch_ger(self, vec2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) 1-D input vector

    vec2

    (Tensor) 1-D input vector

    + +

    Note

    + +

    This function does not broadcast .

    +

    ger(input, vec2, out=NULL) -> Tensor

    + + + + +

    Outer product of input and vec2. +If input is a vector of size \(n\) and vec2 is a vector of +size \(m\), then out must be a matrix of size \((n \times m)\).

    + +

    Examples

    +
    if (torch_is_installed()) { + +v1 = torch_arange(1., 5.) +v2 = torch_arange(1., 4.) +torch_ger(v1, v2) +} +
    #> torch_tensor +#> 1 2 3 +#> 2 4 6 +#> 3 6 9 +#> 4 8 12 +#> [ CPUFloatType{4,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_gt.html b/static/docs/dev/reference/torch_gt.html new file mode 100644 index 0000000000000000000000000000000000000000..60d8bc71ff051bc00f0cfcfd2a15d4b9424877cd --- /dev/null +++ b/static/docs/dev/reference/torch_gt.html @@ -0,0 +1,259 @@ + + + + + + + + +Gt — torch_gt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gt

    +
    + +
    torch_gt(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    gt(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} > \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_gt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 0 1 +#> 0 0 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_hamming_window.html b/static/docs/dev/reference/torch_hamming_window.html new file mode 100644 index 0000000000000000000000000000000000000000..4cf81e4b5588bf52656d770a9792305a175659e9 --- /dev/null +++ b/static/docs/dev/reference/torch_hamming_window.html @@ -0,0 +1,301 @@ + + + + + + + + +Hamming_window — torch_hamming_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Hamming_window

    +
    + +
    torch_hamming_window(
    +  window_length,
    +  periodic = TRUE,
    +  alpha = 0.54,
    +  beta = 0.46,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    alpha

    (float, optional) The coefficient \(\alpha\) in the equation above

    beta

    (float, optional) The coefficient \(\beta\) in the equation above

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
If `window_length` = 1, the returned window contains a single value 1.
    +
    + +
    This is a generalized version of `torch_hann_window`.
    +
    + +

    hamming_window(window_length, periodic=TRUE, alpha=0.54, beta=0.46, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Hamming window function.

    +

    $$ + w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right), +$$ +where \(N\) is the full window size.

    +

The input window_length is a positive integer controlling the +returned window size. The periodic flag determines whether the returned +window trims off the last duplicate value from the symmetric window and is +ready to be used as a periodic window with functions like +torch_stft. Therefore, if periodic is TRUE, the \(N\) in the +formula above is in fact \(\mbox{window\_length} + 1\). Also, we always have +torch_hamming_window(L, periodic=TRUE) equal to +torch_hamming_window(L + 1, periodic=FALSE)[:-1].
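The periodic/symmetric distinction above can be seen directly (a minimal sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
  # periodic window of length 10, with the default alpha/beta coefficients
  torch_hamming_window(10)
  # symmetric window of the same length
  torch_hamming_window(10, periodic = FALSE)
}
```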

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_hann_window.html b/static/docs/dev/reference/torch_hann_window.html new file mode 100644 index 0000000000000000000000000000000000000000..ba4de4640d80566516258700deb5ab21cd0bcf43 --- /dev/null +++ b/static/docs/dev/reference/torch_hann_window.html @@ -0,0 +1,289 @@ + + + + + + + + +Hann_window — torch_hann_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Hann_window

    +
    + +
    torch_hann_window(
    +  window_length,
    +  periodic = TRUE,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
If `window_length` = 1, the returned window contains a single value 1.
    +
    + +

    hann_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Hann window function.

    +

    $$ + w[n] = \frac{1}{2}\ \left[1 - \cos \left( \frac{2 \pi n}{N - 1} \right)\right] = + \sin^2 \left( \frac{\pi n}{N - 1} \right), +$$ +where \(N\) is the full window size.

    +

The input window_length is a positive integer controlling the +returned window size. The periodic flag determines whether the returned +window trims off the last duplicate value from the symmetric window and is +ready to be used as a periodic window with functions like +torch_stft. Therefore, if periodic is TRUE, the \(N\) in the +formula above is in fact \(\mbox{window\_length} + 1\). Also, we always have +torch_hann_window(L, periodic=TRUE) equal to +torch_hann_window(L + 1, periodic=FALSE)[:-1].
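As with the Hamming window, the two variants can be compared side by side (a minimal sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
  # periodic Hann window, suited for use with torch_stft
  torch_hann_window(10)
  # symmetric Hann window
  torch_hann_window(10, periodic = FALSE)
}
```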

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_histc.html b/static/docs/dev/reference/torch_histc.html new file mode 100644 index 0000000000000000000000000000000000000000..13ae3ba637f83657a73022acc830be761f191386 --- /dev/null +++ b/static/docs/dev/reference/torch_histc.html @@ -0,0 +1,269 @@ + + + + + + + + +Histc — torch_histc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Histc

    +
    + +
    torch_histc(self, bins = 100L, min = 0L, max = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    bins

    (int) number of histogram bins

    min

    (int) lower end of the range (inclusive)

    max

    (int) upper end of the range (inclusive)

    + +

    histc(input, bins=100, min=0, max=0, out=NULL) -> Tensor

    + + + + +

    Computes the histogram of a tensor.

    +

    The elements are sorted into equal width bins between min and +max. If min and max are both zero, the minimum and +maximum values of the data are used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_histc(torch_tensor(c(1., 2, 1)), bins=4, min=0, max=3) +} +
    #> torch_tensor +#> 0 +#> 2 +#> 1 +#> 0 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ifft.html b/static/docs/dev/reference/torch_ifft.html new file mode 100644 index 0000000000000000000000000000000000000000..f0dde54b75a15066e910e181d6a383a73298fdf7 --- /dev/null +++ b/static/docs/dev/reference/torch_ifft.html @@ -0,0 +1,308 @@ + + + + + + + + +Ifft — torch_ifft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ifft

    +
    + +
    torch_ifft(self, signal_ndim, normalized = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim + 1 dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    + +

    Note

    + + +
    For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
    +repeatedly running FFT methods on tensors of same geometry with same
    +configuration. See cufft-plan-cache for more details on how to
    +monitor and control the cache.
    +
    + +

    ifft(input, signal_ndim, normalized=False) -> Tensor

    + + + + +

    Complex-to-complex Inverse Discrete Fourier Transform

    +

    This method computes the complex-to-complex inverse discrete Fourier +transform. Ignoring the batch dimensions, it computes the following +expression:

    +

$$ + X[\omega_1, \dots, \omega_d] = + \frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] + e^{\ j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}}, +$$ +where \(d\) = signal_ndim is the number of dimensions for the +signal, and \(N_i\) is the size of signal dimension \(i\).

    +

    The argument specifications are almost identical with torch_fft. +However, if normalized is set to TRUE, this instead returns the +results multiplied by \(\sqrt{\prod_{i=1}^d N_i}\), to become a unitary +operator. Therefore, to invert a torch_fft, the normalized +argument should be set identically for torch_fft.

    +

Returns the real and the imaginary parts together as one tensor of the same +shape as input.

    +

    The inverse of this function is torch_fft.

    +

    Warning

    + + + +

For CPU tensors, this method is currently only available with MKL. Use +backends_mkl_is_available to check if MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 3, 2)) +x +y = torch_fft(x, 2) +torch_ifft(y, 2) # recover x +} +
    #> torch_tensor +#> (1,.,.) = +#> -0.1187 0.4043 +#> 1.4305 0.6184 +#> 0.0020 -1.6723 +#> +#> (2,.,.) = +#> 0.6435 -0.1685 +#> -1.8431 0.5859 +#> -2.5527 -0.6183 +#> +#> (3,.,.) = +#> -0.5707 1.3996 +#> 0.3492 -0.7337 +#> 0.4699 0.5105 +#> [ CPUFloatType{3,3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_iinfo.html b/static/docs/dev/reference/torch_iinfo.html new file mode 100644 index 0000000000000000000000000000000000000000..e798e37e2540b6f7c5227a80f5b6063ca1d00e6f --- /dev/null +++ b/static/docs/dev/reference/torch_iinfo.html @@ -0,0 +1,239 @@ + + + + + + + + +Integer type info — torch_iinfo • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

A list that represents the numerical properties of an integer +type.

    +
    + +
    torch_iinfo(dtype)
    + +

    Arguments

    + + + + + + +
    dtype

    dtype to get information from.
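As a hedged usage sketch (the field names accessed below, min and max, are assumptions about the returned list):

```r
if (torch_is_installed()) {
  # numerical limits of the 32-bit integer dtype
  info <- torch_iinfo(torch_int32())
  info$min
  info$max
}
```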

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_imag.html b/static/docs/dev/reference/torch_imag.html new file mode 100644 index 0000000000000000000000000000000000000000..176ce051815ace651f2837c1c4d2afaf21e1fe43 --- /dev/null +++ b/static/docs/dev/reference/torch_imag.html @@ -0,0 +1,258 @@ + + + + + + + + +Imag — torch_imag • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Imag

    +
    + +
    torch_imag(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    imag(input) -> Tensor

    + + + + +

    Returns the imaginary part of the input tensor.

    +

    Warning

    + + + +

    Not yet implemented.

    +

    $$ + \mbox{out}_{i} = imag(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_imag(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i))) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_index_select.html b/static/docs/dev/reference/torch_index_select.html new file mode 100644 index 0000000000000000000000000000000000000000..fc952de891dbb5feffb54f1546da7b07bb6a2fb1 --- /dev/null +++ b/static/docs/dev/reference/torch_index_select.html @@ -0,0 +1,275 @@ + + + + + + + + +Index_select — torch_index_select • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Index_select

    +
    + +
    torch_index_select(self, dim, index)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension in which we index

    index

    (LongTensor) the 1-D tensor containing the indices to index

    + +

    Note

    + +

    The returned tensor does not use the same storage as the original +tensor. If out has a different shape than expected, we +silently change it to the correct shape, reallocating the underlying +storage if necessary.

    +

    index_select(input, dim, index, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor which indexes the input tensor along dimension +dim using the entries in index which is a LongTensor.

    +

The returned tensor has the same number of dimensions as the original tensor +(input). The dim-th dimension has the same size as the length +of index; other dimensions have the same size as in the original tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 4)) +x +indices = torch_tensor(c(1, 3), dtype = torch_int64()) +torch_index_select(x, 1, indices) +torch_index_select(x, 2, indices) +} +
    #> torch_tensor +#> 1.8498 -0.2319 +#> 0.0647 -1.1608 +#> 0.0595 0.9246 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_inverse.html b/static/docs/dev/reference/torch_inverse.html new file mode 100644 index 0000000000000000000000000000000000000000..ac899566dae7e836f0f738933a53c1979ccc156d --- /dev/null +++ b/static/docs/dev/reference/torch_inverse.html @@ -0,0 +1,268 @@ + + + + + + + + +Inverse — torch_inverse • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Inverse

    +
    + +
    torch_inverse(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions

    + +

    Note

    + + +
    Irrespective of the original strides, the returned tensors will be
    +transposed, i.e. with strides like `input.contiguous().transpose(-2, -1).stride()`
    +
    + +

    inverse(input, out=NULL) -> Tensor

    + + + + +

    Takes the inverse of the square matrix input. input can be batches +of 2D square tensors, in which case this function would return a tensor composed of +individual inverses.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +x = torch_rand(c(4, 4)) +y = torch_inverse(x) +z = torch_mm(x, y) +z +torch_max(torch_abs(z - torch_eye(4))) # Max non-zero +# Batched inverse example +x = torch_randn(c(2, 3, 4, 4)) +y = torch_inverse(x) +z = torch_matmul(x, y) +torch_max(torch_abs(z - torch_eye(4)$expand_as(x))) # Max non-zero +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_irfft.html b/static/docs/dev/reference/torch_irfft.html new file mode 100644 index 0000000000000000000000000000000000000000..04fa1cdcea064a02f3d6330de81086edc996a177 --- /dev/null +++ b/static/docs/dev/reference/torch_irfft.html @@ -0,0 +1,330 @@ + + + + + + + + +Irfft — torch_irfft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Irfft

    +
    + +
    torch_irfft(
    +  self,
    +  signal_ndim,
    +  normalized = FALSE,
    +  onesided = TRUE,
    +  signal_sizes = list()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim + 1 dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    onesided

(bool, optional) controls whether input was halved to avoid redundancy, e.g., by torch_rfft(). Default: TRUE

    signal_sizes

    (list or torch.Size, optional) the size of the original signal (without batch dimension). Default: NULL

    + +

    Note

    + + +
Due to the conjugate symmetry, `input` does not need to contain the full
+complex frequency values. Roughly half of the values are sufficient, as
+is the case when `input` is given by [torch_rfft()] with
+`rfft(signal, onesided=TRUE)`. In such a case, set the `onesided`
+argument of this method to `TRUE`. Moreover, because the original signal shape
+information can be lost, optionally set `signal_sizes` to
+the size of the original signal (without the batch dimensions if in batched
+mode) to recover it with the correct shape.
    +
+Therefore, to invert a [torch_rfft()], the `normalized` and
+`onesided` arguments should be set identically for [torch_irfft()],
+and preferably `signal_sizes` should be given to avoid a size mismatch. See the
+example below for a case of size mismatch.
    +
    +See [torch_rfft()] for details on conjugate symmetry.
    +
    + +

    The inverse of this function is torch_rfft().

    +
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
+repeatedly running FFT methods on tensors of the same geometry with the same
+configuration. See cufft-plan-cache for more details on how to
+monitor and control the cache.
    +
    + +

    irfft(input, signal_ndim, normalized=False, onesided=TRUE, signal_sizes=NULL) -> Tensor

    + + + + +

    Complex-to-real Inverse Discrete Fourier Transform

    +

This method computes the complex-to-real inverse discrete Fourier transform. +It is mathematically equivalent to torch_ifft, differing only in the +formats of the input and output.

    +

The argument specifications are almost identical to those of torch_ifft. +Similar to torch_ifft, if normalized is set to TRUE, +this normalizes the result by multiplying it with +\(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is unitary, where +\(N_i\) is the size of signal dimension \(i\).

    +

    Warning

    + + + +

    Generally speaking, input to this function should contain values +following conjugate symmetry. Note that even if onesided is +TRUE, often symmetry on some part is still needed. When this +requirement is not satisfied, the behavior of torch_irfft is +undefined. Since torch_autograd.gradcheck estimates numerical +Jacobian with point perturbations, torch_irfft will almost +certainly fail the check.

    + +

    For CPU tensors, this method is currently only available with MKL. Use +torch_backends.mkl.is_available to check if MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(4, 4)) +torch_rfft(x, 2, onesided=TRUE) +x = torch_randn(c(4, 5)) +torch_rfft(x, 2, onesided=TRUE) +y = torch_rfft(x, 2, onesided=TRUE) +torch_irfft(y, 2, onesided=TRUE, signal_sizes=c(4,5)) # recover x +} +
    #> torch_tensor +#> 0.4783 0.0485 -1.0355 -0.9623 0.9389 +#> -1.4649 0.3731 -1.4910 -0.1119 0.8593 +#> 0.0232 0.6348 0.4739 -0.6755 0.1265 +#> -1.1202 -0.9652 0.6908 -1.8474 1.7297 +#> [ CPUFloatType{4,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_is_complex.html b/static/docs/dev/reference/torch_is_complex.html new file mode 100644 index 0000000000000000000000000000000000000000..b6bf913dcd21cbf21a05c078649cfd32ca5afef5 --- /dev/null +++ b/static/docs/dev/reference/torch_is_complex.html @@ -0,0 +1,244 @@ + + + + + + + + +Is_complex — torch_is_complex • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Is_complex

    +
    + +
    torch_is_complex(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the PyTorch tensor to test

    + +

    is_complex(input) -> (bool)

    + + + + +

Returns TRUE if the data type of input is a complex data type i.e., +one of torch_complex64 and torch_complex128.
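As a sketch of how this might be used (assuming a torch build with complex dtype support, e.g. `torch_cfloat()`):

```r
library(torch)

# A default float tensor is not complex
torch_is_complex(torch_randn(2))

# A tensor created with a complex dtype is
x <- torch_tensor(c(1, 2), dtype = torch_cfloat())
torch_is_complex(x)
```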

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_is_floating_point.html b/static/docs/dev/reference/torch_is_floating_point.html new file mode 100644 index 0000000000000000000000000000000000000000..310f93f84197a6a5237b5fd61f1335476bf38785 --- /dev/null +++ b/static/docs/dev/reference/torch_is_floating_point.html @@ -0,0 +1,244 @@ + + + + + + + + +Is_floating_point — torch_is_floating_point • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Is_floating_point

    +
    + +
    torch_is_floating_point(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the PyTorch tensor to test

    + +

    is_floating_point(input) -> (bool)

    + + + + +

Returns TRUE if the data type of input is a floating point data type i.e., +one of torch_float64, torch_float32, and torch_float16.
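A minimal sketch contrasting a floating-point tensor with an integer one:

```r
library(torch)

torch_is_floating_point(torch_randn(2))     # float32 tensor -> TRUE
torch_is_floating_point(torch_tensor(1:3))  # integer (long) tensor -> FALSE
```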

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_is_installed.html b/static/docs/dev/reference/torch_is_installed.html new file mode 100644 index 0000000000000000000000000000000000000000..7b31b05066998054ea45638986f8cc1644f1b10b --- /dev/null +++ b/static/docs/dev/reference/torch_is_installed.html @@ -0,0 +1,229 @@ + + + + + + + + +Verifies if torch is installed — torch_is_installed • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Verifies if torch is installed

    +
    + +
    torch_is_installed()
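This is the guard used throughout the package's examples; a typical usage sketch:

```r
library(torch)

if (torch_is_installed()) {
  # safe to use the backend
  x <- torch_randn(2, 2)
  print(x)
} else {
  # the backend libraries can be fetched with install_torch()
  message("torch backend not found")
}
```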
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_isfinite.html b/static/docs/dev/reference/torch_isfinite.html new file mode 100644 index 0000000000000000000000000000000000000000..f2bec8600826a000a8c85de016a839215f85dea5 --- /dev/null +++ b/static/docs/dev/reference/torch_isfinite.html @@ -0,0 +1,255 @@ + + + + + + + + +Isfinite — torch_isfinite • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Isfinite

    +
    + +
    torch_isfinite(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) A tensor to check

    + +

isfinite(input) -> Tensor

    + + + + +

Returns a new tensor with boolean elements representing if each element is finite or not.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_isfinite(torch_tensor(c(1, Inf, 2, -Inf, NaN))) +} +
    #> torch_tensor +#> 1 +#> 0 +#> 1 +#> 0 +#> 0 +#> [ CPUBoolType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_isinf.html b/static/docs/dev/reference/torch_isinf.html new file mode 100644 index 0000000000000000000000000000000000000000..9de6892421c7c978c9b18d728a03ae1ff91295d5 --- /dev/null +++ b/static/docs/dev/reference/torch_isinf.html @@ -0,0 +1,255 @@ + + + + + + + + +Isinf — torch_isinf • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Isinf

    +
    + +
    torch_isinf(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) A tensor to check

    + +

isinf(input) -> Tensor

    + + + + +

    Returns a new tensor with boolean elements representing if each element is +/-INF or not.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_isinf(torch_tensor(c(1, Inf, 2, -Inf, NaN))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 0 +#> 1 +#> 0 +#> [ CPUBoolType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_isnan.html b/static/docs/dev/reference/torch_isnan.html new file mode 100644 index 0000000000000000000000000000000000000000..82db099ceef54e6dac7bbbd8dcf2798084cf3121 --- /dev/null +++ b/static/docs/dev/reference/torch_isnan.html @@ -0,0 +1,253 @@ + + + + + + + + +Isnan — torch_isnan • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Isnan

    +
    + +
    torch_isnan(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) A tensor to check

    + +

isnan(input) -> Tensor

    + + + + +

    Returns a new tensor with boolean elements representing if each element is NaN or not.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_isnan(torch_tensor(c(1, NaN, 2))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 0 +#> [ CPUBoolType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_kthvalue.html b/static/docs/dev/reference/torch_kthvalue.html new file mode 100644 index 0000000000000000000000000000000000000000..2ad65c95e85aa5fd97c9c11e5a9f907d8844bd39 --- /dev/null +++ b/static/docs/dev/reference/torch_kthvalue.html @@ -0,0 +1,283 @@ + + + + + + + + +Kthvalue — torch_kthvalue • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Kthvalue

    +
    + +
    torch_kthvalue(self, k, dim = -1L, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    k

    (int) k for the k-th smallest element

    dim

    (int, optional) the dimension to find the kth value along

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    kthvalue(input, k, dim=NULL, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

Returns a list (values, indices) where values is the k-th +smallest element of each row of the input tensor in the given dimension +dim, and indices is the index location of each element found.

    +

    If dim is not given, the last dimension of the input is chosen.

    +

    If keepdim is TRUE, both the values and indices tensors +are the same size as input, except in the dimension dim where +they are of size 1. Otherwise, dim is squeezed +(see torch_squeeze), resulting in both the values and +indices tensors having 1 fewer dimension than the input tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(1., 6.) +x +torch_kthvalue(x, 4) +x=torch_arange(1.,7.)$resize_(c(2,3)) +x +torch_kthvalue(x, 2, 1, TRUE) +} +
    #> [[1]] +#> torch_tensor +#> 4 5 6 +#> [ CPUFloatType{1,3} ] +#> +#> [[2]] +#> torch_tensor +#> 1 1 1 +#> [ CPULongType{1,3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_layout.html b/static/docs/dev/reference/torch_layout.html new file mode 100644 index 0000000000000000000000000000000000000000..26c50099042af64995538f10876728a50d08adbb --- /dev/null +++ b/static/docs/dev/reference/torch_layout.html @@ -0,0 +1,231 @@ + + + + + + + + +Creates the corresponding layout — torch_layout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates the corresponding layout

    +
    + +
    torch_strided()
    +
    +torch_sparse_coo()
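These layout objects are passed to tensor creation functions via their `layout` argument; a minimal sketch (sparse layout support may vary by torch version):

```r
library(torch)

# Dense (strided) layout is the default for tensor creation functions
x <- torch_zeros(2, 2, layout = torch_strided())
```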
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_le.html b/static/docs/dev/reference/torch_le.html new file mode 100644 index 0000000000000000000000000000000000000000..ae47c41595311a67da7c9224cca4ae4b3ac30cde --- /dev/null +++ b/static/docs/dev/reference/torch_le.html @@ -0,0 +1,259 @@ + + + + + + + + +Le — torch_le • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Le

    +
    + +
    torch_le(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    le(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} \leq \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.
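Since the second argument may be a plain number, a scalar comparison is also valid; a short sketch:

```r
library(torch)

# Element-wise x <= 2, broadcasting the scalar across the tensor
torch_le(torch_tensor(c(1, 2, 3)), 2)
```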

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_le(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 1 0 +#> 1 1 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_lerp.html b/static/docs/dev/reference/torch_lerp.html new file mode 100644 index 0000000000000000000000000000000000000000..29f3ba041968e57cb2e49ce68d60f12f19349f1b --- /dev/null +++ b/static/docs/dev/reference/torch_lerp.html @@ -0,0 +1,274 @@ + + + + + + + + +Lerp — torch_lerp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lerp

    +
    + +
    torch_lerp(self, end, weight)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor with the starting points

    end

    (Tensor) the tensor with the ending points

    weight

    (float or tensor) the weight for the interpolation formula

    + +

    lerp(input, end, weight, out=NULL)

    + + + + +

Performs a linear interpolation between two tensors, start (given by input) and end, based +on a scalar or tensor weight, and returns the resulting out tensor.

    +

    $$ + \mbox{out}_i = \mbox{start}_i + \mbox{weight}_i \times (\mbox{end}_i - \mbox{start}_i) +$$ +The shapes of start and end must be +broadcastable . If weight is a tensor, then +the shapes of weight, start, and end must be broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +start = torch_arange(1., 5.) +end = torch_empty(4)$fill_(10) +start +end +torch_lerp(start, end, 0.5) +torch_lerp(start, end, torch_full_like(start, 0.5)) +} +
    #> torch_tensor +#> 5.5000 +#> 6.0000 +#> 6.5000 +#> 7.0000 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_lgamma.html b/static/docs/dev/reference/torch_lgamma.html new file mode 100644 index 0000000000000000000000000000000000000000..5f98c03554cf8e2ac770d463bb1f5b312cbcbd2c --- /dev/null +++ b/static/docs/dev/reference/torch_lgamma.html @@ -0,0 +1,257 @@ + + + + + + + + +Lgamma — torch_lgamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lgamma

    +
    + +
    torch_lgamma(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    lgamma(input, out=NULL) -> Tensor

    + + + + +

    Computes the logarithm of the gamma function on input.

    +

    $$ + \mbox{out}_{i} = \log \Gamma(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_arange(0.5, 2, 0.5) +torch_lgamma(a) +} +
    #> torch_tensor +#> 0.5724 +#> 0.0000 +#> -0.1208 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_linspace.html b/static/docs/dev/reference/torch_linspace.html new file mode 100644 index 0000000000000000000000000000000000000000..4ca88db33e7e02a9830e446c03a57366f45e8f21 --- /dev/null +++ b/static/docs/dev/reference/torch_linspace.html @@ -0,0 +1,288 @@ + + + + + + + + +Linspace — torch_linspace • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Linspace

    +
    + +
    torch_linspace(
    +  start,
    +  end,
    +  steps = 100,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (float) the starting value for the set of points

    end

    (float) the ending value for the set of points

    steps

    (int) number of points to sample between start and end. Default: 100.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    linspace(start, end, steps=100, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a one-dimensional tensor of steps +equally spaced points between start and end.

    +

    The output tensor is 1-D of size steps.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_linspace(3, 10, steps=5) +torch_linspace(-10, 10, steps=5) +torch_linspace(start=-10, end=10, steps=5) +torch_linspace(start=-10, end=10, steps=1) +} +
    #> torch_tensor +#> -10 +#> [ CPUFloatType{1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_load.html b/static/docs/dev/reference/torch_load.html new file mode 100644 index 0000000000000000000000000000000000000000..0386408fd3ff96586d30fff91efb7b538d18aeda --- /dev/null +++ b/static/docs/dev/reference/torch_load.html @@ -0,0 +1,241 @@ + + + + + + + + +Loads a saved object — torch_load • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Loads a saved object

    +
    + +
    torch_load(path)
    + +

    Arguments

    + + + + + + +
    path

    a path to the saved object

    + +

    See also

    + +

    Other torch_save: +torch_save()
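A typical save/load round trip might look like this (sketch):

```r
library(torch)

x <- torch_randn(3)
path <- tempfile(fileext = ".pt")

torch_save(x, path)    # serialize the tensor to disk
y <- torch_load(path)  # read it back
```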

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_log.html b/static/docs/dev/reference/torch_log.html new file mode 100644 index 0000000000000000000000000000000000000000..378da5f164786283bc6b305a50ea390c7b03f9dc --- /dev/null +++ b/static/docs/dev/reference/torch_log.html @@ -0,0 +1,261 @@ + + + + + + + + +Log — torch_log • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log

    +
    + +
    torch_log(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    log(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the natural logarithm of the elements +of input.

    +

    $$ + y_{i} = \log_{e} (x_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_log(a) +} +
    #> torch_tensor +#> nan +#> nan +#> -0.7203 +#> nan +#> -1.1992 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_log10.html b/static/docs/dev/reference/torch_log10.html new file mode 100644 index 0000000000000000000000000000000000000000..f0be6c62c2d2b4881b87c66ba5aee6140ff1221e --- /dev/null +++ b/static/docs/dev/reference/torch_log10.html @@ -0,0 +1,261 @@ + + + + + + + + +Log10 — torch_log10 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log10

    +
    + +
    torch_log10(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    log10(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the logarithm to the base 10 of the elements +of input.

    +

    $$ + y_{i} = \log_{10} (x_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_rand(5) +a +torch_log10(a) +} +
    #> torch_tensor +#> -1.0370 +#> -0.6577 +#> -0.1037 +#> -0.7685 +#> -0.8610 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_log1p.html b/static/docs/dev/reference/torch_log1p.html new file mode 100644 index 0000000000000000000000000000000000000000..23d6ad669ac08cbc7a9b9ddabc742f7961034fb7 --- /dev/null +++ b/static/docs/dev/reference/torch_log1p.html @@ -0,0 +1,264 @@ + + + + + + + + +Log1p — torch_log1p • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log1p

    +
    + +
    torch_log1p(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    Note

    + +

    This function is more accurate than torch_log for small +values of input
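The accuracy difference shows up for values near zero, where `1 + x` rounds to 1 in single precision (a sketch):

```r
library(torch)

x <- torch_tensor(1e-10)
torch_log1p(x)    # close to 1e-10
torch_log(x + 1)  # 1 + 1e-10 rounds to 1 in float32, giving 0
```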

    +

    log1p(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the natural logarithm of (1 + input).

    +

    $$ + y_i = \log_{e} (x_i + 1) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_log1p(a) +} +
    #> torch_tensor +#> 0.4854 +#> -0.3607 +#> 1.1316 +#> -0.0408 +#> -0.7326 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_log2.html b/static/docs/dev/reference/torch_log2.html new file mode 100644 index 0000000000000000000000000000000000000000..83695336a6f1b9aa6d480f16eeb4b4e6b9001fd0 --- /dev/null +++ b/static/docs/dev/reference/torch_log2.html @@ -0,0 +1,261 @@ + + + + + + + + +Log2 — torch_log2 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log2

    +
    + +
    torch_log2(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    log2(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the logarithm to the base 2 of the elements +of input.

    +

    $$ + y_{i} = \log_{2} (x_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_rand(5) +a +torch_log2(a) +} +
    #> torch_tensor +#> -0.1392 +#> -0.0448 +#> -1.5715 +#> -1.7101 +#> -0.0220 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logdet.html b/static/docs/dev/reference/torch_logdet.html new file mode 100644 index 0000000000000000000000000000000000000000..a3d884fa62104a18942cb837eee40d462f0ae102 --- /dev/null +++ b/static/docs/dev/reference/torch_logdet.html @@ -0,0 +1,269 @@ + + + + + + + + +Logdet — torch_logdet • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logdet

    +
    + +
    torch_logdet(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

    + +

    Note

    + + +
Result is `-inf` if `input` has zero determinant, and is `NaN` if
+`input` has a negative determinant.
    +
    + +
Backward through `logdet` internally uses SVD results when `input`
+is not invertible. In this case, double backward through `logdet` will
+be unstable when `input` doesn't have distinct singular values. See
+[torch_svd()] for details.
    +
    + +

    logdet(input) -> Tensor

    + + + + +

    Calculates log determinant of a square matrix or batches of square matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(3, 3)) +torch_det(A) +torch_logdet(A) +A +A$det() +A$det()$log() +} +
    #> torch_tensor +#> 1.25629 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logical_and.html b/static/docs/dev/reference/torch_logical_and.html new file mode 100644 index 0000000000000000000000000000000000000000..25ba12116e8755e8617e6eac3ac4c2b6795992aa --- /dev/null +++ b/static/docs/dev/reference/torch_logical_and.html @@ -0,0 +1,260 @@ + + + + + + + + +Logical_and — torch_logical_and • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_and

    +
    + +
    torch_logical_and(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the tensor to compute AND with

    + +

    logical_and(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical AND of the given input tensors. Zeros are treated as FALSE and nonzeros are +treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_and(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE))) +a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8()) +b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8()) +torch_logical_and(a, b) +if (FALSE) { +torch_logical_and(a, b, out=torch_empty(4, dtype=torch_bool())) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logical_not.html b/static/docs/dev/reference/torch_logical_not.html new file mode 100644 index 0000000000000000000000000000000000000000..d9e5a06b2073aecf5905d5d384ff24665e27f2cd --- /dev/null +++ b/static/docs/dev/reference/torch_logical_not.html @@ -0,0 +1,255 @@ + + + + + + + + +Logical_not — torch_logical_not • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_not

    +
    + + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    logical_not(input, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool +dtype. If the input tensor is not a bool tensor, zeros are treated as FALSE and non-zeros are treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_not(torch_tensor(c(TRUE, FALSE))) +torch_logical_not(torch_tensor(c(0, 1, -10), dtype=torch_int8())) +torch_logical_not(torch_tensor(c(0., 1.5, -10.), dtype=torch_double())) +} +
    #> torch_tensor +#> 1 +#> 0 +#> 0 +#> [ CPUBoolType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logical_or.html b/static/docs/dev/reference/torch_logical_or.html new file mode 100644 index 0000000000000000000000000000000000000000..d833d837109595e6cb70aad51c75b3bbede6dde4 --- /dev/null +++ b/static/docs/dev/reference/torch_logical_or.html @@ -0,0 +1,262 @@ + + + + + + + + +Logical_or — torch_logical_or • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_or

    +
    + +
    torch_logical_or(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the tensor to compute OR with

    + +

    logical_or(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical OR of the given input tensors. Zeros are treated as FALSE and nonzeros are +treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_or(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE))) +a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8()) +b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8()) +torch_logical_or(a, b) +if (FALSE) { +torch_logical_or(a$double(), b$double()) +torch_logical_or(a$double(), b) +torch_logical_or(a, b, out=torch_empty(4, dtype=torch_bool())) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logical_xor.html b/static/docs/dev/reference/torch_logical_xor.html new file mode 100644 index 0000000000000000000000000000000000000000..033c28323032f10c0cea5319343f67ee62db01e6 --- /dev/null +++ b/static/docs/dev/reference/torch_logical_xor.html @@ -0,0 +1,264 @@ + + + + + + + + +Logical_xor — torch_logical_xor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_xor

    +
    + +
    torch_logical_xor(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the tensor to compute XOR with

    + +

    logical_xor(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical XOR of the given input tensors. Zeros are treated as FALSE and nonzeros are +treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_xor(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE))) +a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8()) +b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8()) +torch_logical_xor(a, b) +torch_logical_xor(a$to(dtype=torch_double()), b$to(dtype=torch_double())) +torch_logical_xor(a$to(dtype=torch_double()), b) +} +
    #> torch_tensor +#> 1 +#> 1 +#> 0 +#> 0 +#> [ CPUBoolType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logspace.html b/static/docs/dev/reference/torch_logspace.html new file mode 100644 index 0000000000000000000000000000000000000000..d6c5e2b7d5db0e75f65596650848ef2a84656a62 --- /dev/null +++ b/static/docs/dev/reference/torch_logspace.html @@ -0,0 +1,294 @@ + + + + + + + + +Logspace — torch_logspace • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logspace

    +
    + +
    torch_logspace(
    +  start,
    +  end,
    +  steps = 100,
    +  base = 10,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (float) the starting value for the set of points

    end

    (float) the ending value for the set of points

    steps

    (int) number of points to sample between start and end. Default: 100.

    base

    (float) base of the logarithm function. Default: 10.0.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    logspace(start, end, steps=100, base=10.0, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a one-dimensional tensor of steps points +logarithmically spaced with base base between +\({\mbox{base}}^{\mbox{start}}\) and \({\mbox{base}}^{\mbox{end}}\).

    +

The output tensor is 1-D, of size steps.
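Concretely, for \(0 \le i < \mbox{steps}\) (assuming steps > 1), the \(i\)-th entry is

$$
 \mbox{out}_{i} = {\mbox{base}}^{\mbox{start} + i \cdot \frac{\mbox{end} - \mbox{start}}{\mbox{steps} - 1}}
$$

so the exponents are linearly spaced while the values themselves are logarithmically spaced.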

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logspace(start=-10, end=10, steps=5) +torch_logspace(start=0.1, end=1.0, steps=5) +torch_logspace(start=0.1, end=1.0, steps=1) +torch_logspace(start=2, end=2, steps=1, base=2) +} +
    #> torch_tensor +#> 4 +#> [ CPUFloatType{1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_logsumexp.html b/static/docs/dev/reference/torch_logsumexp.html new file mode 100644 index 0000000000000000000000000000000000000000..ab2737e00bd0a76b145d234289e8ee56fcdb24ec --- /dev/null +++ b/static/docs/dev/reference/torch_logsumexp.html @@ -0,0 +1,272 @@ + + + + + + + + +Logsumexp — torch_logsumexp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logsumexp

    +
    + +
    torch_logsumexp(self, dim, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    logsumexp(input, dim, keepdim=False, out=NULL)

    + + + + +

    Returns the log of summed exponentials of each row of the input +tensor in the given dimension dim. The computation is numerically +stabilized.

    +

    For summation index \(j\) given by dim and other indices \(i\), the result is

    +

    $$ + \mbox{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij}) +$$
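The numerical stabilization mentioned above is conventionally achieved with the max-shift identity: writing \(m_{i} = \max_j x_{ij}\),

$$
 \log \sum_j \exp(x_{ij}) = m_{i} + \log \sum_j \exp(x_{ij} - m_{i})
$$

so the exponentials never overflow, even for large inputs.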

    +

If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or length(dim)) fewer dimension(s).

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +torch_logsumexp(a, 1) +} +
    #> torch_tensor +#> 1.3320 +#> 1.7755 +#> 2.0098 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_lstsq.html b/static/docs/dev/reference/torch_lstsq.html new file mode 100644 index 0000000000000000000000000000000000000000..26f86d1316d32dea060674c146974320095646f6 --- /dev/null +++ b/static/docs/dev/reference/torch_lstsq.html @@ -0,0 +1,298 @@ + + + + + + + + +Lstsq — torch_lstsq • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lstsq

    +
    + +
    torch_lstsq(self, A)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the matrix \(B\)

    A

    (Tensor) the \(m\) by \(n\) matrix \(A\)

    + +

    Note

    + + +
The case when \(m < n\) is not supported on the GPU.
    +
    + +

    lstsq(input, A, out=NULL) -> Tensor

    + + + + +

    Computes the solution to the least squares and least norm problems for a full +rank matrix \(A\) of size \((m \times n)\) and a matrix \(B\) of +size \((m \times k)\).

    +

    If \(m \geq n\), torch_lstsq() solves the least-squares problem:

    +

    $$ + \begin{array}{ll} + \min_X & \|AX-B\|_2. + \end{array} +$$ +If \(m < n\), torch_lstsq() solves the least-norm problem:

    +

$$ + \begin{array}{llll} + \min_X & \|X\|_2 & \mbox{subject to} & AX = B. + \end{array} +$$ +The returned tensor \(X\) has shape \((\mbox{max}(m, n) \times k)\). The first \(n\) +rows of \(X\) contain the solution. If \(m \geq n\), the residual sum of squares +for the solution in each column is given by the sum of squares of elements in the +remaining \(m - n\) rows of that column.
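For intuition (this is not the algorithm used internally, which delegates to a LAPACK routine): when \(m \geq n\) and \(A\) has full column rank, the least-squares solution coincides with the normal-equations solution

$$
 X = (A^\top A)^{-1} A^\top B
$$

which makes it clear why the solution occupies exactly \(n\) rows in that case.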

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_tensor(rbind( + c(1,1,1), + c(2,3,4), + c(3,5,2), + c(4,2,5), + c(5,4,3) +)) +B = torch_tensor(rbind( + c(-10, -3), + c(12, 14), + c(14, 12), + c(16, 16), + c(18, 16) +)) +out = torch_lstsq(B, A) +out[[1]] +} +
    #> torch_tensor +#> 2.0000 1.0000 +#> 1.0000 1.0000 +#> 1.0000 2.0000 +#> 10.9635 4.8501 +#> 8.9332 5.2418 +#> [ CPUFloatType{5,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_lt.html b/static/docs/dev/reference/torch_lt.html new file mode 100644 index 0000000000000000000000000000000000000000..8e9b2b60423078c37a65200b2958a45be85d750c --- /dev/null +++ b/static/docs/dev/reference/torch_lt.html @@ -0,0 +1,259 @@ + + + + + + + + +Lt — torch_lt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lt

    +
    + +
    torch_lt(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    lt(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} < \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_lt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 0 0 +#> 1 0 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_lu.html b/static/docs/dev/reference/torch_lu.html new file mode 100644 index 0000000000000000000000000000000000000000..dea17909a6e88638a76725638bcb66257e45c198 --- /dev/null +++ b/static/docs/dev/reference/torch_lu.html @@ -0,0 +1,281 @@ + + + + + + + + +LU — torch_lu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Computes the LU factorization of a matrix or batches of matrices A. Returns a +tuple containing the LU factorization and pivots of A. Pivoting is done if pivot +is set to TRUE.

    +
    + +
    torch_lu(A, pivot = TRUE, get_infos = FALSE, out = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    A

(Tensor) the tensor to factor, of size \((*, m, n)\)

    pivot

(bool, optional) controls whether pivoting is done. Default: TRUE

    get_infos

(bool, optional) if set to TRUE, returns an info IntTensor. Default: FALSE

    out

(tuple, optional) optional output tuple. If get_infos is TRUE, then the elements +in the tuple are Tensor, IntTensor, and IntTensor. If get_infos is FALSE, then the +elements in the tuple are Tensor, IntTensor. Default: NULL

    + + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(2, 3, 3)) +torch_lu(A) + +} +
    #> [[1]] +#> torch_tensor +#> (1,.,.) = +#> 1.9995 1.3835 0.1162 +#> 0.0141 -0.6562 -0.2773 +#> -0.3905 -0.7564 0.1848 +#> +#> (2,.,.) = +#> -0.6658 -0.4938 0.4441 +#> -0.3991 -0.6220 0.0216 +#> 0.8804 -0.9766 1.1155 +#> [ CPUFloatType{2,3,3} ] +#> +#> [[2]] +#> torch_tensor +#> 2 2 3 +#> 1 2 3 +#> [ CPUIntType{2,3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_lu_solve.html b/static/docs/dev/reference/torch_lu_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..c7182d0eefd3457e7598fdd71e36e2a852fdc70a --- /dev/null +++ b/static/docs/dev/reference/torch_lu_solve.html @@ -0,0 +1,263 @@ + + + + + + + + +Lu_solve — torch_lu_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lu_solve

    +
    + +
    torch_lu_solve(self, LU_data, LU_pivots)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the RHS tensor of size \((*, m, k)\), where \(*\) is zero or more batch dimensions.

    LU_data

    (Tensor) the pivoted LU factorization of A from torch_lu of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

    LU_pivots

    (IntTensor) the pivots of the LU factorization from torch_lu of size \((*, m)\), where \(*\) is zero or more batch dimensions. The batch dimensions of LU_pivots must be equal to the batch dimensions of LU_data.

    + +

    lu_solve(input, LU_data, LU_pivots, out=NULL) -> Tensor

    + + + + +

    Returns the LU solve of the linear system \(Ax = b\) using the partially pivoted +LU factorization of A from torch_lu.
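In terms of the factorization \(PA = LU\) produced by torch_lu, the solve amounts to one permutation followed by two triangular solves (a sketch of the standard procedure; the actual kernel is delegated to the backend):

$$
 Ly = Pb, \qquad Ux = y
$$

where \(L\) is unit lower triangular, \(U\) is upper triangular, and \(P\) is the permutation encoded by LU_pivots.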

    + +

    Examples

    +
    if (torch_is_installed()) { +A = torch_randn(c(2, 3, 3)) +b = torch_randn(c(2, 3, 1)) +out = torch_lu(A) +x = torch_lu_solve(b, out[[1]], out[[2]]) +torch_norm(torch_bmm(A, x) - b) +} +
    #> torch_tensor +#> 1.68587e-07 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_manual_seed.html b/static/docs/dev/reference/torch_manual_seed.html new file mode 100644 index 0000000000000000000000000000000000000000..3afb06de77cf729b4b1bebd183d8220bf0f50da1 --- /dev/null +++ b/static/docs/dev/reference/torch_manual_seed.html @@ -0,0 +1,237 @@ + + + + + + + + +Sets the seed for generating random numbers. — torch_manual_seed • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sets the seed for generating random numbers.

    +
    + +
    torch_manual_seed(seed)
    + +

    Arguments

    + + + + + + +
    seed

    integer seed.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_masked_select.html b/static/docs/dev/reference/torch_masked_select.html new file mode 100644 index 0000000000000000000000000000000000000000..1d562839899027f9077e4f186bc01fb2ce4122f9 --- /dev/null +++ b/static/docs/dev/reference/torch_masked_select.html @@ -0,0 +1,269 @@ + + + + + + + + +Masked_select — torch_masked_select • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Masked_select

    +
    + +
    torch_masked_select(self, mask)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    mask

    (BoolTensor) the tensor containing the binary mask to index with

    + +

    Note

    + +

    The returned tensor does not use the same storage +as the original tensor

    +

    masked_select(input, mask, out=NULL) -> Tensor

    + + + + +

    Returns a new 1-D tensor which indexes the input tensor according to +the boolean mask mask which is a BoolTensor.

    +

    The shapes of the mask tensor and the input tensor don't need +to match, but they must be broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 4)) +x +mask = x$ge(0.5) +mask +torch_masked_select(x, mask) +} +
    #> torch_tensor +#> 1.0696 +#> 0.6496 +#> 2.4253 +#> 1.3309 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_matmul.html b/static/docs/dev/reference/torch_matmul.html new file mode 100644 index 0000000000000000000000000000000000000000..cd830f4bbc9c9b85480f6ed8a62a606349152a7a --- /dev/null +++ b/static/docs/dev/reference/torch_matmul.html @@ -0,0 +1,347 @@ + + + + + + + + +Matmul — torch_matmul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Matmul

    +
    + +
    torch_matmul(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first tensor to be multiplied

    other

    (Tensor) the second tensor to be multiplied

    + +

    Note

    + + +
The 1-dimensional dot product version of this function does not support an out parameter.
    +
    + +

    matmul(input, other, out=NULL) -> Tensor

    + + + + +

    Matrix product of two tensors.

    +

    The behavior depends on the dimensionality of the tensors as follows:

      +
    • If both tensors are 1-dimensional, the dot product (scalar) is returned.

    • +
    • If both arguments are 2-dimensional, the matrix-matrix product is returned.

    • +
    • If the first argument is 1-dimensional and the second argument is 2-dimensional, +a 1 is prepended to its dimension for the purpose of the matrix multiply. +After the matrix multiply, the prepended dimension is removed.

    • +
    • If the first argument is 2-dimensional and the second argument is 1-dimensional, +the matrix-vector product is returned.

    • +
• If both arguments are at least 1-dimensional and at least one argument is +N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first +argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the +batched matrix multiply and removed after. If the second argument is 1-dimensional, a +1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. +The non-matrix (i.e. batch) dimensions are broadcasted (and thus +must be broadcastable). For example, if input is a +\((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) +tensor, out will be a \((j \times k \times n \times p)\) tensor.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +# vector x vector +tensor1 = torch_randn(c(3)) +tensor2 = torch_randn(c(3)) +torch_matmul(tensor1, tensor2) +# matrix x vector +tensor1 = torch_randn(c(3, 4)) +tensor2 = torch_randn(c(4)) +torch_matmul(tensor1, tensor2) +# batched matrix x broadcasted vector +tensor1 = torch_randn(c(10, 3, 4)) +tensor2 = torch_randn(c(4)) +torch_matmul(tensor1, tensor2) +# batched matrix x batched matrix +tensor1 = torch_randn(c(10, 3, 4)) +tensor2 = torch_randn(c(10, 4, 5)) +torch_matmul(tensor1, tensor2) +# batched matrix x broadcasted matrix +tensor1 = torch_randn(c(10, 3, 4)) +tensor2 = torch_randn(c(4, 5)) +torch_matmul(tensor1, tensor2) +} +
    #> torch_tensor +#> (1,.,.) = +#> 2.9089 0.9845 -1.9209 0.7565 -0.8089 +#> 1.5602 -0.1355 -0.5663 -0.7256 -0.3005 +#> -5.1681 0.8439 -0.2311 4.0118 0.6029 +#> +#> (2,.,.) = +#> -3.2413 0.6460 -0.0018 2.4108 -0.3839 +#> 1.1368 0.6532 1.2959 -0.8586 -1.9191 +#> 1.8520 -0.0187 0.1186 -0.9161 0.4118 +#> +#> (3,.,.) = +#> 2.9787 0.2050 0.3688 -1.8496 -1.7943 +#> -2.8292 0.7724 0.4590 2.3608 0.5896 +#> 2.1940 0.3648 0.9281 -1.0902 0.0810 +#> +#> (4,.,.) = +#> -2.1130 0.3511 -0.6688 2.3332 1.7527 +#> 3.4641 0.3747 1.2832 -2.0061 -0.3228 +#> 3.3574 -0.2530 0.2699 -2.4027 -0.9421 +#> +#> (5,.,.) = +#> -0.7329 -0.3116 -0.9270 0.1775 -1.1252 +#> -0.5466 0.5188 -0.3335 1.2077 0.1667 +#> -2.5609 -0.2285 -1.5218 2.0231 0.8828 +#> +#> (6,.,.) = +#> -2.6296 1.0644 -0.3949 3.1285 0.5754 +#> 0.6244 -1.2722 0.0847 -2.0712 0.1138 +#> -1.6715 0.1444 -2.2961 2.2244 -0.3139 +#> +#> (7,.,.) = +#> 2.3106 -1.2625 1.1150 -3.0800 1.7108 +#> -1.4338 0.6178 0.9726 1.0354 0.0449 +#> 0.5690 -1.0606 -0.5211 -1.1207 1.3386 +#> +#> (8,.,.) = +#> -0.7904 0.7064 1.5994 0.5113 0.1367 +#> -0.6828 -0.1212 -2.8078 1.8834 0.6446 +#> 1.6149 -0.3797 -0.5884 -0.9357 0.2698 +#> +#> (9,.,.) = +#> 2.2742 -1.5073 -0.8308 -2.4203 1.3077 +#> -3.3323 -1.0881 -2.3815 2.2911 3.2742 +#> -1.8565 -0.3452 -0.5353 0.8325 0.3332 +#> +#> (10,.,.) = +#> 2.3585 -0.4239 0.6924 -2.1180 0.0934 +#> 0.9073 1.1976 -0.0628 0.8384 -1.5615 +#> -1.1092 -0.2548 0.6868 -0.3382 -0.6751 +#> [ CPUFloatType{10,3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_matrix_power.html b/static/docs/dev/reference/torch_matrix_power.html new file mode 100644 index 0000000000000000000000000000000000000000..42a3227b975aba5d45cecccfa535cb1126561e51 --- /dev/null +++ b/static/docs/dev/reference/torch_matrix_power.html @@ -0,0 +1,268 @@ + + + + + + + + +Matrix_power — torch_matrix_power • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Matrix_power

    +
    + +
    torch_matrix_power(self, n)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    n

    (int) the power to raise the matrix to

    + +

    matrix_power(input, n) -> Tensor

    + + + + +

Returns the matrix raised to the power n for square matrices. +For a batch of matrices, each individual matrix is raised to the power n.

    +

    If n is negative, then the inverse of the matrix (if invertible) is +raised to the power n. For a batch of matrices, the batched inverse +(if invertible) is raised to the power n. If n is 0, then an identity matrix +is returned.
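Written out, the conventions above amount to

$$
 A^{n} = \underbrace{A \cdots A}_{n}, \qquad A^{-n} = (A^{-1})^{n}, \qquad A^{0} = I
$$

applied per matrix when the input is batched.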

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(2, 2, 2)) +a +torch_matrix_power(a, 3) +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.1113 0.3742 +#> 1.4282 4.8037 +#> +#> (2,.,.) = +#> -6.5694 -8.0205 +#> -5.1387 -6.1698 +#> [ CPUFloatType{2,2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_matrix_rank.html b/static/docs/dev/reference/torch_matrix_rank.html new file mode 100644 index 0000000000000000000000000000000000000000..20a4f212ff4ddb49d1b1d599abd612dbd08102d7 --- /dev/null +++ b/static/docs/dev/reference/torch_matrix_rank.html @@ -0,0 +1,268 @@ + + + + + + + + +Matrix_rank — torch_matrix_rank • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Matrix_rank

    +
    + +
    torch_matrix_rank(self, tol, symmetric = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input 2-D tensor

    tol

    (float, optional) the tolerance value. Default: NULL

    symmetric

    (bool, optional) indicates whether input is symmetric. Default: FALSE

    + +

    matrix_rank(input, tol=NULL, symmetric=False) -> Tensor

    + + + + +

Returns the numerical rank of a 2-D tensor. By default, the matrix +rank is computed using SVD. If symmetric is TRUE, +then input is assumed to be symmetric, and the rank is +computed from its eigenvalues.

    +

    tol is the threshold below which the singular values (or the eigenvalues +when symmetric is TRUE) are considered to be 0. If tol is not +specified, tol is set to S.max() * max(S.size()) * eps where S is the +singular values (or the eigenvalues when symmetric is TRUE), and eps +is the epsilon value for the datatype of input.
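Rendered as a formula, the default threshold described above is

$$
 \mbox{tol} = \sigma_{\max} \cdot \max(m, n) \cdot \varepsilon
$$

where \(\sigma_{\max}\) is the largest singular value (or eigenvalue when symmetric is TRUE), \(m\) and \(n\) are the matrix dimensions, and \(\varepsilon\) is the machine epsilon for the input's dtype; the rank is the count of singular values exceeding tol.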

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_eye(10) +torch_matrix_rank(a) +} +
    #> torch_tensor +#> 10 +#> [ CPULongType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_max.html b/static/docs/dev/reference/torch_max.html new file mode 100644 index 0000000000000000000000000000000000000000..fd4ce2f4f5db7097dbec63c7105b2a749985030a --- /dev/null +++ b/static/docs/dev/reference/torch_max.html @@ -0,0 +1,320 @@ + + + + + + + + +Max — torch_max • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Max

    +
    + + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not. Default: FALSE.

    out

    (tuple, optional) the result tuple of two output tensors (max, max_indices)

    other

    (Tensor) the second input tensor

    + +

    Note

    + +

    When the shapes do not match, the shape of the returned output tensor +follows the broadcasting rules .

    +

    max(input) -> Tensor

    + + + + +

    Returns the maximum value of all elements in the input tensor.

    +

    max(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the maximum +value of each row of the input tensor in the given dimension +dim. And indices is the index location of each maximum value found +(argmax).

    +

    Warning

    + + + +

    indices does not necessarily contain the first occurrence of each +maximal value found, unless it is unique. +The exact implementation details are device-specific. +Do not expect the same result when run on CPU and GPU in general.

    +

    If keepdim is TRUE, the output tensors are of the same size +as input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting +in the output tensors having 1 fewer dimension than input.

    +

    max(input, other, out=NULL) -> Tensor

    + + + + +

    Each element of the tensor input is compared with the corresponding +element of the tensor other and an element-wise maximum is taken.

    +

    The shapes of input and other don't need to match, +but they must be broadcastable .

    +

    $$ + \mbox{out}_i = \max(\mbox{tensor}_i, \mbox{other}_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_max(a) + + +a = torch_randn(c(4, 4)) +a +torch_max(a, dim = 1) + + +a = torch_randn(c(4)) +a +b = torch_randn(c(4)) +b +torch_max(a, other = b) +} +
    #> torch_tensor +#> -0.0453 +#> 0.1631 +#> -0.6085 +#> 2.0054 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_mean.html b/static/docs/dev/reference/torch_mean.html new file mode 100644 index 0000000000000000000000000000000000000000..189335c5a6d915f74b43a2661c0c5ef613f9a184 --- /dev/null +++ b/static/docs/dev/reference/torch_mean.html @@ -0,0 +1,283 @@ + + + + + + + + +Mean — torch_mean • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mean

    +
    + +
    torch_mean(self, dim, keepdim = FALSE, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    dtype

    the resulting data type.

    + +

    mean(input) -> Tensor

    + + + + +

    Returns the mean value of all elements in the input tensor.

    +

    mean(input, dim, keepdim=False, out=NULL) -> Tensor

    + + + + +

    Returns the mean value of each row of the input tensor in the given +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or length(dim)) fewer dimension(s).

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_mean(a) + + +a = torch_randn(c(4, 4)) +a +torch_mean(a, 1) +torch_mean(a, 1, TRUE) +} +
    #> torch_tensor +#> -0.2487 -0.7366 -0.5339 -1.1544 +#> [ CPUFloatType{1,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_median.html b/static/docs/dev/reference/torch_median.html new file mode 100644 index 0000000000000000000000000000000000000000..3899b939d104fa618ca268154b9c50763d5e805d --- /dev/null +++ b/static/docs/dev/reference/torch_median.html @@ -0,0 +1,294 @@ + + + + + + + + +Median — torch_median • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Median

    +
    + +
    torch_median(self, dim, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    median(input) -> Tensor

    + + + + +

    Returns the median value of all elements in the input tensor.

    +

    median(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the median +value of each row of the input tensor in the given dimension +dim. And indices is the index location of each median value found.

    +

    By default, dim is the last dimension of the input tensor.

    +

If keepdim is TRUE, the output tensors are of the same size +as input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in +the output tensors having 1 fewer dimension than input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_median(a) + + +a = torch_randn(c(4, 5)) +a +torch_median(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> -0.4870 +#> -1.3090 +#> -0.2144 +#> 1.1017 +#> -0.6253 +#> [ CPUFloatType{5} ] +#> +#> [[2]] +#> torch_tensor +#> 2 +#> 1 +#> 1 +#> 3 +#> 1 +#> [ CPULongType{5} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_memory_format.html b/static/docs/dev/reference/torch_memory_format.html new file mode 100644 index 0000000000000000000000000000000000000000..bf53e61c17b70314c5b77cfe2bbcb3ab3d000acd --- /dev/null +++ b/static/docs/dev/reference/torch_memory_format.html @@ -0,0 +1,233 @@ + + + + + + + + +Memory format — torch_memory_format • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Returns the corresponding memory format.

    +
    + +
    torch_contiguous_format()
    +
    +torch_preserve_format()
    +
    +torch_channels_last_format()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_meshgrid.html b/static/docs/dev/reference/torch_meshgrid.html new file mode 100644 index 0000000000000000000000000000000000000000..3308c77e6dd065a130a688605635f045cb5bebbf --- /dev/null +++ b/static/docs/dev/reference/torch_meshgrid.html @@ -0,0 +1,268 @@ + + + + + + + + +Meshgrid — torch_meshgrid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Meshgrid

    +
    + +
    torch_meshgrid(tensors)
    + +

    Arguments

    + + + + + + +
    tensors

(list of Tensor) list of scalars or 1-dimensional tensors. Scalars will be +treated as tensors of size \((1,)\).

    + +

meshgrid(tensors) -> list of Tensors

    + + + + +

Take \(N\) tensors, each of which can be either a scalar or a 1-dimensional +vector, and create \(N\) N-dimensional grids, where the \(i\) th grid is defined by +expanding the \(i\) th input over dimensions defined by the other inputs.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(c(1, 2, 3)) +y = torch_tensor(c(4, 5, 6)) +out = torch_meshgrid(list(x, y)) +out +} +
    #> [[1]] +#> torch_tensor +#> 1 1 1 +#> 2 2 2 +#> 3 3 3 +#> [ CPUFloatType{3,3} ] +#> +#> [[2]] +#> torch_tensor +#> 4 5 6 +#> 4 5 6 +#> 4 5 6 +#> [ CPUFloatType{3,3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_min.html b/static/docs/dev/reference/torch_min.html new file mode 100644 index 0000000000000000000000000000000000000000..950bce9e6ef391672d44fbf71e7aa48b602d6393 --- /dev/null +++ b/static/docs/dev/reference/torch_min.html @@ -0,0 +1,321 @@ + + + + + + + + +Min — torch_min • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Min

    +
    + + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    out

    (tuple, optional) the tuple of two output tensors (min, min_indices)

    other

    (Tensor) the second input tensor

    + +

    Note

    + +

When the shapes do not match, the shape of the returned output tensor +follows the broadcasting rules.

    +

    min(input) -> Tensor

    + + + + +

    Returns the minimum value of all elements in the input tensor.

    +

    min(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

Returns a namedtuple (values, indices) where values is the minimum +value of each row of the input tensor in the given dimension +dim, and indices is the index location of each minimum value found +(argmin).

    +

    Warning

    + + + +

    indices does not necessarily contain the first occurrence of each +minimal value found, unless it is unique. +The exact implementation details are device-specific. +Do not expect the same result when run on CPU and GPU in general.

    +

    If keepdim is TRUE, the output tensors are of the same size as +input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in +the output tensors having 1 fewer dimension than input.

    +

    min(input, other, out=NULL) -> Tensor

    + + + + +

    Each element of the tensor input is compared with the corresponding +element of the tensor other and an element-wise minimum is taken. +The resulting tensor is returned.

    +

The shapes of input and other don't need to match, +but they must be broadcastable.

    +

$$ + \mbox{out}_i = \min(\mbox{input}_i, \mbox{other}_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_min(a) + + +a = torch_randn(c(4, 4)) +a +torch_min(a, dim = 1) + + +a = torch_randn(c(4)) +a +b = torch_randn(c(4)) +b +torch_min(a, other = b) +} +
    #> torch_tensor +#> -1.0568 +#> -0.9262 +#> 0.8762 +#> -0.0916 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_mm.html b/static/docs/dev/reference/torch_mm.html new file mode 100644 index 0000000000000000000000000000000000000000..7ab9e231262e2f91f2b0667ccc88b9e9d72ba513 --- /dev/null +++ b/static/docs/dev/reference/torch_mm.html @@ -0,0 +1,264 @@ + + + + + + + + +Mm — torch_mm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mm

    +
    + +
    torch_mm(self, mat2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first matrix to be multiplied

    mat2

    (Tensor) the second matrix to be multiplied

    + +

    Note

    + +

This function does not broadcast. +For broadcasting matrix products, see torch_matmul.

    +

    mm(input, mat2, out=NULL) -> Tensor

    + + + + +

    Performs a matrix multiplication of the matrices input and mat2.

    +

    If input is a \((n \times m)\) tensor, mat2 is a +\((m \times p)\) tensor, out will be a \((n \times p)\) tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +mat1 = torch_randn(c(2, 3)) +mat2 = torch_randn(c(3, 3)) +torch_mm(mat1, mat2) +} +
    #> torch_tensor +#> 1.7778 -1.7284 -0.0374 +#> -1.1219 3.0849 0.3441 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_mode.html b/static/docs/dev/reference/torch_mode.html new file mode 100644 index 0000000000000000000000000000000000000000..4286a40064ffd965a7422506b39433ef0d8fd502 --- /dev/null +++ b/static/docs/dev/reference/torch_mode.html @@ -0,0 +1,279 @@ + + + + + + + + +Mode — torch_mode • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mode

    +
    + +
    torch_mode(self, dim = -1L, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    Note

    + +

    This function is not defined for torch_cuda.Tensor yet.

    +

    mode(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the mode +value of each row of the input tensor in the given dimension +dim, i.e. a value which appears most often +in that row, and indices is the index location of each mode value found.

    +

    By default, dim is the last dimension of the input tensor.

    +

    If keepdim is TRUE, the output tensors are of the same size as +input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting +in the output tensors having 1 fewer dimension than input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randint(0, 50, size = list(5)) +a +torch_mode(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 11 +#> [ CPUFloatType{} ] +#> +#> [[2]] +#> torch_tensor +#> 3 +#> [ CPULongType{} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_mul.html b/static/docs/dev/reference/torch_mul.html new file mode 100644 index 0000000000000000000000000000000000000000..d92492ba4736f41b5f7c054ae3c0142abaa268cf --- /dev/null +++ b/static/docs/dev/reference/torch_mul.html @@ -0,0 +1,282 @@ + + + + + + + + +Mul — torch_mul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mul

    +
    + +
    torch_mul(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first multiplicand tensor

    other

    (Tensor) the second multiplicand tensor

    + +

    mul(input, other, out=NULL)

    + + + + +

    Multiplies each element of the input input with the scalar +other and returns a new resulting tensor.

    +

$$ + \mbox{out}_i = \mbox{other} \times \mbox{input}_i +$$ +If input is of type FloatTensor or DoubleTensor, other +should be a real number; otherwise it should be an integer.

    + + +

    Each element of the tensor input is multiplied by the corresponding +element of the Tensor other. The resulting tensor is returned.

    +

    The shapes of input and other must be +broadcastable .

    +

    $$ + \mbox{out}_i = \mbox{input}_i \times \mbox{other}_i +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3)) +a +torch_mul(a, 100) + + +a = torch_randn(c(4, 1)) +a +b = torch_randn(c(1, 4)) +b +torch_mul(a, b) +} +
    #> torch_tensor +#> 0.1073 -0.3304 0.0649 0.2010 +#> -0.3764 1.1592 -0.2277 -0.7051 +#> -0.2627 0.8090 -0.1589 -0.4921 +#> -0.3030 0.9332 -0.1833 -0.5676 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_multinomial.html b/static/docs/dev/reference/torch_multinomial.html new file mode 100644 index 0000000000000000000000000000000000000000..81e8189d973f171cfb52777e75f3ef49f3f91983 --- /dev/null +++ b/static/docs/dev/reference/torch_multinomial.html @@ -0,0 +1,291 @@ + + + + + + + + +Multinomial — torch_multinomial • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Multinomial

    +
    + +
    torch_multinomial(self, num_samples, replacement = FALSE, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor containing probabilities

    num_samples

    (int) number of samples to draw

    replacement

    (bool, optional) whether to draw with replacement or not

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

    Note

    + + +
    The rows of `input` do not need to sum to one (in which case we use
    +the values as weights), but must be non-negative, finite and have
    +a non-zero sum.
    +
    + +

    Indices are ordered from left to right according to when each was sampled +(first samples are placed in first column).

    +

    If input is a vector, out is a vector of size num_samples.

    +

If input is a matrix with m rows, out is a matrix of shape +\((m \times \mbox{num\_samples})\).

    +

    If replacement is TRUE, samples are drawn with replacement.

    +

    If not, they are drawn without replacement, which means that when a +sample index is drawn for a row, it cannot be drawn again for that row.

    +
    When drawn without replacement, `num_samples` must be lower than
    +number of non-zero elements in `input` (or the min number of non-zero
    +elements in each row of `input` if it is a matrix).
    +
    + +

    multinomial(input, num_samples, replacement=False, *, generator=NULL, out=NULL) -> LongTensor

    + + + + +

    Returns a tensor where each row contains num_samples indices sampled +from the multinomial probability distribution located in the corresponding row +of tensor input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +weights = torch_tensor(c(0, 10, 3, 0), dtype=torch_float()) # create a tensor of weights +torch_multinomial(weights, 2) +torch_multinomial(weights, 4, replacement=TRUE) +} +
    #> torch_tensor +#> 1 +#> 1 +#> 2 +#> 1 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_mv.html b/static/docs/dev/reference/torch_mv.html new file mode 100644 index 0000000000000000000000000000000000000000..98e0462667c6d8dadcc6884ad9885659688b7922 --- /dev/null +++ b/static/docs/dev/reference/torch_mv.html @@ -0,0 +1,264 @@ + + + + + + + + +Mv — torch_mv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mv

    +
    + +
    torch_mv(self, vec)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) matrix to be multiplied

    vec

    (Tensor) vector to be multiplied

    + +

    Note

    + +

This function does not broadcast.

    +

    mv(input, vec, out=NULL) -> Tensor

    + + + + +

    Performs a matrix-vector product of the matrix input and the vector +vec.

    +

    If input is a \((n \times m)\) tensor, vec is a 1-D tensor of +size \(m\), out will be 1-D of size \(n\).

    + +

    Examples

    +
    if (torch_is_installed()) { + +mat = torch_randn(c(2, 3)) +vec = torch_randn(c(3)) +torch_mv(mat, vec) +} +
    #> torch_tensor +#> 0.3571 +#> -0.0321 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_mvlgamma.html b/static/docs/dev/reference/torch_mvlgamma.html new file mode 100644 index 0000000000000000000000000000000000000000..03e105b9c86f06a9310c5fa7464445f4297775e5 --- /dev/null +++ b/static/docs/dev/reference/torch_mvlgamma.html @@ -0,0 +1,264 @@ + + + + + + + + +Mvlgamma — torch_mvlgamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mvlgamma

    +
    + +
    torch_mvlgamma(self, p)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compute the multivariate log-gamma function

    p

    (int) the number of dimensions

    + +

    mvlgamma(input, p) -> Tensor

    + + + + +

Computes the multivariate log-gamma function (see +https://en.wikipedia.org/wiki/Multivariate_gamma_function) with dimension +\(p\) element-wise, given by

    +

    $$ + \log(\Gamma_{p}(a)) = C + \displaystyle \sum_{i=1}^{p} \log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right) +$$ +where \(C = \log(\pi) \times \frac{p (p - 1)}{4}\) and \(\Gamma(\cdot)\) is the Gamma function.

    +

All elements must be greater than \(\frac{p - 1}{2}\); otherwise an error is thrown.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_empty(c(2, 3))$uniform_(1, 2) +a +torch_mvlgamma(a, 2) +} +
    #> torch_tensor +#> 0.4040 0.4059 0.7450 +#> 0.3997 0.8720 0.4162 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_narrow.html b/static/docs/dev/reference/torch_narrow.html new file mode 100644 index 0000000000000000000000000000000000000000..487204d8b637448117d07a719399065030280c5b --- /dev/null +++ b/static/docs/dev/reference/torch_narrow.html @@ -0,0 +1,269 @@ + + + + + + + + +Narrow — torch_narrow • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Narrow

    +
    + +
    torch_narrow(self, dim, start, length)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to narrow

    dim

    (int) the dimension along which to narrow

    start

(int) the index along dim at which to start the narrowed slice

    length

(int) the number of elements to keep along dim

    + +

    narrow(input, dim, start, length) -> Tensor

    + + + + +

Returns a new tensor that is a narrowed version of the input tensor. The +dimension dim spans from start to start + length - 1. The +returned tensor and the input tensor share the same underlying storage.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(matrix(c(1:9), ncol = 3, byrow= TRUE)) +torch_narrow(x, 1, torch_tensor(0L)$sum(dim = 1), 2) +torch_narrow(x, 2, torch_tensor(1L)$sum(dim = 1), 2) +} +
    #> torch_tensor +#> 2 3 +#> 5 6 +#> 8 9 +#> [ CPULongType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ne.html b/static/docs/dev/reference/torch_ne.html new file mode 100644 index 0000000000000000000000000000000000000000..e00c0774436b0748004ee6d81e2e19c478d114c7 --- /dev/null +++ b/static/docs/dev/reference/torch_ne.html @@ -0,0 +1,259 @@ + + + + + + + + +Ne — torch_ne • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ne

    +
    + +
    torch_ne(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    ne(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(input \neq other\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_ne(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(rep(c(1,4), each = 2), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 0 1 +#> 1 0 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_neg.html b/static/docs/dev/reference/torch_neg.html new file mode 100644 index 0000000000000000000000000000000000000000..24858f3edd1ca456040ba4a5cd87ae3cf1f36710 --- /dev/null +++ b/static/docs/dev/reference/torch_neg.html @@ -0,0 +1,260 @@ + + + + + + + + +Neg — torch_neg • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Neg

    +
    + +
    torch_neg(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    neg(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the negative of the elements of input.

    +

    $$ + \mbox{out} = -1 \times \mbox{input} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_neg(a) +} +
    #> torch_tensor +#> -0.6722 +#> -0.2995 +#> 0.8433 +#> -0.6480 +#> 1.4570 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_nonzero.html b/static/docs/dev/reference/torch_nonzero.html new file mode 100644 index 0000000000000000000000000000000000000000..c6dad7df25f8a2b203e2c54713d397f777a5cf2d --- /dev/null +++ b/static/docs/dev/reference/torch_nonzero.html @@ -0,0 +1,284 @@ + + + + + + + + +Nonzero — torch_nonzero • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Nonzero

    +
    + +
    torch_nonzero(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    Note

    + + +
[`torch_nonzero(..., as_tuple=FALSE)`] (default) returns a
    +2-D tensor where each row is the index for a nonzero value.
    +
    +[`torch_nonzero(..., as_tuple=TRUE)`] returns a tuple of 1-D
    +index tensors, allowing for advanced indexing, so `x[x$nonzero(as_tuple=TRUE)]`
    +gives all nonzero values of tensor `x`. Of the returned tuple, each index tensor
    +contains nonzero indices for a certain dimension.
    +
    +See below for more details on the two behaviors.
    +
    + +

    nonzero(input, *, out=NULL, as_tuple=False) -> LongTensor or tuple of LongTensors

    + + + + +

    When as_tuple is FALSE (default):

    +

    Returns a tensor containing the indices of all non-zero elements of +input. Each row in the result contains the indices of a non-zero +element in input. The result is sorted lexicographically, with +the last index changing the fastest (C-style).

    +

    If input has \(n\) dimensions, then the resulting indices tensor +out is of size \((z \times n)\), where \(z\) is the total number of +non-zero elements in the input tensor.

    +

    When as_tuple is TRUE:

    +

    Returns a tuple of 1-D tensors, one for each dimension in input, +each containing the indices (in that dimension) of all non-zero elements of +input .

    +

    If input has \(n\) dimensions, then the resulting tuple contains \(n\) +tensors of size \(z\), where \(z\) is the total number of +non-zero elements in the input tensor.

    +

    As a special case, when input has zero dimensions and a nonzero scalar +value, it is treated as a one-dimensional tensor with one element.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_nonzero(torch_tensor(c(1, 1, 1, 0, 1))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 2 +#> 4 +#> [ CPULongType{4,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_norm.html b/static/docs/dev/reference/torch_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..cefcda7aa9bbf8c218c9f4fd6fa970f830e32453 --- /dev/null +++ b/static/docs/dev/reference/torch_norm.html @@ -0,0 +1,274 @@ + + + + + + + + +Norm — torch_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Norm

    +
    + +
    torch_norm(self, p = 2L, dim, keepdim = FALSE, dtype)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor

    p

(int, float, inf, -inf, 'fro', 'nuc', optional) the order of norm. Default: 'fro'. The following norms can be calculated (ord: matrix norm / vector norm): NULL: Frobenius norm / 2-norm; 'fro': Frobenius norm / --; 'nuc': nuclear norm / --; other: treated as a vector norm when dim is NULL / sum(abs(x)^ord)^(1/ord).

    dim

    (int, 2-tuple of ints, 2-list of ints, optional) If it is an int, vector norm will be calculated, if it is 2-tuple of ints, matrix norm will be calculated. If the value is NULL, matrix norm will be calculated when the input tensor only has two dimensions, vector norm will be calculated when the input tensor only has one dimension. If the input tensor has more than two dimensions, the vector norm will be applied to last dimension.

    keepdim

(bool, optional) whether the output tensors have dim retained or not. Ignored if dim = NULL and out = NULL. Default: FALSE.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to 'dtype' while performing the operation. Default: NULL.

    + +

norm(input, p='fro', dim=NULL, keepdim=FALSE, out=NULL, dtype=NULL) -> Tensor

    + + + + +

    Returns the matrix norm or vector norm of a given tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_arange(0, 9, dtype = torch_float()) +b = a$reshape(list(3, 3)) +torch_norm(a) +torch_norm(b) +torch_norm(a, Inf) +torch_norm(b, Inf) + +} +
    #> torch_tensor +#> 8 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_normal.html b/static/docs/dev/reference/torch_normal.html new file mode 100644 index 0000000000000000000000000000000000000000..ecb375a3909601e16523ba120c44025a5a21e49d --- /dev/null +++ b/static/docs/dev/reference/torch_normal.html @@ -0,0 +1,304 @@ + + + + + + + + +Normal — torch_normal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Normal

    +
    + +
    torch_normal(mean, std = 1L, size, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    mean

    (Tensor) the tensor of per-element means

    std

    (Tensor) the tensor of per-element standard deviations

    size

    (int...) a sequence of integers defining the shape of the output tensor.

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

    Note

    + +

When the shapes do not match, the shape of mean +is used as the shape for the returned output tensor.

    +

    normal(mean, std, *, generator=NULL, out=NULL) -> Tensor

    + + + + +

    Returns a tensor of random numbers drawn from separate normal distributions +whose mean and standard deviation are given.

    +

    The mean is a tensor with the mean of +each output element's normal distribution

    +

    The std is a tensor with the standard deviation of +each output element's normal distribution

    +

    The shapes of mean and std don't need to match, but the +total number of elements in each tensor need to be the same.

    +

    normal(mean=0.0, std, out=NULL) -> Tensor

    + + + + +

    Similar to the function above, but the means are shared among all drawn +elements.

    +

    normal(mean, std=1.0, out=NULL) -> Tensor

    + + + + +

    Similar to the function above, but the standard-deviations are shared among +all drawn elements.

    +

    normal(mean, std, size, *, out=NULL) -> Tensor

    + + + + +

    Similar to the function above, but the means and standard deviations are shared +among all drawn elements. The resulting tensor has size given by size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +torch_normal(mean=0, std=torch_arange(1, 0, -0.1)) + + +torch_normal(mean=0.5, std=torch_arange(1., 6.)) + + +torch_normal(mean=torch_arange(1., 6.)) + + +torch_normal(2, 3, size=list(1, 4)) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ones.html b/static/docs/dev/reference/torch_ones.html new file mode 100644 index 0000000000000000000000000000000000000000..1b6d4f63b547a39a1d3313f696a348dfcf7e244f --- /dev/null +++ b/static/docs/dev/reference/torch_ones.html @@ -0,0 +1,284 @@ + + + + + + + + +Ones — torch_ones • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ones

    +
    + +
    torch_ones(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

    (int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

    names

    optional names for the dimensions

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    ones(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a tensor filled with the scalar value 1, with the shape defined +by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_ones(c(2, 3)) +torch_ones(c(5)) +} +
    #> torch_tensor +#> 1 +#> 1 +#> 1 +#> 1 +#> 1 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ones_like.html b/static/docs/dev/reference/torch_ones_like.html new file mode 100644 index 0000000000000000000000000000000000000000..3d93ca92a3ea584886e08e1f90cf99afd8eda688 --- /dev/null +++ b/static/docs/dev/reference/torch_ones_like.html @@ -0,0 +1,289 @@ + + + + + + + + +Ones_like — torch_ones_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ones_like

    +
    + +
    torch_ones_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    ones_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

    Returns a tensor filled with the scalar value 1, with the same size as +input. torch_ones_like(input) is equivalent to +torch_ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

    +

    Warning

    + + + +

    As of 0.4, this function does not support an out keyword. As an alternative, +the old torch_ones_like(input, out=output) is equivalent to +torch_ones(input.size(), out=output).

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_empty(c(2, 3)) +torch_ones_like(input) +} +
    #> torch_tensor +#> 1 1 1 +#> 1 1 1 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_orgqr.html b/static/docs/dev/reference/torch_orgqr.html new file mode 100644 index 0000000000000000000000000000000000000000..b438e741eee3d1b1ca02f1a93220335d366df7ac --- /dev/null +++ b/static/docs/dev/reference/torch_orgqr.html @@ -0,0 +1,250 @@ + + + + + + + + +Orgqr — torch_orgqr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Orgqr

    +
    + +
    torch_orgqr(self, input2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the a from torch_geqrf.

    input2

    (Tensor) the tau from torch_geqrf.

    + +

    orgqr(input, input2) -> Tensor

    + + + + +

    Computes the orthogonal matrix Q of a QR factorization, from the (input, input2) +tuple returned by torch_geqrf.

    +

This directly calls the underlying LAPACK function ?orgqr. +See the LAPACK documentation for orgqr for further details.
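The relationship to torch_geqrf() can be sketched as follows. This is an illustrative example, not part of the original page; it assumes torch_geqrf() returns a list whose two elements are the compact factorization a and the Householder scalars tau.

```r
library(torch)

A  <- torch_randn(c(4, 3))
qr <- torch_geqrf(A)                # compact QR representation: list(a, tau)
Q  <- torch_orgqr(qr[[1]], qr[[2]]) # materialize the orthogonal factor Q

# Q has orthonormal columns, so t(Q) %*% Q should be close to the identity:
Q$t()$matmul(Q)
```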

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_ormqr.html b/static/docs/dev/reference/torch_ormqr.html new file mode 100644 index 0000000000000000000000000000000000000000..7a59f577b447404e833c8ded9decde350d7149be --- /dev/null +++ b/static/docs/dev/reference/torch_ormqr.html @@ -0,0 +1,262 @@ + + + + + + + + +Ormqr — torch_ormqr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ormqr

    +
    + +
    torch_ormqr(self, input2, input3, left = TRUE, transpose = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the a from torch_geqrf.

    input2

    (Tensor) the tau from torch_geqrf.

    input3

    (Tensor) the matrix to be multiplied.

    left

    see LAPACK documentation

    transpose

    see LAPACK documentation

    + +

    ormqr(input, input2, input3, left=TRUE, transpose=False) -> Tensor

    + + + + +

    Multiplies mat (given by input3) by the orthogonal Q matrix of the QR factorization +formed by torch_geqrf() that is represented by (a, tau) (given by (input, input2)).

    +

    This directly calls the underlying LAPACK function ?ormqr. +See LAPACK documentation for ormqr for further details.
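As an illustrative sketch (not part of the original page), ormqr can apply Q to another matrix without ever forming Q explicitly; the list indexing of the torch_geqrf() result is an assumption.

```r
library(torch)

A  <- torch_randn(c(4, 4))
qr <- torch_geqrf(A)          # compact (a, tau) representation of Q
C  <- torch_randn(c(4, 2))

# Computes Q %*% C directly from the compact representation,
# which avoids materializing Q with torch_orgqr() first.
torch_ormqr(qr[[1]], qr[[2]], C, left = TRUE, transpose = FALSE)
```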

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_pdist.html b/static/docs/dev/reference/torch_pdist.html new file mode 100644 index 0000000000000000000000000000000000000000..4c3b85c6d6a0fa8db10611d8305b09ebcac60228 --- /dev/null +++ b/static/docs/dev/reference/torch_pdist.html @@ -0,0 +1,256 @@ + + + + + + + + +Pdist — torch_pdist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pdist

    +
    + +
    torch_pdist(self, p = 2L)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) input tensor of shape \(N \times M\).

    p

(float) p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).

    + +

    pdist(input, p=2) -> Tensor

    + + + + +

    Computes the p-norm distance between every pair of row vectors in the input. +This is identical to the upper triangular portion, excluding the diagonal, of +torch_norm(input[:, NULL] - input, dim=2, p=p). This function will be faster +if the rows are contiguous.

    +

    If input has shape \(N \times M\) then the output will have shape +\(\frac{1}{2} N (N - 1)\).

    +

    This function is equivalent to scipy.spatial.distance.pdist(input, 'minkowski', p=p) if \(p \in (0, \infty)\). When \(p = 0\) it is +equivalent to scipy.spatial.distance.pdist(input, 'hamming') * M. +When \(p = \infty\), the closest scipy function is +scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()).
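The flattened upper-triangular layout described above can be illustrated with a short pure-Python sketch (the helper name `pdist` here is only for illustration, not the torch API):

```python
from itertools import combinations

def pdist(rows, p=2.0):
    """Pairwise p-norm distances between row vectors, in the same
    flattened upper-triangular order (excluding the diagonal)."""
    def dist(a, b):
        return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)
    return [dist(a, b) for a, b in combinations(rows, 2)]

x = [[0.0, 0.0], [3.0, 4.0], [0.0, 4.0]]
print(pdist(x))  # [5.0, 4.0, 3.0] -- N=3 rows give N*(N-1)/2 = 3 distances
```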

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_pinverse.html b/static/docs/dev/reference/torch_pinverse.html new file mode 100644 index 0000000000000000000000000000000000000000..38cb1d8e2cf887f19d9f106d9de2522a53784d9e --- /dev/null +++ b/static/docs/dev/reference/torch_pinverse.html @@ -0,0 +1,283 @@ + + + + + + + + +Pinverse — torch_pinverse • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pinverse

    +
    + +
torch_pinverse(self, rcond = 1e-15)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) The input tensor of size \((*, m, n)\) where \(*\) is zero or more batch dimensions

    rcond

    (float) A floating point value to determine the cutoff for small singular values. Default: 1e-15

    + +

    Note

    + + +
    This method is implemented using the Singular Value Decomposition.
    +
    + +
The pseudo-inverse is not necessarily a continuous function in the elements of the matrix.
+Therefore, derivatives do not always exist, and exist only for matrices of constant rank.
+However, this method supports backpropagation because it is implemented using the SVD results,
+although it can be unstable. Double backward will also be unstable due to the internal use of SVD.
+See torch_svd() for more details.
    +
    + +

    pinverse(input, rcond=1e-15) -> Tensor

    + + + + +

Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. +See the Moore-Penrose inverse for more details.
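As a concrete illustration of what the pseudo-inverse satisfies: for a matrix with full column rank it reduces to the normal-equations form \((A^T A)^{-1} A^T\). The sketch below uses that special case in plain Python (torch itself uses an SVD, which also handles rank-deficient input):

```python
def pinverse_tall(A):
    """Moore-Penrose pseudo-inverse of a full-column-rank m x 2 matrix
    via the normal equations: pinv(A) = (A^T A)^{-1} A^T.
    Illustrative only; torch_pinverse uses an SVD instead."""
    m = len(A)
    At = [list(col) for col in zip(*A)]  # 2 x m transpose
    # Gram matrix G = A^T A (2 x 2), inverted by the explicit 2x2 formula
    G = [[sum(a * b for a, b in zip(r, c)) for c in At] for r in At]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[ G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det,  G[0][0] / det]]
    # pinv = Ginv @ A^T (2 x m)
    return [[sum(Ginv[i][k] * At[k][j] for k in range(2)) for j in range(m)]
            for i in range(2)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
P = pinverse_tall(A)  # 2 x 3; P @ A recovers the 2x2 identity
```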

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randn(c(3, 5)) +input +torch_pinverse(input) +# Batched pinverse example +a = torch_randn(c(2,6,3)) +b = torch_pinverse(a) +torch_matmul(b, a) +} +
    #> torch_tensor +#> (1,.,.) = +#> 1.0000e+00 4.4703e-08 -1.4901e-08 +#> 1.3411e-07 1.0000e+00 4.4703e-08 +#> -1.7881e-07 1.1921e-07 1.0000e+00 +#> +#> (2,.,.) = +#> 1.0000 -0.0000 0.0000 +#> -0.0000 1.0000 0.0000 +#> 0.0000 0.0000 1.0000 +#> [ CPUFloatType{2,3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_pixel_shuffle.html b/static/docs/dev/reference/torch_pixel_shuffle.html new file mode 100644 index 0000000000000000000000000000000000000000..1538a6e839011d6925fede370415e20415c73541 --- /dev/null +++ b/static/docs/dev/reference/torch_pixel_shuffle.html @@ -0,0 +1,255 @@ + + + + + + + + +Pixel_shuffle — torch_pixel_shuffle • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pixel_shuffle

    +
    + +
    torch_pixel_shuffle(self, upscale_factor)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor

    upscale_factor

    (int) factor to increase spatial resolution by

    + +

    Rearranges elements in a tensor of shape

    + +

\((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\)

    +

    Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a +tensor of shape \((*, C, H \times r, W \times r)\).

    +

See nn_pixel_shuffle() for details.
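The rearrangement can be spelled out in plain Python on nested lists (an illustration of the index mapping only, with a made-up helper name, not the torch implementation):

```python
def pixel_shuffle(x, r):
    """Rearrange a nested list of shape (C*r^2, H, W) into (C, H*r, W*r).
    Channel block c*r^2 + dy*r + dx supplies the (dy, dx) sub-pixel."""
    cr2, H, W = len(x), len(x[0]), len(x[0][0])
    C = cr2 // (r * r)
    out = [[[0.0] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for ch in range(cr2):
        c, rem = divmod(ch, r * r)
        dy, dx = divmod(rem, r)
        for i in range(H):
            for j in range(W):
                out[c][i * r + dy][j * r + dx] = x[ch][i][j]
    return out

x = [[[float(ch)]] for ch in range(9)]  # shape (9, 1, 1), r = 3
y = pixel_shuffle(x, 3)                 # shape (1, 3, 3)
```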

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randn(c(1, 9, 4, 4)) +output = nnf_pixel_shuffle(input, 3) +print(output$size()) +} +
    #> [1] 1 1 12 12
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_poisson.html b/static/docs/dev/reference/torch_poisson.html new file mode 100644 index 0000000000000000000000000000000000000000..6e907242474929e961e8863730c5d86cb640317c --- /dev/null +++ b/static/docs/dev/reference/torch_poisson.html @@ -0,0 +1,264 @@ + + + + + + + + +Poisson — torch_poisson • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Poisson

    +
    + +
    torch_poisson(self, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor containing the rates of the Poisson distribution

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

poisson(input, *, generator=NULL) -> Tensor

    + + + + +

Returns a tensor of the same size as input with each element +sampled from a Poisson distribution with rate parameter given by the corresponding +element in input, i.e.,

    +

    $$ + \mbox{out}_i \sim \mbox{Poisson}(\mbox{input}_i) +$$
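One classic way to draw such samples is Knuth's algorithm, sketched here in plain Python (an illustration only, suitable for small rates; torch uses its own optimized native samplers):

```python
import math
import random

def sample_poisson(rate, rng):
    """Knuth's algorithm: count uniform draws until their running
    product falls below exp(-rate)."""
    L = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
samples = [sample_poisson(4.0, rng) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to the rate, 4.0
```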

    + +

    Examples

    +
    if (torch_is_installed()) { + +rates = torch_rand(c(4, 4)) * 5 # rate parameter between 0 and 5 +torch_poisson(rates) +} +
    #> torch_tensor +#> 2 4 3 2 +#> 1 4 1 5 +#> 1 2 2 0 +#> 4 5 7 0 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_polygamma.html b/static/docs/dev/reference/torch_polygamma.html new file mode 100644 index 0000000000000000000000000000000000000000..910d3b71f4262769c44e2f3c5571599ca3bec439 --- /dev/null +++ b/static/docs/dev/reference/torch_polygamma.html @@ -0,0 +1,265 @@ + + + + + + + + +Polygamma — torch_polygamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Polygamma

    +
    + +
    torch_polygamma(n, self)
    + +

    Arguments

    + + + + + + + + + + +
    n

    (int) the order of the polygamma function

    self

    (Tensor) the input tensor.

    + +

    Note

    + + +
This function is not implemented for \(n \geq 2\).
    +
    + +

    polygamma(n, input, out=NULL) -> Tensor

    + + + + +

    Computes the \(n^{th}\) derivative of the digamma function on input. +\(n \geq 0\) is called the order of the polygamma function.

    +

    $$ + \psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x) +$$
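For example, the order-1 polygamma (trigamma) function has the series \(\psi^{(1)}(x) = \sum_{k \ge 0} (x+k)^{-2}\), which can be evaluated numerically as below (a sketch of the math, not torch's implementation):

```python
import math

def trigamma(x, terms=10000):
    """psi^(1)(x) = sum_{k>=0} 1/(x+k)^2, the n = 1 polygamma.
    Direct series plus a midpoint tail correction."""
    s = sum(1.0 / (x + k) ** 2 for k in range(terms))
    return s + 1.0 / (x + terms - 0.5)  # approximate remaining tail

print(trigamma(1.0))  # ~ pi^2 / 6
```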

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +a = torch_tensor(c(1, 0.5)) +torch_polygamma(1, a) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_pow.html b/static/docs/dev/reference/torch_pow.html new file mode 100644 index 0000000000000000000000000000000000000000..e1cdfe1d6e91b9fc561c493caaafb5c866c71dab --- /dev/null +++ b/static/docs/dev/reference/torch_pow.html @@ -0,0 +1,294 @@ + + + + + + + + +Pow — torch_pow • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pow

    +
    + +
    torch_pow(self, exponent)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor or float) the input tensor or scalar base value for the power operation

    exponent

    (float or tensor) the exponent value

    + +

    pow(input, exponent, out=NULL) -> Tensor

    + + + + +

    Takes the power of each element in input with exponent and +returns a tensor with the result.

    +

    exponent can be either a single float number or a Tensor +with the same number of elements as input.

    +

    When exponent is a scalar value, the operation applied is:

    +

    $$ + \mbox{out}_i = x_i^{\mbox{exponent}} +$$ +When exponent is a tensor, the operation applied is:

    +

    $$ + \mbox{out}_i = x_i^{\mbox{exponent}_i} +$$ +When exponent is a tensor, the shapes of input +and exponent must be broadcastable .

    +

    pow(self, exponent, out=NULL) -> Tensor

    + + + + +

self is a scalar float value, and exponent is a tensor. +The returned tensor out is of the same shape as exponent.

    +

    The operation applied is:

    +

    $$ + \mbox{out}_i = \mbox{self} ^ {\mbox{exponent}_i} +$$
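The three cases above (tensor^scalar, tensor^tensor, scalar^tensor) can be mimicked on plain lists (illustrative helper, not the torch API):

```python
def pow_(base, exponent):
    """Elementwise power covering the three cases the docs describe:
    tensor^scalar, tensor^tensor (same length), and scalar^tensor."""
    if isinstance(base, list) and isinstance(exponent, list):
        return [b ** e for b, e in zip(base, exponent)]
    if isinstance(base, list):
        return [b ** exponent for b in base]
    return [base ** e for e in exponent]

print(pow_([1, 2, 3, 4], [1, 2, 3, 4]))  # [1, 4, 27, 256]
print(pow_(2, [1, 2, 3, 4]))             # [2, 4, 8, 16]
```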

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_pow(a, 2) +exp = torch_arange(1., 5.) +a = torch_arange(1., 5.) +a +exp +torch_pow(a, exp) + + +exp = torch_arange(1., 5.) +base = 2 +torch_pow(base, exp) +} +
    #> torch_tensor +#> 2 +#> 4 +#> 8 +#> 16 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_prod.html b/static/docs/dev/reference/torch_prod.html new file mode 100644 index 0000000000000000000000000000000000000000..3a752f5020cb4d6e8be2ff9aab009f0daa6c9401 --- /dev/null +++ b/static/docs/dev/reference/torch_prod.html @@ -0,0 +1,283 @@ + + + + + + + + +Prod — torch_prod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Prod

    +
    + +
    torch_prod(self, dim, keepdim = FALSE, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    prod(input, dtype=NULL) -> Tensor

    + + + + +

    Returns the product of all elements in the input tensor.

    +

prod(input, dim, keepdim=FALSE, dtype=NULL) -> Tensor

    + + + + +

    Returns the product of each row of the input tensor in the given +dimension dim.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in +the output tensor having 1 fewer dimension than input.
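The keepdim behaviour can be illustrated on a plain 2D list (1-based dims as in the R API; the helper name is illustrative only):

```python
from functools import reduce
import operator

def prod2d(x, dim, keepdim=False):
    """Row/column products of a 2D list, mirroring torch_prod's
    keepdim behaviour with 1-based dims as in the R API."""
    if dim == 1:   # reduce over the first dimension: one product per column
        out = [reduce(operator.mul, col, 1) for col in zip(*x)]
    else:          # dim == 2: reduce over the second dimension: one per row
        out = [reduce(operator.mul, row, 1) for row in x]
    if keepdim:    # keep the reduced dimension with size 1
        return [out] if dim == 1 else [[v] for v in out]
    return out

print(prod2d([[1, 2], [3, 4]], 1))  # [3, 8]
```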

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_prod(a) + + +a = torch_randn(c(4, 2)) +a +torch_prod(a, 1) +} +
    #> torch_tensor +#> 0.01 * +#> -9.4219 +#> -0.0763 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_promote_types.html b/static/docs/dev/reference/torch_promote_types.html new file mode 100644 index 0000000000000000000000000000000000000000..19fa9aebe362e59b622cffcd2fefe2aeb33d98d8 --- /dev/null +++ b/static/docs/dev/reference/torch_promote_types.html @@ -0,0 +1,257 @@ + + + + + + + + +Promote_types — torch_promote_types • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Promote_types

    +
    + +
    torch_promote_types(type1, type2)
    + +

    Arguments

    + + + + + + + + + + +
    type1

    (torch.dtype)

    type2

    (torch.dtype)

    + +

    promote_types(type1, type2) -> dtype

    + + + + +

Returns the torch_dtype with the smallest size and scalar kind that is +neither smaller nor of lower kind than either type1 or type2. See the type promotion +documentation for more information on the type +promotion logic.
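A toy model of that rule, with an illustrative (not torch's actual) kind/size table: pick the smallest dtype whose kind and size are both at least each input's.

```python
# Kinds: bool < integer < floating. Sizes in bytes. Illustrative subset only.
KIND = {"bool": 0, "uint8": 1, "int32": 1, "int64": 1,
        "float32": 2, "float64": 2}
SIZE = {"bool": 1, "uint8": 1, "int32": 4, "int64": 8,
        "float32": 4, "float64": 8}

def promote(t1, t2):
    """Smallest dtype not smaller nor of lower kind than either input."""
    candidates = [d for d in KIND
                  if KIND[d] >= max(KIND[t1], KIND[t2])
                  and SIZE[d] >= max(SIZE[t1], SIZE[t2])]
    return min(candidates, key=lambda d: (SIZE[d], KIND[d]))

print(promote("int32", "float32"))  # float32
print(promote("uint8", "int64"))    # int64
```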

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_promote_types(torch_int32(), torch_float32()) +torch_promote_types(torch_uint8(), torch_long()) +} +
    #> torch_Long
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_qr.html b/static/docs/dev/reference/torch_qr.html new file mode 100644 index 0000000000000000000000000000000000000000..bcf5112cf6e377b676bee74949801a13625913cc --- /dev/null +++ b/static/docs/dev/reference/torch_qr.html @@ -0,0 +1,274 @@ + + + + + + + + +Qr — torch_qr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Qr

    +
    + +
    torch_qr(self, some = TRUE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of matrices of dimension \(m \times n\).

    some

    (bool, optional) Set to TRUE for reduced QR decomposition and FALSE for complete QR decomposition.

    + +

    Note

    + +

Precision may be lost if the magnitudes of the elements of input +are large.

    +

    While it should always give you a valid decomposition, it may not +give you the same one across platforms - it will depend on your +LAPACK implementation.

    +

    qr(input, some=TRUE, out=NULL) -> (Tensor, Tensor)

    + + + + +

Computes the QR decomposition of a matrix or a batch of matrices input, +and returns a list of tensors (Q, R) such that \(\mbox{input} = Q R\) +with \(Q\) being an orthogonal matrix or batch of orthogonal matrices and +\(R\) being an upper triangular matrix or batch of upper triangular matrices.

    +

    If some is TRUE, then this function returns the thin (reduced) QR factorization. +Otherwise, if some is FALSE, this function returns the complete QR factorization.
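What the thin factorization means can be sketched with classical Gram-Schmidt in plain Python (LAPACK uses Householder reflections instead, so its Q and R may differ from this by column signs):

```python
def qr_gram_schmidt(A):
    """Thin QR of a square/tall matrix by classical Gram-Schmidt.
    Q is returned with its orthonormal columns stored as rows."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q = []
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(Q)):
            # Project the j-th column onto the i-th orthonormal direction
            R[i][j] = sum(q * a for q, a in zip(Q[i], cols[j]))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, Q[i])]
        R[j][j] = sum(vk * vk for vk in v) ** 0.5
        Q.append([vk / R[j][j] for vk in v])
    return Q, R

# The classic worked example; R's diagonal comes out as 14, 175, 35.
Q, R = qr_gram_schmidt([[12.0, -51.0, 4.0],
                        [6.0, 167.0, -68.0],
                        [-4.0, 24.0, -41.0]])
```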

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(matrix(c(12., -51, 4, 6, 167, -68, -4, 24, -41), ncol = 3, byrow = TRUE)) +out = torch_qr(a) +q = out[[1]] +r = out[[2]] +torch_mm(q, r)$round() +torch_mm(q$t(), q)$round() +} +
    #> torch_tensor +#> 1 0 0 +#> 0 1 0 +#> 0 0 1 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_qscheme.html b/static/docs/dev/reference/torch_qscheme.html new file mode 100644 index 0000000000000000000000000000000000000000..ebcb51acdea99c460e34e8566a93e525b21aba43 --- /dev/null +++ b/static/docs/dev/reference/torch_qscheme.html @@ -0,0 +1,235 @@ + + + + + + + + +Creates the corresponding Scheme object — torch_qscheme • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates the corresponding Scheme object

    +
    + +
    torch_per_channel_affine()
    +
    +torch_per_tensor_affine()
    +
    +torch_per_channel_symmetric()
    +
    +torch_per_tensor_symmetric()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_quantize_per_channel.html b/static/docs/dev/reference/torch_quantize_per_channel.html new file mode 100644 index 0000000000000000000000000000000000000000..117bffb22daf8d1331fa1cc4f8607a3b4bc0bd44 --- /dev/null +++ b/static/docs/dev/reference/torch_quantize_per_channel.html @@ -0,0 +1,271 @@ + + + + + + + + +Quantize_per_channel — torch_quantize_per_channel • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Quantize_per_channel

    +
    + +
    torch_quantize_per_channel(self, scales, zero_points, axis, dtype)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) float tensor to quantize

    scales

    (Tensor) float 1D tensor of scales to use, size should match input.size(axis)

    zero_points

    (int) integer 1D tensor of offset to use, size should match input.size(axis)

    axis

    (int) dimension on which apply per-channel quantization

    dtype

    (torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch.qint8, torch.qint32

    + +

    quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor

    + + + + +

    Converts a float tensor to per-channel quantized tensor with given scales and zero points.
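The underlying formula is q = round(x / scale) + zero_point, applied with one (scale, zero_point) pair per slice along axis. A plain-Python sketch for axis 0, reproducing the example's integer representation (illustrative, not the torch implementation):

```python
def quantize_per_channel(x, scales, zero_points, qmin=0, qmax=255):
    """q = clamp(round(x / scale) + zero_point, qmin, qmax), with one
    (scale, zero_point) pair per row. qmin/qmax model quint8."""
    out = []
    for row, s, zp in zip(x, scales, zero_points):
        out.append([min(qmax, max(qmin, round(v / s) + zp)) for v in row])
    return out

x = [[-1.0, 0.0], [1.0, 2.0]]
print(quantize_per_channel(x, [0.1, 0.01], [10, 0]))  # [[0, 10], [100, 200]]
```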

    + +

    Examples

    +
    if (torch_is_installed()) { +x = torch_tensor(matrix(c(-1.0, 0.0, 1.0, 2.0), ncol = 2, byrow = TRUE)) +torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), + torch_tensor(c(10L, 0L)), 0, torch_quint8()) +torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), + torch_tensor(c(10L, 0L)), 0, torch_quint8())$int_repr() +} +
    #> torch_tensor +#> 0 10 +#> 100 200 +#> [ CPUByteType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_quantize_per_tensor.html b/static/docs/dev/reference/torch_quantize_per_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..5ced4124cdd010007f63915cb4c4bc77c0711331 --- /dev/null +++ b/static/docs/dev/reference/torch_quantize_per_tensor.html @@ -0,0 +1,266 @@ + + + + + + + + +Quantize_per_tensor — torch_quantize_per_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Quantize_per_tensor

    +
    + +
    torch_quantize_per_tensor(self, scale, zero_point, dtype)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) float tensor to quantize

    scale

    (float) scale to apply in quantization formula

    zero_point

    (int) offset in integer value that maps to float zero

    dtype

(torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8(), torch_qint8(), torch_qint32()

    + +

    quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor

    + + + + +

    Converts a float tensor to quantized tensor with given scale and zero point.
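Per-tensor quantization applies q = clamp(round(x / scale) + zero_point) with a single scale and zero point; dequantization approximately inverts it. A plain-Python sketch reproducing the example's integer representation (not the torch implementation):

```python
def quantize_per_tensor(xs, scale, zero_point, qmin=0, qmax=255):
    """q = clamp(round(x / scale) + zero_point, qmin, qmax) for quint8."""
    return [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    """x ~= (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in qs]

q = quantize_per_tensor([-1.0, 0.0, 1.0, 2.0], 0.1, 10)
print(q)                       # [0, 10, 20, 30]
print(dequantize(q, 0.1, 10))  # approximately the original values
```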

    + +

    Examples

    +
    if (torch_is_installed()) { +torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8()) +torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8())$int_repr() +} +
    #> torch_tensor +#> 0 +#> 10 +#> 20 +#> 30 +#> [ CPUByteType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_rand.html b/static/docs/dev/reference/torch_rand.html new file mode 100644 index 0000000000000000000000000000000000000000..ef819b40a50c72c27cd4449ec8cfc1708a894bef --- /dev/null +++ b/static/docs/dev/reference/torch_rand.html @@ -0,0 +1,282 @@ + + + + + + + + +Rand — torch_rand • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rand

    +
    + +
    torch_rand(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

    (int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

    names

    optional dimension names

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

rand(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +

    Returns a tensor filled with random numbers from a uniform distribution +on the interval \([0, 1)\)

    +

    The shape of the tensor is defined by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_rand(4) +torch_rand(c(2, 3)) +} +
    #> torch_tensor +#> 0.8149 0.6329 0.7243 +#> 0.7835 0.8208 0.7484 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_rand_like.html b/static/docs/dev/reference/torch_rand_like.html new file mode 100644 index 0000000000000000000000000000000000000000..a1068a69981fffe90c90b780bd83806be98793e7 --- /dev/null +++ b/static/docs/dev/reference/torch_rand_like.html @@ -0,0 +1,273 @@ + + + + + + + + +Rand_like — torch_rand_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rand_like

    +
    + +
    torch_rand_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

rand_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=FALSE, memory_format=torch.preserve_format) -> Tensor

    + + + + +

Returns a tensor with the same size as input that is filled with +random numbers from a uniform distribution on the interval \([0, 1)\). +torch_rand_like(input) is equivalent to +torch_rand(input$size(), dtype=input$dtype, layout=input$layout, device=input$device).

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_randint.html b/static/docs/dev/reference/torch_randint.html new file mode 100644 index 0000000000000000000000000000000000000000..06ac2179e56c9aa37f1b540ab4c28b45e6550e15 --- /dev/null +++ b/static/docs/dev/reference/torch_randint.html @@ -0,0 +1,302 @@ + + + + + + + + +Randint — torch_randint • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randint

    +
    + +
    torch_randint(
    +  low,
    +  high,
    +  size,
    +  generator = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    low

    (int, optional) Lowest integer to be drawn from the distribution. Default: 0.

    high

    (int) One above the highest integer to be drawn from the distribution.

    size

    (tuple) a tuple defining the shape of the output tensor.

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    memory format for the resulting tensor.

    + +

randint(low=0, high, size, *, generator=NULL, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +


    +

    Returns a tensor filled with random integers generated uniformly +between low (inclusive) and high (exclusive).

    +

    The shape of the tensor is defined by the variable argument size.

    +

Note: +With the global dtype default (torch_float32), this function returns +a tensor with dtype torch_int64.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_randint(3, 5, list(3)) +torch_randint(0, 10, size = list(2, 2)) +torch_randint(3, 10, list(2, 2)) +} +
    #> torch_tensor +#> 8 5 +#> 4 5 +#> [ CPUFloatType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_randint_like.html b/static/docs/dev/reference/torch_randint_like.html new file mode 100644 index 0000000000000000000000000000000000000000..bace7a86c6709881041c62bd3061576cfdaa39d6 --- /dev/null +++ b/static/docs/dev/reference/torch_randint_like.html @@ -0,0 +1,281 @@ + + + + + + + + +Randint_like — torch_randint_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randint_like

    +
    + +
    torch_randint_like(
    +  input,
    +  low,
    +  high,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    low

    (int, optional) Lowest integer to be drawn from the distribution. Default: 0.

    high

    (int) One above the highest integer to be drawn from the distribution.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

randint_like(input, low=0, high, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=FALSE, memory_format=torch.preserve_format) -> Tensor

    + + + + +


    +

    Returns a tensor with the same shape as Tensor input filled with +random integers generated uniformly between low (inclusive) and +high (exclusive).

    +

Note: +With the global dtype default (torch_float32), this function returns +a tensor with dtype torch_int64.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_randn.html b/static/docs/dev/reference/torch_randn.html new file mode 100644 index 0000000000000000000000000000000000000000..63fc8f6b91621270ad0381580a3f7b4b28bdadbe --- /dev/null +++ b/static/docs/dev/reference/torch_randn.html @@ -0,0 +1,286 @@ + + + + + + + + +Randn — torch_randn • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randn

    +
    + +
    torch_randn(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

    (int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

    names

    optional names for the dimensions

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

randn(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +

    Returns a tensor filled with random numbers from a normal distribution +with mean 0 and variance 1 (also called the standard normal +distribution).

    +

    $$ + \mbox{out}_{i} \sim \mathcal{N}(0, 1) +$$ +The shape of the tensor is defined by the variable argument size.
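Standard normal samples can be generated from uniform ones via the Box-Muller transform, sketched here in plain Python (an illustration of \(\mathcal{N}(0, 1)\) sampling, not torch's generator):

```python
import math
import random

def randn(n, rng):
    """n standard normal samples via the Box-Muller transform:
    each pair of uniforms yields two independent N(0, 1) draws."""
    out = []
    while len(out) < n:
        u1 = 1.0 - rng.random()  # shift to (0, 1] so log() stays finite
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:n]

xs = randn(100000, random.Random(42))
print(sum(xs) / len(xs))  # mean close to 0
```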

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_randn(c(4)) +torch_randn(c(2, 3)) +} +
    #> torch_tensor +#> -0.2629 1.1612 -0.3210 +#> 0.7943 0.8465 -0.2583 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_randn_like.html b/static/docs/dev/reference/torch_randn_like.html new file mode 100644 index 0000000000000000000000000000000000000000..7e23524c29ed46391677a8ac3fe332873bd09198 --- /dev/null +++ b/static/docs/dev/reference/torch_randn_like.html @@ -0,0 +1,273 @@ + + + + + + + + +Randn_like — torch_randn_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randn_like

    +
    + +
    torch_randn_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

(Tensor) the size of input will determine the size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    randn_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. torch_randn_like(input) is equivalent to torch_randn(input$size(), dtype=input$dtype, layout=input$layout, device=input$device).

    + +
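A short sketch (assuming torch is installed) showing that the output mirrors the input's shape and dtype:

```r
if (torch_is_installed()) {
  x <- torch_empty(2, 3, dtype = torch_float64())
  y <- torch_randn_like(x)
  y$size()   # same shape as x
  y$dtype    # inherited from the input
}
```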
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_randperm.html b/static/docs/dev/reference/torch_randperm.html new file mode 100644 index 0000000000000000000000000000000000000000..d69440c67704fb1c80a5b702f783702df8cd44ca --- /dev/null +++ b/static/docs/dev/reference/torch_randperm.html @@ -0,0 +1,276 @@ + + + + + + + + +Randperm — torch_randperm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randperm

    +
    + +
    torch_randperm(
    +  n,
    +  dtype = torch_int64(),
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    n

    (int) the upper bound (exclusive)

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: torch_int64.

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    randperm(n, out=NULL, dtype=torch.int64, layout=torch.strided, device=NULL, requires_grad=False) -> LongTensor

    + + + + +

    Returns a random permutation of integers from 0 to n - 1.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_randperm(4) +} +
    #> torch_tensor +#> 2 +#> 1 +#> 3 +#> 0 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_range.html b/static/docs/dev/reference/torch_range.html new file mode 100644 index 0000000000000000000000000000000000000000..a5b585c6ff82bb28bf6f7e66cd2123b4d0c0ef2b --- /dev/null +++ b/static/docs/dev/reference/torch_range.html @@ -0,0 +1,299 @@ + + + + + + + + +Range — torch_range • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Range

    +
    + +
    torch_range(
    +  start,
    +  end,
    +  step = 1,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (float) the starting value for the set of points. Default: 0.

    end

    (float) the ending value for the set of points

    step

    (float) the gap between each pair of adjacent points. Default: 1.

    dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, the data type is inferred from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see torch_get_default_dtype. Otherwise, the dtype is inferred to be torch_int64.

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    range(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a 1-D tensor of size \(\left\lfloor \frac{\mbox{end} - \mbox{start}}{\mbox{step}} \right\rfloor + 1\) +with values from start to end with step step. Step is +the gap between two values in the tensor.

    +

    $$ + \mbox{out}_{i+1} = \mbox{out}_i + \mbox{step}. +$$

    +

    Warning

    + + + +

    This function is deprecated in favor of torch_arange.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_range(1, 4) +torch_range(1, 4, 0.5) +} +
    #> Warning: This function is deprecated in favor of torch_arange.
    #> Warning: This function is deprecated in favor of torch_arange.
    #> torch_tensor +#> 1.0000 +#> 1.5000 +#> 2.0000 +#> 2.5000 +#> 3.0000 +#> 3.5000 +#> [ CPUFloatType{6} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_real.html b/static/docs/dev/reference/torch_real.html new file mode 100644 index 0000000000000000000000000000000000000000..0c09f93cac7140481118aba9604f39bf46773c49 --- /dev/null +++ b/static/docs/dev/reference/torch_real.html @@ -0,0 +1,260 @@ + + + + + + + + +Real — torch_real • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Real

    +
    + +
    torch_real(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    real(input) -> Tensor

    + + + + +

    Returns the real part of the input tensor. If +input is a real (non-complex) tensor, this function just +returns it.

    +

    Warning

    + + + +

    Not yet implemented for complex tensors.

    +

    $$ + \mbox{out}_{i} = real(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_real(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i))) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_reciprocal.html b/static/docs/dev/reference/torch_reciprocal.html new file mode 100644 index 0000000000000000000000000000000000000000..805c211e2f53f9a2431760aba67509e0e7423ff1 --- /dev/null +++ b/static/docs/dev/reference/torch_reciprocal.html @@ -0,0 +1,259 @@ + + + + + + + + +Reciprocal — torch_reciprocal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Reciprocal

    +
    + +
    torch_reciprocal(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    reciprocal(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the reciprocal of the elements of input

    +

    $$ + \mbox{out}_{i} = \frac{1}{\mbox{input}_{i}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_reciprocal(a) +} +
    #> torch_tensor +#> -21.5189 +#> 2.1018 +#> 0.6203 +#> 0.9015 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_reduction.html b/static/docs/dev/reference/torch_reduction.html new file mode 100644 index 0000000000000000000000000000000000000000..911b7ded2d343c3f204a4d6d5d5d7bf8bd81df27 --- /dev/null +++ b/static/docs/dev/reference/torch_reduction.html @@ -0,0 +1,233 @@ + + + + + + + + +Creates the reduction objet — torch_reduction • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Creates the reduction object

    +
    + +
    torch_reduction_sum()
    +
    +torch_reduction_mean()
    +
    +torch_reduction_none()
    + + + +
    + +
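These helpers return internal codes that lower-level loss functions use to select how per-element losses are aggregated; a minimal sketch of the three modes (which functions consume the codes is an implementation detail):

```r
if (torch_is_installed()) {
  torch_reduction_sum()   # code selecting summation of element-wise losses
  torch_reduction_mean()  # code selecting averaging
  torch_reduction_none()  # code selecting no reduction
}
```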
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_relu.html b/static/docs/dev/reference/torch_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..c912570973942d8eb0150384cd6fe80c996ccd69 --- /dev/null +++ b/static/docs/dev/reference/torch_relu.html @@ -0,0 +1,243 @@ + + + + + + + + +Relu — torch_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Relu

    +
    + +
    torch_relu(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    relu(input) -> Tensor

    + + + + +

Computes the relu transformation.

    + +
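A minimal example of the transformation, max(0, x), assuming torch is installed:

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-2, -0.5, 0, 1, 3))
  torch_relu(x)  # negative entries are clamped to 0
}
```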
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_relu_.html b/static/docs/dev/reference/torch_relu_.html new file mode 100644 index 0000000000000000000000000000000000000000..f70f1507568d43ff2242ea8312cbfbdde8bc0e81 --- /dev/null +++ b/static/docs/dev/reference/torch_relu_.html @@ -0,0 +1,243 @@ + + + + + + + + +Relu_ — torch_relu_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Relu_

    +
    + +
    torch_relu_(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    relu_(input) -> Tensor

    + + + + +

    In-place version of torch_relu().

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_remainder.html b/static/docs/dev/reference/torch_remainder.html new file mode 100644 index 0000000000000000000000000000000000000000..bcfede80ea4adfb6781813f04b51eae7baebed54 --- /dev/null +++ b/static/docs/dev/reference/torch_remainder.html @@ -0,0 +1,264 @@ + + + + + + + + +Remainder — torch_remainder • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Remainder

    +
    + +
    torch_remainder(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the dividend

    other

    (Tensor or float) the divisor that may be either a number or a Tensor of the same shape as the dividend

    + +

    remainder(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise remainder of division.

    +

Both the divisor and dividend may be integer or floating point numbers. The remainder has the same sign as the divisor.

    +

    When other is a tensor, the shapes of input and +other must be broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_remainder(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2) +torch_remainder(torch_tensor(c(1., 2, 3, 4, 5)), 1.5) +} +
    #> torch_tensor +#> 1.0000 +#> 0.5000 +#> 0.0000 +#> 1.0000 +#> 0.5000 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_renorm.html b/static/docs/dev/reference/torch_renorm.html new file mode 100644 index 0000000000000000000000000000000000000000..faa6342598e431890173b43a5496f4e9cba9c4d9 --- /dev/null +++ b/static/docs/dev/reference/torch_renorm.html @@ -0,0 +1,273 @@ + + + + + + + + +Renorm — torch_renorm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Renorm

    +
    + +
    torch_renorm(self, p, dim, maxnorm)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    p

    (float) the power for the norm computation

    dim

    (int) the dimension to slice over to get the sub-tensors

    maxnorm

    (float) the maximum norm to keep each sub-tensor under

    + +

    Note

    + +

If the norm of a row is lower than maxnorm, the row is unchanged.

    +

    renorm(input, p, dim, maxnorm, out=NULL) -> Tensor

    + + + + +

    Returns a tensor where each sub-tensor of input along dimension +dim is normalized such that the p-norm of the sub-tensor is lower +than the value maxnorm

    + +

    Examples

    +
    if (torch_is_installed()) { +x = torch_ones(c(3, 3)) +x[2,]$fill_(2) +x[3,]$fill_(3) +x +torch_renorm(x, 1, 1, 5) +} +
    #> torch_tensor +#> 1.0000 1.0000 1.0000 +#> 1.6667 1.6667 1.6667 +#> 1.6667 1.6667 1.6667 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_repeat_interleave.html b/static/docs/dev/reference/torch_repeat_interleave.html new file mode 100644 index 0000000000000000000000000000000000000000..2f70e3bfa47d23a1c9e2fa3593ba4d0c855e55b7 --- /dev/null +++ b/static/docs/dev/reference/torch_repeat_interleave.html @@ -0,0 +1,277 @@ + + + + + + + + +Repeat_interleave — torch_repeat_interleave • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Repeat_interleave

    +
    + +
    torch_repeat_interleave(self, repeats, dim = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    repeats

    (Tensor or int) The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

    dim

    (int, optional) The dimension along which to repeat values. By default, use the flattened input array, and return a flat output array.

    + +

    repeat_interleave(input, repeats, dim=NULL) -> Tensor

    + + + + +

    Repeat elements of a tensor.

    +

    Warning

    + + +
    This is different from `torch_Tensor.repeat` but similar to `numpy.repeat`.
    +
    + +

    repeat_interleave(repeats) -> Tensor

    + + + + +

If repeats is tensor([n1, n2, n3, ...]), then the output will be tensor([0, 0, ..., 1, 1, ..., 2, 2, ..., ...]), where 0 appears n1 times, 1 appears n2 times, 2 appears n3 times, etc.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +x = torch_tensor(c(1, 2, 3)) +x$repeat_interleave(2) +y = torch_tensor(matrix(c(1, 2, 3, 4), ncol = 2, byrow=TRUE)) +torch_repeat_interleave(y, 2) +torch_repeat_interleave(y, 3, dim=1) +torch_repeat_interleave(y, torch_tensor(c(1, 2)), dim=1) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_reshape.html b/static/docs/dev/reference/torch_reshape.html new file mode 100644 index 0000000000000000000000000000000000000000..22eaeb1afb2c185d9183c8f32b627956bdde5084 --- /dev/null +++ b/static/docs/dev/reference/torch_reshape.html @@ -0,0 +1,268 @@ + + + + + + + + +Reshape — torch_reshape • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Reshape

    +
    + +
    torch_reshape(self, shape)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to be reshaped

    shape

    (tuple of ints) the new shape

    + +

    reshape(input, shape) -> Tensor

    + + + + +

    Returns a tensor with the same data and number of elements as input, +but with the specified shape. When possible, the returned tensor will be a view +of input. Otherwise, it will be a copy. Contiguous inputs and inputs +with compatible strides can be reshaped without copying, but you should not +depend on the copying vs. viewing behavior.

    +

    See torch_Tensor.view on when it is possible to return a view.

    +

    A single dimension may be -1, in which case it's inferred from the remaining +dimensions and the number of elements in input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_arange(0, 4) +torch_reshape(a, list(2, 2)) +b = torch_tensor(matrix(c(0, 1, 2, 3), ncol = 2, byrow=TRUE)) +torch_reshape(b, list(-1)) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 2 +#> 3 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_result_type.html b/static/docs/dev/reference/torch_result_type.html new file mode 100644 index 0000000000000000000000000000000000000000..252b4e6390df8e4e4ebdb0c4b1e42f93f508e11b --- /dev/null +++ b/static/docs/dev/reference/torch_result_type.html @@ -0,0 +1,255 @@ + + + + + + + + +Result_type — torch_result_type • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Result_type

    +
    + +
    torch_result_type(tensor1, tensor2)
    + +

    Arguments

    + + + + + + + + + + +
    tensor1

    (Tensor or Number) an input tensor or number

    tensor2

    (Tensor or Number) an input tensor or number

    + +

    result_type(tensor1, tensor2) -> dtype

    + + + + +

    Returns the torch_dtype that would result from performing an arithmetic +operation on the provided input tensors. See type promotion documentation +for more information on the type promotion logic.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_result_type(tensor1 = torch_tensor(c(1, 2), dtype=torch_int()), tensor2 = 1) +} +
    #> torch_Float
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_rfft.html b/static/docs/dev/reference/torch_rfft.html new file mode 100644 index 0000000000000000000000000000000000000000..4b52408a9b2fd2b59cd90e9047c78b693cbf8634 --- /dev/null +++ b/static/docs/dev/reference/torch_rfft.html @@ -0,0 +1,334 @@ + + + + + + + + +Rfft — torch_rfft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rfft

    +
    + +
    torch_rfft(self, signal_ndim, normalized = FALSE, onesided = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    onesided

    (bool, optional) controls whether to return half of results to avoid redundancy. Default: TRUE

    + +

    Note

    + + +
    For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
    +repeatedly running FFT methods on tensors of same geometry with same
    +configuration. See cufft-plan-cache for more details on how to
    +monitor and control the cache.
    +
    + +

    rfft(input, signal_ndim, normalized=False, onesided=TRUE) -> Tensor

    + + + + +

    Real-to-complex Discrete Fourier Transform

    +

This method computes the real-to-complex discrete Fourier transform. It is mathematically equivalent to torch_fft, differing only in the formats of the input and output.

    +

    This method supports 1D, 2D and 3D real-to-complex transforms, indicated +by signal_ndim. input must be a tensor with at least +signal_ndim dimensions with optionally arbitrary number of leading batch +dimensions. If normalized is set to TRUE, this normalizes the result +by dividing it with \(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is +unitary, where \(N_i\) is the size of signal dimension \(i\).

    +

    The real-to-complex Fourier transform results follow conjugate symmetry:

    +

$$ X[\omega_1, \dots, \omega_d] = X^*[N_1 - \omega_1, \dots, N_d - \omega_d], $$ where the index arithmetic is computed modulo the size of the corresponding dimension, \(\ ^*\) is the conjugate operator, and \(d\) = signal_ndim. The onesided flag controls whether to avoid redundancy in the output results. If set to TRUE (default), the output will not be the full complex result of shape \((*, 2)\), where \(*\) is the shape of input; instead the last dimension will be halved to size \(\lfloor \frac{N_d}{2} \rfloor + 1\).

    +

    The inverse of this function is torch_irfft.

    +

    Warning

    + + + +

For CPU tensors, this method is currently only available with MKL. Use backends_mkl_is_available to check if MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(5, 5)) +torch_rfft(x, 2) +torch_rfft(x, 2, onesided=FALSE) +} +
    #> torch_tensor +#> (1,.,.) = +#> 1.1392 0.0000 +#> 5.9699 -6.3502 +#> -2.2049 5.4613 +#> -2.2049 -5.4613 +#> 5.9699 6.3502 +#> +#> (2,.,.) = +#> 3.6573 0.2440 +#> -2.9131 4.3144 +#> -2.5946 -4.3514 +#> -0.0998 -3.1859 +#> -3.6787 1.2527 +#> +#> (3,.,.) = +#> 1.9928 -4.5021 +#> -1.6164 0.4800 +#> -0.7325 -0.2080 +#> 2.5052 1.7202 +#> -3.2780 -0.2895 +#> +#> (4,.,.) = +#> 1.9928 4.5021 +#> -3.2780 0.2895 +#> 2.5052 -1.7202 +#> -0.7325 0.2080 +#> -1.6164 -0.4800 +#> +#> (5,.,.) = +#> 3.6573 -0.2440 +#> -3.6787 -1.2527 +#> -0.0998 3.1859 +#> -2.5946 4.3514 +#> -2.9131 -4.3144 +#> [ CPUFloatType{5,5,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_roll.html b/static/docs/dev/reference/torch_roll.html new file mode 100644 index 0000000000000000000000000000000000000000..13a50c094ce14d3b479659ab432bfddf5ce85abf --- /dev/null +++ b/static/docs/dev/reference/torch_roll.html @@ -0,0 +1,269 @@ + + + + + + + + +Roll — torch_roll • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Roll

    +
    + +
    torch_roll(self, shifts, dims = list())
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    shifts

    (int or tuple of ints) The number of places by which the elements of the tensor are shifted. If shifts is a tuple, dims must be a tuple of the same size, and each dimension will be rolled by the corresponding value

    dims

    (int or tuple of ints) Axis along which to roll

    + +

    roll(input, shifts, dims=NULL) -> Tensor

    + + + + +

    Roll the tensor along the given dimension(s). Elements that are shifted beyond the +last position are re-introduced at the first position. If a dimension is not +specified, the tensor will be flattened before rolling and then restored +to the original shape.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(c(1, 2, 3, 4, 5, 6, 7, 8))$view(c(4, 2)) +x +torch_roll(x, 1, 1) +torch_roll(x, -1, 1) +torch_roll(x, shifts=list(2, 1), dims=list(1, 2)) +} +
    #> torch_tensor +#> 6 5 +#> 8 7 +#> 2 1 +#> 4 3 +#> [ CPUFloatType{4,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_rot90.html b/static/docs/dev/reference/torch_rot90.html new file mode 100644 index 0000000000000000000000000000000000000000..d63f3b0e58cb4031e221556ad676ad74af43bb90 --- /dev/null +++ b/static/docs/dev/reference/torch_rot90.html @@ -0,0 +1,271 @@ + + + + + + + + +Rot90 — torch_rot90 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rot90

    +
    + +
    torch_rot90(self, k = 1L, dims = c(0, 1))
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    k

    (int) number of times to rotate

    dims

    (a list or tuple) axis to rotate

    + +

    rot90(input, k, dims) -> Tensor

    + + + + +

Rotate an n-D tensor by 90 degrees in the plane specified by the dims axes. Rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(0, 4)$view(c(2, 2)) +x +torch_rot90(x, 1, c(1, 2)) +x = torch_arange(0, 8)$view(c(2, 2, 2)) +x +torch_rot90(x, 1, c(1, 2)) +} +
    #> torch_tensor +#> (1,.,.) = +#> 2 3 +#> 6 7 +#> +#> (2,.,.) = +#> 0 1 +#> 4 5 +#> [ CPUFloatType{2,2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_round.html b/static/docs/dev/reference/torch_round.html new file mode 100644 index 0000000000000000000000000000000000000000..712c041a6c6c3f7ac7f9f069ca25f0e4b3142f1d --- /dev/null +++ b/static/docs/dev/reference/torch_round.html @@ -0,0 +1,257 @@ + + + + + + + + +Round — torch_round • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Round

    +
    + +
    torch_round(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    round(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with each of the elements of input rounded +to the closest integer.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_round(a) +} +
    #> torch_tensor +#> -1 +#> -0 +#> 2 +#> 0 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_rrelu_.html b/static/docs/dev/reference/torch_rrelu_.html new file mode 100644 index 0000000000000000000000000000000000000000..0ad1b4c4710fff8afeef6cf97490c0e933a1a6fd --- /dev/null +++ b/static/docs/dev/reference/torch_rrelu_.html @@ -0,0 +1,265 @@ + + + + + + + + +Rrelu_ — torch_rrelu_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rrelu_

    +
    + +
    torch_rrelu_(
    +  self,
    +  lower = 0.125,
    +  upper = 0.333333,
    +  training = FALSE,
    +  generator = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    the input tensor

    lower

    lower bound of the uniform distribution. Default: 1/8

    upper

    upper bound of the uniform distribution. Default: 1/3

    training

(bool) whether it's a training pass. Default: FALSE

    generator

    random number generator

    + +

    rrelu_(input, lower=1./8, upper=1./3, training=False) -> Tensor

    + + + + +

    In-place version of torch_rrelu.

    + +
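A hedged sketch of in-place use (with training = FALSE the negative slope is the fixed average of lower and upper rather than a random draw):

```r
if (torch_is_installed()) {
  x <- torch_randn(c(4))
  torch_rrelu_(x)  # modifies x in place
  x
}
```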
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_rsqrt.html b/static/docs/dev/reference/torch_rsqrt.html new file mode 100644 index 0000000000000000000000000000000000000000..376d50bf288eec291bce0fd1b855ea6b648be0d2 --- /dev/null +++ b/static/docs/dev/reference/torch_rsqrt.html @@ -0,0 +1,260 @@ + + + + + + + + +Rsqrt — torch_rsqrt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rsqrt

    +
    + +
    torch_rsqrt(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    rsqrt(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the reciprocal of the square-root of each of +the elements of input.

    +

    $$ + \mbox{out}_{i} = \frac{1}{\sqrt{\mbox{input}_{i}}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_rsqrt(a) +} +
    #> torch_tensor +#> 1.7920 +#> nan +#> nan +#> nan +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_save.html b/static/docs/dev/reference/torch_save.html new file mode 100644 index 0000000000000000000000000000000000000000..a79c333d017002945fdaf427d3e4ace51c08c44d --- /dev/null +++ b/static/docs/dev/reference/torch_save.html @@ -0,0 +1,251 @@ + + + + + + + + +Saves an object to a disk file. — torch_save • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

This function is experimental; don't use it for long-term storage.

    +
    + +
    torch_save(obj, path, ...)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    obj

the object to save.

    path

    a connection or the name of the file to save.

    ...

    not currently used.

    + +

    See also

    + +

    Other torch_save: +torch_load()

    + +
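A minimal round trip through a temporary file, assuming torch is installed:

```r
if (torch_is_installed()) {
  x <- torch_randn(c(2, 2))
  tmp <- tempfile(fileext = ".pt")
  torch_save(x, tmp)
  y <- torch_load(tmp)
  y
}
```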
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_selu.html b/static/docs/dev/reference/torch_selu.html new file mode 100644 index 0000000000000000000000000000000000000000..2eca7d4e96180242b4551ddae01a0df7ec30cbec --- /dev/null +++ b/static/docs/dev/reference/torch_selu.html @@ -0,0 +1,243 @@ + + + + + + + + +Selu — torch_selu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Selu

    +
    + +
    torch_selu(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    selu(input) -> Tensor

    + + + + +

    Computes the selu transformation.

    + +
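A minimal example; selu applies scale * (max(0, x) + min(0, alpha * (exp(x) - 1))) element-wise with fixed constants:

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, 0, 1))
  torch_selu(x)
}
```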
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_selu_.html b/static/docs/dev/reference/torch_selu_.html new file mode 100644 index 0000000000000000000000000000000000000000..9750979113e5dda34e9e5eb1eb613c768b95541d --- /dev/null +++ b/static/docs/dev/reference/torch_selu_.html @@ -0,0 +1,243 @@ + + + + + + + + +Selu_ — torch_selu_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Selu_

    +
    + +
    torch_selu_(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    selu_(input) -> Tensor

    + + + + +

    In-place version of torch_selu().

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sigmoid.html b/static/docs/dev/reference/torch_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..4d563ca41872cf80b1dbee4430ae68492e5bd7d8 --- /dev/null +++ b/static/docs/dev/reference/torch_sigmoid.html @@ -0,0 +1,259 @@ + + + + + + + + +Sigmoid — torch_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sigmoid

    +
    + +
    torch_sigmoid(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sigmoid(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the sigmoid of the elements of input.

    +

    $$ + \mbox{out}_{i} = \frac{1}{1 + e^{-\mbox{input}_{i}}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_sigmoid(a) +} +
    #> torch_tensor +#> 0.5045 +#> 0.6762 +#> 0.6376 +#> 0.2477 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sign.html b/static/docs/dev/reference/torch_sign.html new file mode 100644 index 0000000000000000000000000000000000000000..ce6c8e94218f097085adcc1957587bd3af98ae10 --- /dev/null +++ b/static/docs/dev/reference/torch_sign.html @@ -0,0 +1,259 @@ + + + + + + + + +Sign — torch_sign • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sign

    +
    + +
    torch_sign(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sign(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the signs of the elements of input.

    +

    $$ + \mbox{out}_{i} = \mbox{sgn}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(c(0.7, -1.2, 0., 2.3)) +a +torch_sign(a) +} +
    #> torch_tensor +#> 1 +#> -1 +#> 0 +#> 1 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sin.html b/static/docs/dev/reference/torch_sin.html new file mode 100644 index 0000000000000000000000000000000000000000..30f6dcd09d2daf916dd9a0f842a62ef06118b820 --- /dev/null +++ b/static/docs/dev/reference/torch_sin.html @@ -0,0 +1,259 @@ + + + + + + + + +Sin — torch_sin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sin

    +
    + +
    torch_sin(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sin(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the sine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \sin(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_sin(a) +} +
    #> torch_tensor +#> 0.9321 +#> 0.3476 +#> 0.3708 +#> -0.2301 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sinh.html b/static/docs/dev/reference/torch_sinh.html new file mode 100644 index 0000000000000000000000000000000000000000..ef98c937ece33574f0992382caf53e07747599c4 --- /dev/null +++ b/static/docs/dev/reference/torch_sinh.html @@ -0,0 +1,260 @@ + + + + + + + + +Sinh — torch_sinh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sinh

    +
    + +
    torch_sinh(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sinh(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the hyperbolic sine of the elements of +input.

    +

    $$ + \mbox{out}_{i} = \sinh(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_sinh(a) +} +
    #> torch_tensor +#> 0.7845 +#> -0.4508 +#> 0.2873 +#> -2.0906 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_slogdet.html b/static/docs/dev/reference/torch_slogdet.html new file mode 100644 index 0000000000000000000000000000000000000000..6805e86cb9a5801dd0b76b08fca25848efa78439 --- /dev/null +++ b/static/docs/dev/reference/torch_slogdet.html @@ -0,0 +1,274 @@ + + + + + + + + +Slogdet — torch_slogdet • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Slogdet

    +
    + +
    torch_slogdet(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

    + +

    Note

    + + +
    If `input` has zero determinant, this returns `(0, -inf)`.
    +
    + +
    Backward through `slogdet` internally uses SVD results when `input`
    +is not invertible. In this case, double backward through `slogdet`
    +will be unstable when `input` doesn't have distinct singular values.
    +See `torch_svd()` for details.
    +
    + +

    slogdet(input) -> (Tensor, Tensor)

    + + + + +

    Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(3, 3)) +A +torch_det(A) +torch_logdet(A) +torch_slogdet(A) +} +
    #> [[1]] +#> torch_tensor +#> -1 +#> [ CPUFloatType{} ] +#> +#> [[2]] +#> torch_tensor +#> -0.634866 +#> [ CPUFloatType{} ] +#>
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_solve.html b/static/docs/dev/reference/torch_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..0a74bfe01886e32f4eef9a56a2af075f59d57671 --- /dev/null +++ b/static/docs/dev/reference/torch_solve.html @@ -0,0 +1,288 @@ + + + + + + + + +Solve — torch_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Solve

    +
    + +
    torch_solve(self, A)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) input matrix \(B\) of size \((*, m, k)\) , where \(*\) is zero or more batch dimensions.

    A

    (Tensor) input square matrix of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

    + +

    Note

    + + +
    Irrespective of the original strides, the returned matrices
    +`solution` and `LU` will be transposed, i.e. with strides like
    +`B$contiguous()$transpose(-1, -2)$stride()` and
    +`A$contiguous()$transpose(-1, -2)$stride()` respectively.
    +
    + +

    solve(input, A) -> (Tensor, Tensor)

    + + + + +

    This function returns the solution to the system of linear +equations represented by \(AX = B\) and the LU factorization of +A, in order as a list (solution, LU).

    +

    LU contains L and U factors for LU factorization of A.

    +

    torch_solve(B, A) can take in 2D inputs B, A or inputs that are +batches of 2D matrices. If the inputs are batches, the function returns +batched outputs solution, LU.

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_tensor(rbind(c(6.80, -2.11, 5.66, 5.97, 8.23), + c(-6.05, -3.30, 5.36, -4.44, 1.08), + c(-0.45, 2.58, -2.70, 0.27, 9.04), + c(8.32, 2.71, 4.35, -7.17, 2.14), + c(-9.67, -5.14, -7.26, 6.08, -6.87)))$t() +B = torch_tensor(rbind(c(4.02, 6.19, -8.22, -7.57, -3.03), + c(-1.56, 4.00, -8.67, 1.75, 2.86), + c(9.81, -4.09, -4.57, -8.61, 8.99)))$t() +out = torch_solve(B, A) +X = out[[1]] +LU = out[[2]] +torch_dist(B, torch_mm(A, X)) +# Batched solver example +A = torch_randn(c(2, 3, 1, 4, 4)) +B = torch_randn(c(2, 3, 1, 4, 6)) +out = torch_solve(B, A) +X = out[[1]] +LU = out[[2]] +torch_dist(B, A$matmul(X)) +} +
    #> torch_tensor +#> 2.21788e-06 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sort.html b/static/docs/dev/reference/torch_sort.html new file mode 100644 index 0000000000000000000000000000000000000000..b38d4f7feead1f54714d99a9ac14117f515d4cf1 --- /dev/null +++ b/static/docs/dev/reference/torch_sort.html @@ -0,0 +1,281 @@ + + + + + + + + +Sort — torch_sort • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sort

    +
    + +
    torch_sort(self, dim = -1L, descending = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int, optional) the dimension to sort along

    descending

    (bool, optional) controls the sorting order (ascending or descending)

    + +

    sort(input, dim=-1, descending=FALSE) -> (Tensor, LongTensor)

    + + + + +

    Sorts the elements of the input tensor along a given dimension +in ascending order by value.

    +

    If dim is not given, the last dimension of the input is chosen.

    +

    If descending is TRUE then the elements are sorted in descending +order by value.

    +

    A list (values, indices) is returned, where values are the +sorted values and indices are the indices of the elements in the original +input tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 4)) +out = torch_sort(x) +out +out = torch_sort(x, 1) +out +} +
    #> [[1]] +#> torch_tensor +#> -1.7778 -1.6086 -1.3155 -1.1151 +#> 1.0098 -0.2044 -1.2104 -0.8225 +#> 1.2042 2.1714 1.5548 0.3414 +#> [ CPUFloatType{3,4} ] +#> +#> [[2]] +#> torch_tensor +#> 0 2 1 1 +#> 1 1 2 0 +#> 2 0 0 2 +#> [ CPULongType{3,4} ] +#>
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sparse_coo_tensor.html b/static/docs/dev/reference/torch_sparse_coo_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..af9ce126e41eb99ab642a2ee12ed589c24d4b8e1 --- /dev/null +++ b/static/docs/dev/reference/torch_sparse_coo_tensor.html @@ -0,0 +1,294 @@ + + + + + + + + +Sparse_coo_tensor — torch_sparse_coo_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sparse_coo_tensor

    +
    + +
    torch_sparse_coo_tensor(
    +  indices,
    +  values,
    +  size = NULL,
    +  dtype = NULL,
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    indices

    (array_like) Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch_LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.

    values

    (array_like) Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

    size

    (list, tuple, or torch.Size, optional) Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, infers data type from values.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    sparse_coo_tensor(indices, values, size=NULL, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +

    Constructs a sparse tensor in COO(rdinate) format with non-zero elements at the given indices +with the given values. A sparse tensor can be uncoalesced; in that case, there are duplicate +coordinates in the indices, and the value at that index is the sum of all duplicate value entries.

    + +

    Examples

    +
    if (torch_is_installed()) { + +i = torch_tensor(matrix(c(1, 2, 2, 3, 1, 3), ncol = 3, byrow = TRUE), dtype=torch_int64()) +v = torch_tensor(c(3, 4, 5), dtype=torch_float32()) +torch_sparse_coo_tensor(i, v) +torch_sparse_coo_tensor(i, v, c(2, 4)) + +# create empty sparse tensors +S = torch_sparse_coo_tensor( + torch_empty(c(1, 0), dtype = torch_int64()), + torch_tensor(numeric(), dtype = torch_float32()), + c(1) +) +S = torch_sparse_coo_tensor( + torch_empty(c(1, 0), dtype = torch_int64()), + torch_empty(c(0, 2)), + c(1, 2) +) +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_split.html b/static/docs/dev/reference/torch_split.html new file mode 100644 index 0000000000000000000000000000000000000000..fe9a27bd585b4a8cb6a398fbf3ec9a0c35158a5c --- /dev/null +++ b/static/docs/dev/reference/torch_split.html @@ -0,0 +1,260 @@ + + + + + + + + +Split — torch_split • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Split

    +
    + +
    torch_split(self, split_size, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) tensor to split.

    split_size

    (int) size of a single chunk or list of sizes for each chunk

    dim

    (int) dimension along which to split the tensor.

    + +

    split(input, split_size_or_sections, dim=0) -> List of Tensors

    + + + + +

    Splits the tensor into chunks. Each chunk is a view of the original tensor.

    If `split_size_or_sections` is an integer type, then `tensor` will
    +be split into equally sized chunks (if possible). Last chunk will be smaller if
    +the tensor size along the given dimension `dim` is not divisible by
    +`split_size`.
    +
    +If `split_size_or_sections` is a list, then `tensor` will be split
    +into `len(split_size_or_sections)` chunks with sizes in `dim` according
    +to `split_size_or_sections`.
    +
    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sqrt.html b/static/docs/dev/reference/torch_sqrt.html new file mode 100644 index 0000000000000000000000000000000000000000..1c6ff7b296f9731c4b520c34ed12575e864cc7c8 --- /dev/null +++ b/static/docs/dev/reference/torch_sqrt.html @@ -0,0 +1,259 @@ + + + + + + + + +Sqrt — torch_sqrt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sqrt

    +
    + +
    torch_sqrt(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sqrt(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the square-root of the elements of input.

    +

    $$ + \mbox{out}_{i} = \sqrt{\mbox{input}_{i}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_sqrt(a) +} +
    #> torch_tensor +#> 0.4630 +#> 0.8983 +#> nan +#> 0.6670 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_square.html b/static/docs/dev/reference/torch_square.html new file mode 100644 index 0000000000000000000000000000000000000000..ecaa92cb581589dd8293ea306d3644f730c5f647 --- /dev/null +++ b/static/docs/dev/reference/torch_square.html @@ -0,0 +1,256 @@ + + + + + + + + +Square — torch_square • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Square

    +
    + +
    torch_square(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    square(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the square of the elements of input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_square(a) +} +
    #> torch_tensor +#> 1.1779 +#> 0.0218 +#> 0.0193 +#> 0.7057 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_squeeze.html b/static/docs/dev/reference/torch_squeeze.html new file mode 100644 index 0000000000000000000000000000000000000000..e1a20f8cb5babc03b8b336bb6241fbef199b4cf4 --- /dev/null +++ b/static/docs/dev/reference/torch_squeeze.html @@ -0,0 +1,283 @@ + + + + + + + + +Squeeze — torch_squeeze • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Squeeze

    +
    + +
    torch_squeeze(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int, optional) if given, the input will be squeezed only in this dimension

    + +

    Note

    + +

    The returned tensor shares the storage with the input tensor, +so changing the contents of one will change the contents of the other.

    +

    squeeze(input, dim=NULL, out=NULL) -> Tensor

    + + + + +

    Returns a tensor with all the dimensions of input of size 1 removed.

    +

    For example, if input is of shape: +\((A \times 1 \times B \times C \times 1 \times D)\) then the out tensor +will be of shape: \((A \times B \times C \times D)\).

    +

    When dim is given, a squeeze operation is done only in the given +dimension. If input is of shape: \((A \times 1 \times B)\), +squeeze(input, 1) leaves the tensor unchanged, but squeeze(input, 2) +will squeeze the tensor to the shape \((A \times B)\).

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_zeros(c(2, 1, 2, 1, 2)) +x +y = torch_squeeze(x) +y +y = torch_squeeze(x, 1) +y +y = torch_squeeze(x, 2) +y +} +
    #> torch_tensor +#> (1,1,.,.) = +#> 0 0 +#> +#> (2,1,.,.) = +#> 0 0 +#> +#> (1,2,.,.) = +#> 0 0 +#> +#> (2,2,.,.) = +#> 0 0 +#> [ CPUFloatType{2,2,1,2} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_stack.html b/static/docs/dev/reference/torch_stack.html new file mode 100644 index 0000000000000000000000000000000000000000..1c4c0f13c1cc3e814240df0562c546a5253512cd --- /dev/null +++ b/static/docs/dev/reference/torch_stack.html @@ -0,0 +1,248 @@ + + + + + + + + +Stack — torch_stack • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Stack

    +
    + +
    torch_stack(tensors, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + +
    tensors

    (sequence of Tensors) sequence of tensors to concatenate

    dim

    (int) dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive)

    + +

    stack(tensors, dim=0, out=NULL) -> Tensor

    + + + + +

    Concatenates sequence of tensors along a new dimension.

    +

    All tensors need to be of the same size.
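A short sketch of stacking two same-sized tensors (a new leading dimension is inserted at `dim = 1`, the R default):

```r
if (torch_is_installed()) {
  x <- torch_randn(c(2, 3))
  y <- torch_randn(c(2, 3))
  z <- torch_stack(list(x, y))
  z$shape  # 2 x 2 x 3
}
```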

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_std.html b/static/docs/dev/reference/torch_std.html new file mode 100644 index 0000000000000000000000000000000000000000..bc6bf89636e37578642c810bf0a4733fbd49b7c2 --- /dev/null +++ b/static/docs/dev/reference/torch_std.html @@ -0,0 +1,289 @@ + + + + + + + + +Std — torch_std • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Std

    +
    + +
    torch_std(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    std(input, unbiased=TRUE) -> Tensor

    + + + + +

    Returns the standard-deviation of all elements in the input tensor.

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    +

    std(input, dim, unbiased=TRUE, keepdim=FALSE, out=NULL) -> Tensor

    + + + + +

    Returns the standard-deviation of each row of the input tensor in the +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_std(a) + + +a = torch_randn(c(4, 4)) +a +torch_std(a, dim=1) +} +
    #> torch_tensor +#> 1.1757 +#> 1.2405 +#> 0.6001 +#> 2.1534 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_std_mean.html b/static/docs/dev/reference/torch_std_mean.html new file mode 100644 index 0000000000000000000000000000000000000000..2e8051f3a1e676dac0da6ada47f4522fa9d348ae --- /dev/null +++ b/static/docs/dev/reference/torch_std_mean.html @@ -0,0 +1,299 @@ + + + + + + + + +Std_mean — torch_std_mean • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Std_mean

    +
    + +
    torch_std_mean(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    std_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

    + + + + +

    Returns the standard-deviation and mean of all elements in the input tensor.

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    +

    std_mean(input, dim, unbiased=TRUE, keepdim=FALSE) -> (Tensor, Tensor)

    + + + + +

    Returns the standard-deviation and mean of each row of the input tensor in the +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_std_mean(a) + + +a = torch_randn(c(4, 4)) +a +torch_std_mean(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 1.5486 +#> 0.8768 +#> 0.3827 +#> 2.0388 +#> [ CPUFloatType{4} ] +#> +#> [[2]] +#> torch_tensor +#> -0.5692 +#> 0.2571 +#> 0.0071 +#> -0.3322 +#> [ CPUFloatType{4} ] +#>
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_stft.html b/static/docs/dev/reference/torch_stft.html new file mode 100644 index 0000000000000000000000000000000000000000..e4359d0f808fa8d50ca09c7214e9c4b01d258da8 --- /dev/null +++ b/static/docs/dev/reference/torch_stft.html @@ -0,0 +1,341 @@ + + + + + + + + +Stft — torch_stft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Stft

    +
    + +
    torch_stft(
    +  input,
    +  n_fft,
    +  hop_length = NULL,
    +  win_length = NULL,
    +  window = NULL,
    +  center = TRUE,
    +  pad_mode = "reflect",
    +  normalized = FALSE,
    +  onesided = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the input tensor

    n_fft

    (int) size of Fourier transform

    hop_length

    (int, optional) the distance between neighboring sliding window frames. Default: NULL (treated as equal to floor(n_fft / 4))

    win_length

    (int, optional) the size of window frame and STFT filter. Default: NULL (treated as equal to n_fft)

    window

    (Tensor, optional) the optional window function. Default: NULL (treated as window of all \(1\) s)

    center

    (bool, optional) whether to pad input on both sides so that the \(t\)-th frame is centered at time \(t \times \mbox{hop\_length}\). Default: TRUE

    pad_mode

    (string, optional) controls the padding method used when center is TRUE. Default: "reflect"

    normalized

    (bool, optional) controls whether to return the normalized STFT results Default: FALSE

    onesided

    (bool, optional) controls whether to return half of results to avoid redundancy Default: TRUE

    + +

    Short-time Fourier transform (STFT).

    + + + + +

    Short-time Fourier transform (STFT).

    Ignoring the optional batch dimension, this method computes the following
    +expression:
    +
    + +

    $$ + X[m, \omega] = \sum_{k = 0}^{\mbox{win\_length-1}}% + \mbox{window}[k]\ \mbox{input}[m \times \mbox{hop\_length} + k]\ % + \exp\left(- j \frac{2 \pi \cdot \omega k}{\mbox{win\_length}}\right), +$$ +where \(m\) is the index of the sliding window, and \(\omega\) is +the frequency such that \(0 \leq \omega < \mbox{n\_fft}\).

    * `input` must be either a 1-D time sequence or a 2-D batch of time
    +  sequences.
    +
    +* If `hop_length` is `NULL` (default), it is treated as equal to
    +  `floor(n_fft / 4)`.
    +
    +* If `win_length` is `NULL` (default), it is treated as equal to
    +  `n_fft`.
    +
    +* `window` can be a 1-D tensor of size `win_length`, e.g., from
    +  `torch_hann_window`. If `window` is `NULL` (default), it is
    +  treated as if having \eqn{1} everywhere in the window. If
    +  \eqn{\mbox{win\_length} < \mbox{n\_fft}}, `window` will be padded on
    +  both sides to length `n_fft` before being applied.
    +
    +* If `center` is `TRUE` (default), `input` will be padded on
    +  both sides so that the \eqn{t}-th frame is centered at time
    +  \eqn{t \times \mbox{hop\_length}}. Otherwise, the \eqn{t}-th frame
    +  begins at time  \eqn{t \times \mbox{hop\_length}}.
    +
    +* `pad_mode` determines the padding method used on `input` when
    +  `center` is `TRUE`. See `torch_nn.functional.pad` for
    +  all available options. Default is `"reflect"`.
    +
    +* If `onesided` is `TRUE` (default), only values for \eqn{\omega}
    +  in \eqn{\left[0, 1, 2, \dots, \left\lfloor \frac{\mbox{n\_fft}}{2} \right\rfloor + 1\right]}
    +  are returned because the real-to-complex Fourier transform satisfies the
    +  conjugate symmetry, i.e., \eqn{X[m, \omega] = X[m, \mbox{n\_fft} - \omega]^*}.
    +
    +* If `normalized` is `TRUE` (default is `FALSE`), the function
    +  returns the normalized STFT results, i.e., multiplied by \eqn{(\mbox{frame\_length})^{-0.5}}.
    +
    +Returns the real and the imaginary parts together as one tensor of size
    +\eqn{(* \times N \times T \times 2)}, where \eqn{*} is the optional
    +batch size of `input`, \eqn{N} is the number of frequencies where
    +STFT is applied, \eqn{T} is the total number of frames used, and each pair
    +in the last dimension represents a complex number as the real part and the
    +imaginary part.
    +
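A usage sketch for a one-dimensional signal (output shape hedged, since it depends on the torch version; older builds return a real/imaginary pair in the last dimension):

```r
if (torch_is_installed()) {
  x <- torch_randn(800)
  spec <- torch_stft(x, n_fft = 128, hop_length = 32,
                     window = torch_hann_window(128))
  # roughly (n_fft/2 + 1) frequencies x frames x 2 (real, imaginary)
  spec$shape
}
```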
    + +

    Warning

    + + + +

    This function changed signature at version 0.4.1. Calling with the +previous signature may cause error or return incorrect result.

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_sum.html b/static/docs/dev/reference/torch_sum.html new file mode 100644 index 0000000000000000000000000000000000000000..98fafad0a36c0d6e8a4ad4dded72d9f29f16f581 --- /dev/null +++ b/static/docs/dev/reference/torch_sum.html @@ -0,0 +1,287 @@ + + + + + + + + +Sum — torch_sum • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sum

    +
    + +
    torch_sum(self, dim, keepdim = FALSE, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    sum(input, dtype=NULL) -> Tensor

    + + + + +

    Returns the sum of all elements in the input tensor.

    +

    sum(input, dim, keepdim=FALSE, dtype=NULL) -> Tensor

    + + + + +

    Returns the sum of each row of the input tensor in the given +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_sum(a) + + +a = torch_randn(c(4, 4)) +a +torch_sum(a, 1) +b = torch_arange(0, 4 * 5 * 6)$view(c(4, 5, 6)) +torch_sum(b, list(2, 1)) +} +
    #> torch_tensor +#> 435 +#> 1335 +#> 2235 +#> 3135 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_svd.html b/static/docs/dev/reference/torch_svd.html new file mode 100644 index 0000000000000000000000000000000000000000..d05d19659644b74d0332572779922302d1241a1b --- /dev/null +++ b/static/docs/dev/reference/torch_svd.html @@ -0,0 +1,298 @@ + + + + + + + + +Svd — torch_svd • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Svd

    +
    + +
    torch_svd(self, some = TRUE, compute_uv = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of \(m \times n\) matrices.

    some

(bool, optional) controls the shape of the returned U and V

    compute_uv

(bool, optional) whether to compute U and V

    + +

    Note

    + +

The singular values are returned in descending order. If input is a batch of matrices, +then the singular values of each matrix in the batch are returned in descending order.

    +

The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer +algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the MAGMA routine +gesdd.

    +

    Irrespective of the original strides, the returned matrix U +will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride()

    +

Extra care needs to be taken when backpropagating through the U and V +outputs. Such an operation is only stable when input is +full rank with all distinct singular values. Otherwise, NaN can +appear, as the gradients are not properly defined. Also, note that +double backward will usually perform an additional backward through U and +V even if the original backward is only on S.

    +

    When some = FALSE, the gradients on U[..., :, min(m, n):] +and V[..., :, min(m, n):] will be ignored in backward as those vectors +can be arbitrary bases of the subspaces.

    +

When compute_uv = FALSE, backward cannot be performed since U and V +from the forward pass are required for the backward operation.

    +

    svd(input, some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

    + + + + +

This function returns a namedtuple (U, S, V) which is the singular value +decomposition of an input real matrix, or batch of real matrices, input such that +\(input = U \times diag(S) \times V^T\).
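The reconstruction identity can be checked numerically. The sketch below uses NumPy's SVD as a hedged stand-in for the same decomposition (its full_matrices = FALSE option corresponds to some = TRUE here):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3))

# Reduced SVD: u is 5x3, s has length 3, vt is 3x3.
u, s, vt = np.linalg.svd(a, full_matrices=False)

# input = U diag(S) V^T holds up to floating-point error.
print(np.allclose(u @ np.diag(s) @ vt, a))   # True
```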

    +

    If some is TRUE (default), the method returns the reduced singular value decomposition +i.e., if the last two dimensions of input are m and n, then the returned +U and V matrices will contain only \(min(n, m)\) orthonormal columns.

    +

If compute_uv is FALSE, the returned U and V matrices will be zero matrices +of shape \((m \times m)\) and \((n \times n)\) respectively, and some is ignored.

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(5, 3))
a
out = torch_svd(a)
u = out[[1]]
s = out[[2]]
v = out[[3]]
torch_dist(a, torch_mm(torch_mm(u, torch_diag(s)), v$t()))
a_big = torch_randn(c(7, 5, 3))
out = torch_svd(a_big)
u = out[[1]]
s = out[[2]]
v = out[[3]]
torch_dist(a_big, torch_matmul(torch_matmul(u, torch_diag_embed(s)), v$transpose(-2, -1)))
}
#> torch_tensor
#> 3.34915e-06
#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_symeig.html b/static/docs/dev/reference/torch_symeig.html new file mode 100644 index 0000000000000000000000000000000000000000..c8fb8fbb821ad39a2d35ea1c86a69401a2250df7 --- /dev/null +++ b/static/docs/dev/reference/torch_symeig.html @@ -0,0 +1,290 @@ + + + + + + + + +Symeig — torch_symeig • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Symeig

    +
    + +
    torch_symeig(self, eigenvectors = FALSE, upper = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions consisting of symmetric matrices.

    eigenvectors

    (boolean, optional) controls whether eigenvectors have to be computed

    upper

    (boolean, optional) controls whether to consider upper-triangular or lower-triangular region

    + +

    Note

    + +

The eigenvalues are returned in ascending order. If input is a batch of matrices, +then the eigenvalues of each matrix in the batch are returned in ascending order.

    +

    Irrespective of the original strides, the returned matrix V will +be transposed, i.e. with strides V.contiguous().transpose(-1, -2).stride().

    +

Extra care needs to be taken when backpropagating through the outputs. Such +an operation is only stable when all eigenvalues are distinct. +Otherwise, NaN can appear, as the gradients are not properly defined.

    +

    symeig(input, eigenvectors=FALSE, upper=TRUE) -> (Tensor, Tensor)

    + + + + +

    This function returns eigenvalues and eigenvectors +of a real symmetric matrix input or a batch of real symmetric matrices, +represented by a namedtuple (eigenvalues, eigenvectors).

    +

    This function calculates all eigenvalues (and vectors) of input +such that \(\mbox{input} = V \mbox{diag}(e) V^T\).
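The identity \(\mbox{input} = V \mbox{diag}(e) V^T\) can be verified with a symmetric eigensolver. NumPy's eigh is used below as a hedged stand-in; like this function, it returns eigenvalues in ascending order:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4))
a = a + a.T                       # make the matrix symmetric

e, v = np.linalg.eigh(a)          # eigenvalues ascending, eigenvectors as columns
print(np.all(np.diff(e) >= 0))    # True: ascending order
print(np.allclose(v @ np.diag(e) @ v.T, a))   # True: reconstruction
```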

    +

The boolean argument eigenvectors controls whether both +eigenvalues and eigenvectors are computed, or eigenvalues only.

    +

    If it is FALSE, only eigenvalues are computed. If it is TRUE, +both eigenvalues and eigenvectors are computed.

    +

Since the input matrix input is assumed to be symmetric, +only the upper triangular portion is used by default.

    +

If upper is FALSE, then the lower triangular portion is used instead.

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(5, 5))
a = a + a$t() # To make a symmetric
a
o = torch_symeig(a, eigenvectors=TRUE)
e = o[[1]]
v = o[[2]]
e
v
a_big = torch_randn(c(5, 2, 2))
a_big = a_big + a_big$transpose(-2, -1) # To make a_big symmetric
o = a_big$symeig(eigenvectors=TRUE)
e = o[[1]]
v = o[[2]]
torch_allclose(torch_matmul(v, torch_matmul(e$diag_embed(), v$transpose(-2, -1))), a_big)
}
#> [1] TRUE
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_t.html b/static/docs/dev/reference/torch_t.html new file mode 100644 index 0000000000000000000000000000000000000000..8b84bf1f1a23636cbf7ab15b33acb41496d21708 --- /dev/null +++ b/static/docs/dev/reference/torch_t.html @@ -0,0 +1,264 @@ + + + + + + + + +T — torch_t • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    T

    +
    + +
    torch_t(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    t(input) -> Tensor

    + + + + +

Expects input to be a tensor with at most 2 dimensions, and transposes dimensions 0 +and 1.

    +

0-D and 1-D tensors are returned as is. When input is a 2-D tensor, this +is equivalent to transpose(input, 0, 1).

    + +

    Examples

    +
if (torch_is_installed()) {

x = torch_randn(c(2,3))
x
torch_t(x)
x = torch_randn(c(3))
x
torch_t(x)
x = torch_randn(c(2, 3))
x
torch_t(x)
}
#> torch_tensor
#> -1.0611 -1.1550
#> -1.1566 0.8038
#> -0.2224 0.2330
#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_take.html b/static/docs/dev/reference/torch_take.html new file mode 100644 index 0000000000000000000000000000000000000000..e504331a58773e2c0957c504bf9115b64d3ccbb1 --- /dev/null +++ b/static/docs/dev/reference/torch_take.html @@ -0,0 +1,260 @@ + + + + + + + + +Take — torch_take • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Take

    +
    + +
    torch_take(self, index)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    index

(LongTensor) the indices into the tensor

    + +

    take(input, index) -> Tensor

    + + + + +

    Returns a new tensor with the elements of input at the given indices. +The input tensor is treated as if it were viewed as a 1-D tensor. The result +takes the same shape as the indices.
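The flattened-view indexing can be sketched with NumPy, used here only as a runnable stand-in (NumPy is 0-based, while the R interface is 1-based):

```python
import numpy as np

# torch_take treats the input as if flattened in row-major order.
src = np.array([[4, 3, 5],
                [6, 7, 8]])
flat = src.reshape(-1)        # [4, 3, 5, 6, 7, 8]
idx = np.array([0, 1, 4])     # 0-based equivalent of R indices 1, 2, 5
print(flat[idx].tolist())     # [4, 3, 7]
```

This matches the R example on this page, which returns the elements 4, 3, 7.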

    + +

    Examples

    +
if (torch_is_installed()) {

src = torch_tensor(matrix(c(4,3,5,6,7,8), ncol = 3, byrow = TRUE))
torch_take(src, torch_tensor(c(1, 2, 5), dtype = torch_int64()))
}
#> torch_tensor
#> 4
#> 3
#> 7
#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_tan.html b/static/docs/dev/reference/torch_tan.html new file mode 100644 index 0000000000000000000000000000000000000000..7617bd69f14194a06a8da8e2da5315f1b0aa2468 --- /dev/null +++ b/static/docs/dev/reference/torch_tan.html @@ -0,0 +1,259 @@ + + + + + + + + +Tan — torch_tan • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tan

    +
    + +
    torch_tan(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    tan(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the tangent of the elements of input.

    +

    $$ + \mbox{out}_{i} = \tan(\mbox{input}_{i}) +$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_tan(a)
}
#> torch_tensor
#> -1.9736
#> -1.0167
#> -0.7853
#> 0.9984
#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_tanh.html b/static/docs/dev/reference/torch_tanh.html new file mode 100644 index 0000000000000000000000000000000000000000..72f56ec0663f15a388fd66d9c777ef26fda87f2c --- /dev/null +++ b/static/docs/dev/reference/torch_tanh.html @@ -0,0 +1,260 @@ + + + + + + + + +Tanh — torch_tanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tanh

    +
    + +
    torch_tanh(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    tanh(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the hyperbolic tangent of the elements +of input.

    +

    $$ + \mbox{out}_{i} = \tanh(\mbox{input}_{i}) +$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_tanh(a)
}
#> torch_tensor
#> -0.4951
#> 0.0453
#> 0.9080
#> -0.8097
#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_tensor.html b/static/docs/dev/reference/torch_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..282487eeb1b1e13f2834986134074b6caca6fcb5 --- /dev/null +++ b/static/docs/dev/reference/torch_tensor.html @@ -0,0 +1,271 @@ + + + + + + + + +Converts R objects to a torch tensor — torch_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Converts R objects to a torch tensor

    +
    + +
torch_tensor(
  data,
  dtype = NULL,
  device = NULL,
  requires_grad = FALSE,
  pin_memory = FALSE
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    data

    an R atomic vector, matrix or array

    dtype

    a torch_dtype instance

    device

a device created with torch_device()

    requires_grad

whether autograd should record operations on the returned tensor.

    pin_memory

If TRUE, the returned tensor is allocated in pinned memory.

    + + +

    Examples

    +
if (torch_is_installed()) {
torch_tensor(c(1,2,3,4))
torch_tensor(c(1,2,3,4), dtype = torch_int())

}
#> torch_tensor
#> 1
#> 2
#> 3
#> 4
#> [ CPUIntType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_tensordot.html b/static/docs/dev/reference/torch_tensordot.html new file mode 100644 index 0000000000000000000000000000000000000000..1abe8878da01c77e589abe97c19108d1a94678a3 --- /dev/null +++ b/static/docs/dev/reference/torch_tensordot.html @@ -0,0 +1,260 @@ + + + + + + + + +Tensordot — torch_tensordot • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns a contraction of a and b over multiple dimensions. +tensordot implements a generalized matrix product.

    +
    + +
    torch_tensordot(a, b, dims = 2)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    a

    (Tensor) Left tensor to contract

    b

    (Tensor) Right tensor to contract

    dims

    (int or tuple of two lists of integers) number of dimensions to contract or explicit lists of dimensions for a and b respectively

    + + +

    Examples

    +
if (torch_is_installed()) {

a = torch_arange(start = 0, end = 60.)$reshape(c(3, 4, 5))
b = torch_arange(start = 0, end = 24.)$reshape(c(4, 3, 2))
torch_tensordot(a, b, dims = list(c(2, 1), c(1, 2)))
if (FALSE) {
a = torch_randn(3, 4, 5, device='cuda')
b = torch_randn(4, 5, 6, device='cuda')
c = torch_tensordot(a, b, dims=2)$cpu()
}
}
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_threshold_.html b/static/docs/dev/reference/torch_threshold_.html new file mode 100644 index 0000000000000000000000000000000000000000..c64ce9ec89ff24df590b547e4326cc9f646ce095 --- /dev/null +++ b/static/docs/dev/reference/torch_threshold_.html @@ -0,0 +1,251 @@ + + + + + + + + +Threshold_ — torch_threshold_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Threshold_

    +
    + +
    torch_threshold_(self, threshold, value)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    input tensor

    threshold

    The value to threshold at

    value

    The value to replace with

    + +

    threshold_(input, threshold, value) -> Tensor

    + + + + +

    In-place version of torch_threshold.

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_topk.html b/static/docs/dev/reference/torch_topk.html new file mode 100644 index 0000000000000000000000000000000000000000..22c559d3d5aec2d6c9cd2634eb2f48927dcedfc7 --- /dev/null +++ b/static/docs/dev/reference/torch_topk.html @@ -0,0 +1,287 @@ + + + + + + + + +Topk — torch_topk • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Topk

    +
    + +
    torch_topk(self, k, dim = -1L, largest = TRUE, sorted = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    k

    (int) the k in "top-k"

    dim

    (int, optional) the dimension to sort along

    largest

    (bool, optional) controls whether to return largest or smallest elements

    sorted

    (bool, optional) controls whether to return the elements in sorted order

    + +

    topk(input, k, dim=NULL, largest=TRUE, sorted=TRUE) -> (Tensor, LongTensor)

    + + + + +

    Returns the k largest elements of the given input tensor along +a given dimension.

    +

    If dim is not given, the last dimension of the input is chosen.

    +

    If largest is FALSE then the k smallest elements are returned.

    +

    A namedtuple of (values, indices) is returned, where the indices are the indices +of the elements in the original input tensor.

    +

If the boolean option sorted is TRUE, the returned +k elements are guaranteed to be sorted.
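The values/indices contract of a top-k selection can be sketched numerically. NumPy is used below as a hedged stand-in (NumPy indices are 0-based; the R interface returns 1-based indices):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = 3

# Indices of the k largest values, in descending order of value.
idx = np.argsort(-x)[:k]
values = x[idx]
print(values.tolist())   # [5.0, 4.0, 3.0]
print(idx.tolist())      # [4, 3, 2]
```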

    + +

    Examples

    +
if (torch_is_installed()) {

x = torch_arange(1., 6.)
x
torch_topk(x, 3)
}
#> [[1]]
#> torch_tensor
#> 5
#> 4
#> 3
#> [ CPUFloatType{3} ]
#>
#> [[2]]
#> torch_tensor
#> 4
#> 3
#> 2
#> [ CPULongType{3} ]
#>
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_trace.html b/static/docs/dev/reference/torch_trace.html new file mode 100644 index 0000000000000000000000000000000000000000..93120104a3ea3b9145eafd50d2c05cb5ff3c1132 --- /dev/null +++ b/static/docs/dev/reference/torch_trace.html @@ -0,0 +1,253 @@ + + + + + + + + +Trace — torch_trace • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Trace

    +
    + +
    torch_trace(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    trace(input) -> Tensor

    + + + + +

    Returns the sum of the elements of the diagonal of the input 2-D matrix.
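The same diagonal sum can be illustrated with NumPy, used here only as a runnable stand-in:

```python
import numpy as np

x = np.arange(1.0, 10.0).reshape(3, 3)
# trace = sum of the main diagonal: 1 + 5 + 9
print(np.trace(x))   # 15.0
```

This matches the R example on this page, which also yields 15.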

    + +

    Examples

    +
if (torch_is_installed()) {

x = torch_arange(1., 10.)$view(c(3, 3))
x
torch_trace(x)
}
#> torch_tensor
#> 15
#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_transpose.html b/static/docs/dev/reference/torch_transpose.html new file mode 100644 index 0000000000000000000000000000000000000000..47d30f5f30dbf02d0fef9fa8b2c2ec61d65731a5 --- /dev/null +++ b/static/docs/dev/reference/torch_transpose.html @@ -0,0 +1,267 @@ + + + + + + + + +Transpose — torch_transpose • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Transpose

    +
    + +
    torch_transpose(self, dim0, dim1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim0

    (int) the first dimension to be transposed

    dim1

    (int) the second dimension to be transposed

    + +

    transpose(input, dim0, dim1) -> Tensor

    + + + + +

    Returns a tensor that is a transposed version of input. +The given dimensions dim0 and dim1 are swapped.

    +

The resulting out tensor shares its underlying storage with the +input tensor, so changing the content of one changes the content +of the other.
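The shared-storage behavior is the same as a NumPy transposed view, shown here as a hedged stand-in:

```python
import numpy as np

x = np.zeros((2, 3))
t = x.T                  # a view that shares storage with x

t[0, 1] = 7.0            # write through the view...
print(x[1, 0])           # 7.0 ...and the original changes too
```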

    + +

    Examples

    +
if (torch_is_installed()) {

x = torch_randn(c(2, 3))
x
torch_transpose(x, 1, 2)
}
#> torch_tensor
#> 0.0214 -0.8931
#> -1.7985 0.4838
#> -0.6487 0.5333
#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_trapz.html b/static/docs/dev/reference/torch_trapz.html new file mode 100644 index 0000000000000000000000000000000000000000..9de71109aee6bfd31b033ad33e968b95ce93161b --- /dev/null +++ b/static/docs/dev/reference/torch_trapz.html @@ -0,0 +1,274 @@ + + + + + + + + +Trapz — torch_trapz • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Trapz

    +
    + +
    torch_trapz(y, dx = 1L, x, dim = -1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    y

    (Tensor) The values of the function to integrate

    dx

    (float) The distance between points at which y is sampled.

    x

    (Tensor) The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively to the estimated integral (i.e., the convention \(\int_a^b f = -\int_b^a f\) is followed).

    dim

    (int) The dimension along which to integrate. By default, use the last dimension.

    + +

    trapz(y, x, *, dim=-1) -> Tensor

    + + + + +

    Estimate \(\int y\,dx\) along dim, using the trapezoid rule.

    +

    trapz(y, *, dx=1, dim=-1) -> Tensor

    + + + + +

    As above, but the sample points are spaced uniformly at a distance of dx.
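Both call forms reduce to the trapezoid rule. A minimal sketch of the arithmetic, in plain NumPy as a stand-in:

```python
import numpy as np

y = np.array([1.0, 2.0, 4.0])
x = np.array([1.0, 3.0, 4.0])

# Explicit sample points: sum of (x[i+1]-x[i]) * (y[i]+y[i+1]) / 2
est_x = np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0)
print(est_x)    # 2*1.5 + 1*3.0 = 6.0

# Uniform spacing dx = 1: sum of (y[i]+y[i+1]) / 2
est_dx = np.sum((y[1:] + y[:-1]) / 2.0)
print(est_dx)   # 1.5 + 3.0 = 4.5
```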

    + +

    Examples

    +
if (torch_is_installed()) {

y = torch_randn(list(2, 3))
y
x = torch_tensor(matrix(c(1, 3, 4, 1, 2, 3), ncol = 3, byrow=TRUE))
torch_trapz(y, x = x)

}
#> torch_tensor
#> 2.8381
#> 0.2725
#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_triangular_solve.html b/static/docs/dev/reference/torch_triangular_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..e2393a90df1846265aa1357da65221d1299ad750 --- /dev/null +++ b/static/docs/dev/reference/torch_triangular_solve.html @@ -0,0 +1,292 @@ + + + + + + + + +Triangular_solve — torch_triangular_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Triangular_solve

    +
    + +
torch_triangular_solve(
  self,
  A,
  upper = TRUE,
  transpose = FALSE,
  unitriangular = FALSE
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

(Tensor) multiple right-hand sides of size \((*, m, k)\) where \(*\) is zero or more batch dimensions (\(b\))

    A

    (Tensor) the input triangular coefficient matrix of size \((*, m, m)\) where \(*\) is zero or more batch dimensions

    upper

    (bool, optional) whether to solve the upper-triangular system of equations (default) or the lower-triangular system of equations. Default: TRUE.

    transpose

    (bool, optional) whether \(A\) should be transposed before being sent into the solver. Default: FALSE.

    unitriangular

    (bool, optional) whether \(A\) is unit triangular. If TRUE, the diagonal elements of \(A\) are assumed to be 1 and not referenced from \(A\). Default: FALSE.

    + +

    triangular_solve(input, A, upper=TRUE, transpose=FALSE, unitriangular=FALSE) -> (Tensor, Tensor)

    + + + + +

    Solves a system of equations with a triangular coefficient matrix \(A\) +and multiple right-hand sides \(b\).

    +

    In particular, solves \(AX = b\) and assumes \(A\) is upper-triangular +with the default keyword arguments.

    +

torch_triangular_solve(b, A) accepts 2-D inputs b, A, or inputs that are +batches of 2-D matrices. If the inputs are batches, then batched +outputs X are returned.
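A small worked instance of the system \(AX = b\) with an upper-triangular \(A\). NumPy's general solver is used below as a hedged stand-in; for triangular matrices it gives the same result that back-substitution would:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # upper-triangular coefficient matrix
b = np.array([[5.0],
              [6.0]])           # right-hand side

# Back-substitution: 3*x2 = 6 -> x2 = 2; 2*x1 + x2 = 5 -> x1 = 1.5
X = np.linalg.solve(A, b)
print(X.ravel().tolist())       # [1.5, 2.0]
print(np.allclose(A @ X, b))    # True
```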

    + +

    Examples

    +
if (torch_is_installed()) {

A = torch_randn(c(2, 2))$triu()
A
b = torch_randn(c(2, 3))
b
torch_triangular_solve(b, A)
}
#> [[1]]
#> torch_tensor
#> 2.4651 2.4966 -0.8156
#> -1.9076 -0.5801 2.0013
#> [ CPUFloatType{2,3} ]
#>
#> [[2]]
#> torch_tensor
#> -0.8972 -0.6839
#> 0.0000 -0.6066
#> [ CPUFloatType{2,2} ]
#>
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_tril.html b/static/docs/dev/reference/torch_tril.html new file mode 100644 index 0000000000000000000000000000000000000000..c5a44bed357abc4e4879d165e6927cce87421481 --- /dev/null +++ b/static/docs/dev/reference/torch_tril.html @@ -0,0 +1,274 @@ + + + + + + + + +Tril — torch_tril • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tril

    +
    + +
    torch_tril(self, diagonal = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    diagonal

    (int, optional) the diagonal to consider

    + +

    tril(input, diagonal=0, out=NULL) -> Tensor

    + + + + +

Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices +input; the other elements of the result tensor out are set to 0.

    +

    The lower triangular part of the matrix is defined as the elements on and +below the diagonal.

    +

The argument diagonal controls which diagonal to consider. If +diagonal = 0, all elements on and below the main diagonal are +retained. A positive value includes just as many diagonals above the main +diagonal, and similarly a negative value excludes just as many diagonals below +the main diagonal. The main diagonal is the set of indices +\(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where +\(d_{1}, d_{2}\) are the dimensions of the matrix.
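The effect of the diagonal argument can be sketched with NumPy's tril (its k parameter plays the same role; NumPy shown here only as a runnable stand-in):

```python
import numpy as np

b = np.arange(1, 13).reshape(3, 4)

print(np.tril(b))        # keep elements on and below the main diagonal
print(np.tril(b, 1))     # also keep one diagonal above it
print(np.tril(b, -1))    # exclude the main diagonal as well
```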

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(3, 3))
a
torch_tril(a)
b = torch_randn(c(4, 6))
b
torch_tril(b, diagonal=1)
torch_tril(b, diagonal=-1)
}
#> torch_tensor
#> 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
#> -1.0780 0.0000 0.0000 0.0000 0.0000 0.0000
#> 0.2396 -1.1575 0.0000 0.0000 0.0000 0.0000
#> 0.6936 -0.7147 0.3855 0.0000 0.0000 0.0000
#> [ CPUFloatType{4,6} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_tril_indices.html b/static/docs/dev/reference/torch_tril_indices.html new file mode 100644 index 0000000000000000000000000000000000000000..39ffbb0b5561a07bc5928286f3fa21c1127cc9cc --- /dev/null +++ b/static/docs/dev/reference/torch_tril_indices.html @@ -0,0 +1,301 @@ + + + + + + + + +Tril_indices — torch_tril_indices • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tril_indices

    +
    + +
torch_tril_indices(
  row,
  col,
  offset = 0,
  dtype = torch_long(),
  device = "cpu",
  layout = torch_strided()
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    row

    (int) number of rows in the 2-D matrix.

    col

    (int) number of columns in the 2-D matrix.

    offset

    (int) diagonal offset from the main diagonal. Default: if not provided, 0.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    layout

    (torch.layout, optional) currently only support torch_strided.

    + +

    Note

    + + +
When running on CUDA, `row * col` must be less than \(2^{59}\) to prevent overflow during calculation.
    +
    + +

    tril_indices(row, col, offset=0, dtype=torch_long(), device='cpu', layout=torch_strided()) -> Tensor

    + + + + +

Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row +coordinates of all indices and the second row contains column coordinates. +Indices are ordered based on rows and then columns.

    +

    The lower triangular part of the matrix is defined as the elements on and +below the diagonal.

    +

The argument offset controls which diagonal to consider. If +offset = 0, all elements on and below the main diagonal are +retained. A positive value includes just as many diagonals above the main +diagonal, and similarly a negative value excludes just as many diagonals below +the main diagonal. The main diagonal is the set of indices +\(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) +where \(d_{1}, d_{2}\) are the dimensions of the matrix.
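The 2-by-N, rows-then-columns ordering can be illustrated with NumPy's tril_indices, used here as a hedged stand-in (0-based coordinates, versus 1-based in the R interface):

```python
import numpy as np

# Lower-triangle indices of a 4x3 matrix: first row of the stacked
# result holds row coordinates, the second holds column coordinates,
# ordered by rows and then columns.
rows, cols = np.tril_indices(4, k=0, m=3)
print(np.vstack([rows, cols]))
```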

    + +

    Examples

    +
if (torch_is_installed()) {
if (FALSE) {
a = torch_tril_indices(3, 3)
a
a = torch_tril_indices(4, 3, -1)
a
a = torch_tril_indices(4, 3, 1)
a
}
}
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_triu.html b/static/docs/dev/reference/torch_triu.html new file mode 100644 index 0000000000000000000000000000000000000000..c0ee0ebd2469f8bdd8c0371b6559fdc372ab1920 --- /dev/null +++ b/static/docs/dev/reference/torch_triu.html @@ -0,0 +1,276 @@ + + + + + + + + +Triu — torch_triu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Triu

    +
    + +
    torch_triu(self, diagonal = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    diagonal

    (int, optional) the diagonal to consider

    + +

    triu(input, diagonal=0, out=NULL) -> Tensor

    + + + + +

Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices +input; the other elements of the result tensor out are set to 0.

    +

    The upper triangular part of the matrix is defined as the elements on and +above the diagonal.

    +

The argument diagonal controls which diagonal to consider. If +diagonal = 0, all elements on and above the main diagonal are +retained. A positive value excludes just as many diagonals above the main +diagonal, and similarly a negative value includes just as many diagonals below +the main diagonal. The main diagonal is the set of indices +\(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where +\(d_{1}, d_{2}\) are the dimensions of the matrix.

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(3, 3))
a
torch_triu(a)
torch_triu(a, diagonal=1)
torch_triu(a, diagonal=-1)
b = torch_randn(c(4, 6))
b
torch_triu(b, diagonal=1)
torch_triu(b, diagonal=-1)
}
#> torch_tensor
#> 0.0762 1.2123 1.0293 1.3295 -0.9118 1.0105
#> -2.1972 -0.5764 1.0395 0.6328 1.9152 0.7976
#> 0.0000 0.7375 -0.2437 -0.6842 -1.2824 0.0754
#> 0.0000 0.0000 1.0719 1.1795 -0.2865 0.4529
#> [ CPUFloatType{4,6} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_triu_indices.html b/static/docs/dev/reference/torch_triu_indices.html new file mode 100644 index 0000000000000000000000000000000000000000..af92c5fd945f23975fb75270f4f30a5acc6aa8e4 --- /dev/null +++ b/static/docs/dev/reference/torch_triu_indices.html @@ -0,0 +1,301 @@ + + + + + + + + +Triu_indices — torch_triu_indices • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Triu_indices

    +
    + +
torch_triu_indices(
  row,
  col,
  offset = 0,
  dtype = torch_long(),
  device = "cpu",
  layout = torch_strided()
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    row

    (int) number of rows in the 2-D matrix.

    col

    (int) number of columns in the 2-D matrix.

    offset

    (int) diagonal offset from the main diagonal. Default: if not provided, 0.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    layout

    (torch.layout, optional) currently only support torch_strided.

    + +

    Note

    + + +
When running on CUDA, `row * col` must be less than \(2^{59}\) to prevent overflow during calculation.
    +
    + +

triu_indices(row, col, offset=0, dtype=torch_long(), device='cpu', layout=torch_strided()) -> Tensor

    + + + + +

Returns the indices of the upper triangular part of a `row` by `col` matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns.

    +

    The upper triangular part of the matrix is defined as the elements on and +above the diagonal.

    +

The argument offset controls which diagonal to consider. If offset = 0, all elements on and above the main diagonal are retained. A positive value excludes that many diagonals above the main diagonal, and similarly a negative value includes that many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.
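Since the built-in example is not run, here is a minimal hedged sketch (assuming torch is installed) of the offset semantics described above: the returned 2-by-N tensor addresses exactly the elements that `torch_triu()` keeps for the same `offset`.

```r
library(torch)

# Indices of the upper triangle of a 4 x 4 matrix, excluding the main
# diagonal (offset = 1). Row 1 of the result holds row coordinates,
# row 2 holds column coordinates, ordered by rows and then columns.
idx <- torch_triu_indices(4, 4, offset = 1)
idx
```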

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +a = torch_triu_indices(3, 3) +a +a = torch_triu_indices(4, 3, -1) +a +a = torch_triu_indices(4, 3, 1) +a +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_true_divide.html b/static/docs/dev/reference/torch_true_divide.html new file mode 100644 index 0000000000000000000000000000000000000000..edbe9c693675c92294f68fc297b65ca1501f896c --- /dev/null +++ b/static/docs/dev/reference/torch_true_divide.html @@ -0,0 +1,265 @@ + + + + + + + + +TRUE_divide — torch_true_divide • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

True_divide

    +
    + +
    torch_true_divide(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the dividend

    other

    (Tensor or Scalar) the divisor

    + +

    true_divide(dividend, divisor) -> Tensor

    + + + + +

    Performs "true division" that always computes the division +in floating point. Analogous to division in Python 3 and equivalent to +torch_div except when both inputs have bool or integer scalar types, +in which case they are cast to the default (floating) scalar type before the division.

    +

    $$ + \mbox{out}_i = \frac{\mbox{dividend}_i}{\mbox{divisor}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +dividend = torch_tensor(c(5, 3), dtype=torch_int()) +divisor = torch_tensor(c(3, 2), dtype=torch_int()) +torch_true_divide(dividend, divisor) +torch_true_divide(dividend, 2) +} +
    #> torch_tensor +#> 2.5000 +#> 1.5000 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_trunc.html b/static/docs/dev/reference/torch_trunc.html new file mode 100644 index 0000000000000000000000000000000000000000..45aede12dda523edd091a7886d9ebfd372ced4fa --- /dev/null +++ b/static/docs/dev/reference/torch_trunc.html @@ -0,0 +1,257 @@ + + + + + + + + +Trunc — torch_trunc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Trunc

    +
    + +
    torch_trunc(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    trunc(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the truncated integer values of +the elements of input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_trunc(a) +} +
    #> torch_tensor +#> 1 +#> 1 +#> -0 +#> -0 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_unbind.html b/static/docs/dev/reference/torch_unbind.html new file mode 100644 index 0000000000000000000000000000000000000000..89ccdadd47aacfc15cdfeaa9d4565f9fa1f5e6f6 --- /dev/null +++ b/static/docs/dev/reference/torch_unbind.html @@ -0,0 +1,274 @@ + + + + + + + + +Unbind — torch_unbind • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Unbind

    +
    + +
    torch_unbind(self, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to unbind

    dim

    (int) dimension to remove

    + +

unbind(input, dim=1L) -> list

    + + + + +

    Removes a tensor dimension.

    +

Returns a list of all slices along the given dimension, with that dimension removed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_unbind(torch_tensor(matrix(1:9, ncol = 3, byrow=TRUE))) +} +
    #> [[1]] +#> torch_tensor +#> 1 +#> 2 +#> 3 +#> [ CPULongType{3} ] +#> +#> [[2]] +#> torch_tensor +#> 4 +#> 5 +#> 6 +#> [ CPULongType{3} ] +#> +#> [[3]] +#> torch_tensor +#> 7 +#> 8 +#> 9 +#> [ CPULongType{3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_unique_consecutive.html b/static/docs/dev/reference/torch_unique_consecutive.html new file mode 100644 index 0000000000000000000000000000000000000000..c4605d6dab67c999fbbdc4dcfbaba5f5626f8701 --- /dev/null +++ b/static/docs/dev/reference/torch_unique_consecutive.html @@ -0,0 +1,294 @@ + + + + + + + + +Unique_consecutive — torch_unique_consecutive • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Unique_consecutive

    +
    + +
    torch_unique_consecutive(
    +  self,
    +  return_inverse = FALSE,
    +  return_counts = FALSE,
    +  dim = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor

    return_inverse

    (bool) Whether to also return the indices for where elements in the original input ended up in the returned unique list.

    return_counts

    (bool) Whether to also return the counts for each unique element.

    dim

    (int) the dimension to apply unique. If NULL, the unique of the flattened input is returned. default: NULL

    + +

unique_consecutive(input, return_inverse=FALSE, return_counts=FALSE, dim=NULL) -> Tensor

    + + + + +

    Eliminates all but the first element from every consecutive group of equivalent elements.

Note: this function is different from torch_unique() in that it only eliminates consecutive duplicate values. Its semantics are similar to those of `std::unique` in C++.
    +
    + + +

    Examples

    +
    if (torch_is_installed()) { +x = torch_tensor(c(1, 1, 2, 2, 3, 1, 1, 2)) +output = torch_unique_consecutive(x) +output +torch_unique_consecutive(x, return_inverse=TRUE) +torch_unique_consecutive(x, return_counts=TRUE) +} +
    #> [[1]] +#> torch_tensor +#> 1 +#> 2 +#> 3 +#> 1 +#> 2 +#> [ CPUFloatType{5} ] +#> +#> [[2]] +#> torch_tensor +#> [ CPULongType{0} ] +#> +#> [[3]] +#> torch_tensor +#> 2 +#> 2 +#> 1 +#> 2 +#> 1 +#> [ CPULongType{5} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_unsqueeze.html b/static/docs/dev/reference/torch_unsqueeze.html new file mode 100644 index 0000000000000000000000000000000000000000..ce8bfb566f93102512ae8c980e082469c7312232 --- /dev/null +++ b/static/docs/dev/reference/torch_unsqueeze.html @@ -0,0 +1,265 @@ + + + + + + + + +Unsqueeze — torch_unsqueeze • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Unsqueeze

    +
    + +
    torch_unsqueeze(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the index at which to insert the singleton dimension

    + +

    unsqueeze(input, dim) -> Tensor

    + + + + +

    Returns a new tensor with a dimension of size one inserted at the +specified position.

    +

    The returned tensor shares the same underlying data with this tensor.

    +

    A dim value within the range [-input.dim() - 1, input.dim() + 1) +can be used. Negative dim will correspond to unsqueeze +applied at dim = dim + input.dim() + 1.
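As a hedged illustration of the negative-`dim` rule above (a sketch, assuming torch is installed and that negative dims are accepted here as described), `dim = -1` inserts the new dimension at the end:

```r
library(torch)

x <- torch_tensor(c(1, 2, 3, 4))
# dim = -1 corresponds to dim = -1 + x$dim() + 1 = 2,
# i.e. a trailing singleton dimension: shape {4, 1}
torch_unsqueeze(x, -1)
```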

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(c(1, 2, 3, 4)) +torch_unsqueeze(x, 1) +torch_unsqueeze(x, 2) +} +
    #> torch_tensor +#> 1 +#> 2 +#> 3 +#> 4 +#> [ CPUFloatType{4,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_var.html b/static/docs/dev/reference/torch_var.html new file mode 100644 index 0000000000000000000000000000000000000000..948ec2732c3073d8afc05eec5fbeeb8a0bf10e23 --- /dev/null +++ b/static/docs/dev/reference/torch_var.html @@ -0,0 +1,288 @@ + + + + + + + + +Var — torch_var • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Var

    +
    + +
    torch_var(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    var(input, unbiased=TRUE) -> Tensor

    + + + + +

    Returns the variance of all elements in the input tensor.

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.

    +

var(input, dim, keepdim=FALSE, unbiased=TRUE, out=NULL) -> Tensor

    + + + + +

    Returns the variance of each row of the input tensor in the given +dimension dim.

    +

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or length(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.
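The `keepdim` behavior described above can be sketched as follows (a hedged example, assuming torch is installed; the usage shown follows the signature at the top of this page):

```r
library(torch)

a <- torch_randn(c(4, 4))
torch_var(a, dim = 1)                  # reduces rows: shape {4}
torch_var(a, dim = 1, keepdim = TRUE)  # reduced dim retained: shape {1, 4}
```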

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_var(a) + + +a = torch_randn(c(4, 4)) +a +torch_var(a, 1) +} +
    #> torch_tensor +#> 2.2094 +#> 1.3878 +#> 0.3613 +#> 0.4530 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_var_mean.html b/static/docs/dev/reference/torch_var_mean.html new file mode 100644 index 0000000000000000000000000000000000000000..95c29646eddfdc0b4d0e6ae0dc0476f402deba25 --- /dev/null +++ b/static/docs/dev/reference/torch_var_mean.html @@ -0,0 +1,298 @@ + + + + + + + + +Var_mean — torch_var_mean • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Var_mean

    +
    + +
    torch_var_mean(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    var_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

    + + + + +

    Returns the variance and mean of all elements in the input tensor.

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.

    +

var_mean(input, dim, keepdim=FALSE, unbiased=TRUE) -> (Tensor, Tensor)

    + + + + +

    Returns the variance and mean of each row of the input tensor in the given +dimension dim.

    +

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or length(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_var_mean(a) + + +a = torch_randn(c(4, 4)) +a +torch_var_mean(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 0.8131 +#> 2.9003 +#> 0.3918 +#> 1.6492 +#> [ CPUFloatType{4} ] +#> +#> [[2]] +#> torch_tensor +#> 0.5266 +#> 0.3362 +#> 0.2158 +#> 0.1401 +#> [ CPUFloatType{4} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_where.html b/static/docs/dev/reference/torch_where.html new file mode 100644 index 0000000000000000000000000000000000000000..90370f0c2178473404e0c9e18b48f54944be3b4f --- /dev/null +++ b/static/docs/dev/reference/torch_where.html @@ -0,0 +1,287 @@ + + + + + + + + +Where — torch_where • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Where

    +
    + +
    torch_where(condition, self, other)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    condition

(BoolTensor) When TRUE (nonzero), yields self, otherwise yields other

    self

    (Tensor) values selected at indices where condition is TRUE

    other

    (Tensor) values selected at indices where condition is FALSE

    + +

    Note

    + + +
The tensors `condition`, `self`, and `other` must be broadcastable.
    +
    + +

    See also torch_nonzero().

    +

    where(condition, x, y) -> Tensor

    + + + + +

Returns a tensor of elements selected from either x or y, depending on condition.

    +

    The operation is defined as:

    +

    $$ + \mbox{out}_i = \left\{ \begin{array}{ll} + \mbox{x}_i & \mbox{if } \mbox{condition}_i \\ + \mbox{y}_i & \mbox{otherwise} \\ + \end{array} + \right. +$$

    +

    where(condition) -> tuple of LongTensor

    + + + + +

    torch_where(condition) is identical to +torch_nonzero(condition, as_tuple=TRUE).
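A hedged sketch of the single-argument form just described (assuming torch is installed and that the single-argument call is supported as documented):

```r
library(torch)

cond <- torch_tensor(matrix(c(TRUE, FALSE, FALSE, TRUE), ncol = 2))
# Equivalent to torch_nonzero(cond, as_tuple = TRUE):
# a list of index tensors, one per dimension
torch_where(cond)
```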

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +x = torch_randn(c(3, 2)) +y = torch_ones(c(3, 2)) +x +torch_where(x > 0, x, y) +} + + + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_zeros.html b/static/docs/dev/reference/torch_zeros.html new file mode 100644 index 0000000000000000000000000000000000000000..4061b5dd0a841f14b078ba3e58b6969c7537fdc5 --- /dev/null +++ b/static/docs/dev/reference/torch_zeros.html @@ -0,0 +1,284 @@ + + + + + + + + +Zeros — torch_zeros • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Zeros

    +
    + +
    torch_zeros(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list.

    names

    optional dimension names

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    zeros(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a tensor filled with the scalar value 0, with the shape defined +by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_zeros(c(2, 3)) +torch_zeros(c(5)) +} +
    #> torch_tensor +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/torch_zeros_like.html b/static/docs/dev/reference/torch_zeros_like.html new file mode 100644 index 0000000000000000000000000000000000000000..4bdc53db3c27d560306942ac6afb6704248e76e2 --- /dev/null +++ b/static/docs/dev/reference/torch_zeros_like.html @@ -0,0 +1,289 @@ + + + + + + + + +Zeros_like — torch_zeros_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Zeros_like

    +
    + +
    torch_zeros_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    zeros_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

    Returns a tensor filled with the scalar value 0, with the same size as +input. torch_zeros_like(input) is equivalent to +torch_zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

    +

    Warning

    + + + +

    As of 0.4, this function does not support an out keyword. As an alternative, +the old torch_zeros_like(input, out=output) is equivalent to +torch_zeros(input.size(), out=output).

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_empty(c(2, 3)) +torch_zeros_like(input) +} +
    #> torch_tensor +#> 0 0 0 +#> 0 0 0 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/with_enable_grad.html b/static/docs/dev/reference/with_enable_grad.html new file mode 100644 index 0000000000000000000000000000000000000000..875ead4490cf2730790258d19c90b09564616de9 --- /dev/null +++ b/static/docs/dev/reference/with_enable_grad.html @@ -0,0 +1,259 @@ + + + + + + + + +Enable grad — with_enable_grad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Context manager that enables gradient calculation if it has been disabled via with_no_grad.

    +
    + +
    with_enable_grad(code)
    + +

    Arguments

    + + + + + + +
    code

    code to be executed with gradient recording.

    + +

    Details

    + +

    This context manager is thread local; it will not affect computation in +other threads.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x <- torch_tensor(1, requires_grad=TRUE) +with_no_grad({ + with_enable_grad({ + y = x * 2 + }) +}) +y$backward() +x$grad + +} +
    #> torch_tensor +#> 2 +#> [ CPUFloatType{1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/dev/reference/with_no_grad.html b/static/docs/dev/reference/with_no_grad.html new file mode 100644 index 0000000000000000000000000000000000000000..20200cbf39a170a07bbd8f342195561bd607f90f --- /dev/null +++ b/static/docs/dev/reference/with_no_grad.html @@ -0,0 +1,249 @@ + + + + + + + + +Temporarily modify gradient recording. — with_no_grad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Temporarily modify gradient recording.

    +
    + +
    with_no_grad(code)
    + +

    Arguments

    + + + + + + +
    code

    code to be executed with no gradient recording.

    + + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_tensor(runif(5), requires_grad = TRUE) +with_no_grad({ + x$sub_(torch_tensor(as.numeric(1:5))) +}) +x +x$grad + +} +
    #> torch_tensor +#> [ Tensor (undefined) ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/docsearch.css b/static/docs/docsearch.css new file mode 100644 index 0000000000000000000000000000000000000000..e5f1fe1dfa2c34c51fe941829b511acd8c763301 --- /dev/null +++ b/static/docs/docsearch.css @@ -0,0 +1,148 @@ +/* Docsearch -------------------------------------------------------------- */ +/* + Source: https://github.com/algolia/docsearch/ + License: MIT +*/ + +.algolia-autocomplete { + display: block; + -webkit-box-flex: 1; + -ms-flex: 1; + flex: 1 +} + +.algolia-autocomplete .ds-dropdown-menu { + width: 100%; + min-width: none; + max-width: none; + padding: .75rem 0; + background-color: #fff; + background-clip: padding-box; + border: 1px solid rgba(0, 0, 0, .1); + box-shadow: 0 .5rem 1rem rgba(0, 0, 0, .175); +} + +@media (min-width:768px) { + .algolia-autocomplete .ds-dropdown-menu { + width: 175% + } +} + +.algolia-autocomplete .ds-dropdown-menu::before { + display: none +} + +.algolia-autocomplete .ds-dropdown-menu [class^=ds-dataset-] { + padding: 0; + background-color: rgb(255,255,255); + border: 0; + max-height: 80vh; +} + +.algolia-autocomplete .ds-dropdown-menu .ds-suggestions { + margin-top: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion { + padding: 0; + overflow: visible +} + +.algolia-autocomplete .algolia-docsearch-suggestion--category-header { + padding: .125rem 1rem; + margin-top: 0; + font-size: 1.3em; + font-weight: 500; + color: #00008B; + border-bottom: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--wrapper { + float: none; + padding-top: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--subcategory-column { + float: none; + width: auto; + padding: 0; + text-align: left +} + +.algolia-autocomplete .algolia-docsearch-suggestion--content { + float: none; + width: auto; + padding: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--content::before { + display: none +} + +.algolia-autocomplete .ds-suggestion:not(:first-child) 
.algolia-docsearch-suggestion--category-header { + padding-top: .75rem; + margin-top: .75rem; + border-top: 1px solid rgba(0, 0, 0, .1) +} + +.algolia-autocomplete .ds-suggestion .algolia-docsearch-suggestion--subcategory-column { + display: block; + padding: .1rem 1rem; + margin-bottom: 0.1; + font-size: 1.0em; + font-weight: 400 + /* display: none */ +} + +.algolia-autocomplete .algolia-docsearch-suggestion--title { + display: block; + padding: .25rem 1rem; + margin-bottom: 0; + font-size: 0.9em; + font-weight: 400 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--text { + padding: 0 1rem .5rem; + margin-top: -.25rem; + font-size: 0.8em; + font-weight: 400; + line-height: 1.25 +} + +.algolia-autocomplete .algolia-docsearch-footer { + width: 110px; + height: 20px; + z-index: 3; + margin-top: 10.66667px; + float: right; + font-size: 0; + line-height: 0; +} + +.algolia-autocomplete .algolia-docsearch-footer--logo { + background-image: url("data:image/svg+xml;utf8,"); + background-repeat: no-repeat; + background-position: 50%; + background-size: 100%; + overflow: hidden; + text-indent: -9000px; + width: 100%; + height: 100%; + display: block; + transform: translate(-8px); +} + +.algolia-autocomplete .algolia-docsearch-suggestion--highlight { + color: #FF8C00; + background: rgba(232, 189, 54, 0.1) +} + + +.algolia-autocomplete .algolia-docsearch-suggestion--text .algolia-docsearch-suggestion--highlight { + box-shadow: inset 0 -2px 0 0 rgba(105, 105, 105, .5) +} + +.algolia-autocomplete .ds-suggestion.ds-cursor .algolia-docsearch-suggestion--content { + background-color: rgba(192, 192, 192, .15) +} diff --git a/static/docs/docsearch.js b/static/docs/docsearch.js new file mode 100644 index 0000000000000000000000000000000000000000..b35504cd3a282816130a16881f3ebeead9c1bcb4 --- /dev/null +++ b/static/docs/docsearch.js @@ -0,0 +1,85 @@ +$(function() { + + // register a handler to move the focus to the search bar + // upon pressing shift + "/" (i.e. 
"?") + $(document).on('keydown', function(e) { + if (e.shiftKey && e.keyCode == 191) { + e.preventDefault(); + $("#search-input").focus(); + } + }); + + $(document).ready(function() { + // do keyword highlighting + /* modified from https://jsfiddle.net/julmot/bL6bb5oo/ */ + var mark = function() { + + var referrer = document.URL ; + var paramKey = "q" ; + + if (referrer.indexOf("?") !== -1) { + var qs = referrer.substr(referrer.indexOf('?') + 1); + var qs_noanchor = qs.split('#')[0]; + var qsa = qs_noanchor.split('&'); + var keyword = ""; + + for (var i = 0; i < qsa.length; i++) { + var currentParam = qsa[i].split('='); + + if (currentParam.length !== 2) { + continue; + } + + if (currentParam[0] == paramKey) { + keyword = decodeURIComponent(currentParam[1].replace(/\+/g, "%20")); + } + } + + if (keyword !== "") { + $(".contents").unmark({ + done: function() { + $(".contents").mark(keyword); + } + }); + } + } + }; + + mark(); + }); +}); + +/* Search term highlighting ------------------------------*/ + +function matchedWords(hit) { + var words = []; + + var hierarchy = hit._highlightResult.hierarchy; + // loop to fetch from lvl0, lvl1, etc. 
+ for (var idx in hierarchy) { + words = words.concat(hierarchy[idx].matchedWords); + } + + var content = hit._highlightResult.content; + if (content) { + words = words.concat(content.matchedWords); + } + + // return unique words + var words_uniq = [...new Set(words)]; + return words_uniq; +} + +function updateHitURL(hit) { + + var words = matchedWords(hit); + var url = ""; + + if (hit.anchor) { + url = hit.url_without_anchor + '?q=' + escape(words.join(" ")) + '#' + hit.anchor; + } else { + url = hit.url + '?q=' + escape(words.join(" ")); + } + + return url; +} diff --git a/static/docs/index.html b/static/docs/index.html new file mode 100644 index 0000000000000000000000000000000000000000..881a7c951b0ef4585aa7b108b847e370e600537c --- /dev/null +++ b/static/docs/index.html @@ -0,0 +1,311 @@ + + + + + + + +Tensors and Neural Networks with GPU Acceleration • torch + + + + + + + + + + +
    +
    + + + + +
    +
    +
    + + +
    +

    +Installation

    +

    Run:

    +
    remotes::install_github("mlverse/torch")
    +

Additional software will be installed the first time the package is loaded.

    +
    +
    +

    +Example

    +

Currently this package is only a proof of concept: you can create a torch tensor from an R object and convert it back to an R object.

    +
    library(torch)
    +x <- array(runif(8), dim = c(2, 2, 2))
    +y <- torch_tensor(x, dtype = torch_float64())
    +y
    +#> torch_tensor 
    +#> (1,.,.) = 
    +#>   0.5406  0.8648
    +#>   0.3097  0.9715
    +#> 
    +#> (2,.,.) = 
    +#>   0.1309  0.8992
    +#>   0.4849  0.1902
    +#> [ CPUDoubleType{2,2,2} ]
    +identical(x, as_array(y))
    +#> [1] TRUE
    +
    +

    +Simple Autograd Example

    +

In the following snippet we use torch's autograd feature to calculate the derivatives:

    +
    x <- torch_tensor(1, requires_grad = TRUE)
    +w <- torch_tensor(2, requires_grad = TRUE)
    +b <- torch_tensor(3, requires_grad = TRUE)
    +y <- w * x + b
    +y$backward()
    +x$grad
    +#> torch_tensor 
    +#>  2
    +#> [ CPUFloatType{1} ]
    +w$grad
    +#> torch_tensor 
    +#>  1
    +#> [ CPUFloatType{1} ]
    +b$grad
    +#> torch_tensor 
    +#>  1
    +#> [ CPUFloatType{1} ]
    +
    +
    +

    +Linear Regression

    +

    In the following example we are going to fit a linear regression from scratch using torch’s Autograd.

    +

Note that all methods ending in _ (e.g. sub_) modify tensors in place.

    +
x <- torch_randn(100, 2)
y <- 0.1 + 0.5*x[,1] - 0.7*x[,2]

w <- torch_randn(2, 1, requires_grad = TRUE)
b <- torch_zeros(1, requires_grad = TRUE)

lr <- 0.5
for (i in 1:100) {
  y_hat <- torch_mm(x, w) + b
  # squeeze the trailing singleton dimension of y_hat (shape {100, 1})
  # so it matches y (shape {100}); squeezing dim 1 would be a no-op and
  # broadcast the subtraction to a {100, 100} tensor
  loss <- torch_mean((y - y_hat$squeeze(2))^2)
  
  loss$backward()
  
  with_no_grad({
    w$sub_(w$grad*lr)
    b$sub_(b$grad*lr)   
    
    w$grad$zero_()
    b$grad$zero_()
  })
}
print(w)
#> torch_tensor 
#>  0.5000
#> -0.7000
#> [ CPUFloatType{2,1} ]
print(b) 
#> torch_tensor 
#>  0.1000
#> [ CPUFloatType{1} ]
    +
    +
    +
    +

    +Contributing

    +

    No matter your current skills it’s possible to contribute to torch development. See the contributing guide for more information.

    +
    +
    +
    + + +
    + + +
    + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + diff --git a/static/docs/link.svg b/static/docs/link.svg new file mode 100644 index 0000000000000000000000000000000000000000..88ad82769b87f10725c57dca6fcf41b4bffe462c --- /dev/null +++ b/static/docs/link.svg @@ -0,0 +1,12 @@ + + + + + + diff --git a/static/docs/news/index.html b/static/docs/news/index.html new file mode 100644 index 0000000000000000000000000000000000000000..5cedf5d547bd06e3fcc63f20a490a4ed2dce6f94 --- /dev/null +++ b/static/docs/news/index.html @@ -0,0 +1,235 @@ + + + + + + + + +Changelog • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    +torch (development version) Unreleased +

    +
    +
    +

    +torch 0.0.2 2020-08-31 +

    +
      +
    • Added a NEWS.md file to track changes to the package.
    • +
    • Auto install when loading the package for the first time.
    • +
    +
    +
    + + + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/pkgdown.css b/static/docs/pkgdown.css new file mode 100644 index 0000000000000000000000000000000000000000..1273238dd9541cec3612b557db37835c800aa5de --- /dev/null +++ b/static/docs/pkgdown.css @@ -0,0 +1,367 @@ +/* Sticky footer */ + +/** + * Basic idea: https://philipwalton.github.io/solved-by-flexbox/demos/sticky-footer/ + * Details: https://github.com/philipwalton/solved-by-flexbox/blob/master/assets/css/components/site.css + * + * .Site -> body > .container + * .Site-content -> body > .container .row + * .footer -> footer + * + * Key idea seems to be to ensure that .container and __all its parents__ + * have height set to 100% + * + */ + +html, body { + height: 100%; +} + +body { + position: relative; +} + +body > .container { + display: flex; + height: 100%; + flex-direction: column; +} + +body > .container .row { + flex: 1 0 auto; +} + +footer { + margin-top: 45px; + padding: 35px 0 36px; + border-top: 1px solid #e5e5e5; + color: #666; + display: flex; + flex-shrink: 0; +} +footer p { + margin-bottom: 0; +} +footer div { + flex: 1; +} +footer .pkgdown { + text-align: right; +} +footer p { + margin-bottom: 0; +} + +img.icon { + float: right; +} + +img { + max-width: 100%; +} + +/* Fix bug in bootstrap (only seen in firefox) */ +summary { + display: list-item; +} + +/* Typographic tweaking ---------------------------------*/ + +.contents .page-header { + margin-top: calc(-60px + 1em); +} + +dd { + margin-left: 3em; +} + +/* Section anchors ---------------------------------*/ + +a.anchor { + margin-left: -30px; + display:inline-block; + width: 30px; + height: 30px; + visibility: hidden; + + background-image: url(./link.svg); + background-repeat: no-repeat; + background-size: 20px 20px; + background-position: center center; +} + +.hasAnchor:hover a.anchor { + visibility: visible; +} + +@media (max-width: 767px) { + .hasAnchor:hover a.anchor { + visibility: hidden; + } +} + + +/* Fixes for fixed navbar 
--------------------------*/ + +.contents h1, .contents h2, .contents h3, .contents h4 { + padding-top: 60px; + margin-top: -40px; +} + +/* Navbar submenu --------------------------*/ + +.dropdown-submenu { + position: relative; +} + +.dropdown-submenu>.dropdown-menu { + top: 0; + left: 100%; + margin-top: -6px; + margin-left: -1px; + border-radius: 0 6px 6px 6px; +} + +.dropdown-submenu:hover>.dropdown-menu { + display: block; +} + +.dropdown-submenu>a:after { + display: block; + content: " "; + float: right; + width: 0; + height: 0; + border-color: transparent; + border-style: solid; + border-width: 5px 0 5px 5px; + border-left-color: #cccccc; + margin-top: 5px; + margin-right: -10px; +} + +.dropdown-submenu:hover>a:after { + border-left-color: #ffffff; +} + +.dropdown-submenu.pull-left { + float: none; +} + +.dropdown-submenu.pull-left>.dropdown-menu { + left: -100%; + margin-left: 10px; + border-radius: 6px 0 6px 6px; +} + +/* Sidebar --------------------------*/ + +#pkgdown-sidebar { + margin-top: 30px; + position: -webkit-sticky; + position: sticky; + top: 70px; +} + +#pkgdown-sidebar h2 { + font-size: 1.5em; + margin-top: 1em; +} + +#pkgdown-sidebar h2:first-child { + margin-top: 0; +} + +#pkgdown-sidebar .list-unstyled li { + margin-bottom: 0.5em; +} + +/* bootstrap-toc tweaks ------------------------------------------------------*/ + +/* All levels of nav */ + +nav[data-toggle='toc'] .nav > li > a { + padding: 4px 20px 4px 6px; + font-size: 1.5rem; + font-weight: 400; + color: inherit; +} + +nav[data-toggle='toc'] .nav > li > a:hover, +nav[data-toggle='toc'] .nav > li > a:focus { + padding-left: 5px; + color: inherit; + border-left: 1px solid #878787; +} + +nav[data-toggle='toc'] .nav > .active > a, +nav[data-toggle='toc'] .nav > .active:hover > a, +nav[data-toggle='toc'] .nav > .active:focus > a { + padding-left: 5px; + font-size: 1.5rem; + font-weight: 400; + color: inherit; + border-left: 2px solid #878787; +} + +/* Nav: second level (shown on .active) 
*/ + +nav[data-toggle='toc'] .nav .nav { + display: none; /* Hide by default, but at >768px, show it */ + padding-bottom: 10px; +} + +nav[data-toggle='toc'] .nav .nav > li > a { + padding-left: 16px; + font-size: 1.35rem; +} + +nav[data-toggle='toc'] .nav .nav > li > a:hover, +nav[data-toggle='toc'] .nav .nav > li > a:focus { + padding-left: 15px; +} + +nav[data-toggle='toc'] .nav .nav > .active > a, +nav[data-toggle='toc'] .nav .nav > .active:hover > a, +nav[data-toggle='toc'] .nav .nav > .active:focus > a { + padding-left: 15px; + font-weight: 500; + font-size: 1.35rem; +} + +/* orcid ------------------------------------------------------------------- */ + +.orcid { + font-size: 16px; + color: #A6CE39; + /* margins are required by official ORCID trademark and display guidelines */ + margin-left:4px; + margin-right:4px; + vertical-align: middle; +} + +/* Reference index & topics ----------------------------------------------- */ + +.ref-index th {font-weight: normal;} + +.ref-index td {vertical-align: top; min-width: 100px} +.ref-index .icon {width: 40px;} +.ref-index .alias {width: 40%;} +.ref-index-icons .alias {width: calc(40% - 40px);} +.ref-index .title {width: 60%;} + +.ref-arguments th {text-align: right; padding-right: 10px;} +.ref-arguments th, .ref-arguments td {vertical-align: top; min-width: 100px} +.ref-arguments .name {width: 20%;} +.ref-arguments .desc {width: 80%;} + +/* Nice scrolling for wide elements --------------------------------------- */ + +table { + display: block; + overflow: auto; +} + +/* Syntax highlighting ---------------------------------------------------- */ + +pre { + word-wrap: normal; + word-break: normal; + border: 1px solid #eee; +} + +pre, code { + background-color: #f8f8f8; + color: #333; +} + +pre code { + overflow: auto; + word-wrap: normal; + white-space: pre; +} + +pre .img { + margin: 5px 0; +} + +pre .img img { + background-color: #fff; + display: block; + height: auto; +} + +code a, pre a { + color: #375f84; +} + 
+a.sourceLine:hover { + text-decoration: none; +} + +.fl {color: #1514b5;} +.fu {color: #000000;} /* function */ +.ch,.st {color: #036a07;} /* string */ +.kw {color: #264D66;} /* keyword */ +.co {color: #888888;} /* comment */ + +.message { color: black; font-weight: bolder;} +.error { color: orange; font-weight: bolder;} +.warning { color: #6A0366; font-weight: bolder;} + +/* Clipboard --------------------------*/ + +.hasCopyButton { + position: relative; +} + +.btn-copy-ex { + position: absolute; + right: 0; + top: 0; + visibility: hidden; +} + +.hasCopyButton:hover button.btn-copy-ex { + visibility: visible; +} + +/* headroom.js ------------------------ */ + +.headroom { + will-change: transform; + transition: transform 200ms linear; +} +.headroom--pinned { + transform: translateY(0%); +} +.headroom--unpinned { + transform: translateY(-100%); +} + +/* mark.js ----------------------------*/ + +mark { + background-color: rgba(255, 255, 51, 0.5); + border-bottom: 2px solid rgba(255, 153, 51, 0.3); + padding: 1px; +} + +/* vertical spacing after htmlwidgets */ +.html-widget { + margin-bottom: 10px; +} + +/* fontawesome ------------------------ */ + +.fab { + font-family: "Font Awesome 5 Brands" !important; +} + +/* don't display links in code chunks when printing */ +/* source: https://stackoverflow.com/a/10781533 */ +@media print { + code a:link:after, code a:visited:after { + content: ""; + } +} diff --git a/static/docs/pkgdown.js b/static/docs/pkgdown.js new file mode 100644 index 0000000000000000000000000000000000000000..7e7048faebb92b85ed06afddd1a8a4581241d6a4 --- /dev/null +++ b/static/docs/pkgdown.js @@ -0,0 +1,108 @@ +/* http://gregfranko.com/blog/jquery-best-practices/ */ +(function($) { + $(function() { + + $('.navbar-fixed-top').headroom(); + + $('body').css('padding-top', $('.navbar').height() + 10); + $(window).resize(function(){ + $('body').css('padding-top', $('.navbar').height() + 10); + }); + + $('[data-toggle="tooltip"]').tooltip(); + + var 
cur_path = paths(location.pathname); + var links = $("#navbar ul li a"); + var max_length = -1; + var pos = -1; + for (var i = 0; i < links.length; i++) { + if (links[i].getAttribute("href") === "#") + continue; + // Ignore external links + if (links[i].host !== location.host) + continue; + + var nav_path = paths(links[i].pathname); + + var length = prefix_length(nav_path, cur_path); + if (length > max_length) { + max_length = length; + pos = i; + } + } + + // Add class to parent
  • , and enclosing
  • if in dropdown + if (pos >= 0) { + var menu_anchor = $(links[pos]); + menu_anchor.parent().addClass("active"); + menu_anchor.closest("li.dropdown").addClass("active"); + } + }); + + function paths(pathname) { + var pieces = pathname.split("/"); + pieces.shift(); // always starts with / + + var end = pieces[pieces.length - 1]; + if (end === "index.html" || end === "") + pieces.pop(); + return(pieces); + } + + // Returns -1 if not found + function prefix_length(needle, haystack) { + if (needle.length > haystack.length) + return(-1); + + // Special case for length-0 haystack, since for loop won't run + if (haystack.length === 0) { + return(needle.length === 0 ? 0 : -1); + } + + for (var i = 0; i < haystack.length; i++) { + if (needle[i] != haystack[i]) + return(i); + } + + return(haystack.length); + } + + /* Clipboard --------------------------*/ + + function changeTooltipMessage(element, msg) { + var tooltipOriginalTitle=element.getAttribute('data-original-title'); + element.setAttribute('data-original-title', msg); + $(element).tooltip('show'); + element.setAttribute('data-original-title', tooltipOriginalTitle); + } + + if(ClipboardJS.isSupported()) { + $(document).ready(function() { + var copyButton = ""; + + $(".examples, div.sourceCode").addClass("hasCopyButton"); + + // Insert copy buttons: + $(copyButton).prependTo(".hasCopyButton"); + + // Initialize tooltips: + $('.btn-copy-ex').tooltip({container: 'body'}); + + // Initialize clipboard: + var clipboardBtnCopies = new ClipboardJS('[data-clipboard-copy]', { + text: function(trigger) { + return trigger.parentNode.textContent; + } + }); + + clipboardBtnCopies.on('success', function(e) { + changeTooltipMessage(e.trigger, 'Copied!'); + e.clearSelection(); + }); + + clipboardBtnCopies.on('error', function() { + changeTooltipMessage(e.trigger,'Press Ctrl+C or Command+C to copy'); + }); + }); + } +})(window.jQuery || window.$) diff --git a/static/docs/pkgdown.yml b/static/docs/pkgdown.yml new file mode 100644 
index 0000000000000000000000000000000000000000..2d96c50e44dfe65990a835e67dfd481533019653 --- /dev/null +++ b/static/docs/pkgdown.yml @@ -0,0 +1,23 @@ +pandoc: 2.7.3 +pkgdown: 1.6.1 +pkgdown_sha: ~ +articles: + extending-autograd: extending-autograd.html + getting-started/autograd: autograd.html + getting-started/control-flow-and-weight-sharing: control-flow-and-weight-sharing.html + getting-started/custom-nn: custom-nn.html + getting-started/neural-networks: neural-networks.html + getting-started/new-autograd-functions: new-autograd-functions.html + getting-started/nn: nn.html + getting-started/optim: optim.html + getting-started/tensors-and-autograd: tensors-and-autograd.html + getting-started/tensors: tensors.html + getting-started/warmup: warmup.html + getting-started/what-is-torch: what-is-torch.html + indexing: indexing.html + loading-data: loading-data.html + tensor/index: index.html + tensor-creation: tensor-creation.html + using-autograd: using-autograd.html +last_built: 2020-09-17T22:12Z + diff --git a/static/docs/reference/AutogradContext.html b/static/docs/reference/AutogradContext.html new file mode 100644 index 0000000000000000000000000000000000000000..08ef97f1a8edc4b7b801bc52c8ef47612e6ad650 --- /dev/null +++ b/static/docs/reference/AutogradContext.html @@ -0,0 +1,338 @@ + + + + + + + + +Class representing the context. — AutogradContext • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Class representing the context.

    +

    Class representing the context.

    +
    + + + +

    Public fields

    + +

    +
    ptr

    (Dev related) pointer to the context c++ object.

    + +

    +

    Active bindings

    + +

    +
    needs_input_grad

named list of booleans indicating, for each argument of forward, whether it requires grad.

    + +
    saved_variables

    list of objects that were saved for backward via save_for_backward.

    + +

    +

    Methods

    + + +

    Public methods

    + + +


    +

    Method new()

    +

    (Dev related) Initializes the context. Not user related.

    Usage

    +

    AutogradContext$new(
    +  ptr,
    +  env,
    +  argument_names = NULL,
    +  argument_needs_grad = NULL
    +)

    + +

    Arguments

    +

    +
    ptr

    pointer to the c++ object

    + +
    env

    environment that encloses both forward and backward

    + +
    argument_names

    names of forward arguments

    + +
    argument_needs_grad

    whether each argument in forward needs grad.

    + +

    +


    +

    Method save_for_backward()

    +

    Saves given objects for a future call to backward().

    +

    This should be called at most once, and only from inside the forward() +method.

    +

    Later, saved objects can be accessed through the saved_variables attribute. +Before returning them to the user, a check is made to ensure they weren’t used +in any in-place operation that modified their content.

    +

    Arguments can also be any kind of R object.

    Usage

    +

    AutogradContext$save_for_backward(...)

    + +

    Arguments

    +

    +
    ...

    any kind of R object that will be saved for the backward pass. +It's common to pass named arguments.

    + +

    +


    +

    Method mark_non_differentiable()

    +

    Marks outputs as non-differentiable.

    +

    This should be called at most once, only from inside the forward() method, +and all arguments should be outputs.

    +

This will mark outputs as not requiring gradients, increasing the efficiency +of backward computation. You still need to accept a gradient for each output +in backward(), but it’s always going to be a zero tensor with the same +shape as the corresponding output.

    +

    This is used e.g. for indices returned from a max Function.

    Usage

    +

    AutogradContext$mark_non_differentiable(...)

    + +

    Arguments

    +

    +
    ...

    non-differentiable outputs.

    + +

    +


    +

    Method mark_dirty()

    +

    Marks given tensors as modified in an in-place operation.

    +

    This should be called at most once, only from inside the forward() method, +and all arguments should be inputs.

    +

    Every tensor that’s been modified in-place in a call to forward() should +be given to this function, to ensure correctness of our checks. It doesn’t +matter whether the function is called before or after modification.

    Usage

    +

    AutogradContext$mark_dirty(...)

    + +

    Arguments

    +

    +
    ...

    tensors that are modified in-place.

    + +

    +


    +

    Method clone()

    +

    The objects of this class are cloneable with this method.

    Usage

    +

    AutogradContext$clone(deep = FALSE)

    + +

    Arguments

    +

    +
    deep

    Whether to make a deep clone.

    + +

    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/Rplot001.png b/static/docs/reference/Rplot001.png new file mode 100644 index 0000000000000000000000000000000000000000..17a358060aed2a86950757bbd25c6f92c08c458f Binary files /dev/null and b/static/docs/reference/Rplot001.png differ diff --git a/static/docs/reference/as_array.html b/static/docs/reference/as_array.html new file mode 100644 index 0000000000000000000000000000000000000000..a903d715ca2acd6a3479f55eaa3b31bdfa97f521 --- /dev/null +++ b/static/docs/reference/as_array.html @@ -0,0 +1,237 @@ + + + + + + + + +Converts to array — as_array • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Converts to array

    +
    + +
    as_array(x)
    + +

    Arguments

    + + + + + + +
    x

    object to be converted into an array

    + + +
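A minimal usage sketch (assuming the torch package is installed), converting a tensor back to an R matrix/array:

```r
library(torch)

# create a 2x3 tensor from an R matrix and convert it back
x <- torch_tensor(matrix(1:6, nrow = 2, ncol = 3))
a <- as_array(x)
dim(a)  # 2 3
```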
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/autograd_backward.html b/static/docs/reference/autograd_backward.html new file mode 100644 index 0000000000000000000000000000000000000000..95eae9d7be6a0186c2d2a31de0cb7771d8e9482e --- /dev/null +++ b/static/docs/reference/autograd_backward.html @@ -0,0 +1,291 @@ + + + + + + + + +Computes the sum of gradients of given tensors w.r.t. graph leaves. — autograd_backward • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

The graph is differentiated using the chain rule. If any of the tensors are +non-scalar (i.e. their data has more than one element) and require gradient, +the Jacobian-vector product is computed. In this case the function +additionally requires specifying grad_tensors: a sequence of +matching length that contains the “vector” in the Jacobian-vector product, +usually the gradient of the differentiated function w.r.t. the corresponding +tensors (NULL is an acceptable value for all tensors that don’t need gradient +tensors).

    +
    + +
    autograd_backward(
    +  tensors,
    +  grad_tensors = NULL,
    +  retain_graph = create_graph,
    +  create_graph = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    tensors

    (list of Tensor) – Tensors of which the derivative will +be computed.

    grad_tensors

(list of (Tensor or NULL)) – The “vector” in the Jacobian-vector product, usually gradients w.r.t. each element of the corresponding tensors. NULL values can be specified for scalar tensors or ones that don’t require grad. If a NULL value would be acceptable for all +grad_tensors, then this argument is optional.

    retain_graph

    (bool, optional) – If FALSE, the graph used to compute +the grad will be freed. Note that in nearly all cases setting this option to +TRUE is not needed and often can be worked around in a much more efficient +way. Defaults to the value of create_graph.

    create_graph

    (bool, optional) – If TRUE, graph of the derivative will +be constructed, allowing to compute higher order derivative products. +Defaults to FALSE.

    + +

    Details

    + +

    This function accumulates gradients in the leaves - you might need to zero +them before calling it.

    + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_tensor(1, requires_grad = TRUE) +y <- 2 * x + +a <- torch_tensor(1, requires_grad = TRUE) +b <- 3 * a + +autograd_backward(list(y, b)) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/autograd_function.html b/static/docs/reference/autograd_function.html new file mode 100644 index 0000000000000000000000000000000000000000..50ff1cfbc391c29ebef441ae0a280b69e3edb9b3 --- /dev/null +++ b/static/docs/reference/autograd_function.html @@ -0,0 +1,279 @@ + + + + + + + + +Records operation history and defines formulas for differentiating ops. — autograd_function • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Every operation performed on tensors creates a new function object that +performs the computation and records that it happened. The history is +retained in the form of a DAG of functions, with edges denoting data +dependencies (input <- output). Then, when backward is called, the graph is +processed in topological order, by calling the backward() methods of each +Function object and passing the returned gradients on to the next Functions.

    +
    + +
    autograd_function(forward, backward)
    + +

    Arguments

    + + + + + + + + + + +
    forward

    Performs the operation. It must accept a context ctx as the first argument, +followed by any number of arguments (tensors or other types). The context can be +used to store tensors that can be then retrieved during the backward pass. +See AutogradContext for more information about context methods.

    backward

Defines a formula for differentiating the operation. It must accept +a context ctx as the first argument, followed by as many arguments as forward() +returned outputs, and it should return a named list. Each argument is the gradient w.r.t +the given output, and each element in the returned list should be the gradient +w.r.t. the corresponding input. The context can be used to retrieve tensors saved +during the forward pass. It also has an attribute ctx$needs_input_grad as a +named list of booleans representing whether each input needs gradient. +E.g., backward() will have ctx$needs_input_grad$input = TRUE if the input +argument to forward() needs the gradient computed w.r.t. the output. +See AutogradContext for more information about context methods.

    + + +

    Examples

    +
if (torch_is_installed()) { + +exp2 <- autograd_function( + forward = function(ctx, i) { + result <- i$exp() + ctx$save_for_backward(result = result) + result + }, + backward = function(ctx, grad_output) { + list(i = grad_output * ctx$saved_variables$result) + } +) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/autograd_grad.html b/static/docs/reference/autograd_grad.html new file mode 100644 index 0000000000000000000000000000000000000000..0ed04025423dcc64116709a1ec486ece2d4bc2a6 --- /dev/null +++ b/static/docs/reference/autograd_grad.html @@ -0,0 +1,305 @@ + + + + + + + + +Computes and returns the sum of gradients of outputs w.r.t. the inputs. — autograd_grad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

grad_outputs should be a list of length matching outputs, containing the “vector” +in the Jacobian-vector product, usually the pre-computed gradients w.r.t. each of +the outputs. If an output doesn’t require_grad, then the gradient can be NULL.

    +
    + +
    autograd_grad(
    +  outputs,
    +  inputs,
    +  grad_outputs = NULL,
    +  retain_graph = create_graph,
    +  create_graph = FALSE,
    +  allow_unused = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    outputs

    (sequence of Tensor) – outputs of the differentiated function.

    inputs

    (sequence of Tensor) – Inputs w.r.t. which the gradient will be +returned (and not accumulated into .grad).

    grad_outputs

(sequence of Tensor) – The “vector” in the Jacobian-vector +product. Usually gradients w.r.t. each output. NULL values can be specified for +scalar tensors or ones that don’t require grad. If a NULL value would be acceptable +for all grad_tensors, then this argument is optional. Default: NULL.

    retain_graph

    (bool, optional) – If FALSE, the graph used to compute the +grad will be freed. Note that in nearly all cases setting this option to TRUE is +not needed and often can be worked around in a much more efficient way. +Defaults to the value of create_graph.

    create_graph

(bool, optional) – If TRUE, the graph of the derivative will be constructed, allowing higher order derivative products to be computed. Default: FALSE.

    allow_unused

    (bool, optional) – If FALSE, specifying inputs that were +not used when computing outputs (and therefore their grad is always zero) is an +error. Defaults to FALSE

    + +

    Details

    + +

    If only_inputs is TRUE, the function will only return a list of gradients w.r.t +the specified inputs. If it’s FALSE, then gradient w.r.t. all remaining leaves +will still be computed, and will be accumulated into their .grad attribute.

    + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_tensor(0.5, requires_grad = TRUE) +b <- torch_tensor(0.9, requires_grad = TRUE) +x <- torch_tensor(runif(100)) +y <- 2 * x + 1 +loss <- (y - (w*x + b))^2 +loss <- loss$mean() + +o <- autograd_grad(loss, list(w, b)) +o + +} +
    #> [[1]] +#> torch_tensor +#> -0.9935 +#> [ CPUFloatType{1} ] +#> +#> [[2]] +#> torch_tensor +#> -1.6206 +#> [ CPUFloatType{1} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/autograd_set_grad_mode.html b/static/docs/reference/autograd_set_grad_mode.html new file mode 100644 index 0000000000000000000000000000000000000000..485d2028f8b7bf837e96a7bc62db7aedc32e3681 --- /dev/null +++ b/static/docs/reference/autograd_set_grad_mode.html @@ -0,0 +1,237 @@ + + + + + + + + +Set grad mode — autograd_set_grad_mode • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sets or disables gradient history.

    +
    + +
    autograd_set_grad_mode(enabled)
    + +

    Arguments

    + + + + + + +
    enabled

bool; whether to enable or disable gradient recording.

    + + +
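A minimal sketch of toggling gradient recording (assuming the torch package is installed):

```r
library(torch)

x <- torch_tensor(1, requires_grad = TRUE)

autograd_set_grad_mode(enabled = FALSE)
y <- x * 2   # performed while gradient recording is off
autograd_set_grad_mode(enabled = TRUE)

y$requires_grad  # FALSE: the operation was not tracked
```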
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/cuda_current_device.html b/static/docs/reference/cuda_current_device.html new file mode 100644 index 0000000000000000000000000000000000000000..528914706714a2c1aa4e53bc2ba8b899af005e4e --- /dev/null +++ b/static/docs/reference/cuda_current_device.html @@ -0,0 +1,229 @@ + + + + + + + + +Returns the index of a currently selected device. — cuda_current_device • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns the index of a currently selected device.

    +
    + +
    cuda_current_device()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/cuda_device_count.html b/static/docs/reference/cuda_device_count.html new file mode 100644 index 0000000000000000000000000000000000000000..8e58bf3ed050374ca3bf997364d34920f71d654c --- /dev/null +++ b/static/docs/reference/cuda_device_count.html @@ -0,0 +1,229 @@ + + + + + + + + +Returns the number of GPUs available. — cuda_device_count • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns the number of GPUs available.

    +
    + +
    cuda_device_count()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/cuda_is_available.html b/static/docs/reference/cuda_is_available.html new file mode 100644 index 0000000000000000000000000000000000000000..2282ae1373fa1ce3000d672d8d5311d170942a8e --- /dev/null +++ b/static/docs/reference/cuda_is_available.html @@ -0,0 +1,229 @@ + + + + + + + + +Returns a bool indicating if CUDA is currently available. — cuda_is_available • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns a bool indicating if CUDA is currently available.

    +
    + +
    cuda_is_available()
    + + + +
    + +
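A common pattern is to select the computation device based on CUDA availability; a minimal sketch (assuming the torch package is installed):

```r
library(torch)

# fall back to CPU when no GPU is available
device <- if (cuda_is_available()) torch_device("cuda") else torch_device("cpu")
x <- torch_randn(2, 2, device = device)
```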
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/dataloader.html b/static/docs/reference/dataloader.html new file mode 100644 index 0000000000000000000000000000000000000000..5f20021f4d164ef6a3289d0039ec4c3756a4053b --- /dev/null +++ b/static/docs/reference/dataloader.html @@ -0,0 +1,310 @@ + + + + + + + + +Data loader. Combines a dataset and a sampler, and provides +single- or multi-process iterators over the dataset. — dataloader • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Data loader. Combines a dataset and a sampler, and provides +single- or multi-process iterators over the dataset.

    +
    + +
    dataloader(
    +  dataset,
    +  batch_size = 1,
    +  shuffle = FALSE,
    +  sampler = NULL,
    +  batch_sampler = NULL,
    +  num_workers = 0,
    +  collate_fn = NULL,
    +  pin_memory = FALSE,
    +  drop_last = FALSE,
    +  timeout = 0,
    +  worker_init_fn = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    dataset

    (Dataset): dataset from which to load the data.

    batch_size

    (int, optional): how many samples per batch to load +(default: 1).

    shuffle

    (bool, optional): set to TRUE to have the data reshuffled +at every epoch (default: FALSE).

    sampler

(Sampler, optional): defines the strategy to draw samples from +the dataset. If specified, shuffle must be FALSE.

    batch_sampler

    (Sampler, optional): like sampler, but returns a batch of +indices at a time. Mutually exclusive with batch_size, +shuffle, sampler, and drop_last.

    num_workers

    (int, optional): how many subprocesses to use for data +loading. 0 means that the data will be loaded in the main process. +(default: 0)

    collate_fn

    (callable, optional): merges a list of samples to form a mini-batch.

    pin_memory

    (bool, optional): If TRUE, the data loader will copy tensors +into CUDA pinned memory before returning them. If your data elements +are a custom type, or your collate_fn returns a batch that is a custom type +see the example below.

    drop_last

    (bool, optional): set to TRUE to drop the last incomplete batch, +if the dataset size is not divisible by the batch size. If FALSE and +the size of dataset is not divisible by the batch size, then the last batch +will be smaller. (default: FALSE)

    timeout

    (numeric, optional): if positive, the timeout value for collecting a batch +from workers. Should always be non-negative. (default: 0)

    worker_init_fn

    (callable, optional): If not NULL, this will be called on each +worker subprocess with the worker id (an int in [0, num_workers - 1]) as +input, after seeding and before data loading. (default: NULL)

    + + +
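A minimal sketch combining a tensor dataset (tensor_dataset() is assumed to be available in the package) with a dataloader and the iterator helpers documented on the dataloader_make_iter and dataloader_next pages:

```r
library(torch)

x <- torch_randn(10, 3)
y <- torch_randn(10, 1)
ds <- tensor_dataset(x, y)

dl <- dataloader(ds, batch_size = 4, shuffle = TRUE)

# draw one mini-batch of 4 samples
it <- dataloader_make_iter(dl)
batch <- dataloader_next(it)
```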
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/dataloader_make_iter.html b/static/docs/reference/dataloader_make_iter.html new file mode 100644 index 0000000000000000000000000000000000000000..ed0b16f265e0ef9e55c133d3274002b24cf60342 --- /dev/null +++ b/static/docs/reference/dataloader_make_iter.html @@ -0,0 +1,237 @@ + + + + + + + + +Creates an iterator from a DataLoader — dataloader_make_iter • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates an iterator from a DataLoader

    +
    + +
    dataloader_make_iter(dataloader)
    + +

    Arguments

    + + + + + + +
    dataloader

    a dataloader object.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/dataloader_next.html b/static/docs/reference/dataloader_next.html new file mode 100644 index 0000000000000000000000000000000000000000..966e40f3bfdc3b3efcbfe041f639df12ac994e1a --- /dev/null +++ b/static/docs/reference/dataloader_next.html @@ -0,0 +1,237 @@ + + + + + + + + +Get the next element of a dataloader iterator — dataloader_next • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Get the next element of a dataloader iterator

    +
    + +
    dataloader_next(iter)
    + +

    Arguments

    + + + + + + +
    iter

    a DataLoader iter created with dataloader_make_iter.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/dataset.html b/static/docs/reference/dataset.html new file mode 100644 index 0000000000000000000000000000000000000000..558e19b17d6ed5ea6c9f78de14b62f5af22e9361 --- /dev/null +++ b/static/docs/reference/dataset.html @@ -0,0 +1,267 @@ + + + + + + + + +An abstract class representing a Dataset. — dataset • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

All datasets that represent a map from keys to data samples should subclass +it. All subclasses should overwrite get_item, supporting fetching a +data sample for a given key. Subclasses could also optionally overwrite +length, which is expected to return the size of the dataset; it is used by many +sampler implementations and the default options +of the dataloader.

    +
    + +
    dataset(name = NULL, inherit = Dataset, ..., parent_env = parent.frame())
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    name

a name for the dataset. It is also used as the class +for it.

    inherit

    you can optionally inherit from a dataset when creating a +new dataset.

    ...

    public methods for the dataset class

    parent_env

    An environment to use as the parent of newly-created +objects.

    + +

    Note

    + +

~torch.utils.data.DataLoader by default constructs an index +sampler that yields integral indices. To make it work with a map-style +dataset with non-integral indices/keys, a custom sampler must be provided.
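As an illustration of the description above, a minimal map-style dataset might look like the following. This is a sketch: the method names .getitem and .length are the ones used by recent torch releases and may differ slightly from the get_item/length wording above.

```r
library(torch)

# A sketch of a map-style dataset: returns the square of the i-th integer.
# Method names (.getitem / .length) follow recent torch releases.
squares_ds <- dataset(
  name = "squares_dataset",
  initialize = function(n) {
    self$x <- torch_arange(1, n)
  },
  .getitem = function(i) {
    self$x[i]^2
  },
  .length = function() {
    self$x$size(1)
  }
)

ds <- squares_ds(10)
length(ds)       # size of the dataset
ds$.getitem(3)   # the third sample
```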

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/default_dtype.html b/static/docs/reference/default_dtype.html new file mode 100644 index 0000000000000000000000000000000000000000..a517ff8bfbcd2e5ac8c966c454580971fa1b289c --- /dev/null +++ b/static/docs/reference/default_dtype.html @@ -0,0 +1,240 @@ + + + + + + + + +Gets and sets the default floating point dtype. — torch_set_default_dtype • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gets and sets the default floating point dtype.

    +
    + +
    torch_set_default_dtype(d)
    +
    +torch_get_default_dtype()
    + +

    Arguments

    + + + + + + +
    d

    The default floating point dtype to set. Initially set to +torch_float().
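A short sketch of how the getter and setter pair is typically used (restoring the initial default afterwards):

```r
library(torch)

torch_get_default_dtype()                 # initially torch_float()
torch_set_default_dtype(torch_double())
x <- torch_randn(2, 2)
x$dtype                                   # now torch_double()
torch_set_default_dtype(torch_float())    # restore the default
```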

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/enumerate.dataloader.html b/static/docs/reference/enumerate.dataloader.html new file mode 100644 index 0000000000000000000000000000000000000000..cdace84a567575119d25890890bb6838c3e572f4 --- /dev/null +++ b/static/docs/reference/enumerate.dataloader.html @@ -0,0 +1,246 @@ + + + + + + + + +Enumerate an iterator — enumerate.dataloader • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Enumerate an iterator

    +
    + +
    # S3 method for dataloader
    +enumerate(x, max_len = 1e+06, ...)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    x

    the generator to enumerate.

    max_len

    maximum number of iterations.

    ...

    passed to specific methods.
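A sketch of enumerating a dataloader's batches in a for loop (tensor_dataset() is assumed to be available in this version of torch):

```r
library(torch)

# A minimal sketch: enumerate() yields the batches of a dataloader,
# up to max_len iterations.
dl <- dataloader(tensor_dataset(torch_randn(8, 2)), batch_size = 4)
for (batch in enumerate(dl)) {
  print(batch)
}
```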

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/enumerate.html b/static/docs/reference/enumerate.html new file mode 100644 index 0000000000000000000000000000000000000000..5726147af6131a22299222fec509f1f812a93d3b --- /dev/null +++ b/static/docs/reference/enumerate.html @@ -0,0 +1,241 @@ + + + + + + + + +Enumerate an iterator — enumerate • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Enumerate an iterator

    +
    + +
    enumerate(x, ...)
    + +

    Arguments

    + + + + + + + + + + +
    x

    the generator to enumerate.

    ...

    passed to specific methods.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/figures/torch.png b/static/docs/reference/figures/torch.png new file mode 100644 index 0000000000000000000000000000000000000000..61d24b86074b110f4cf3298f417c4148938c8f05 Binary files /dev/null and b/static/docs/reference/figures/torch.png differ diff --git a/static/docs/reference/index.html b/static/docs/reference/index.html new file mode 100644 index 0000000000000000000000000000000000000000..de6c63a9ca2b1214c3e6614fe428667aade2d4ff --- /dev/null +++ b/static/docs/reference/index.html @@ -0,0 +1,3157 @@ + + + + + + + + +Function reference • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +

    Tensor creation utilities

    +

    +
    +

    torch_empty()

    +

    Empty

    +

    torch_arange()

    +

    Arange

    +

    torch_eye()

    +

    Eye

    +

    torch_full()

    +

    Full

    +

    torch_linspace()

    +

    Linspace

    +

    torch_logspace()

    +

    Logspace

    +

    torch_ones()

    +

    Ones

    +

    torch_rand()

    +

    Rand

    +

    torch_randint()

    +

    Randint

    +

    torch_randn()

    +

    Randn

    +

    torch_randperm()

    +

    Randperm

    +

    torch_zeros()

    +

    Zeros

    +

    torch_empty_like()

    +

    Empty_like

    +

    torch_full_like()

    +

    Full_like

    +

    torch_ones_like()

    +

    Ones_like

    +

    torch_rand_like()

    +

    Rand_like

    +

    torch_randint_like()

    +

    Randint_like

    +

    torch_randn_like()

    +

    Randn_like

    +

    torch_zeros_like()

    +

    Zeros_like

    +

    as_array()

    +

    Converts to array

    +

    Tensor attributes

    +

    +
    +

    torch_set_default_dtype() torch_get_default_dtype()

    +

    Gets and sets the default floating point dtype.

    +

    is_torch_device()

    +

    Checks if object is a device

    +

    is_torch_dtype()

    +

    Check if object is a torch data type

    +

    torch_float32() torch_float() torch_float64() torch_double() torch_float16() torch_half() torch_uint8() torch_int8() torch_int16() torch_short() torch_int32() torch_int() torch_int64() torch_long() torch_bool() torch_quint8() torch_qint8() torch_qint32()

    +

    Torch data types

    +

    torch_finfo()

    +

    Floating point type info

    +

    torch_iinfo()

    +

    Integer type info

    +

    torch_per_channel_affine() torch_per_tensor_affine() torch_per_channel_symmetric() torch_per_tensor_symmetric()

    +

Creates the corresponding QScheme object

    +

    torch_reduction_sum() torch_reduction_mean() torch_reduction_none()

    +

Creates the reduction object

    +

    is_torch_layout()

    +

    Check if an object is a torch layout.

    +

    is_torch_memory_format()

    +

    Check if an object is a memory format

    +

    is_torch_qscheme()

    +

    Checks if an object is a QScheme

    +

    is_undefined_tensor()

    +

    Checks if a tensor is undefined

    +

    Serialization

    +

    +
    +

    load_state_dict()

    +

    Load a state dict file

    +

    torch_load()

    +

    Loads a saved object

    +

    torch_save()

    +

    Saves an object to a disk file.

    +

    Mathematical operations on tensors

    +

    +
    +

    torch_abs()

    +

    Abs

    +

    torch_acos()

    +

    Acos

    +

    torch_adaptive_avg_pool1d()

    +

    Adaptive_avg_pool1d

    +

    torch_add()

    +

    Add

    +

    torch_addbmm()

    +

    Addbmm

    +

    torch_addcdiv()

    +

    Addcdiv

    +

    torch_addcmul()

    +

    Addcmul

    +

    torch_addmm()

    +

    Addmm

    +

    torch_addmv()

    +

    Addmv

    +

    torch_addr()

    +

    Addr

    +

    torch_allclose()

    +

    Allclose

    +

    torch_angle()

    +

    Angle

    +

    torch_argmax()

    +

    Argmax

    +

    torch_argmin()

    +

    Argmin

    +

    torch_argsort()

    +

    Argsort

    +

    torch_as_strided()

    +

    As_strided

    +

    torch_asin()

    +

    Asin

    +

    torch_atan()

    +

    Atan

    +

    torch_atan2()

    +

    Atan2

    +

    torch_avg_pool1d()

    +

    Avg_pool1d

    +

    torch_baddbmm()

    +

    Baddbmm

    +

    torch_bartlett_window()

    +

    Bartlett_window

    +

    torch_bernoulli()

    +

    Bernoulli

    +

    torch_bincount()

    +

    Bincount

    +

    torch_bitwise_and()

    +

    Bitwise_and

    +

    torch_bitwise_not()

    +

    Bitwise_not

    +

    torch_bitwise_or()

    +

    Bitwise_or

    +

    torch_bitwise_xor()

    +

    Bitwise_xor

    +

    torch_blackman_window()

    +

    Blackman_window

    +

    torch_bmm()

    +

    Bmm

    +

    torch_broadcast_tensors()

    +

    Broadcast_tensors

    +

    torch_can_cast()

    +

    Can_cast

    +

    torch_cartesian_prod()

    +

    Cartesian_prod

    +

    torch_cat()

    +

    Cat

    +

    torch_cdist()

    +

    Cdist

    +

    torch_ceil()

    +

    Ceil

    +

    torch_celu()

    +

    Celu

    +

    torch_celu_()

    +

    Celu_

    +

    torch_chain_matmul()

    +

    Chain_matmul

    +

    torch_cholesky()

    +

    Cholesky

    +

    torch_cholesky_inverse()

    +

    Cholesky_inverse

    +

    torch_cholesky_solve()

    +

    Cholesky_solve

    +

    torch_chunk()

    +

    Chunk

    +

    torch_clamp()

    +

    Clamp

    +

    torch_combinations()

    +

    Combinations

    +

    torch_conj()

    +

    Conj

    +

    torch_conv1d()

    +

    Conv1d

    +

    torch_conv2d()

    +

    Conv2d

    +

    torch_conv3d()

    +

    Conv3d

    +

    torch_conv_tbc()

    +

    Conv_tbc

    +

    torch_conv_transpose1d()

    +

    Conv_transpose1d

    +

    torch_conv_transpose2d()

    +

    Conv_transpose2d

    +

    torch_conv_transpose3d()

    +

    Conv_transpose3d

    +

    torch_cos()

    +

    Cos

    +

    torch_cosh()

    +

    Cosh

    +

    torch_cosine_similarity()

    +

    Cosine_similarity

    +

    torch_cross()

    +

    Cross

    +

    torch_cummax()

    +

    Cummax

    +

    torch_cummin()

    +

    Cummin

    +

    torch_cumprod()

    +

    Cumprod

    +

    torch_cumsum()

    +

    Cumsum

    +

    torch_det()

    +

    Det

    +

    torch_device()

    +

    Create a Device object

    +

    torch_diag()

    +

    Diag

    +

    torch_diag_embed()

    +

    Diag_embed

    +

    torch_diagflat()

    +

    Diagflat

    +

    torch_diagonal()

    +

    Diagonal

    +

    torch_digamma()

    +

    Digamma

    +

    torch_dist()

    +

    Dist

    +

    torch_div()

    +

    Div

    +

    torch_dot()

    +

    Dot

    +

    torch_eig()

    +

    Eig

    +

    torch_einsum()

    +

    Einsum

    +

    torch_empty_strided()

    +

    Empty_strided

    +

    torch_eq()

    +

    Eq

    +

    torch_equal()

    +

    Equal

    +

    torch_erf()

    +

    Erf

    +

    torch_erfc()

    +

    Erfc

    +

    torch_erfinv()

    +

    Erfinv

    +

    torch_exp()

    +

    Exp

    +

    torch_expm1()

    +

    Expm1

    +

    torch_fft()

    +

    Fft

    +

    torch_flatten()

    +

    Flatten

    +

    torch_flip()

    +

    Flip

    +

    torch_floor()

    +

    Floor

    +

    torch_floor_divide()

    +

    Floor_divide

    +

    torch_fmod()

    +

    Fmod

    +

    torch_frac()

    +

    Frac

    +

    torch_gather()

    +

    Gather

    +

    torch_ge()

    +

    Ge

    +

    torch_generator()

    +

    Create a Generator object

    +

    torch_geqrf()

    +

    Geqrf

    +

    torch_ger()

    +

    Ger

    +

    torch_gt()

    +

    Gt

    +

    torch_hamming_window()

    +

    Hamming_window

    +

    torch_hann_window()

    +

    Hann_window

    +

    torch_histc()

    +

    Histc

    +

    torch_ifft()

    +

    Ifft

    +

    torch_imag()

    +

    Imag

    +

    torch_index_select()

    +

    Index_select

    +

    torch_inverse()

    +

    Inverse

    +

    torch_irfft()

    +

    Irfft

    +

    torch_is_complex()

    +

    Is_complex

    +

    torch_is_floating_point()

    +

    Is_floating_point

    +

    torch_is_installed()

    +

    Verifies if torch is installed

    +

    torch_isfinite()

    +

    Isfinite

    +

    torch_isinf()

    +

    Isinf

    +

    torch_isnan()

    +

    Isnan

    +

    torch_kthvalue()

    +

    Kthvalue

    +

    torch_strided() torch_sparse_coo()

    +

    Creates the corresponding layout

    +

    torch_le()

    +

    Le

    +

    torch_lerp()

    +

    Lerp

    +

    torch_lgamma()

    +

    Lgamma

    +

    torch_log()

    +

    Log

    +

    torch_log10()

    +

    Log10

    +

    torch_log1p()

    +

    Log1p

    +

    torch_log2()

    +

    Log2

    +

    torch_logdet()

    +

    Logdet

    +

    torch_logical_and()

    +

    Logical_and

    +

    torch_logical_not

    +

    Logical_not

    +

    torch_logical_or()

    +

    Logical_or

    +

    torch_logical_xor()

    +

    Logical_xor

    +

    torch_logsumexp()

    +

    Logsumexp

    +

    torch_lstsq()

    +

    Lstsq

    +

    torch_lt()

    +

    Lt

    +

    torch_lu()

    +

    LU

    +

    torch_lu_solve()

    +

    Lu_solve

    +

    torch_manual_seed()

    +

    Sets the seed for generating random numbers.

    +

    torch_masked_select()

    +

    Masked_select

    +

    torch_matmul()

    +

    Matmul

    +

    torch_matrix_power()

    +

    Matrix_power

    +

    torch_matrix_rank()

    +

    Matrix_rank

    +

    torch_max

    +

    Max

    +

    torch_mean()

    +

    Mean

    +

    torch_median()

    +

    Median

    +

    torch_contiguous_format() torch_preserve_format() torch_channels_last_format()

    +

    Memory format

    +

    torch_meshgrid()

    +

    Meshgrid

    +

    torch_min

    +

    Min

    +

    torch_mm()

    +

    Mm

    +

    torch_mode()

    +

    Mode

    +

    torch_mul()

    +

    Mul

    +

    torch_multinomial()

    +

    Multinomial

    +

    torch_mv()

    +

    Mv

    +

    torch_mvlgamma()

    +

    Mvlgamma

    +

    torch_narrow()

    +

    Narrow

    +

    torch_ne()

    +

    Ne

    +

    torch_neg()

    +

    Neg

    +

    torch_nonzero()

    +

    Nonzero

    +

    torch_norm()

    +

    Norm

    +

    torch_normal()

    +

    Normal

    +

    torch_orgqr()

    +

    Orgqr

    +

    torch_ormqr()

    +

    Ormqr

    +

    torch_pdist()

    +

    Pdist

    +

    torch_pinverse()

    +

    Pinverse

    +

    torch_pixel_shuffle()

    +

    Pixel_shuffle

    +

    torch_poisson()

    +

    Poisson

    +

    torch_polygamma()

    +

    Polygamma

    +

    torch_pow()

    +

    Pow

    +

    torch_prod()

    +

    Prod

    +

    torch_promote_types()

    +

    Promote_types

    +

    torch_qr()

    +

    Qr

    +

    torch_quantize_per_channel()

    +

    Quantize_per_channel

    +

    torch_quantize_per_tensor()

    +

    Quantize_per_tensor

    +

    torch_range()

    +

    Range

    +

    torch_real()

    +

    Real

    +

    torch_reciprocal()

    +

    Reciprocal

    +

    torch_relu()

    +

    Relu

    +

    torch_relu_()

    +

    Relu_

    +

    torch_remainder()

    +

    Remainder

    +

    torch_renorm()

    +

    Renorm

    +

    torch_repeat_interleave()

    +

    Repeat_interleave

    +

    torch_reshape()

    +

    Reshape

    +

    torch_result_type()

    +

    Result_type

    +

    torch_rfft()

    +

    Rfft

    +

    torch_roll()

    +

    Roll

    +

    torch_rot90()

    +

    Rot90

    +

    torch_round()

    +

    Round

    +

    torch_rrelu_()

    +

    Rrelu_

    +

    torch_rsqrt()

    +

    Rsqrt

    +

    torch_selu()

    +

    Selu

    +

    torch_selu_()

    +

    Selu_

    +

    torch_sigmoid()

    +

    Sigmoid

    +

    torch_sign()

    +

    Sign

    +

    torch_sin()

    +

    Sin

    +

    torch_sinh()

    +

    Sinh

    +

    torch_slogdet()

    +

    Slogdet

    +

    torch_solve()

    +

    Solve

    +

    torch_sort()

    +

    Sort

    +

    torch_sparse_coo_tensor()

    +

    Sparse_coo_tensor

    +

    torch_split()

    +

    Split

    +

    torch_sqrt()

    +

    Sqrt

    +

    torch_square()

    +

    Square

    +

    torch_squeeze()

    +

    Squeeze

    +

    torch_stack()

    +

    Stack

    +

    torch_std()

    +

    Std

    +

    torch_std_mean()

    +

    Std_mean

    +

    torch_stft()

    +

    Stft

    +

    torch_sum()

    +

    Sum

    +

    torch_svd()

    +

    Svd

    +

    torch_symeig()

    +

    Symeig

    +

    torch_t()

    +

    T

    +

    torch_take()

    +

    Take

    +

    torch_tan()

    +

    Tan

    +

    torch_tanh()

    +

    Tanh

    +

    torch_tensor()

    +

    Converts R objects to a torch tensor

    +

    torch_tensordot()

    +

    Tensordot

    +

    torch_threshold_()

    +

    Threshold_

    +

    torch_topk()

    +

    Topk

    +

    torch_trace()

    +

    Trace

    +

    torch_transpose()

    +

    Transpose

    +

    torch_trapz()

    +

    Trapz

    +

    torch_triangular_solve()

    +

    Triangular_solve

    +

    torch_tril()

    +

    Tril

    +

    torch_tril_indices()

    +

    Tril_indices

    +

    torch_triu()

    +

    Triu

    +

    torch_triu_indices()

    +

    Triu_indices

    +

    torch_true_divide()

    +

    TRUE_divide

    +

    torch_trunc()

    +

    Trunc

    +

    torch_unbind()

    +

    Unbind

    +

    torch_unique_consecutive()

    +

    Unique_consecutive

    +

    torch_unsqueeze()

    +

    Unsqueeze

    +

    torch_var()

    +

    Var

    +

    torch_var_mean()

    +

    Var_mean

    +

    torch_where()

    +

    Where

    +

    Neural network modules

    +

    +
    +

    nn_adaptive_avg_pool1d()

    +

    Applies a 1D adaptive average pooling over an input signal composed of several input planes.

    +

    nn_adaptive_avg_pool2d()

    +

    Applies a 2D adaptive average pooling over an input signal composed of several input planes.

    +

    nn_adaptive_avg_pool3d()

    +

    Applies a 3D adaptive average pooling over an input signal composed of several input planes.

    +

    nn_adaptive_log_softmax_with_loss()

    +

    AdaptiveLogSoftmaxWithLoss module

    +

    nn_adaptive_max_pool1d()

    +

    Applies a 1D adaptive max pooling over an input signal composed of several input planes.

    +

    nn_adaptive_max_pool2d()

    +

    Applies a 2D adaptive max pooling over an input signal composed of several input planes.

    +

    nn_adaptive_max_pool3d()

    +

    Applies a 3D adaptive max pooling over an input signal composed of several input planes.

    +

    nn_avg_pool1d()

    +

    Applies a 1D average pooling over an input signal composed of several +input planes.

    +

    nn_avg_pool2d()

    +

    Applies a 2D average pooling over an input signal composed of several input +planes.

    +

    nn_avg_pool3d()

    +

    Applies a 3D average pooling over an input signal composed of several input +planes.

    +

    nn_batch_norm1d()

    +

    BatchNorm1D module

    +

    nn_batch_norm2d()

    +

    BatchNorm2D

    +

    nn_bce_loss()

    +

    Binary cross entropy loss

    +

    nn_bilinear()

    +

    Bilinear module

    +

    nn_celu()

    +

    CELU module

    +

    nn_conv1d()

    +

    Conv1D module

    +

    nn_conv2d()

    +

    Conv2D module

    +

    nn_conv3d()

    +

    Conv3D module

    +

    nn_conv_transpose1d()

    +

    ConvTranspose1D

    +

    nn_conv_transpose2d()

    +

ConvTranspose2D module

    +

    nn_conv_transpose3d()

    +

ConvTranspose3D module

    +

    nn_cross_entropy_loss()

    +

    CrossEntropyLoss module

    +

    nn_dropout()

    +

    Dropout module

    +

    nn_dropout2d()

    +

    Dropout2D module

    +

    nn_dropout3d()

    +

    Dropout3D module

    +

    nn_elu()

    +

    ELU module

    +

    nn_embedding()

    +

    Embedding module

    +

    nn_fractional_max_pool2d()

    +

    Applies a 2D fractional max pooling over an input signal composed of several input planes.

    +

    nn_fractional_max_pool3d()

    +

    Applies a 3D fractional max pooling over an input signal composed of several input planes.

    +

    nn_gelu()

    +

    GELU module

    +

    nn_glu()

    +

    GLU module

    +

    nn_hardshrink()

    +

Hardshrink module

    +

    nn_hardsigmoid()

    +

    Hardsigmoid module

    +

    nn_hardswish()

    +

    Hardswish module

    +

    nn_hardtanh()

    +

    Hardtanh module

    +

    nn_identity()

    +

    Identity module

    +

    nn_init_calculate_gain()

    +

    Calculate gain

    +

    nn_init_constant_()

    +

    Constant initialization

    +

    nn_init_dirac_()

    +

    Dirac initialization

    +

    nn_init_eye_()

    +

    Eye initialization

    +

    nn_init_kaiming_normal_()

    +

    Kaiming normal initialization

    +

    nn_init_kaiming_uniform_()

    +

    Kaiming uniform initialization

    +

    nn_init_normal_()

    +

    Normal initialization

    +

    nn_init_ones_()

    +

    Ones initialization

    +

    nn_init_orthogonal_()

    +

    Orthogonal initialization

    +

    nn_init_sparse_()

    +

    Sparse initialization

    +

    nn_init_trunc_normal_()

    +

    Truncated normal initialization

    +

    nn_init_uniform_()

    +

    Uniform initialization

    +

    nn_init_xavier_normal_()

    +

    Xavier normal initialization

    +

    nn_init_xavier_uniform_()

    +

    Xavier uniform initialization

    +

    nn_init_zeros_()

    +

    Zeros initialization

    +

    nn_leaky_relu()

    +

    LeakyReLU module

    +

    nn_linear()

    +

    Linear module

    +

    nn_log_sigmoid()

    +

    LogSigmoid module

    +

    nn_log_softmax()

    +

    LogSoftmax module

    +

    nn_lp_pool1d()

    +

    Applies a 1D power-average pooling over an input signal composed of several input +planes.

    +

    nn_lp_pool2d()

    +

    Applies a 2D power-average pooling over an input signal composed of several input +planes.

    +

    nn_max_pool1d()

    +

    MaxPool1D module

    +

    nn_max_pool2d()

    +

    MaxPool2D module

    +

    nn_max_pool3d()

    +

    Applies a 3D max pooling over an input signal composed of several input +planes.

    +

    nn_max_unpool1d()

    +

    Computes a partial inverse of MaxPool1d.

    +

    nn_max_unpool2d()

    +

    Computes a partial inverse of MaxPool2d.

    +

    nn_max_unpool3d()

    +

    Computes a partial inverse of MaxPool3d.

    +

    nn_module()

    +

    Base class for all neural network modules.

    +

    nn_module_list()

    +

    Holds submodules in a list.

    +

    nn_multihead_attention()

    +

    MultiHead attention

    +

    nn_prelu()

    +

    PReLU module

    +

    nn_relu()

    +

    ReLU module

    +

    nn_relu6()

    +

    ReLu6 module

    +

    nn_rnn()

    +

    RNN module

    +

    nn_rrelu()

    +

    RReLU module

    +

    nn_selu()

    +

    SELU module

    +

    nn_sequential()

    +

    A sequential container

    +

    nn_sigmoid()

    +

    Sigmoid module

    +

    nn_softmax()

    +

    Softmax module

    +

    nn_softmax2d()

    +

    Softmax2d module

    +

    nn_softmin()

    +

    Softmin

    +

    nn_softplus()

    +

    Softplus module

    +

    nn_softshrink()

    +

    Softshrink module

    +

    nn_softsign()

    +

    Softsign module

    +

    nn_tanh()

    +

    Tanh module

    +

    nn_tanhshrink()

    +

    Tanhshrink module

    +

    nn_threshold()

    +

Threshold module

    +

    nn_utils_rnn_pack_padded_sequence()

    +

    Packs a Tensor containing padded sequences of variable length.

    +

    nn_utils_rnn_pack_sequence()

    +

    Packs a list of variable length Tensors

    +

    nn_utils_rnn_pad_packed_sequence()

    +

    Pads a packed batch of variable length sequences.

    +

    nn_utils_rnn_pad_sequence()

    +

    Pad a list of variable length Tensors with padding_value

    +

    Neural networks functional module

    +

    +
    +

    nnf_adaptive_avg_pool1d()

    +

    Adaptive_avg_pool1d

    +

    nnf_adaptive_avg_pool2d()

    +

    Adaptive_avg_pool2d

    +

    nnf_adaptive_avg_pool3d()

    +

    Adaptive_avg_pool3d

    +

    nnf_adaptive_max_pool1d()

    +

    Adaptive_max_pool1d

    +

    nnf_adaptive_max_pool2d()

    +

    Adaptive_max_pool2d

    +

    nnf_adaptive_max_pool3d()

    +

    Adaptive_max_pool3d

    +

    nnf_affine_grid()

    +

    Affine_grid

    +

    nnf_alpha_dropout()

    +

    Alpha_dropout

    +

    nnf_avg_pool1d()

    +

    Avg_pool1d

    +

    nnf_avg_pool2d()

    +

    Avg_pool2d

    +

    nnf_avg_pool3d()

    +

    Avg_pool3d

    +

    nnf_batch_norm()

    +

    Batch_norm

    +

    nnf_bilinear()

    +

    Bilinear

    +

    nnf_binary_cross_entropy()

    +

    Binary_cross_entropy

    +

    nnf_binary_cross_entropy_with_logits()

    +

    Binary_cross_entropy_with_logits

    +

    nnf_celu() nnf_celu_()

    +

    Celu

    +

    nnf_conv1d()

    +

    Conv1d

    +

    nnf_conv2d()

    +

    Conv2d

    +

    nnf_conv3d()

    +

    Conv3d

    +

    nnf_conv_tbc()

    +

    Conv_tbc

    +

    nnf_conv_transpose1d()

    +

    Conv_transpose1d

    +

    nnf_conv_transpose2d()

    +

    Conv_transpose2d

    +

    nnf_conv_transpose3d()

    +

    Conv_transpose3d

    +

    nnf_cosine_embedding_loss()

    +

    Cosine_embedding_loss

    +

    nnf_cosine_similarity()

    +

    Cosine_similarity

    +

    nnf_cross_entropy()

    +

    Cross_entropy

    +

    nnf_ctc_loss()

    +

    Ctc_loss

    +

    nnf_dropout()

    +

    Dropout

    +

    nnf_dropout2d()

    +

    Dropout2d

    +

    nnf_dropout3d()

    +

    Dropout3d

    +

    nnf_elu() nnf_elu_()

    +

    Elu

    +

    nnf_embedding()

    +

    Embedding

    +

    nnf_embedding_bag()

    +

    Embedding_bag

    +

    nnf_fold()

    +

    Fold

    +

    nnf_fractional_max_pool2d()

    +

    Fractional_max_pool2d

    +

    nnf_fractional_max_pool3d()

    +

    Fractional_max_pool3d

    +

    nnf_gelu()

    +

    Gelu

    +

    nnf_glu()

    +

    Glu

    +

    nnf_grid_sample()

    +

    Grid_sample

    +

    nnf_group_norm()

    +

    Group_norm

    +

    nnf_gumbel_softmax()

    +

    Gumbel_softmax

    +

    nnf_hardshrink()

    +

    Hardshrink

    +

    nnf_hardsigmoid()

    +

    Hardsigmoid

    +

    nnf_hardswish()

    +

    Hardswish

    +

    nnf_hardtanh() nnf_hardtanh_()

    +

    Hardtanh

    +

    nnf_hinge_embedding_loss()

    +

    Hinge_embedding_loss

    +

    nnf_instance_norm()

    +

    Instance_norm

    +

    nnf_interpolate()

    +

    Interpolate

    +

    nnf_kl_div()

    +

    Kl_div

    +

    nnf_l1_loss()

    +

    L1_loss

    +

    nnf_layer_norm()

    +

    Layer_norm

    +

    nnf_leaky_relu()

    +

    Leaky_relu

    +

    nnf_linear()

    +

    Linear

    +

    nnf_local_response_norm()

    +

    Local_response_norm

    +

    nnf_log_softmax()

    +

    Log_softmax

    +

    nnf_logsigmoid()

    +

    Logsigmoid

    +

    nnf_lp_pool1d()

    +

    Lp_pool1d

    +

    nnf_lp_pool2d()

    +

    Lp_pool2d

    +

    nnf_margin_ranking_loss()

    +

    Margin_ranking_loss

    +

    nnf_max_pool1d()

    +

    Max_pool1d

    +

    nnf_max_pool2d()

    +

    Max_pool2d

    +

    nnf_max_pool3d()

    +

    Max_pool3d

    +

    nnf_max_unpool1d()

    +

    Max_unpool1d

    +

    nnf_max_unpool2d()

    +

    Max_unpool2d

    +

    nnf_max_unpool3d()

    +

    Max_unpool3d

    +

    nnf_mse_loss()

    +

    Mse_loss

    +

    nnf_multi_head_attention_forward()

    +

    Multi head attention forward

    +

    nnf_multi_margin_loss()

    +

    Multi_margin_loss

    +

    nnf_multilabel_margin_loss()

    +

    Multilabel_margin_loss

    +

    nnf_multilabel_soft_margin_loss()

    +

    Multilabel_soft_margin_loss

    +

    nnf_nll_loss()

    +

    Nll_loss

    +

    nnf_normalize()

    +

    Normalize

    +

    nnf_one_hot()

    +

    One_hot

    +

    nnf_pad()

    +

    Pad

    +

    nnf_pairwise_distance()

    +

    Pairwise_distance

    +

    nnf_pdist()

    +

    Pdist

    +

    nnf_pixel_shuffle()

    +

    Pixel_shuffle

    +

    nnf_poisson_nll_loss()

    +

    Poisson_nll_loss

    +

    nnf_prelu()

    +

    Prelu

    +

    nnf_relu() nnf_relu_()

    +

    Relu

    +

    nnf_relu6()

    +

    Relu6

    +

    nnf_rrelu() nnf_rrelu_()

    +

    Rrelu

    +

    nnf_selu() nnf_selu_()

    +

    Selu

    +

    nnf_sigmoid()

    +

    Sigmoid

    +

    nnf_smooth_l1_loss()

    +

    Smooth_l1_loss

    +

    nnf_soft_margin_loss()

    +

    Soft_margin_loss

    +

    nnf_softmax()

    +

    Softmax

    +

    nnf_softmin()

    +

    Softmin

    +

    nnf_softplus()

    +

    Softplus

    +

    nnf_softshrink()

    +

    Softshrink

    +

    nnf_softsign()

    +

    Softsign

    +

    nnf_tanhshrink()

    +

    Tanhshrink

    +

    nnf_threshold() nnf_threshold_()

    +

    Threshold

    +

    nnf_triplet_margin_loss()

    +

    Triplet_margin_loss

    +

    nnf_unfold()

    +

    Unfold

    +

    Optimizers

    +

    +
    +

    optim_adam()

    +

Implements the Adam algorithm.

    +

    optim_required()

    +

    Dummy value indicating a required value.

    +

    optim_sgd()

    +

    SGD optimizer

    +

    Datasets

    +

    +
    +

    dataset()

    +

    An abstract class representing a Dataset.

    +

    dataloader()

    +

    Data loader. Combines a dataset and a sampler, and provides +single- or multi-process iterators over the dataset.

    +

    dataloader_make_iter()

    +

    Creates an iterator from a DataLoader

    +

    dataloader_next()

    +

    Get the next element of a dataloader iterator

    +

    enumerate()

    +

    Enumerate an iterator

    +

    enumerate(<dataloader>)

    +

    Enumerate an iterator

    +

    tensor_dataset()

    +

    Dataset wrapping tensors.

    +

    is_dataloader()

    +

    Checks if the object is a dataloader

    +

    Autograd

    +

    +
    +

    autograd_backward()

    +

    Computes the sum of gradients of given tensors w.r.t. graph leaves.

    +

    autograd_function()

    +

    Records operation history and defines formulas for differentiating ops.

    +

    autograd_grad()

    +

    Computes and returns the sum of gradients of outputs w.r.t. the inputs.

    +

    autograd_set_grad_mode()

    +

    Set grad mode

    +

    with_no_grad()

    +

    Temporarily modify gradient recording.

    +

    with_enable_grad()

    +

    Enable grad

    +

    AutogradContext

    +

    Class representing the context.

    +

    Cuda utilities

    +

    +
    +

    cuda_current_device()

    +

Returns the index of the currently selected device.

    +

    cuda_device_count()

    +

    Returns the number of GPUs available.

    +

    cuda_is_available()

    +

    Returns a bool indicating if CUDA is currently available.

    +

    Installation

    +

    +
    +

    install_torch()

    +

    Install Torch

    +
    + + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/install_torch.html b/static/docs/reference/install_torch.html new file mode 100644 index 0000000000000000000000000000000000000000..c4e14bd5f2d0284222174d35f67c09d88664f51b --- /dev/null +++ b/static/docs/reference/install_torch.html @@ -0,0 +1,266 @@ + + + + + + + + +Install Torch — install_torch • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Installs Torch and its dependencies.

    +
    + +
    install_torch(
    +  version = "1.5.0",
    +  type = install_type(version = version),
    +  reinstall = FALSE,
    +  path = install_path(),
    +  ...
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    version

    The Torch version to install.

    type

The installation type for Torch. Valid values are "cpu" or the CUDA version.

    reinstall

Re-install Torch even if it's already installed?

    path

    Optional path to install or check for an already existing installation.

    ...

other optional arguments (like load for manual installation).

    + +

    Details

    + +

When using path to install in a specific location, make sure the TORCH_HOME environment +variable is set to this same path to reuse this installation. The TORCH_INSTALL environment +variable can be set to 0 to prevent auto-installing torch, and TORCH_LOAD can be set to 0 +to avoid loading dependencies automatically. These environment variables are meant for advanced use +cases and troubleshooting only.
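For example, these environment variables can be set in the shell before launching R; the paths below are hypothetical:

```shell
# Point torch at a shared installation directory so install_torch(path = ...)
# and later R sessions agree on the location.
export TORCH_HOME="$HOME/torch-libs"
# Opt out of the automatic download/installation when the package loads.
export TORCH_INSTALL=0
```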

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_dataloader.html b/static/docs/reference/is_dataloader.html new file mode 100644 index 0000000000000000000000000000000000000000..5688453efd44110f5f676760e6107ea02a8d5d8f --- /dev/null +++ b/static/docs/reference/is_dataloader.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if the object is a dataloader — is_dataloader • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if the object is a dataloader

    +
    + +
    is_dataloader(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_torch_device.html b/static/docs/reference/is_torch_device.html new file mode 100644 index 0000000000000000000000000000000000000000..760d1dd2b622c2a846f65510fb7f67b6b3240893 --- /dev/null +++ b/static/docs/reference/is_torch_device.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if object is a device — is_torch_device • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if object is a device

    +
    + +
    is_torch_device(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_torch_dtype.html b/static/docs/reference/is_torch_dtype.html new file mode 100644 index 0000000000000000000000000000000000000000..8315ff6b9479dc2603398f3365344014313443e8 --- /dev/null +++ b/static/docs/reference/is_torch_dtype.html @@ -0,0 +1,237 @@ + + + + + + + + +Check if object is a torch data type — is_torch_dtype • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Check if object is a torch data type

    +
    + +
    is_torch_dtype(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_torch_layout.html b/static/docs/reference/is_torch_layout.html new file mode 100644 index 0000000000000000000000000000000000000000..393fd39df5e2d35ca794a4e3f93f4f9ea1ccc2c7 --- /dev/null +++ b/static/docs/reference/is_torch_layout.html @@ -0,0 +1,237 @@ + + + + + + + + +Check if an object is a torch layout. — is_torch_layout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Check if an object is a torch layout.

    +
    + +
    is_torch_layout(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_torch_memory_format.html b/static/docs/reference/is_torch_memory_format.html new file mode 100644 index 0000000000000000000000000000000000000000..a0e1b0f8d0324cf922a9dcb7fe218bf6dc7b113d --- /dev/null +++ b/static/docs/reference/is_torch_memory_format.html @@ -0,0 +1,237 @@ + + + + + + + + +Check if an object is a memory format — is_torch_memory_format • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Check if an object is a memory format

    +
    + +
    is_torch_memory_format(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_torch_qscheme.html b/static/docs/reference/is_torch_qscheme.html new file mode 100644 index 0000000000000000000000000000000000000000..aee7821a6c58996d50760bc7c39bf38b8e224ff1 --- /dev/null +++ b/static/docs/reference/is_torch_qscheme.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if an object is a QScheme — is_torch_qscheme • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if an object is a QScheme

    +
    + +
    is_torch_qscheme(x)
    + +

    Arguments

    + + + + + + +
    x

    object to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/is_undefined_tensor.html b/static/docs/reference/is_undefined_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..f2a893716d1e0587ab7c5faba286cf0beaa0ab95 --- /dev/null +++ b/static/docs/reference/is_undefined_tensor.html @@ -0,0 +1,237 @@ + + + + + + + + +Checks if a tensor is undefined — is_undefined_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Checks if a tensor is undefined

    +
    + +
    is_undefined_tensor(x)
    + +

    Arguments

    + + + + + + +
    x

    tensor to check

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/load_state_dict.html b/static/docs/reference/load_state_dict.html new file mode 100644 index 0000000000000000000000000000000000000000..3cf56efae6c05836777966bc665d56c49eea8eb6 --- /dev/null +++ b/static/docs/reference/load_state_dict.html @@ -0,0 +1,250 @@ + + + + + + + + +Load a state dict file — load_state_dict • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

This function should only be used to load models saved in Python. +For it to work correctly, you need to use torch.save with the flag: +_use_new_zipfile_serialization=True and also remove all nn.Parameter +classes from the tensors in the dict.

    +
    + +
    load_state_dict(path)
    + +

    Arguments

    + + + + + + +
    path

Path to the state dict file.

    + +

    Value

    + +

    a named list of tensors.

    +

    Details

    + +

The above might change as this functionality develops +in PyTorch's C++ API.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_avg_pool1d.html b/static/docs/reference/nn_adaptive_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..40bd9d2f8ca8e8d820924ca0ebecba26e5273ed0 --- /dev/null +++ b/static/docs/reference/nn_adaptive_avg_pool1d.html @@ -0,0 +1,248 @@ + + + + + + + + +Applies a 1D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output size is H, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_avg_pool1d(output_size)
    + +

    Arguments

    + + + + + + +
    output_size

    the target output size H

    + + +

    Examples

    +
if (torch_is_installed()) { +# target output size of 5 +m <- nn_adaptive_avg_pool1d(5) +input <- torch_randn(1, 64, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_avg_pool2d.html b/static/docs/reference/nn_adaptive_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..a1006d08af73b162a9b3c0a226c4d218c3974ad6 --- /dev/null +++ b/static/docs/reference/nn_adaptive_avg_pool2d.html @@ -0,0 +1,255 @@ + + + + + + + + +Applies a 2D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_avg_pool2d(output_size)
    + +

    Arguments

    + + + + + + +
    output_size

the target output size of the image of the form H x W. +Can be a tuple (H, W) or a single H for a square image H x H. +H and W can be either an int, or NULL which means the size will +be the same as that of the input.

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7 +m <- nn_adaptive_avg_pool2d(c(5,7)) +input <- torch_randn(1, 64, 8, 9) +output <- m(input) +# target output size of 7x7 (square) +m <- nn_adaptive_avg_pool2d(7) +input <- torch_randn(1, 64, 10, 9) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_avg_pool3d.html b/static/docs/reference/nn_adaptive_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..bafb679a2b09ab908ca349e0ecb7f83e30875bfe --- /dev/null +++ b/static/docs/reference/nn_adaptive_avg_pool3d.html @@ -0,0 +1,255 @@ + + + + + + + + +Applies a 3D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size D x H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_avg_pool3d(output_size)
    + +

    Arguments

    + + + + + + +
    output_size

the target output size of the form D x H x W. +Can be a tuple (D, H, W) or a single number D for a cube D x D x D. +D, H and W can be either an int, or NULL which means the size will +be the same as that of the input.

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7x9 +m <- nn_adaptive_avg_pool3d(c(5,7,9)) +input <- torch_randn(1, 64, 8, 9, 10) +output <- m(input) +# target output size of 7x7x7 (cube) +m <- nn_adaptive_avg_pool3d(7) +input <- torch_randn(1, 64, 10, 9, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_log_softmax_with_loss.html b/static/docs/reference/nn_adaptive_log_softmax_with_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..d2f39a7edd55cb4720cb3801a2a787a50c3cc5c3 --- /dev/null +++ b/static/docs/reference/nn_adaptive_log_softmax_with_loss.html @@ -0,0 +1,335 @@ + + + + + + + + +AdaptiveLogSoftmaxWithLoss module — nn_adaptive_log_softmax_with_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + + + +
    nn_adaptive_log_softmax_with_loss(
    +  in_features,
    +  n_classes,
    +  cutoffs,
    +  div_value = 4,
    +  head_bias = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    in_features

    (int): Number of features in the input tensor

    n_classes

    (int): Number of classes in the dataset

    cutoffs

    (Sequence): Cutoffs used to assign targets to their buckets

    div_value

    (float, optional): value used as an exponent to compute sizes +of the clusters. Default: 4.0

    head_bias

    (bool, optional): If True, adds a bias term to the 'head' of the +adaptive softmax. Default: False

    + +

    Value

    + +

    NamedTuple with output and loss fields:

      +
    • output is a Tensor of size N containing computed target +log probabilities for each example

    • +
    • loss is a Scalar representing the computed negative +log likelihood loss

    • +
    + +

    Details

    + +

Adaptive softmax is an approximate strategy for training models with large +output spaces. It is most effective when the label distribution is highly +imbalanced, for example in natural language modelling, where the word +frequency distribution approximately follows Zipf's law.

    +

Adaptive softmax partitions the labels into several clusters, according to +their frequency. These clusters may each contain a different +number of targets.

    +

    Additionally, clusters containing less frequent labels assign lower +dimensional embeddings to those labels, which speeds up the computation. +For each minibatch, only clusters for which at least one target is +present are evaluated.

    +

    The idea is that the clusters which are accessed frequently +(like the first one, containing most frequent labels), should also be cheap +to compute -- that is, contain a small number of assigned labels. +We highly recommend taking a look at the original paper for more details.

      +
cutoffs should be an ordered Sequence of integers sorted +in increasing order. +It controls the number of clusters and the partitioning of targets into +clusters. For example, setting cutoffs = c(10, 100, 1000) +means that the first 10 targets will be assigned +to the 'head' of the adaptive softmax, targets 11, 12, ..., 100 will be +assigned to the first cluster, and targets 101, 102, ..., 1000 will be +assigned to the second cluster, while targets +1001, 1002, ..., n_classes - 1 will be assigned +to the last, third cluster.

    • +
    • div_value is used to compute the size of each additional cluster, +which is given as +\(\left\lfloor\frac{\mbox{in\_features}}{\mbox{div\_value}^{idx}}\right\rfloor\), +where \(idx\) is the cluster index (with clusters +for less frequent words having larger indices, +and indices starting from \(1\)).

    • +
    • head_bias if set to True, adds a bias term to the 'head' of the +adaptive softmax. See paper for details. Set to False in the official +implementation.

    • +
    + +
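The partitioning rule and the div_value sizing above can be illustrated with a few lines of plain Python; the values used (cutoffs of 10, 100, 1000; in_features of 64; div_value of 4) are hypothetical, and targets are 1-based as in the example above:

```python
import math

def cluster_of(target, cutoffs):
    """Cluster index for a 1-based target: 0 = head, 1.. = tail clusters."""
    for i, c in enumerate(cutoffs):
        if target <= c:
            return i
    return len(cutoffs)

def cluster_dim(in_features, div_value, idx):
    """Projection size of tail cluster idx (idx starts at 1)."""
    return math.floor(in_features / div_value ** idx)

cutoffs = [10, 100, 1000]
print(cluster_of(5, cutoffs))    # head -> 0
print(cluster_of(250, cutoffs))  # second tail cluster -> 2
print([cluster_dim(64, 4, i) for i in (1, 2, 3)])  # [16, 4, 1]
```

Note how each successive cluster's projection shrinks by a factor of div_value, which is what makes the infrequent-label clusters cheap to evaluate.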

    Note

    + +

    This module returns a NamedTuple with output +and loss fields. See further documentation for details.

    +

    To compute log-probabilities for all classes, the log_prob +method can be used.

    +

    Warning

    + + + +

    Labels passed as inputs to this module should be sorted according to +their frequency. This means that the most frequent label should be +represented by the index 0, and the least frequent +label should be represented by the index n_classes - 1.

    +

    Shape

    + + + +
      +
    • input: \((N, \mbox{in\_features})\)

    • +
    • target: \((N)\) where each value satisfies \(0 <= \mbox{target[i]} <= \mbox{n\_classes}\)

    • +
    • output1: \((N)\)

    • +
    • output2: Scalar

    • +
    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_max_pool1d.html b/static/docs/reference/nn_adaptive_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..35300f6438b6d1a2ebef4fa742c44cb2d148c61a --- /dev/null +++ b/static/docs/reference/nn_adaptive_max_pool1d.html @@ -0,0 +1,253 @@ + + + + + + + + +Applies a 1D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output size is H, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_max_pool1d(output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    output_size

    the target output size H

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool1d(). Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5 +m <- nn_adaptive_max_pool1d(5) +input <- torch_randn(1, 64, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_max_pool2d.html b/static/docs/reference/nn_adaptive_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..1cbc81eef21d4714e2f98437aefe8508b3b1322f --- /dev/null +++ b/static/docs/reference/nn_adaptive_max_pool2d.html @@ -0,0 +1,260 @@ + + + + + + + + +Applies a 2D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_max_pool2d(output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    output_size

the target output size of the image of the form H x W. +Can be a tuple (H, W) or a single H for a square image H x H. +H and W can be either an int, or NULL which means the size will +be the same as that of the input.

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool2d(). Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7 +m <- nn_adaptive_max_pool2d(c(5,7)) +input <- torch_randn(1, 64, 8, 9) +output <- m(input) +# target output size of 7x7 (square) +m <- nn_adaptive_max_pool2d(7) +input <- torch_randn(1, 64, 10, 9) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_adaptive_max_pool3d.html b/static/docs/reference/nn_adaptive_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..64d854a83ba91c187b95397f4517faab40e7ed0a --- /dev/null +++ b/static/docs/reference/nn_adaptive_max_pool3d.html @@ -0,0 +1,260 @@ + + + + + + + + +Applies a 3D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The output is of size D x H x W, for any input size. +The number of output features is equal to the number of input planes.

    +
    + +
    nn_adaptive_max_pool3d(output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    output_size

the target output size of the image of the form D x H x W. +Can be a tuple (D, H, W) or a single D for a cube D x D x D. +D, H and W can be either an int, or NULL which means the size will +be the same as that of the input.

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool3d(). Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +# target output size of 5x7x9 +m <- nn_adaptive_max_pool3d(c(5,7,9)) +input <- torch_randn(1, 64, 8, 9, 10) +output <- m(input) +# target output size of 7x7x7 (cube) +m <- nn_adaptive_max_pool3d(7) +input <- torch_randn(1, 64, 10, 9, 8) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_avg_pool1d.html b/static/docs/reference/nn_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..07a95869d73724698bbeab0d30802c7c22642fcd --- /dev/null +++ b/static/docs/reference/nn_avg_pool1d.html @@ -0,0 +1,305 @@ + + + + + + + + +Applies a 1D average pooling over an input signal composed of several +input planes. — nn_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, L)\), +output \((N, C, L_{out})\) and kernel_size \(k\) +can be precisely described as:

    +

    $$ + \mbox{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1} +\mbox{input}(N_i, C_j, \mbox{stride} \times l + m) +$$

    +
    + +
    nn_avg_pool1d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation

    + +

    Details

    + +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points.

    +

    The parameters kernel_size, stride, padding can each be +an int or a one-element tuple.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, L_{in})\)

    • +
    • Output: \((N, C, L_{out})\), where

    • +
    + +

    $$ + L_{out} = \left\lfloor \frac{L_{in} + + 2 \times \mbox{padding} - \mbox{kernel\_size}}{\mbox{stride}} + 1\right\rfloor +$$

    + +
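This output-length formula can be sanity-checked with a few lines of plain Python (illustration only; for the 2D and 3D pooling layers the same rule applies once per spatial dimension):

```python
import math

def pool_out(l_in, kernel_size, stride, padding=0, ceil_mode=False):
    # L_out = floor((L_in + 2*padding - kernel_size) / stride) + 1,
    # with ceil instead of floor when ceil_mode is TRUE.
    f = math.ceil if ceil_mode else math.floor
    return f((l_in + 2 * padding - kernel_size) / stride) + 1

# Matches the example below: nn_avg_pool1d(3, stride=2) on a length-8 input
# yields a length-3 output.
print(pool_out(8, 3, 2))  # 3
```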

    Examples

    +
    if (torch_is_installed()) { + +# pool with window of size=3, stride=2 +m <- nn_avg_pool1d(3, stride=2) +m(torch_randn(1, 1, 8)) + +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.3975 -0.0079 -0.0240 +#> [ CPUFloatType{1,1,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_avg_pool2d.html b/static/docs/reference/nn_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..d4b74b32dd2f8cec9cddd1a30ab52c3c2adb5fca --- /dev/null +++ b/static/docs/reference/nn_avg_pool2d.html @@ -0,0 +1,318 @@ + + + + + + + + +Applies a 2D average pooling over an input signal composed of several input +planes. — nn_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, H, W)\), +output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) +can be precisely described as:

    +

    $$ + out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} +input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n) +$$

    +
    + +
    nn_avg_pool2d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation

    divisor_override

if specified, it will be used as the divisor, otherwise kernel_size will be used

    + +

    Details

    + +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points.

    +

    The parameters kernel_size, stride, padding can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - + \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - + \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +# pool of square window of size=3, stride=2 +m <- nn_avg_pool2d(3, stride=2) +# pool of non-square window +m <- nn_avg_pool2d(c(3, 2), stride=c(2, 1)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_avg_pool3d.html b/static/docs/reference/nn_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..ab057b173874933618c053dd2bc0a23d46f9a197 --- /dev/null +++ b/static/docs/reference/nn_avg_pool3d.html @@ -0,0 +1,326 @@ + + + + + + + + +Applies a 3D average pooling over an input signal composed of several input +planes. — nn_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), +output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) +can be precisely described as:

    +

    $$ +\begin{array}{ll} +\mbox{out}(N_i, C_j, d, h, w) = & \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \\ +& \frac{\mbox{input}(N_i, C_j, \mbox{stride}[0] \times d + k, \mbox{stride}[1] \times h + m, \mbox{stride}[2] \times w + n)}{kD \times kH \times kW} +\end{array} +$$

    +
    + +
    nn_avg_pool3d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on all three sides

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation

    divisor_override

if specified, it will be used as the divisor; otherwise kernel_size will be used

    + +

    Details

    + +

    If padding is non-zero, then the input is implicitly zero-padded on all three sides +for padding number of points.

    +

    The parameters kernel_size, stride can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where

    • +
    + +

    $$ + D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - + \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - + \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - + \mbox{kernel\_size}[2]}{\mbox{stride}[2]} + 1\right\rfloor +$$

    + +
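The floor formulas above can be checked without torch installed. `pool_out_dim` below is a hypothetical helper, not part of the package, that applies the formula along one dimension (with ceil_mode = TRUE the floor becomes a ceiling):

```r
# Output extent along one dimension of an average pooling layer, following
# the D_out/H_out/W_out formulas above. Hypothetical helper for illustration.
pool_out_dim <- function(in_dim, kernel_size, stride = kernel_size,
                         padding = 0, ceil_mode = FALSE) {
  x <- (in_dim + 2 * padding - kernel_size) / stride + 1
  if (ceil_mode) ceiling(x) else floor(x)
}

# For input (20, 16, 50, 44, 31) with kernel c(3, 2, 2) and stride c(2, 1, 2):
out <- mapply(pool_out_dim, c(50, 44, 31), c(3, 2, 2), c(2, 1, 2))
out  # -> 24 43 15
```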

    Examples

    +
if (torch_is_installed()) { + +# pool of square window of size=3, stride=2 +m <- nn_avg_pool3d(3, stride=2) +# pool of non-square window +m <- nn_avg_pool3d(c(3, 2, 2), stride=c(2, 1, 2)) +input <- torch_randn(20, 16, 50, 44, 31) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_batch_norm1d.html b/static/docs/reference/nn_batch_norm1d.html new file mode 100644 index 0000000000000000000000000000000000000000..19639da20679fab1efed3185ab0cc7a54840c99a --- /dev/null +++ b/static/docs/reference/nn_batch_norm1d.html @@ -0,0 +1,320 @@ + + + + + + + + +BatchNorm1D module — nn_batch_norm1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D +inputs with optional additional channel dimension) as described in the paper +Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

    +
    + +
    nn_batch_norm1d(
    +  num_features,
    +  eps = 1e-05,
    +  momentum = 0.1,
    +  affine = TRUE,
    +  track_running_stats = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    num_features

    \(C\) from an expected input of size +\((N, C, L)\) or \(L\) from input of size \((N, L)\)

    eps

    a value added to the denominator for numerical stability. +Default: 1e-5

    momentum

    the value used for the running_mean and running_var +computation. Can be set to NULL for cumulative moving average +(i.e. simple average). Default: 0.1

    affine

    a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

    track_running_stats

    a boolean value that when set to TRUE, this +module tracks the running mean and variance, and when set to FALSE, +this module does not track such statistics and always uses batch +statistics in both training and eval modes. Default: TRUE

    + +

    Details

    + +

    $$ +y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta +$$

    +

    The mean and standard-deviation are calculated per-dimension over +the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors +of size C (where C is the input size). By default, the elements of \(\gamma\) +are set to 1 and the elements of \(\beta\) are set to 0.

    +
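As a sketch of the formula above for a single feature in plain R (using the biased variance, as batch norm does; `bn_normalize` is a hypothetical name, not a package function):

```r
# Normalize one feature over a batch: subtract the batch mean, divide by the
# biased standard deviation, then scale and shift. Illustration only.
bn_normalize <- function(x, gamma = 1, beta = 0, eps = 1e-5) {
  mu <- mean(x)
  v  <- mean((x - mu)^2)  # biased estimator (divides by n, not n - 1)
  (x - mu) / sqrt(v + eps) * gamma + beta
}

y <- bn_normalize(c(1, 2, 3, 4))
mean(y)  # a normalized batch has (numerically) zero mean
```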

Also by default, during training this layer keeps running estimates of its +computed mean and variance, which are then used for normalization during +evaluation. The running estimates are kept with a default momentum +of 0.1. +If track_running_stats is set to FALSE, this layer then does not +keep running estimates, and batch statistics are instead used during +evaluation time as well.

    +

    Note

    + + + + +

This momentum argument is different from the one used in optimizer +classes and from the conventional notion of momentum. Mathematically, the +update rule for running statistics here is +\(\hat{x}_{\mbox{new}} = (1 - \mbox{momentum}) \times \hat{x} + \mbox{momentum} \times x_t\), +where \(\hat{x}\) is the estimated statistic and \(x_t\) is the +new observed value.

    +
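The update rule in the note is a plain exponential moving average, which can be sketched in one line of base R (`update_running` is a made-up name for illustration):

```r
# Exponential moving average of a running batch-norm statistic:
# new estimate = (1 - momentum) * old estimate + momentum * observed value.
update_running <- function(x_hat, x_t, momentum = 0.1) {
  (1 - momentum) * x_hat + momentum * x_t
}

update_running(0, 1)  # the first observed value pulls the estimate to 0.1
```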

    Because the Batch Normalization is done over the C dimension, computing statistics +on (N, L) slices, it's common terminology to call this Temporal Batch Normalization.

    +

    Shape

    + + + +
      +
    • Input: \((N, C)\) or \((N, C, L)\)

    • +
    • Output: \((N, C)\) or \((N, C, L)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With Learnable Parameters +m <- nn_batch_norm1d(100) +# Without Learnable Parameters +m <- nn_batch_norm1d(100, affine = FALSE) +input <- torch_randn(20, 100) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_batch_norm2d.html b/static/docs/reference/nn_batch_norm2d.html new file mode 100644 index 0000000000000000000000000000000000000000..93d5b6accb766d74984ca2b359ad1ff9ba711200 --- /dev/null +++ b/static/docs/reference/nn_batch_norm2d.html @@ -0,0 +1,319 @@ + + + + + + + + +BatchNorm2D — nn_batch_norm2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs +with an additional channel dimension) as described in the paper +Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

    +
    + +
    nn_batch_norm2d(
    +  num_features,
    +  eps = 1e-05,
    +  momentum = 0.1,
    +  affine = TRUE,
    +  track_running_stats = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    num_features

    \(C\) from an expected input of size +\((N, C, H, W)\)

    eps

    a value added to the denominator for numerical stability. +Default: 1e-5

    momentum

the value used for the running_mean and running_var +computation. Can be set to NULL for cumulative moving average +(i.e. simple average). Default: 0.1

    affine

    a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

    track_running_stats

a boolean value that when set to TRUE, this +module tracks the running mean and variance, and when set to FALSE, +this module does not track such statistics and uses batch statistics instead +in both training and eval modes if the running mean and variance are NULL. +Default: TRUE

    + +

    Details

    + +

    $$ + y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta +$$

    +

    The mean and standard-deviation are calculated per-dimension over +the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors +of size C (where C is the input size). By default, the elements of \(\gamma\) are set +to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated +via the biased estimator, equivalent to torch_var(input, unbiased=FALSE). +Also by default, during training this layer keeps running estimates of its +computed mean and variance, which are then used for normalization during +evaluation. The running estimates are kept with a default momentum +of 0.1.

    +
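The biased estimator mentioned above divides by n rather than n - 1, unlike R's var(). A minimal sketch of the difference (`biased_var` is a hypothetical name, shown only to clarify what torch_var(input, unbiased = FALSE) computes):

```r
# Biased sample variance (divide by n), versus R's unbiased var() (n - 1).
biased_var <- function(x) {
  n <- length(x)
  var(x) * (n - 1) / n
}

biased_var(c(1, 2, 3, 4))  # 1.25, whereas var() gives 5/3
```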

    If track_running_stats is set to FALSE, this layer then does not +keep running estimates, and batch statistics are instead used during +evaluation time as well.

    +

    Note

    + +

This momentum argument is different from the one used in optimizer +classes and from the conventional notion of momentum. Mathematically, the +update rule for running statistics here is +\(\hat{x}_{\mbox{new}} = (1 - \mbox{momentum}) \times \hat{x} + \mbox{momentum} \times x_t\), +where \(\hat{x}\) is the estimated statistic and \(x_t\) is the +new observed value. +Because the Batch Normalization is done over the C dimension, computing statistics +on (N, H, W) slices, it's common terminology to call this Spatial Batch Normalization.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H, W)\)

    • +
    • Output: \((N, C, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With Learnable Parameters +m <- nn_batch_norm2d(100) +# Without Learnable Parameters +m <- nn_batch_norm2d(100, affine=FALSE) +input <- torch_randn(20, 100, 35, 45) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_bce_loss.html b/static/docs/reference/nn_bce_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..5c7afd73a2a617a3a4d6ac5f8dc8ac5bd2eff726 --- /dev/null +++ b/static/docs/reference/nn_bce_loss.html @@ -0,0 +1,304 @@ + + + + + + + + +Binary cross entropy loss — nn_bce_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the Binary Cross Entropy +between the target and the output:

    +
    + +
    nn_bce_loss(weight = NULL, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + +
    weight

    (Tensor, optional): a manual rescaling weight given to the loss +of each batch element. If given, has to be a Tensor of size nbatch.

    reduction

(string, optional): Specifies the reduction to apply to the output: +'none' | 'mean' | 'sum'. 'none': no reduction will be applied, +'mean': the sum of the output will be divided by the number of +elements in the output, 'sum': the output will be summed. Default: 'mean'

    + +

    Details

    + +

    The unreduced (i.e. with reduction set to 'none') loss can be described as: +$$ + \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad +l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right] +$$ +where \(N\) is the batch size. If reduction is not 'none' +(default 'mean'), then

    +

    $$ + \ell(x, y) = \left\{ \begin{array}{ll} +\mbox{mean}(L), & \mbox{if reduction} = \mbox{'mean';}\\ +\mbox{sum}(L), & \mbox{if reduction} = \mbox{'sum'.} +\end{array} +\right. +$$

    +

This is used for measuring the error of a reconstruction in, for example, +an auto-encoder. Note that the targets \(y\) should be numbers +between 0 and 1.

    +

    Notice that if \(x_n\) is either 0 or 1, one of the log terms would be +mathematically undefined in the above loss equation. PyTorch chooses to set +\(\log (0) = -\infty\), since \(\lim_{x\to 0} \log (x) = -\infty\).

    +

    However, an infinite term in the loss equation is not desirable for several reasons. +For one, if either \(y_n = 0\) or \((1 - y_n) = 0\), then we would be +multiplying 0 with infinity. Secondly, if we have an infinite loss value, then +we would also have an infinite term in our gradient, since +\(\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty\).

    +

This would make BCELoss's backward method nonlinear with respect to \(x_n\), +and using it for things like linear regression would not be straightforward. +Our solution is that BCELoss clamps its log function outputs to be greater than +or equal to -100. This way, we can always have a finite loss value and a linear +backward method.

    +
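The clamping described above can be sketched per element in base R; `bce_elementwise` is a hypothetical name used only to illustrate why the loss stays finite at \(x_n = 0\) or \(x_n = 1\):

```r
# Element-wise binary cross entropy with log outputs clamped at -100,
# mirroring the clamping described above. Illustration only, not torch code.
bce_elementwise <- function(x, y, clamp = -100) {
  log_x  <- pmax(log(x), clamp)       # log(0) = -Inf becomes -100
  log_1x <- pmax(log(1 - x), clamp)
  -(y * log_x + (1 - y) * log_1x)
}

bce_elementwise(c(0, 1), c(1, 0))  # finite (100, 100) instead of Inf
```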

    Shape

    + + + +
      +
• Input: \((N, *)\) where \(*\) means any number of additional +dimensions

    • +
    • Target: \((N, *)\), same shape as the input

    • +
    • Output: scalar. If reduction is 'none', then \((N, *)\), same +shape as input.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_sigmoid() +loss <- nn_bce_loss() +input <- torch_randn(3, requires_grad=TRUE) +target <- torch_rand(3) +output <- loss(m(input), target) +output$backward() + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_bilinear.html b/static/docs/reference/nn_bilinear.html new file mode 100644 index 0000000000000000000000000000000000000000..b2507d1260d17d733fb842bd37679aebbe33fab5 --- /dev/null +++ b/static/docs/reference/nn_bilinear.html @@ -0,0 +1,290 @@ + + + + + + + + +Bilinear module — nn_bilinear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a bilinear transformation to the incoming data +\(y = x_1^T A x_2 + b\)

    +
    + +
    nn_bilinear(in1_features, in2_features, out_features, bias = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    in1_features

    size of each first input sample

    in2_features

    size of each second input sample

    out_features

    size of each output sample

    bias

    If set to FALSE, the layer will not learn an additive bias. +Default: TRUE

    + +

    Shape

    + + + +
      +
• Input1: \((N, *, H_{in1})\) where \(H_{in1}=\mbox{in1\_features}\) and +\(*\) means any number of additional dimensions. All but the last +dimension of the inputs should be the same.

    • +
    • Input2: \((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\).

    • +
    • Output: \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) +and all but the last dimension are the same shape as the input.

    • +
    + +

    Attributes

    + + + +
      +
    • weight: the learnable weights of the module of shape +\((\mbox{out\_features}, \mbox{in1\_features}, \mbox{in2\_features})\). +The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where +\(k = \frac{1}{\mbox{in1\_features}}\)

    • +
    • bias: the learnable bias of the module of shape \((\mbox{out\_features})\). +If bias is TRUE, the values are initialized from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where +\(k = \frac{1}{\mbox{in1\_features}}\)

    • +
    + + +
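For one sample, the transformation \(y = x_1^T A x_2 + b\) can be spelled out with base-R matrix products, using the weight shape (out_features, in1_features, in2_features) described above; `bilinear_one` and its arguments are hypothetical names for illustration:

```r
# y_k = x1^T A[k,,] x2 + b[k] for each output feature k. Sketch only.
bilinear_one <- function(x1, x2, A, b) {
  sapply(seq_len(dim(A)[1]), function(k) {
    drop(t(x1) %*% A[k, , ] %*% x2) + b[k]
  })
}

set.seed(1)
A <- array(rnorm(2 * 3 * 4), dim = c(2, 3, 4))  # 2 outputs, in1 = 3, in2 = 4
y <- bilinear_one(rnorm(3), rnorm(4), A, b = c(0, 0))
length(y)  # one value per output feature
```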

    Examples

    +
if (torch_is_installed()) { +m <- nn_bilinear(20, 30, 50) +input1 <- torch_randn(128, 20) +input2 <- torch_randn(128, 30) +output <- m(input1, input2) +print(output$size()) + +} +
    #> [1] 128 50
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_celu.html b/static/docs/reference/nn_celu.html new file mode 100644 index 0000000000000000000000000000000000000000..a74105dc471c2b194da51ba44a0b782293be2808 --- /dev/null +++ b/static/docs/reference/nn_celu.html @@ -0,0 +1,266 @@ + + + + + + + + +CELU module — nn_celu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_celu(alpha = 1, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    alpha

    the \(\alpha\) value for the CELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)) +$$

    +

    More details can be found in the paper +Continuously Differentiable Exponential Linear Units.

    +
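The CELU formula above translates directly into vectorized base R (`celu` here is a standalone sketch, not the module itself):

```r
# CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)), element-wise.
celu <- function(x, alpha = 1) {
  pmax(0, x) + pmin(0, alpha * (exp(x / alpha) - 1))
}

celu(c(-2, 0, 2))  # negative inputs saturate toward -alpha
```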

    Shape

    + + + +
      +
• Input: \((N, *)\) where \(*\) means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_celu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_conv1d.html b/static/docs/reference/nn_conv1d.html new file mode 100644 index 0000000000000000000000000000000000000000..62c7d451d2e76ef8b46f6e8eaefd36f2e36ad3ea --- /dev/null +++ b/static/docs/reference/nn_conv1d.html @@ -0,0 +1,377 @@ + + + + + + + + +Conv1D module — nn_conv1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D convolution over an input signal composed of several input +planes. +In the simplest case, the output value of the layer with input size +\((N, C_{\mbox{in}}, L)\) and output \((N, C_{\mbox{out}}, L_{\mbox{out}})\) can be +precisely described as:

    +
    + +
    nn_conv1d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1,
    +  bias = TRUE,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): Zero-padding added to both sides of +the input. Default: 0

    dilation

    (int or tuple, optional): Spacing between kernel +elements. Default: 1

    groups

    (int, optional): Number of blocked connections from input +channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the +output. Default: TRUE

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    $$ +\mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + + \sum_{k = 0}^{C_{in} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) +\star \mbox{input}(N_i, k) +$$

    +

where \(\star\) is the valid +cross-correlation operator, +\(N\) is the batch size, \(C\) denotes the number of channels, and +\(L\) is the length of the signal sequence.

      +
    • stride controls the stride for the cross-correlation, a single +number or a one-element tuple.

    • +
    • padding controls the amount of implicit zero-paddings on both sides +for padding number of points.

    • +
    • dilation controls the spacing between the kernel points; also +known as the à trous algorithm. It is harder to describe, but this +link +has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters, +of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

      • +
    • +
    + +

    Note

    + + + + +

Depending on the size of your kernel, several (of the last) +columns of the input might be lost, because it is a valid +cross-correlation, and not a full cross-correlation. +It is up to the user to add proper padding.

    +

    When groups == in_channels and out_channels == K * in_channels, +where K is a positive integer, this operation is also termed in +literature as depthwise convolution. +In other words, for an input of size \((N, C_{in}, L_{in})\), +a depthwise convolution with a depthwise multiplier K, can be constructed by arguments +\((C_{\mbox{in}}=C_{in}, C_{\mbox{out}}=C_{in} \times K, ..., \mbox{groups}=C_{in})\).

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, L_{in})\)

    • +
    • Output: \((N, C_{out}, L_{out})\) where

    • +
    + +

    $$ + L_{out} = \left\lfloor\frac{L_{in} + 2 \times \mbox{padding} - \mbox{dilation} + \times (\mbox{kernel\_size} - 1) - 1}{\mbox{stride}} + 1\right\rfloor +$$

    +
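The L_out formula above can be evaluated without torch installed; `conv1d_out_len` is a hypothetical helper written only to make the formula concrete:

```r
# Output length of a 1D convolution, following the L_out formula above.
conv1d_out_len <- function(l_in, kernel_size, stride = 1, padding = 0,
                           dilation = 1) {
  floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
}

# Matches the example below: length-50 input, kernel 3, stride 2.
conv1d_out_len(50, kernel_size = 3, stride = 2)  # -> 24
```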

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}, \mbox{kernel\_size})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \mbox{kernel\_size}}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape +(out_channels). If bias is TRUE, then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \mbox{kernel\_size}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_conv1d(16, 33, 3, stride=2) +input <- torch_randn(20, 16, 50) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_conv2d.html b/static/docs/reference/nn_conv2d.html new file mode 100644 index 0000000000000000000000000000000000000000..5c679803f873a477b2f80461f5d48336b308f685 --- /dev/null +++ b/static/docs/reference/nn_conv2d.html @@ -0,0 +1,394 @@ + + + + + + + + +Conv2D module — nn_conv2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D convolution over an input signal composed of several input +planes.

    +
    + +
    nn_conv2d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1,
    +  bias = TRUE,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): Zero-padding added to both sides of +the input. Default: 0

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    groups

    (int, optional): Number of blocked connections from input +channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the +output. Default: TRUE

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    In the simplest case, the output value of the layer with input size +\((N, C_{\mbox{in}}, H, W)\) and output \((N, C_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}})\) +can be precisely described as:

    +

    $$ +\mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + + \sum_{k = 0}^{C_{\mbox{in}} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) \star \mbox{input}(N_i, k) +$$

    +

where \(\star\) is the valid 2D cross-correlation operator, +\(N\) is the batch size, \(C\) denotes the number of channels, +\(H\) is the height of the input planes in pixels, and \(W\) is the +width in pixels.

      +
    • stride controls the stride for the cross-correlation, a single +number or a tuple.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for padding number of points for each dimension.

    • +
• dilation controls the spacing between the kernel points; also +known as the à trous algorithm. It is harder to describe, but this link +has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups= in_channels, each input channel is convolved with +its own set of filters, of size: +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

      • +
    • +
    + +

    The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the height and +width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Note

    + + + + +

Depending on the size of your kernel, several (of the last) +columns of the input might be lost, because it is a valid cross-correlation, +and not a full cross-correlation. +It is up to the user to add proper padding.

    +

When groups == in_channels and out_channels == K * in_channels, +where K is a positive integer, this operation is also termed in +literature as depthwise convolution. +In other words, for an input of size \((N, C_{in}, H_{in}, W_{in})\), +a depthwise convolution with a depthwise multiplier K, can be constructed by arguments +\((in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})\).

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting backends_cudnn_deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, H_{out}, W_{out})\) where +$$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] + \times (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] + \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor +$$

    • +
    + +
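The per-dimension H_out/W_out formulas above can be checked in base R; `conv_out_dim` is a hypothetical helper applied here to the dilated configuration from the example below (kernel c(3, 5), stride c(2, 1), padding c(4, 2), dilation c(3, 1) on a 50 x 100 input):

```r
# Output extent along one spatial dimension of a 2D convolution,
# following the floor formulas above. Illustration only.
conv_out_dim <- function(in_dim, kernel_size, stride, padding, dilation) {
  floor((in_dim + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
}

hw <- mapply(conv_out_dim,
             c(50, 100),   # H_in, W_in
             c(3, 5),      # kernel_size
             c(2, 1),      # stride
             c(4, 2),      # padding
             c(3, 1))      # dilation
hw  # -> 26 100
```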

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}\), +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape +(out_channels). If bias is TRUE, +then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +# With square kernels and equal stride +m <- nn_conv2d(16, 33, 3, stride = 2) +# non-square kernels and unequal stride and with padding +m <- nn_conv2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2)) +# non-square kernels and unequal stride and with padding and dilation +m <- nn_conv2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2), dilation=c(3, 1)) +input <- torch_randn(20, 16, 50, 100) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_conv3d.html b/static/docs/reference/nn_conv3d.html new file mode 100644 index 0000000000000000000000000000000000000000..8d9457af29c66ae64b356c891c4254f18f2d9728 --- /dev/null +++ b/static/docs/reference/nn_conv3d.html @@ -0,0 +1,382 @@ + + + + + + + + +Conv3D module — nn_conv3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D convolution over an input signal composed of several input +planes. +In the simplest case, the output value of the layer with input size \((N, C_{in}, D, H, W)\) +and output \((N, C_{out}, D_{out}, H_{out}, W_{out})\) can be precisely described as:

    +
    + +
    nn_conv3d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  dilation = 1,
    +  groups = 1,
    +  bias = TRUE,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): Zero-padding added to all three sides of the input. Default: 0

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    padding_mode

    (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    $$ + out(N_i, C_{out_j}) = bias(C_{out_j}) + + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) +$$

    +

    where \(\star\) is the valid 3D cross-correlation operator

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for padding number of points for each dimension.

    • +
• dilation controls the spacing between the kernel points; also known as the à trous algorithm. +It is harder to describe, but this link has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

    • +
    • At groups=1, all inputs are convolved to all outputs.

    • +
    • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

    • +
    • At groups= in_channels, each input channel is convolved with +its own set of filters, of size +\(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

    • +
    + +

    The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Note

    + +

Depending on the size of your kernel, several (of the last) +columns of the input might be lost, because it is a valid cross-correlation, +and not a full cross-correlation. +It is up to the user to add proper padding.

    +

    When groups == in_channels and out_channels == K * in_channels, +where K is a positive integer, this operation is also termed in +literature as depthwise convolution. +In other words, for an input of size \((N, C_{in}, D_{in}, H_{in}, W_{in})\), +a depthwise convolution with a depthwise multiplier K, can be constructed by arguments +\((in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})\).

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = TRUE. Please see the upstream PyTorch notes on randomness for background.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where +$$ + D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] + \times (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor + $$ +$$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] + \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor + $$ +$$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{dilation}[2] + \times (\mbox{kernel\_size}[2] - 1) - 1}{\mbox{stride}[2]} + 1\right\rfloor + $$

    • +
    + +
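As a quick sanity check on the output-shape formulas above, here is a hedged sketch in plain Python arithmetic (not torch itself; the helper name conv_out is ours) for the second example on this page, nn_conv3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(4, 2, 0)) applied to a 20 x 16 x 10 x 50 x 100 input:

```python
import math

def conv_out(size, k, s, p, d=1):
    # floor((L + 2*p - d*(k - 1) - 1) / s) + 1, per the shape formulas above
    return math.floor((size + 2 * p - d * (k - 1) - 1) / s) + 1

spatial = [conv_out(L, k, s, p)
           for L, k, s, p in zip((10, 50, 100), (3, 5, 2), (2, 1, 1), (4, 2, 0))]
print(spatial)  # [8, 50, 99] -> output tensor shape (20, 33, 8, 50, 99)
```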

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]}, \mbox{kernel\_size[2]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With square kernels and equal stride +m <- nn_conv3d(16, 33, 3, stride=2) +# non-square kernels and unequal stride and with padding +m <- nn_conv3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(4, 2, 0)) +input <- torch_randn(20, 16, 10, 50, 100) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_conv_transpose1d.html b/static/docs/reference/nn_conv_transpose1d.html new file mode 100644 index 0000000000000000000000000000000000000000..ebad836e9e72cab40944268b10ae74aa320b0ab2 --- /dev/null +++ b/static/docs/reference/nn_conv_transpose1d.html @@ -0,0 +1,375 @@ + + + + + + + + +ConvTranspose1D — nn_conv_transpose1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D transposed convolution operator over an input image +composed of several input planes.

    +
    + +
    nn_conv_transpose1d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  bias = TRUE,
    +  dilation = 1,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +will be added to both sides of the input. Default: 0

    output_padding

    (int or tuple, optional): Additional size added to one side +of the output shape. Default: 0

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    This module can be seen as the gradient of Conv1d with respect to its input. +It is also known as a fractionally-strided convolution or +a deconvolution (although it is not an actual deconvolution operation).

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for dilation * (kernel_size - 1) - padding number of points. See note +below for details.

    • +
    • output_padding controls the additional size added to one side +of the output shape. See note below for details.

    • +
    • dilation controls the spacing between the kernel points; also known as the +à trous algorithm. It is harder to describe, but this link +has a nice visualization of what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups = in_channels, each input channel is convolved with its own set of filters (of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

      • +
    • +
    + +

    Note

    + +

    Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

    +

    The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a nn_conv1d and a nn_conv_transpose1d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, L_{in})\)

    • +
    • Output: \((N, C_{out}, L_{out})\) where +$$ + L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} +\times (\mbox{kernel\_size} - 1) + \mbox{output\_padding} + 1 +$$

    • +
    + +
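To make the formula concrete, here is a hedged check in plain Python arithmetic (an illustration, not torch code; the helper name tconv_out is ours) for the example on this page, nn_conv_transpose1d(32, 16, 2) applied to an input of length 2 with all defaults:

```python
def tconv_out(L, k, s=1, p=0, d=1, op=0):
    # (L - 1)*stride - 2*padding + dilation*(k - 1) + output_padding + 1
    return (L - 1) * s - 2 * p + d * (k - 1) + op + 1

print(tconv_out(2, 2))  # 3 -> output tensor shape (10, 16, 3)
```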

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \mbox{kernel\_size}}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape (out_channels). +If bias is TRUE, then the values of these weights are +sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \mbox{kernel\_size}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_conv_transpose1d(32, 16, 2) +input <- torch_randn(10, 32, 2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_conv_transpose2d.html b/static/docs/reference/nn_conv_transpose2d.html new file mode 100644 index 0000000000000000000000000000000000000000..0f99642ae04c0cce0c270941d542482dc5c087b0 --- /dev/null +++ b/static/docs/reference/nn_conv_transpose2d.html @@ -0,0 +1,395 @@ + + + + + + + + +ConvTranpose2D module — nn_conv_transpose2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D transposed convolution operator over an input image +composed of several input planes.

    +
    + +
    nn_conv_transpose2d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  bias = TRUE,
    +  dilation = 1,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +will be added to both sides of each dimension in the input. Default: 0

    output_padding

    (int or tuple, optional): Additional size added to one side +of each dimension in the output shape. Default: 0

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    padding_mode

    (string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    This module can be seen as the gradient of Conv2d with respect to its input. +It is also known as a fractionally-strided convolution or +a deconvolution (although it is not an actual deconvolution operation).

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for dilation * (kernel_size - 1) - padding number of points. See note +below for details.

    • +
    • output_padding controls the additional size added to one side +of the output shape. See note below for details.

    • +
    • dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe in words, but animated visualizations of dilated convolutions show clearly what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups = in_channels, each input channel is convolved with its own set of filters (of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

      • +
    • +
    + +

    The parameters kernel_size, stride, padding, output_padding +can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimensions

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Note

    + +

    Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

    +

    The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a nn_conv2d and a nn_conv_transpose2d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.
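The stride > 1 ambiguity can be sketched with plain Python arithmetic (an illustrative check, not torch code; the helper names conv_out and tconv_out are ours): with kernel 3, stride 2, padding 1, inputs of size 11 and 12 both map to a conv output of size 6, and output_padding selects which size the transposed convolution recovers.

```python
import math

def conv_out(L, k, s, p):
    # nn_conv2d output size per spatial dimension
    return math.floor((L + 2 * p - (k - 1) - 1) / s) + 1

def tconv_out(L, k, s, p, op=0):
    # nn_conv_transpose2d output size per spatial dimension
    return (L - 1) * s - 2 * p + (k - 1) + op + 1

assert conv_out(12, 3, 2, 1) == conv_out(11, 3, 2, 1) == 6  # two inputs, one output
print(tconv_out(6, 3, 2, 1))        # 11
print(tconv_out(6, 3, 2, 1, op=1))  # 12 -- output_padding picks the larger shape
```

This is the same reason the R example passes output_size = input$size() to the upsample module: it resolves the ambiguity explicitly.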

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, H_{out}, W_{out})\) where +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{dilation}[0] +\times (\mbox{kernel\_size}[0] - 1) + \mbox{output\_padding}[0] + 1 +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride}[1] - 2 \times \mbox{padding}[1] + \mbox{dilation}[1] +\times (\mbox{kernel\_size}[1] - 1) + \mbox{output\_padding}[1] + 1 +$$

    • +
    + +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# With square kernels and equal stride +m <- nn_conv_transpose2d(16, 33, 3, stride=2) +# non-square kernels and unequal stride and with padding +m <- nn_conv_transpose2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2)) +input <- torch_randn(20, 16, 50, 100) +output <- m(input) +# exact output size can be also specified as an argument +input <- torch_randn(1, 16, 12, 12) +downsample <- nn_conv2d(16, 16, 3, stride=2, padding=1) +upsample <- nn_conv_transpose2d(16, 16, 3, stride=2, padding=1) +h <- downsample(input) +h$size() +output <- upsample(h, output_size=input$size()) +output$size() + +} +
    #> [1] 1 16 12 12
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_conv_transpose3d.html b/static/docs/reference/nn_conv_transpose3d.html new file mode 100644 index 0000000000000000000000000000000000000000..e7d60c19a7ae6ac7fabe6db3fd7e27e144298fef --- /dev/null +++ b/static/docs/reference/nn_conv_transpose3d.html @@ -0,0 +1,396 @@ + + + + + + + + +ConvTranpose3D module — nn_conv_transpose3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D transposed convolution operator over an input image composed of several input +planes.

    +
    + +
    nn_conv_transpose3d(
    +  in_channels,
    +  out_channels,
    +  kernel_size,
    +  stride = 1,
    +  padding = 0,
    +  output_padding = 0,
    +  groups = 1,
    +  bias = TRUE,
    +  dilation = 1,
    +  padding_mode = "zeros"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    in_channels

    (int): Number of channels in the input image

    out_channels

    (int): Number of channels produced by the convolution

    kernel_size

    (int or tuple): Size of the convolving kernel

    stride

    (int or tuple, optional): Stride of the convolution. Default: 1

    padding

    (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0

    output_padding

    (int or tuple, optional): Additional size added to one side +of each dimension in the output shape. Default: 0

    groups

    (int, optional): Number of blocked connections from input channels to output channels. Default: 1

    bias

    (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

    dilation

    (int or tuple, optional): Spacing between kernel elements. Default: 1

    padding_mode

    (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

    + +

    Details

    + +

    The transposed convolution operator multiplies each input value element-wise by a learnable kernel, +and sums over the outputs from all input feature planes.

    +

    This module can be seen as the gradient of Conv3d with respect to its input. +It is also known as a fractionally-strided convolution or +a deconvolution (although it is not an actual deconvolution operation).

      +
    • stride controls the stride for the cross-correlation.

    • +
    • padding controls the amount of implicit zero-paddings on both +sides for dilation * (kernel_size - 1) - padding number of points. See note +below for details.

    • +
    • output_padding controls the additional size added to one side +of the output shape. See note below for details.

    • +
    • dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe in words, but animated visualizations of dilated convolutions show clearly what dilation does.

    • +
    • groups controls the connections between inputs and outputs. +in_channels and out_channels must both be divisible by +groups. For example,

        +
      • At groups=1, all inputs are convolved to all outputs.

      • +
      • At groups=2, the operation becomes equivalent to having two conv +layers side by side, each seeing half the input channels, +and producing half the output channels, and both subsequently +concatenated.

      • +
      • At groups = in_channels, each input channel is convolved with its own set of filters (of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

      • +
    • +
    + +

    The parameters kernel_size, stride, padding, output_padding +can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimensions

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Note

    + +

    Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

    +

    The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a nn_conv3d and a nn_conv_transpose3d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

    +

    In some circumstances when using the CUDA backend with CuDNN, this operator +may select a nondeterministic algorithm to increase performance. If this is +undesirable, you can try to make the operation deterministic (potentially at +a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

    +

    Shape

    + + + +
      +
    • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where +$$ + D_{out} = (D_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{dilation}[0] +\times (\mbox{kernel\_size}[0] - 1) + \mbox{output\_padding}[0] + 1 +$$ +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride}[1] - 2 \times \mbox{padding}[1] + \mbox{dilation}[1] +\times (\mbox{kernel\_size}[1] - 1) + \mbox{output\_padding}[1] + 1 +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride}[2] - 2 \times \mbox{padding}[2] + \mbox{dilation}[2] +\times (\mbox{kernel\_size}[2] - 1) + \mbox{output\_padding}[2] + 1 +$$

    • +
    + +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape +\((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) +\(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]}, \mbox{kernel\_size[2]})\). +The values of these weights are sampled from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    • bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +# With square kernels and equal stride +m <- nn_conv_transpose3d(16, 33, 3, stride=2) +# non-square kernels and unequal stride and with padding +m <- nn_conv_transpose3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(0, 4, 2)) +input <- torch_randn(20, 16, 10, 50, 100) +output <- m(input) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_cross_entropy_loss.html b/static/docs/reference/nn_cross_entropy_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..e45c00e75d3886d8458b6455612235217b10fd55 --- /dev/null +++ b/static/docs/reference/nn_cross_entropy_loss.html @@ -0,0 +1,310 @@ + + + + + + + + +CrossEntropyLoss module — nn_cross_entropy_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    This criterion combines nn_log_softmax() and nn_nll_loss() in one single class. +It is useful when training a classification problem with C classes.

    +
    + +
    nn_cross_entropy_loss(weight = NULL, ignore_index = -100, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    weight

    (Tensor, optional): a manual rescaling weight given to each class. +If given, has to be a Tensor of size C

    ignore_index

    (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When reduction is 'mean', the loss is averaged over non-ignored targets.

    reduction

    (string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

    + +

    Details

    + +

    If provided, the optional argument weight should be a 1D Tensor +assigning weight to each of the classes.

    +

    This is particularly useful when you have an unbalanced training set. +The input is expected to contain raw, unnormalized scores for each class. +input has to be a Tensor of size either \((minibatch, C)\) or +\((minibatch, C, d_1, d_2, ..., d_K)\) +with \(K \geq 1\) for the K-dimensional case (described later).

    +

    This criterion expects a class index in the range \([0, C-1]\) as the +target for each value of a 1D tensor of size minibatch; if ignore_index +is specified, this criterion also accepts this class index (this index may not +necessarily be in the class range).

    +

    The loss can be described as: +$$ + \mbox{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) += -x[class] + \log\left(\sum_j \exp(x[j])\right) +$$ +or in the case of the weight argument being specified: +$$ + \mbox{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right) +$$
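As a hedged numeric illustration of the formula (plain Python with zero-based indexing for clarity, not the R API): for raw scores x = (1, 2, 3) with the third class as the target,

```python
import math

x = [1.0, 2.0, 3.0]            # raw, unnormalized scores for C = 3 classes
cls = 2                        # zero-based position of the target class
log_sum_exp = math.log(sum(math.exp(v) for v in x))
loss = -x[cls] + log_sum_exp   # equals -log(softmax(x)[cls])
print(round(loss, 4))          # 0.4076
```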

    +

    The losses are averaged across observations for each minibatch. +Can also be used for higher dimension inputs, such as 2D images, by providing +an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), +where \(K\) is the number of dimensions, and a target of appropriate shape +(see below).

    +

    Shape

    + + + +
      +
    • Input: \((N, C)\) where C = number of classes, or +\((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) +in the case of K-dimensional loss.

    • +
    • Target: \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), or +\((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of +K-dimensional loss.

    • +
    • Output: scalar. +If reduction is 'none', then the same size as the target: +\((N)\), or +\((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case +of K-dimensional loss.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +loss <- nn_cross_entropy_loss() +input <- torch_randn(3, 5, requires_grad=TRUE) +target <- torch_randint(low = 1, high = 5, size = 3, dtype = torch_long()) +output <- loss(input, target) +output$backward() + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_dropout.html b/static/docs/reference/nn_dropout.html new file mode 100644 index 0000000000000000000000000000000000000000..1db17faad4f50a5db6f49b231948700bd07c8c93 --- /dev/null +++ b/static/docs/reference/nn_dropout.html @@ -0,0 +1,272 @@ + + + + + + + + +Dropout module — nn_dropout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    During training, randomly zeroes some of the elements of the input +tensor with probability p using samples from a Bernoulli +distribution. Each channel will be zeroed out independently on every forward +call.

    +
    + +
    nn_dropout(p = 0.5, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    p

    probability of an element to be zeroed. Default: 0.5

    inplace

    If set to TRUE, will do this operation in-place. Default: FALSE.

    + +

    Details

    + +

    This has proven to be an effective technique for regularization and +preventing the co-adaptation of neurons as described in the paper +Improving neural networks by preventing co-adaptation of feature detectors.

    +

    Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.
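The \(\frac{1}{1-p}\) scaling keeps the expected activation unchanged between training and evaluation. A hedged simulation in plain Python (an illustration of "inverted dropout", not the torch implementation):

```python
import random

random.seed(0)
p, x, n = 0.5, 2.0, 100_000
# inverted dropout: zero with probability p, scale survivors by 1/(1 - p)
out = [x / (1 - p) if random.random() >= p else 0.0 for _ in range(n)]
mean = sum(out) / n
print(round(mean, 2))  # close to x = 2.0: the expected output matches the input
```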

    +

    Shape

    + + + +
      +
    • Input: \((*)\). Input can be of any shape

    • +
    • Output: \((*)\). Output is of the same shape as input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_dropout(p = 0.2) +input <- torch_randn(20, 16) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_dropout2d.html b/static/docs/reference/nn_dropout2d.html new file mode 100644 index 0000000000000000000000000000000000000000..517ad2cbc9552fbb776f97cdfb2ef1d5762e25d7 --- /dev/null +++ b/static/docs/reference/nn_dropout2d.html @@ -0,0 +1,276 @@ + + + + + + + + +Dropout2D module — nn_dropout2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randomly zero out entire channels (a channel is a 2D feature map, +e.g., the \(j\)-th channel of the \(i\)-th sample in the +batched input is a 2D tensor \(\mbox{input}[i, j]\)).

    +
    + +
    nn_dropout2d(p = 0.5, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    p

    (float, optional): probability of an element to be zeroed.

    inplace

    (bool, optional): If set to TRUE, will do this operation +in-place

    + +

    Details

    + +

    Each channel will be zeroed out independently on every forward call with +probability p using samples from a Bernoulli distribution. +Usually the input comes from nn_conv2d modules.

    +

    As described in the paper +Efficient Object Localization Using Convolutional Networks , +if adjacent pixels within feature maps are strongly correlated +(as is normally the case in early convolution layers) then i.i.d. dropout +will not regularize the activations and will otherwise just result +in an effective learning rate decrease. +In this case, nn_dropout2d will help promote independence between +feature maps and should be used instead.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H, W)\)

    • +
    • Output: \((N, C, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_dropout2d(p = 0.2) +input <- torch_randn(20, 16, 32, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_dropout3d.html b/static/docs/reference/nn_dropout3d.html new file mode 100644 index 0000000000000000000000000000000000000000..f03e2d62a07c488283dc32076ee9ec8990b68fdb --- /dev/null +++ b/static/docs/reference/nn_dropout3d.html @@ -0,0 +1,276 @@ + + + + + + + + +Dropout3D module — nn_dropout3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randomly zero out entire channels (a channel is a 3D feature map, +e.g., the \(j\)-th channel of the \(i\)-th sample in the +batched input is a 3D tensor \(\mbox{input}[i, j]\)).

    +
    + +
    nn_dropout3d(p = 0.5, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    p

    (float, optional): probability of an element to be zeroed.

    inplace

    (bool, optional): If set to TRUE, will do this operation +in-place

    + +

    Details

    + +

    Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. Usually the input comes from nn_conv3d modules.

    +

    As described in the paper +Efficient Object Localization Using Convolutional Networks , +if adjacent pixels within feature maps are strongly correlated +(as is normally the case in early convolution layers) then i.i.d. dropout +will not regularize the activations and will otherwise just result +in an effective learning rate decrease.

    +

    In this case, nn_dropout3d will help promote independence between +feature maps and should be used instead.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, D, H, W)\)

    • +
    • Output: \((N, C, D, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_dropout3d(p = 0.2) +input <- torch_randn(20, 16, 4, 32, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_elu.html b/static/docs/reference/nn_elu.html new file mode 100644 index 0000000000000000000000000000000000000000..36254f733bcfa5f9e5bd54b9cc0fca92d2a523fd --- /dev/null +++ b/static/docs/reference/nn_elu.html @@ -0,0 +1,264 @@ + + + + + + + + +ELU module — nn_elu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_elu(alpha = 1, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    alpha

    the \(\alpha\) value for the ELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1)) +$$
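The formula above can be reproduced with elementary tensor ops; a minimal sketch (assuming torch is installed, with the default alpha = 1):

```r
library(torch)
m <- nn_elu(alpha = 1)
x <- torch_randn(5)
# max(0, x) + min(0, alpha * (exp(x) - 1)), written with one-sided clamps
manual <- torch_clamp(x, min = 0) + torch_clamp(torch_exp(x) - 1, max = 0)
torch_allclose(m(x), manual, atol = 1e-6)
```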

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_elu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_embedding.html b/static/docs/reference/nn_embedding.html new file mode 100644 index 0000000000000000000000000000000000000000..cef68d6e71bd1081e13fb94da8873fb0aeb9dce2 --- /dev/null +++ b/static/docs/reference/nn_embedding.html @@ -0,0 +1,333 @@ + + + + + + + + +Embedding module — nn_embedding • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A simple lookup table that stores embeddings of a fixed dictionary and size. +This module is often used to store word embeddings and retrieve them using indices. +The input to the module is a list of indices, and the output is the corresponding +word embeddings.

    +
    + +
    nn_embedding(
    +  num_embeddings,
    +  embedding_dim,
    +  padding_idx = NULL,
    +  max_norm = NULL,
    +  norm_type = 2,
    +  scale_grad_by_freq = FALSE,
    +  sparse = FALSE,
    +  .weight = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    num_embeddings

    (int): size of the dictionary of embeddings

    embedding_dim

    (int): the size of each embedding vector

    padding_idx

    (int, optional): If given, pads the output with the embedding vector at padding_idx +(initialized to zeros) whenever it encounters the index.

    max_norm

    (float, optional): If given, each embedding vector with norm larger than max_norm +is renormalized to have norm max_norm.

    norm_type

(float, optional): The p of the p-norm to compute for the max_norm option. Default: 2.

    scale_grad_by_freq

(bool, optional): If TRUE, this will scale gradients by the inverse of frequency of +the words in the mini-batch. Default: FALSE.

    sparse

(bool, optional): If TRUE, the gradient w.r.t. the weight matrix will be a sparse tensor.

    .weight

    (Tensor) embeddings weights (in case you want to set it manually)

    +

    See Notes for more details regarding sparse gradients.

    + +

    Note

    + +

Keep in mind that only a limited number of optimizers support +sparse gradients: currently these are optim.SGD (CUDA and CPU), +optim.SparseAdam (CUDA and CPU), and optim.Adagrad (CPU).

    +

    With padding_idx set, the embedding vector at +padding_idx is initialized to all zeros. However, note that this +vector can be modified afterwards, e.g., using a customized +initialization method, and thus changing the vector used to pad the +output. The gradient for this vector from nn_embedding +is always zero.
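The zero gradient at padding_idx can be checked directly; a minimal sketch (assuming torch is installed — the non-zero gradient rows depend on the random weights, so only the padding row is asserted):

```r
library(torch)
emb <- nn_embedding(10, 3, padding_idx = 1)
input <- torch_tensor(matrix(c(1, 3), nrow = 1), dtype = torch_long())
out <- emb(input)
out$sum()$backward()
# the row of the weight gradient at padding_idx stays all zeros
emb$weight$grad[1, ]
```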

    +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) +initialized from \(\mathcal{N}(0, 1)\)

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((*)\), LongTensor of arbitrary shape containing the indices to extract

    • +
    • Output: \((*, H)\), where * is the input shape and \(H=\mbox{embedding\_dim}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +# an Embedding module containing 10 tensors of size 3 +embedding <- nn_embedding(10, 3) +# a batch of 2 samples of 4 indices each +input <- torch_tensor(rbind(c(1,2,4,5),c(4,3,2,9)), dtype = torch_long()) +embedding(input) +# example with padding_idx +embedding <- nn_embedding(10, 3, padding_idx=1) +input <- torch_tensor(matrix(c(1,3,1,6), nrow = 1), dtype = torch_long()) +embedding(input) + +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.0000 0.0000 0.0000 +#> -1.2943 -1.0279 0.6483 +#> 0.0000 0.0000 0.0000 +#> 0.4053 0.7866 -0.3922 +#> [ CPUFloatType{1,4,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_fractional_max_pool2d.html b/static/docs/reference/nn_fractional_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..76c0257bbdf55fb3427f83bbec367fb372ced818 --- /dev/null +++ b/static/docs/reference/nn_fractional_max_pool2d.html @@ -0,0 +1,276 @@ + + + + + + + + +Applies a 2D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fractional MaxPooling is described in detail in the paper +Fractional MaxPooling by Ben Graham

    +
    + +
    nn_fractional_max_pool2d(
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over. +Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)

    output_size

    the target output size of the image of the form oH x oW. +Can be a tuple (oH, oW) or a single number oH for a square image oH x oH

    output_ratio

    If one wants to have an output size as a ratio of the input size, this option can be given. +This has to be a number or tuple in the range (0, 1)

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool2d(). Default: FALSE

    + +

    Details

    + +

    The max-pooling operation is applied in \(kH \times kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.

    + +

    Examples

    +
if (torch_is_installed()) { +# pool of square window of size=3, and target output size 13x12 +m <- nn_fractional_max_pool2d(3, output_size = c(13, 12)) +# pool of square window and target output size being half of input image size +m <- nn_fractional_max_pool2d(3, output_ratio = c(0.5, 0.5)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_fractional_max_pool3d.html b/static/docs/reference/nn_fractional_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..a9c4ff21b3e6ce48b6fdd542020d980a58adf92b --- /dev/null +++ b/static/docs/reference/nn_fractional_max_pool3d.html @@ -0,0 +1,276 @@ + + + + + + + + +Applies a 3D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fractional MaxPooling is described in detail in the paper +Fractional MaxPooling by Ben Graham

    +
    + +
    nn_fractional_max_pool3d(
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    kernel_size

the size of the window to take a max over. +Can be a single number k (for a cubic kernel of k x k x k) or a tuple (kt, kh, kw)

    output_size

the target output size of the image of the form oT x oH x oW. +Can be a tuple (oT, oH, oW) or a single number oH for a cubic output oH x oH x oH

    output_ratio

    If one wants to have an output size as a ratio of the input size, this option can be given. +This has to be a number or tuple in the range (0, 1)

    return_indices

    if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool3d(). Default: FALSE

    + +

    Details

    + +

The max-pooling operation is applied in \(kT \times kH \times kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.

    + +

    Examples

    +
if (torch_is_installed()) { +# pool of cubic window of size=3, and target output size 13x12x11 +m <- nn_fractional_max_pool3d(3, output_size = c(13, 12, 11)) +# pool of cubic window and target output size being half of input size +m <- nn_fractional_max_pool3d(3, output_ratio = c(0.5, 0.5, 0.5)) +input <- torch_randn(20, 16, 50, 32, 16) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_gelu.html b/static/docs/reference/nn_gelu.html new file mode 100644 index 0000000000000000000000000000000000000000..09708b4337470fa699dbe69070c890f8f433d855 --- /dev/null +++ b/static/docs/reference/nn_gelu.html @@ -0,0 +1,252 @@ + + + + + + + + +GELU module — nn_gelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the Gaussian Error Linear Units function: +$$\mbox{GELU}(x) = x * \Phi(x)$$

    +
    + +
    nn_gelu()
    + + +

    Details

    + +

    where \(\Phi(x)\) is the Cumulative Distribution Function for Gaussian Distribution.
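Since \(\Phi(x) = \frac{1}{2}(1 + \mbox{erf}(x/\sqrt{2}))\) for a standard normal, GELU can be written in terms of the error function; a minimal sketch of that identity (assuming torch is installed):

```r
library(torch)
m <- nn_gelu()
x <- torch_randn(5)
# x * Phi(x), with Phi expressed via erf
manual <- x * 0.5 * (1 + torch_erf(x / sqrt(2)))
torch_allclose(m(x), manual, atol = 1e-6)
```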

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
if (torch_is_installed()) { +m <- nn_gelu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_glu.html b/static/docs/reference/nn_glu.html new file mode 100644 index 0000000000000000000000000000000000000000..2ae935a55bceb6c1d5579262a195f62da3340fc0 --- /dev/null +++ b/static/docs/reference/nn_glu.html @@ -0,0 +1,259 @@ + + + + + + + + +GLU module — nn_glu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the gated linear unit function +\({GLU}(a, b)= a \otimes \sigma(b)\) where \(a\) is the first half +of the input matrices and \(b\) is the second half.

    +
    + +
    nn_glu(dim = -1)
    + +

    Arguments

    + + + + + + +
    dim

    (int): the dimension on which to split the input. Default: -1

    + +

    Shape

    + + + +
      +
• Input: \((\ast_1, N, \ast_2)\) where * means any number of additional +dimensions

    • +
    • Output: \((\ast_1, M, \ast_2)\) where \(M=N/2\)

    • +
    + + +
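Concretely, the module splits the input in half along dim and gates the first half with the sigmoid of the second; a minimal sketch (assuming torch is installed):

```r
library(torch)
x <- torch_randn(4, 6)
halves <- torch_chunk(x, 2, dim = -1)
# a * sigmoid(b), where a and b are the two halves of the last dimension
manual <- halves[[1]] * torch_sigmoid(halves[[2]])
torch_allclose(nn_glu(dim = -1)(x), manual)
```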

    Examples

    +
    if (torch_is_installed()) { +m <- nn_glu() +input <- torch_randn(4, 2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
+ + + + + + + + diff --git a/static/docs/reference/nn_hardshrink.html b/static/docs/reference/nn_hardshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..e5e3de43a5943ef1342340b6c4d5897b9290ca0b --- /dev/null +++ b/static/docs/reference/nn_hardshrink.html @@ -0,0 +1,266 @@ + + + + + + + + +Hardshrink module — nn_hardshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hard shrinkage function element-wise:

    +
    + +
    nn_hardshrink(lambd = 0.5)
    + +

    Arguments

    + + + + + + +
    lambd

    the \(\lambda\) value for the Hardshrink formulation. Default: 0.5

    + +

    Details

    + +

    $$ + \mbox{HardShrink}(x) = + \left\{ \begin{array}{ll} +x, & \mbox{ if } x > \lambda \\ +x, & \mbox{ if } x < -\lambda \\ +0, & \mbox{ otherwise } +\end{array} +\right. +$$
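The piecewise definition above amounts to zeroing all entries within \([-\lambda, \lambda]\); a minimal sketch (assuming torch is installed, with the default lambd = 0.5):

```r
library(torch)
m <- nn_hardshrink(lambd = 0.5)
x <- torch_randn(5)
# keep x where |x| > lambda, zero elsewhere
manual <- torch_where(torch_abs(x) > 0.5, x, torch_zeros_like(x))
torch_allclose(m(x), manual)
```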

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_hardshrink() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_hardsigmoid.html b/static/docs/reference/nn_hardsigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..c5aa54e37a45b19a653d32d384574b88a68e4bda --- /dev/null +++ b/static/docs/reference/nn_hardsigmoid.html @@ -0,0 +1,257 @@ + + + + + + + + +Hardsigmoid module — nn_hardsigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_hardsigmoid()
    + + +

    Details

    + +

    $$ +\mbox{Hardsigmoid}(x) = \left\{ \begin{array}{ll} + 0 & \mbox{if~} x \le -3, \\ + 1 & \mbox{if~} x \ge +3, \\ + x / 6 + 1 / 2 & \mbox{otherwise} +\end{array} +\right. +$$
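The three branches above collapse to a single clamp of the linear ramp; a minimal sketch (assuming torch is installed):

```r
library(torch)
m <- nn_hardsigmoid()
x <- torch_randn(5) * 4  # spread values across all three branches
manual <- torch_clamp(x / 6 + 0.5, min = 0, max = 1)
torch_allclose(m(x), manual, atol = 1e-6)
```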

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_hardsigmoid() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_hardswish.html b/static/docs/reference/nn_hardswish.html new file mode 100644 index 0000000000000000000000000000000000000000..8111dc13915ce3c47cb5896eedb8ce7ed7c4cbc9 --- /dev/null +++ b/static/docs/reference/nn_hardswish.html @@ -0,0 +1,260 @@ + + + + + + + + +Hardswish module — nn_hardswish • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hardswish function, element-wise, as described in the paper: +Searching for MobileNetV3

    +
    + +
    nn_hardswish()
    + + +

    Details

    + +

    $$ \mbox{Hardswish}(x) = \left\{ + \begin{array}{ll} + 0 & \mbox{if } x \le -3, \\ + x & \mbox{if } x \ge +3, \\ + x \cdot (x + 3)/6 & \mbox{otherwise} + \end{array} + \right. $$
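Equivalently, all three branches can be written as \(x \cdot \mbox{clamp}(x + 3, 0, 6)/6\); a minimal sketch of that equivalence (assuming torch is installed):

```r
library(torch)
m <- nn_hardswish()
x <- torch_randn(5) * 4  # spread values across all three branches
manual <- x * torch_clamp(x + 3, min = 0, max = 6) / 6
torch_allclose(m(x), manual, atol = 1e-6)
```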

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +m <- nn_hardswish() +input <- torch_randn(2) +output <- m(input) +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_hardtanh.html b/static/docs/reference/nn_hardtanh.html new file mode 100644 index 0000000000000000000000000000000000000000..68c739fea0754f443851eae3275b3c34c767a2be --- /dev/null +++ b/static/docs/reference/nn_hardtanh.html @@ -0,0 +1,277 @@ + + + + + + + + +Hardtanh module — nn_hardtanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies the HardTanh function element-wise. +HardTanh is defined as:

    +
    + +
    nn_hardtanh(min_val = -1, max_val = 1, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    min_val

    minimum value of the linear region range. Default: -1

    max_val

    maximum value of the linear region range. Default: 1

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ +\mbox{HardTanh}(x) = \left\{ \begin{array}{ll} + 1 & \mbox{ if } x > 1 \\ + -1 & \mbox{ if } x < -1 \\ + x & \mbox{ otherwise } \\ +\end{array} +\right. +$$

    +

The range of the linear region \([-1, 1]\) can be adjusted using +min_val and max_val.
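With adjusted bounds, hardtanh is simply a clamp to [min_val, max_val]; a minimal sketch of that equivalence (assuming torch is installed):

```r
library(torch)
m <- nn_hardtanh(-2, 2)
x <- torch_randn(5) * 3  # values beyond the linear region get clipped
torch_allclose(m(x), torch_clamp(x, min = -2, max = 2))
```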

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_hardtanh(-2, 2) +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_identity.html b/static/docs/reference/nn_identity.html new file mode 100644 index 0000000000000000000000000000000000000000..31ff5e2949ad02b8fa41aba05553adce9fe48dea --- /dev/null +++ b/static/docs/reference/nn_identity.html @@ -0,0 +1,246 @@ + + + + + + + + +Identity module — nn_identity • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A placeholder identity operator that is argument-insensitive.

    +
    + +
    nn_identity(...)
    + +

    Arguments

    + + + + + + +
    ...

    any arguments (unused)

    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_identity(54, unused_argument1 = 0.1, unused_argument2 = FALSE) +input <- torch_randn(128, 20) +output <- m(input) +print(output$size()) + +} +
    #> [1] 128 20
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_calculate_gain.html b/static/docs/reference/nn_init_calculate_gain.html new file mode 100644 index 0000000000000000000000000000000000000000..529b7c12ad9992ebe60fc1ebd1a81c7a80e3b641 --- /dev/null +++ b/static/docs/reference/nn_init_calculate_gain.html @@ -0,0 +1,241 @@ + + + + + + + + +Calculate gain — nn_init_calculate_gain • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Return the recommended gain value for the given nonlinearity function.

    +
    + +
    nn_init_calculate_gain(nonlinearity, param = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    nonlinearity

    the non-linear function

    param

    optional parameter for the non-linear function

    + + +
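The recommended gains for common nonlinearities can be queried directly; a minimal sketch (assuming torch is installed — the values follow the usual Kaiming recommendations, e.g. \(\sqrt{2}\) for 'relu'):

```r
library(torch)
nn_init_calculate_gain("relu")                       # sqrt(2)
nn_init_calculate_gain("leaky_relu", param = 0.2)    # sqrt(2 / (1 + 0.2^2))
nn_init_calculate_gain("tanh")                       # 5/3
```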
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_constant_.html b/static/docs/reference/nn_init_constant_.html new file mode 100644 index 0000000000000000000000000000000000000000..7be0e2c24db7be6c309aa43381e5ea768ed2b39d --- /dev/null +++ b/static/docs/reference/nn_init_constant_.html @@ -0,0 +1,252 @@ + + + + + + + + +Constant initialization — nn_init_constant_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with the value val.

    +
    + +
    nn_init_constant_(tensor, val)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    val

    the value to fill the tensor with

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_constant_(w, 0.3) + +} +
    #> torch_tensor +#> 0.3000 0.3000 0.3000 0.3000 0.3000 +#> 0.3000 0.3000 0.3000 0.3000 0.3000 +#> 0.3000 0.3000 0.3000 0.3000 0.3000 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_dirac_.html b/static/docs/reference/nn_init_dirac_.html new file mode 100644 index 0000000000000000000000000000000000000000..e75204703f08d5851b238aa6a7212fd64bcd703f --- /dev/null +++ b/static/docs/reference/nn_init_dirac_.html @@ -0,0 +1,256 @@ + + + + + + + + +Dirac initialization — nn_init_dirac_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the 3, 4, 5-dimensional input Tensor with the Dirac +delta function. Preserves the identity of the inputs in Convolutional +layers, where as many input channels are preserved as possible. In case +of groups>1, each group of channels preserves identity.

    +
    + +
    nn_init_dirac_(tensor, groups = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    a 3, 4, 5-dimensional torch.Tensor

    groups

    (optional) number of groups in the conv layer (default: 1)

    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +w <- torch_empty(3, 16, 5, 5) +nn_init_dirac_(w) +} + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_eye_.html b/static/docs/reference/nn_init_eye_.html new file mode 100644 index 0000000000000000000000000000000000000000..2491551f6451d2fc235e20e358258030d280d02c --- /dev/null +++ b/static/docs/reference/nn_init_eye_.html @@ -0,0 +1,252 @@ + + + + + + + + +Eye initialization — nn_init_eye_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the 2-dimensional input Tensor with the identity matrix. +Preserves the identity of the inputs in Linear layers, where as +many inputs are preserved as possible.

    +
    + +
    nn_init_eye_(tensor)
    + +

    Arguments

    + + + + + + +
    tensor

    a 2-dimensional torch tensor.

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_eye_(w) + +} +
    #> torch_tensor +#> 1 0 0 0 0 +#> 0 1 0 0 0 +#> 0 0 1 0 0 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_kaiming_normal_.html b/static/docs/reference/nn_init_kaiming_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..89b43854deef1f0060b6c77074a7a2c1ac2f6da5 --- /dev/null +++ b/static/docs/reference/nn_init_kaiming_normal_.html @@ -0,0 +1,273 @@ + + + + + + + + +Kaiming normal initialization — nn_init_kaiming_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a +normal distribution.

    +
    + +
    nn_init_kaiming_normal_(
    +  tensor,
    +  a = 0,
    +  mode = "fan_in",
    +  nonlinearity = "leaky_relu"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    tensor

    an n-dimensional torch.Tensor

    a

    the negative slope of the rectifier used after this layer (only used +with 'leaky_relu')

    mode

    either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves +the magnitude of the variance of the weights in the forward pass. Choosing +'fan_out' preserves the magnitudes in the backwards pass.

    nonlinearity

the non-linear function. Recommended for use only with 'relu' +or 'leaky_relu' (default).

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_kaiming_normal_(w, mode = "fan_in", nonlinearity = "leaky_relu") + +} +
    #> torch_tensor +#> -0.5594 0.2408 0.3946 0.5860 -0.4834 +#> -0.0442 0.7170 -0.3028 0.4015 -0.8906 +#> -0.5157 -0.1763 0.9366 0.4640 -0.5356 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_kaiming_uniform_.html b/static/docs/reference/nn_init_kaiming_uniform_.html new file mode 100644 index 0000000000000000000000000000000000000000..6c72753398b7bd232d40e37b3474c80eeb4f4666 --- /dev/null +++ b/static/docs/reference/nn_init_kaiming_uniform_.html @@ -0,0 +1,273 @@ + + + + + + + + +Kaiming uniform initialization — nn_init_kaiming_uniform_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a +uniform distribution.

    +
    + +
    nn_init_kaiming_uniform_(
    +  tensor,
    +  a = 0,
    +  mode = "fan_in",
    +  nonlinearity = "leaky_relu"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    tensor

    an n-dimensional torch.Tensor

    a

    the negative slope of the rectifier used after this layer (only used +with 'leaky_relu')

    mode

    either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves +the magnitude of the variance of the weights in the forward pass. Choosing +'fan_out' preserves the magnitudes in the backwards pass.

    nonlinearity

the non-linear function. Recommended for use only with 'relu' +or 'leaky_relu' (default).

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_kaiming_uniform_(w, mode = "fan_in", nonlinearity = "leaky_relu") + +} +
    #> torch_tensor +#> -0.7460 0.2070 -0.1066 -0.4344 -0.4666 +#> -0.5351 -0.4524 0.0950 -1.0077 -0.2169 +#> -0.9525 0.8753 0.0070 -0.4553 -0.3445 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_normal_.html b/static/docs/reference/nn_init_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..1611f767406eff5faa6b2a455c8d7c5cdc92c6c4 --- /dev/null +++ b/static/docs/reference/nn_init_normal_.html @@ -0,0 +1,256 @@ + + + + + + + + +Normal initialization — nn_init_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Fills the input Tensor with values drawn from the normal distribution \(\mathcal{N}(\mbox{mean}, \mbox{std}^2)\).

    +
    + +
    nn_init_normal_(tensor, mean = 0, std = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    mean

    the mean of the normal distribution

    std

    the standard deviation of the normal distribution

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_normal_(w) + +} +
    #> torch_tensor +#> -1.0569 -1.0900 1.2740 -1.7728 0.0593 +#> -1.7131 -0.1353 0.8191 0.1481 -0.9940 +#> -0.7544 -1.0298 0.4237 1.4650 0.0575 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_ones_.html b/static/docs/reference/nn_init_ones_.html new file mode 100644 index 0000000000000000000000000000000000000000..ffc2743e521ffc61e13e2e2e626789430528721b --- /dev/null +++ b/static/docs/reference/nn_init_ones_.html @@ -0,0 +1,248 @@ + + + + + + + + +Ones initialization — nn_init_ones_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with the scalar value 1

    +
    + +
    nn_init_ones_(tensor)
    + +

    Arguments

    + + + + + + +
    tensor

    an n-dimensional Tensor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_ones_(w) + +} +
    #> torch_tensor +#> 1 1 1 1 1 +#> 1 1 1 1 1 +#> 1 1 1 1 1 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_orthogonal_.html b/static/docs/reference/nn_init_orthogonal_.html new file mode 100644 index 0000000000000000000000000000000000000000..8c116004b3e455bec7d7e0fb839adcfed5a2dc4d --- /dev/null +++ b/static/docs/reference/nn_init_orthogonal_.html @@ -0,0 +1,258 @@ + + + + + + + + +Orthogonal initialization — nn_init_orthogonal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with a (semi) orthogonal matrix, as +described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have +at least 2 dimensions, and for tensors with more than 2 dimensions the +trailing dimensions are flattened.

    +
    + +
    nn_init_orthogonal_(tensor, gain = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    gain

    optional scaling factor

    + + +

    Examples

    +
if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_orthogonal_(w) + +} +
    #> torch_tensor +#> -0.0261 -0.1169 0.8583 0.1248 -0.4831 +#> -0.1584 -0.1637 0.4579 -0.4078 0.7564 +#> 0.4959 -0.5308 0.0252 0.6151 0.3053 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_sparse_.html b/static/docs/reference/nn_init_sparse_.html new file mode 100644 index 0000000000000000000000000000000000000000..9a302fcc384eb82b73ba482046358f0dd1d4a0ea --- /dev/null +++ b/static/docs/reference/nn_init_sparse_.html @@ -0,0 +1,258 @@ + + + + + + + + +Sparse initialization — nn_init_sparse_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the 2D input Tensor as a sparse matrix, where the +non-zero elements will be drawn from the normal distribution +as described in Deep learning via Hessian-free optimization - Martens, J. (2010).

    +
    + +
    nn_init_sparse_(tensor, sparsity, std = 0.01)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    sparsity

    The fraction of elements in each column to be set to zero

    std

    the standard deviation of the normal distribution used to generate +the non-zero values

    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +w <- torch_empty(3, 5) +nn_init_sparse_(w, sparsity = 0.1) +} +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_trunc_normal_.html b/static/docs/reference/nn_init_trunc_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..87dbff64d0e253ed25cddbd428ff0bdfb797d76b --- /dev/null +++ b/static/docs/reference/nn_init_trunc_normal_.html @@ -0,0 +1,266 @@ + + + + + + + + +Truncated normal initialization — nn_init_trunc_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values drawn from a truncated +normal distribution.

    +
    + +
nn_init_trunc_normal_(tensor, mean = 0, std = 1, a = -2, b = 2)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    mean

    the mean of the normal distribution

    std

    the standard deviation of the normal distribution

    a

    the minimum cutoff value

    b

    the maximum cutoff value

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_trunc_normal_(w) + +} +
    #> torch_tensor +#> -2 -2 -2 -2 -2 +#> -2 -2 -2 -2 -2 +#> -2 -2 -2 -2 -2 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_uniform_.html b/static/docs/reference/nn_init_uniform_.html new file mode 100644 index 0000000000000000000000000000000000000000..98b02d9aa980cc69b87c1ad2bd93988288b61cb5 --- /dev/null +++ b/static/docs/reference/nn_init_uniform_.html @@ -0,0 +1,256 @@ + + + + + + + + +Uniform initialization — nn_init_uniform_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Fills the input Tensor with values drawn from the uniform distribution \(\mathcal{U}(a, b)\).

    +
    + +
    nn_init_uniform_(tensor, a = 0, b = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    a

    the lower bound of the uniform distribution

    b

    the upper bound of the uniform distribution

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_uniform_(w) + +} +
    #> torch_tensor +#> 0.8556 0.9331 0.3515 0.8071 0.4948 +#> 0.6075 0.9042 0.7181 0.7329 0.7563 +#> 0.2584 0.5293 0.9757 0.3030 0.3341 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_xavier_normal_.html b/static/docs/reference/nn_init_xavier_normal_.html new file mode 100644 index 0000000000000000000000000000000000000000..76687824eefe2baea6406fbd17a4817d3dbdfa74 --- /dev/null +++ b/static/docs/reference/nn_init_xavier_normal_.html @@ -0,0 +1,256 @@ + + + + + + + + +Xavier normal initialization — nn_init_xavier_normal_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal +distribution.

    +
    + +
    nn_init_xavier_normal_(tensor, gain = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    gain

    an optional scaling factor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_xavier_normal_(w) + +} +
    #> torch_tensor +#> 1.2535 -0.2197 0.5425 -3.0052 -4.2446 +#> -0.3570 -1.6970 -2.0154 -0.5348 2.7582 +#> 0.8714 -0.8924 0.7675 3.2553 -1.4333 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_xavier_uniform_.html b/static/docs/reference/nn_init_xavier_uniform_.html new file mode 100644 index 0000000000000000000000000000000000000000..38e12d6e403e06f49c1dc2765f3333eec42b5e61 --- /dev/null +++ b/static/docs/reference/nn_init_xavier_uniform_.html @@ -0,0 +1,256 @@ + + + + + + + + +Xavier uniform initialization — nn_init_xavier_uniform_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fills the input Tensor with values according to the method +described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform +distribution.

    +
    + +
    nn_init_xavier_uniform_(tensor, gain = 1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    an n-dimensional Tensor

    gain

    an optional scaling factor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_xavier_uniform_(w) + +} +
    #> torch_tensor +#> 1.3397 1.1040 -3.0453 -1.7935 0.9545 +#> -0.0194 -2.4483 2.9345 2.2750 -2.4048 +#> -0.4406 -2.2409 0.4155 -0.1573 1.9776 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_init_zeros_.html b/static/docs/reference/nn_init_zeros_.html new file mode 100644 index 0000000000000000000000000000000000000000..a2285b26f5da2b8312b44f6140b54a347182e19f --- /dev/null +++ b/static/docs/reference/nn_init_zeros_.html @@ -0,0 +1,248 @@ + + + + + + + + +Zeros initialization — nn_init_zeros_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Fills the input Tensor with the scalar value 0.

    +
    + +
    nn_init_zeros_(tensor)
    + +

    Arguments

    + + + + + + +
    tensor

    an n-dimensional tensor

    + + +

    Examples

    +
    if (torch_is_installed()) { +w <- torch_empty(3, 5) +nn_init_zeros_(w) + +} +
    #> torch_tensor +#> 0 0 0 0 0 +#> 0 0 0 0 0 +#> 0 0 0 0 0 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_leaky_relu.html b/static/docs/reference/nn_leaky_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..001286474c11f8d81acbad72cd0eff3f364adbf6 --- /dev/null +++ b/static/docs/reference/nn_leaky_relu.html @@ -0,0 +1,273 @@ + + + + + + + + +LeakyReLU module — nn_leaky_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_leaky_relu(negative_slope = 0.01, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    negative_slope

    Controls the angle of the negative slope. Default: 1e-2

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{LeakyReLU}(x) = \max(0, x) + \mbox{negative\_slope} * \min(0, x) +$$ +or

    +

$$ + \mbox{LeakyReLU}(x) = + \left\{ \begin{array}{ll} +x, & \mbox{ if } x \geq 0 \\ +\mbox{negative\_slope} \times x, & \mbox{ otherwise } +\end{array} +\right. +$$

    +
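As a quick numerical check of the piecewise definition above, using the negative_slope of 0.1 from the example on this page:

```latex
\mbox{LeakyReLU}(2) = \max(0, 2) + 0.1 \times \min(0, 2) = 2, \qquad
\mbox{LeakyReLU}(-3) = \max(0, -3) + 0.1 \times \min(0, -3) = -0.3
```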

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_leaky_relu(0.1) +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_linear.html b/static/docs/reference/nn_linear.html new file mode 100644 index 0000000000000000000000000000000000000000..e38cd0dfd2ebc3cc053a3a128318ab55f3fbe7fd --- /dev/null +++ b/static/docs/reference/nn_linear.html @@ -0,0 +1,281 @@ + + + + + + + + +Linear module — nn_linear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a linear transformation to the incoming data: y = xA^T + b

    +
    + +
    nn_linear(in_features, out_features, bias = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    in_features

    size of each input sample

    out_features

    size of each output sample

    bias

    If set to FALSE, the layer will not learn an additive bias. +Default: TRUE

    + +

    Shape

    + + + +
      +
    • Input: (N, *, H_in) where * means any number of +additional dimensions and H_in = in_features.

    • +
• Output: (N, *, H_out) where all but the last dimension +are the same shape as the input and H_out = out_features.

    • +
    + +

    Attributes

    + + + +
      +
• weight: the learnable weights of the module of shape +(out_features, in_features). The values are +initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where +\(k = \frac{1}{\mbox{in\_features}}\)

    • +
    • bias: the learnable bias of the module of shape \((\mbox{out\_features})\). +If bias is TRUE, the values are initialized from +\(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where +\(k = \frac{1}{\mbox{in\_features}}\)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_linear(20, 30) +input <- torch_randn(128, 20) +output <- m(input) +print(output$size()) + +} +
    #> [1] 128 30
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_log_sigmoid.html b/static/docs/reference/nn_log_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..e2e4d18d7fae5846808ef26063d1d95514b9fdc8 --- /dev/null +++ b/static/docs/reference/nn_log_sigmoid.html @@ -0,0 +1,253 @@ + + + + + + + + +LogSigmoid module — nn_log_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right) + $$

    +
    + +
    nn_log_sigmoid()
    + + +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_log_sigmoid() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_log_softmax.html b/static/docs/reference/nn_log_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..e54ce27ba08d86f7d19fa846a5465457062b293b --- /dev/null +++ b/static/docs/reference/nn_log_softmax.html @@ -0,0 +1,266 @@ + + + + + + + + +LogSoftmax module — nn_log_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the \(\log(\mbox{Softmax}(x))\) function to an n-dimensional +input Tensor. The LogSoftmax formulation can be simplified as:

    +
    + +
    nn_log_softmax(dim)
    + +

    Arguments

    + + + + + + +
    dim

    (int): A dimension along which LogSoftmax will be computed.

    + +

    Value

    + +

    a Tensor of the same dimension and shape as the input with +values in the range [-inf, 0)

    +

    Details

    + +

    $$ + \mbox{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right) +$$

    +
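Equivalently, the logarithm and the quotient in the formula above can be combined, which is also how the operation is typically computed in a numerically stable way (via log-sum-exp):

```latex
\mbox{LogSoftmax}(x_{i}) = x_i - \log \sum_j \exp(x_j)
```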

    Shape

    + + + +
      +
• Input: \((*)\) where * means any number of additional +dimensions

    • +
    • Output: \((*)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_log_softmax(1) +input <- torch_randn(2, 3) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_lp_pool1d.html b/static/docs/reference/nn_lp_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..7dfb6f9caac1a43a68accede75aa8490565105a8 --- /dev/null +++ b/static/docs/reference/nn_lp_pool1d.html @@ -0,0 +1,292 @@ + + + + + + + + +Applies a 1D power-average pooling over an input signal composed of several input +planes. — nn_lp_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    On each window, the function computed is:

    +

    $$ + f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} +$$

    +
    + +
    nn_lp_pool1d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling +(which is proportional to average pooling)

    kernel_size

    a single int, the size of the window

    stride

    a single int, the stride of the window. Default value is kernel_size

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + + +
      +
    • At p = \(\infty\), one gets Max Pooling

    • +
    • At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)

    • +
    + +

    Note

    + +

    If the sum to the power of p is zero, the gradient of this function is +not defined. This implementation will set the gradient to zero in this case.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, L_{in})\)

    • +
    • Output: \((N, C, L_{out})\), where

    • +
    + +

    $$ + L_{out} = \left\lfloor\frac{L_{in} - \mbox{kernel\_size}}{\mbox{stride}} + 1\right\rfloor +$$

    + +
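Plugging the settings used in the example on this page into this formula (kernel_size = 3, stride = 2, \(L_{in} = 50\)) gives the expected output length:

```latex
L_{out} = \left\lfloor \frac{50 - 3}{2} + 1 \right\rfloor = \lfloor 24.5 \rfloor = 24
```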

    Examples

    +
    if (torch_is_installed()) { +# power-2 pool of window of length 3, with stride 2. +m <- nn_lp_pool1d(2, 3, stride=2) +input <- torch_randn(20, 16, 50) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_lp_pool2d.html b/static/docs/reference/nn_lp_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..739d71f09e556a3f46504e0925cb659f8dceb748 --- /dev/null +++ b/static/docs/reference/nn_lp_pool2d.html @@ -0,0 +1,304 @@ + + + + + + + + +Applies a 2D power-average pooling over an input signal composed of several input +planes. — nn_lp_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    On each window, the function computed is:

    +

    $$ + f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} +$$

    +
    + +
    nn_lp_pool2d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling +(which is proportional to average pooling)

    kernel_size

    the size of the window

    stride

    the stride of the window. Default value is kernel_size

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + + +
      +
    • At p = \(\infty\), one gets Max Pooling

    • +
    • At p = 1, one gets Sum Pooling (which is proportional to average pooling)

    • +
    + +

    The parameters kernel_size, stride can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Note

    + +

    If the sum to the power of p is zero, the gradient of this function is +not defined. This implementation will set the gradient to zero in this case.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} - \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor +$$ +$$ + W_{out} = \left\lfloor\frac{W_{in} - \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor +$$

    + +
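For the square-window pool from the example on this page (norm_type = 2, kernel_size = 3, stride = 2) applied to a \(50 \times 32\) input, these formulas give:

```latex
H_{out} = \left\lfloor \frac{50 - 3}{2} + 1 \right\rfloor = 24, \qquad
W_{out} = \left\lfloor \frac{32 - 3}{2} + 1 \right\rfloor = 15
```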

    Examples

    +
    if (torch_is_installed()) { + +# power-2 pool of square window of size=3, stride=2 +m <- nn_lp_pool2d(2, 3, stride=2) +# pool of non-square window of power 1.2 +m <- nn_lp_pool2d(1.2, c(3, 2), stride=c(2, 1)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_max_pool1d.html b/static/docs/reference/nn_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..1169c25a17aecaa10ee05654e0be6f17fe42d479 --- /dev/null +++ b/static/docs/reference/nn_max_pool1d.html @@ -0,0 +1,301 @@ + + + + + + + + +MaxPool1D module — nn_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D max pooling over an input signal composed of several input +planes.

    +
    + +
    nn_max_pool1d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  return_indices = FALSE,
    +  ceil_mode = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    dilation

    a parameter that controls the stride of elements in the window

    return_indices

    if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool1d() later.

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + +

    In the simplest case, the output value of the layer with input size \((N, C, L)\) +and output \((N, C, L_{out})\) can be precisely described as:

    +

    $$ + out(N_i, C_j, k) = \max_{m=0, \ldots, \mbox{kernel\_size} - 1} +input(N_i, C_j, stride \times k + m) +$$

    +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points. dilation controls the spacing between the kernel points. +It is harder to describe, but this link +has a nice visualization of what dilation does.

    +

    Shape

    + + + +
      +
    • Input: \((N, C, L_{in})\)

    • +
    • Output: \((N, C, L_{out})\), where

    • +
    + +

    $$ + L_{out} = \left\lfloor \frac{L_{in} + 2 \times \mbox{padding} - \mbox{dilation} + \times (\mbox{kernel\_size} - 1) - 1}{\mbox{stride}} + 1\right\rfloor +$$

    + +
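With the settings from the example on this page (kernel_size = 3, stride = 2, padding = 0, dilation = 1, \(L_{in} = 50\)), this works out to:

```latex
L_{out} = \left\lfloor \frac{50 + 0 - 1 \times (3 - 1) - 1}{2} + 1 \right\rfloor
        = \left\lfloor \frac{47}{2} + 1 \right\rfloor = 24
```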

    Examples

    +
    if (torch_is_installed()) { +# pool of size=3, stride=2 +m <- nn_max_pool1d(3, stride=2) +input <- torch_randn(20, 16, 50) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_max_pool2d.html b/static/docs/reference/nn_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..716673fed0be1df50c87e577e5bc532c4d85f78d --- /dev/null +++ b/static/docs/reference/nn_max_pool2d.html @@ -0,0 +1,316 @@ + + + + + + + + +MaxPool2D module — nn_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D max pooling over an input signal composed of several input +planes.

    +
    + +
    nn_max_pool2d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  return_indices = FALSE,
    +  ceil_mode = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on both sides

    dilation

    a parameter that controls the stride of elements in the window

    return_indices

    if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool2d() later.

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + +

    In the simplest case, the output value of the layer with input size \((N, C, H, W)\), +output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) +can be precisely described as:

    +

    $$ +\begin{array}{ll} +out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ +& \mbox{input}(N_i, C_j, \mbox{stride[0]} \times h + m, + \mbox{stride[1]} \times w + n) +\end{array} +$$

    +

    If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points. dilation controls the spacing between the kernel points. +It is harder to describe, but this link has a nice visualization of what dilation does.

    +

    The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the height and width dimension

    • +
    • a tuple of two ints -- in which case, the first int is used for the height dimension, +and the second int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} + 2 * \mbox{padding[0]} - \mbox{dilation[0]} + \times (\mbox{kernel\_size[0]} - 1) - 1}{\mbox{stride[0]}} + 1\right\rfloor +$$

    +

    $$ + W_{out} = \left\lfloor\frac{W_{in} + 2 * \mbox{padding[1]} - \mbox{dilation[1]} + \times (\mbox{kernel\_size[1]} - 1) - 1}{\mbox{stride[1]}} + 1\right\rfloor +$$

    + +
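For the square-window pool in the example (kernel_size = 3, stride = 2, padding = 0, dilation = 1) applied to a \(50 \times 32\) input, these reduce to:

```latex
H_{out} = \left\lfloor \frac{50 - 3}{2} + 1 \right\rfloor = 24, \qquad
W_{out} = \left\lfloor \frac{32 - 3}{2} + 1 \right\rfloor = 15
```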

    Examples

    +
    if (torch_is_installed()) { +# pool of square window of size=3, stride=2 +m <- nn_max_pool2d(3, stride=2) +# pool of non-square window +m <- nn_max_pool2d(c(3, 2), stride=c(2, 1)) +input <- torch_randn(20, 16, 50, 32) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_max_pool3d.html b/static/docs/reference/nn_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..cc814e36d5f7134bae4d41be2831c9fc3567321f --- /dev/null +++ b/static/docs/reference/nn_max_pool3d.html @@ -0,0 +1,321 @@ + + + + + + + + +Applies a 3D max pooling over an input signal composed of several input +planes. — nn_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), +output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) +can be precisely described as:

    +
    + +
    nn_max_pool3d(
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  return_indices = FALSE,
    +  ceil_mode = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    kernel_size

    the size of the window to take a max over

    stride

    the stride of the window. Default value is kernel_size

    padding

    implicit zero padding to be added on all three sides

    dilation

    a parameter that controls the stride of elements in the window

    return_indices

if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool3d() later.

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape

    + +

    Details

    + +

    $$ +\begin{array}{ll} +\mbox{out}(N_i, C_j, d, h, w) = & \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ + & \mbox{input}(N_i, C_j, \mbox{stride[0]} \times d + k, \mbox{stride[1]} \times h + m, \mbox{stride[2]} \times w + n) +\end{array} +$$

    +

If padding is non-zero, then the input is implicitly zero-padded on both sides +for padding number of points. dilation controls the spacing between the kernel points. +It is harder to describe, but this link has a nice visualization of what dilation does. +The parameters kernel_size, stride, padding, dilation can either be:

      +
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +
    • a tuple of three ints -- in which case, the first int is used for the depth dimension, +the second int for the height dimension and the third int for the width dimension

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where +$$ + D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] \times + (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor +$$

    • +
    + +

    $$ + H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] \times + (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor +$$

    +

    $$ + W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{dilation}[2] \times + (\mbox{kernel\_size}[2] - 1) - 1}{\mbox{stride}[2]} + 1\right\rfloor +$$

    + +
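For the square-window pool in the example (kernel_size = 3, stride = 2, padding = 0, dilation = 1) applied to a \(50 \times 44 \times 31\) input, these formulas give:

```latex
D_{out} = \left\lfloor \frac{50 - 3}{2} + 1 \right\rfloor = 24, \qquad
H_{out} = \left\lfloor \frac{44 - 3}{2} + 1 \right\rfloor = 21, \qquad
W_{out} = \left\lfloor \frac{31 - 3}{2} + 1 \right\rfloor = 15
```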

    Examples

    +
    if (torch_is_installed()) { +# pool of square window of size=3, stride=2 +m <- nn_max_pool3d(3, stride=2) +# pool of non-square window +m <- nn_max_pool3d(c(3, 2, 2), stride=c(2, 1, 2)) +input <- torch_randn(20, 16, 50,44, 31) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_max_unpool1d.html b/static/docs/reference/nn_max_unpool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..ce9b0eb360d273ef6a37803b2e10099be865e55c --- /dev/null +++ b/static/docs/reference/nn_max_unpool1d.html @@ -0,0 +1,309 @@ + + + + + + + + +Computes a partial inverse of MaxPool1d. — nn_max_unpool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    MaxPool1d is not fully invertible, since the non-maximal values are lost. +MaxUnpool1d takes in as input the output of MaxPool1d +including the indices of the maximal values and computes a partial inverse +in which all non-maximal values are set to zero.

    +
    + +
    nn_max_unpool1d(kernel_size, stride = NULL, padding = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    kernel_size

    (int or tuple): Size of the max pooling window.

    stride

    (int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

    padding

    (int or tuple): Padding that was added to the input

    + +

    Note

    + +

    MaxPool1d can map several input sizes to the same output +sizes. Hence, the inversion process can get ambiguous. +To accommodate this, you can provide the needed output size +as an additional argument output_size in the forward call. +See the Inputs and Example below.

    +

    Inputs

    + + + +
      +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool1d()

    • +
    • output_size (optional): the targeted output size

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in})\)

    • +
    • Output: \((N, C, H_{out})\), where +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{kernel\_size}[0] +$$ +or as given by output_size in the call operator

    • +
    + + +
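As a sanity check against the example on this page: an input of length 8 pooled with kernel_size = 2, stride = 2 has length 4, and unpooling (padding = 0) restores the original length:

```latex
H_{out} = (4 - 1) \times 2 - 2 \times 0 + 2 = 8
```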

    Examples

    +
    if (torch_is_installed()) { +pool <- nn_max_pool1d(2, stride=2, return_indices=TRUE) +unpool <- nn_max_unpool1d(2, stride=2) + +input <- torch_tensor(array(1:8/1, dim = c(1,1,8))) +out <- pool(input) +unpool(out[[1]], out[[2]]) + +# Example showcasing the use of output_size +input <- torch_tensor(array(1:8/1, dim = c(1,1,8))) +out <- pool(input) +unpool(out[[1]], out[[2]], output_size=input$size()) +unpool(out[[1]], out[[2]]) + +} +
    #> torch_tensor +#> (1,1,.,.) = +#> 0 +#> 2 +#> 0 +#> 4 +#> 0 +#> 6 +#> 0 +#> 8 +#> [ CPUFloatType{1,1,8,1} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_max_unpool2d.html b/static/docs/reference/nn_max_unpool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..03109ac8473f11086825a6d67f9df33ae5214bab --- /dev/null +++ b/static/docs/reference/nn_max_unpool2d.html @@ -0,0 +1,306 @@ + + + + + + + + +Computes a partial inverse of MaxPool2d. — nn_max_unpool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    MaxPool2d is not fully invertible, since the non-maximal values are lost. +MaxUnpool2d takes in as input the output of MaxPool2d +including the indices of the maximal values and computes a partial inverse +in which all non-maximal values are set to zero.

    +
    + +
    nn_max_unpool2d(kernel_size, stride = NULL, padding = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    kernel_size

    (int or tuple): Size of the max pooling window.

    stride

    (int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

    padding

    (int or tuple): Padding that was added to the input

    + +

    Note

    + +

    MaxPool2d can map several input sizes to the same output +sizes. Hence, the inversion process can get ambiguous. +To accommodate this, you can provide the needed output size +as an additional argument output_size in the forward call. +See the Inputs and Example below.

    +

    Inputs

    + + + +
      +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool2d()

    • +
    • output_size (optional): the targeted output size

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, H_{out}, W_{out})\), where +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride[0]} - 2 \times \mbox{padding[0]} + \mbox{kernel\_size[0]} +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride[1]} - 2 \times \mbox{padding[1]} + \mbox{kernel\_size[1]} +$$ +or as given by output_size in the call operator

    • +
    + + +
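As a sanity check with the settings from the example (kernel_size = 2, stride = 2, padding = 0): the \(4 \times 4\) input pools down to \(2 \times 2\), and unpooling restores it:

```latex
H_{out} = W_{out} = (2 - 1) \times 2 - 2 \times 0 + 2 = 4
```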

    Examples

    +
    if (torch_is_installed()) { + +pool <- nn_max_pool2d(2, stride=2, return_indices=TRUE) +unpool <- nn_max_unpool2d(2, stride=2) +input <- torch_randn(1,1,4,4) +out <- pool(input) +unpool(out[[1]], out[[2]]) + +# specify a different output size than input size +unpool(out[[1]], out[[2]], output_size=c(1, 1, 5, 5)) + +} +
    #> torch_tensor +#> (1,1,.,.) = +#> 0.0000 0.0000 0.0000 1.6626 1.3884 +#> 0.0000 0.0000 0.0000 0.4660 0.0000 +#> 0.0000 1.6033 0.0000 0.0000 0.0000 +#> 0.0000 0.0000 0.0000 0.0000 0.0000 +#> 0.0000 0.0000 0.0000 0.0000 0.0000 +#> [ CPUFloatType{1,1,5,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_max_unpool3d.html b/static/docs/reference/nn_max_unpool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..fc4e8e4967a684090d933eaf60ad1d66d6a3bd8d --- /dev/null +++ b/static/docs/reference/nn_max_unpool3d.html @@ -0,0 +1,300 @@ + + + + + + + + +Computes a partial inverse of MaxPool3d. — nn_max_unpool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    MaxPool3d is not fully invertible, since the non-maximal values are lost. +MaxUnpool3d takes as input the output of MaxPool3d, +including the indices of the maximal values, and computes a partial inverse +in which all non-maximal values are set to zero.

    +
    + +
    nn_max_unpool3d(kernel_size, stride = NULL, padding = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    kernel_size

    (int or tuple): Size of the max pooling window.

    stride

    (int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

    padding

    (int or tuple): Padding that was added to the input

    + +

    Note

    + +

    MaxPool3d can map several input sizes to the same output +sizes. Hence, the inversion process can be ambiguous. +To accommodate this, you can provide the needed output size +as an additional argument output_size in the forward call. +See the Inputs section below.

    +

    Inputs

    + + + +
      +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool3d()

    • +
    • output_size (optional): the targeted output size

    • +
    + +

    Shape

    + + + +
      +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • +
    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where

    • +
    + +

    $$ + D_{out} = (D_{in} - 1) \times \mbox{stride[0]} - 2 \times \mbox{padding[0]} + \mbox{kernel\_size[0]} +$$ +$$ + H_{out} = (H_{in} - 1) \times \mbox{stride[1]} - 2 \times \mbox{padding[1]} + \mbox{kernel\_size[1]} +$$ +$$ + W_{out} = (W_{in} - 1) \times \mbox{stride[2]} - 2 \times \mbox{padding[2]} + \mbox{kernel\_size[2]} +$$

    +

    or as given by output_size in the call operator

    + +

    Examples

    +
    if (torch_is_installed()) { + +# pool of square window of size=3, stride=2 +pool <- nn_max_pool3d(3, stride=2, return_indices=TRUE) +unpool <- nn_max_unpool3d(3, stride=2) +out <- pool(torch_randn(20, 16, 51, 33, 15)) +unpooled_output <- unpool(out[[1]], out[[2]]) +unpooled_output$size() + +} +
    #> [1] 20 16 51 33 15
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_module.html b/static/docs/reference/nn_module.html new file mode 100644 index 0000000000000000000000000000000000000000..33a1f0ae436d0d4c4e273bcef5659d7ca622eb35 --- /dev/null +++ b/static/docs/reference/nn_module.html @@ -0,0 +1,276 @@ + + + + + + + + +Base class for all neural network modules. — nn_module • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Your models should also subclass this class.

    +
    + +
    nn_module(
    +  classname = NULL,
    +  inherit = nn_Module,
    +  ...,
    +  parent_env = parent.frame()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    classname

    an optional name for the module

    inherit

    an optional module to inherit from

    ...

    the module's method implementations, e.g. initialize and forward

    parent_env

    passed to R6::R6Class().

    + +

    Details

    + +

    Modules can also contain other Modules, allowing you to nest them in a tree +structure. You can assign the submodules as regular attributes.

    + +

    Examples

    +
    if (torch_is_installed()) { +model <- nn_module( + initialize = function() { + self$conv1 <- nn_conv2d(1, 20, 5) + self$conv2 <- nn_conv2d(20, 20, 5) + }, + forward = function(input) { + input <- self$conv1(input) + input <- nnf_relu(input) + input <- self$conv2(input) + input <- nnf_relu(input) + input + } +) + +} +
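    A usage sketch for a module like the one above (the 28×28 input size is an assumption; each 5×5 convolution shrinks a 28×28 image to 24×24, then 20×20):

```r
library(torch)

net <- nn_module(
  initialize = function() {
    self$conv1 <- nn_conv2d(1, 20, 5)
    self$conv2 <- nn_conv2d(20, 20, 5)
  },
  forward = function(input) {
    input <- nnf_relu(self$conv1(input))
    nnf_relu(self$conv2(input))
  }
)

model <- net()                  # instantiate the module generator
x <- torch_randn(1, 1, 28, 28)  # one 1-channel 28x28 image (assumed size)
y <- model(x)
y$size()                        # 1 20 20 20
```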
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_module_list.html b/static/docs/reference/nn_module_list.html new file mode 100644 index 0000000000000000000000000000000000000000..852cb4bad9bce01ca166a8be087f58847d6a2b32 --- /dev/null +++ b/static/docs/reference/nn_module_list.html @@ -0,0 +1,257 @@ + + + + + + + + +Holds submodules in a list. — nn_module_list • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    nn_module_list can be indexed like a regular R list, but +modules it contains are properly registered, and will be visible to all +nn_module methods.

    +
    + +
    nn_module_list(modules = list())
    + +

    Arguments

    + + + + + + +
    modules

    a list of modules to add

    + + +

    Examples

    +
    if (torch_is_installed()) { + +my_module <- nn_module( + initialize = function() { + self$linears <- nn_module_list(lapply(1:10, function(x) nn_linear(10, 10))) + }, + forward = function(x) { + for (i in 1:length(self$linears)) + x <- self$linears[[i]](x) + x + } +) + +} +
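    A sketch of instantiating and calling a module like the one above (the batch size and the 10-feature input are assumptions; each nn_linear(10, 10) preserves the feature dimension):

```r
library(torch)

my_module <- nn_module(
  initialize = function() {
    self$linears <- nn_module_list(lapply(1:10, function(x) nn_linear(10, 10)))
  },
  forward = function(x) {
    for (i in 1:length(self$linears))
      x <- self$linears[[i]](x)
    x
  }
)

net <- my_module()
x <- torch_randn(2, 10)  # batch of 2, 10 features (assumed shape)
y <- net(x)
y$size()                 # 2 10
length(net$linears)      # 10 registered submodules
```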
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_multihead_attention.html b/static/docs/reference/nn_multihead_attention.html new file mode 100644 index 0000000000000000000000000000000000000000..277b3d21d4a7afbe00c9b7b659dd302ee091ac9a --- /dev/null +++ b/static/docs/reference/nn_multihead_attention.html @@ -0,0 +1,330 @@ + + + + + + + + +MultiHead attention — nn_multihead_attention • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Allows the model to jointly attend to information +from different representation subspaces. +See reference: Attention Is All You Need

    +
    + +
    nn_multihead_attention(
    +  embed_dim,
    +  num_heads,
    +  dropout = 0,
    +  bias = TRUE,
    +  add_bias_kv = FALSE,
    +  add_zero_attn = FALSE,
    +  kdim = NULL,
    +  vdim = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    embed_dim

    total dimension of the model.

    num_heads

    parallel attention heads.

    dropout

    a Dropout layer on attn_output_weights. Default: 0.0.

    bias

    add bias as module parameter. Default: True.

    add_bias_kv

    add bias to the key and value sequences at dim=0.

    add_zero_attn

    add a new batch of zeros to the key and +value sequences at dim=1.

    kdim

    total number of features in key. Default: NULL

    vdim

    total number of features in value. Default: NULL. +Note: if kdim and vdim are NULL, they will be set to embed_dim such that +query, key, and value have the same number of features.

    + +

    Details

    + +

    $$ + \mbox{MultiHead}(Q, K, V) = \mbox{Concat}(head_1,\dots,head_h)W^O, \quad +\mbox{where } head_i = \mbox{Attention}(QW_i^Q, KW_i^K, VW_i^V) +$$

    +

    Shape

    + + + + +

    Inputs:

      +
    • query: \((L, N, E)\) where L is the target sequence length, N is the batch size, E is +the embedding dimension.

    • +
    • key: \((S, N, E)\), where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    • +
    • value: \((S, N, E)\) where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    • +
    • key_padding_mask: \((N, S)\) where N is the batch size, S is the source sequence length. +If a ByteTensor is provided, the non-zero positions will be ignored while the zero +positions will be unchanged. If a BoolTensor is provided, the positions with the +value of True will be ignored while the positions with the value of False will be unchanged.

    • +
    • attn_mask: 2D mask \((L, S)\) where L is the target sequence length, S is the source sequence length. +3D mask \((N*num_heads, L, S)\) where N is the batch size, L is the target sequence length, +S is the source sequence length. attn_mask ensures that position i may attend only the unmasked +positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend +while the zero positions will be unchanged. If a BoolTensor is provided, positions with True +are not allowed to attend while positions with False will be unchanged. If a FloatTensor +is provided, it will be added to the attention weight.

    • +
    + +

    Outputs:

      +
    • attn_output: \((L, N, E)\) where L is the target sequence length, N is the batch size, +E is the embedding dimension.

    • +
    • attn_output_weights: \((N, L, S)\) where N is the batch size, +L is the target sequence length, S is the source sequence length.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +multihead_attn = nn_multihead_attention(embed_dim, num_heads) +out <- multihead_attn(query, key, value) +attn_output <- out[[1]] +attn_output_weights <- out[[2]] +} + +} +
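    The example above is schematic, so here is a concrete sketch with small assumed dimensions (embed_dim = 8, num_heads = 2, batch size 3; embed_dim must be divisible by num_heads). The shapes follow the Shape section: query is \((L, N, E)\), key and value are \((S, N, E)\):

```r
library(torch)

embed_dim <- 8
num_heads <- 2
mha <- nn_multihead_attention(embed_dim, num_heads)

query <- torch_randn(10, 3, embed_dim)  # target length L = 10, batch N = 3
key   <- torch_randn(12, 3, embed_dim)  # source length S = 12
value <- torch_randn(12, 3, embed_dim)

out <- mha(query, key, value)
out[[1]]$size()  # attn_output:         10 3 8
out[[2]]$size()  # attn_output_weights:  3 10 12
```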
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_prelu.html b/static/docs/reference/nn_prelu.html new file mode 100644 index 0000000000000000000000000000000000000000..0002809059d8ec7e5cc5b151d6739c1472ec9c61 --- /dev/null +++ b/static/docs/reference/nn_prelu.html @@ -0,0 +1,303 @@ + + + + + + + + +PReLU module — nn_prelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{PReLU}(x) = \max(0,x) + a * \min(0,x) +$$ +or +$$ + \mbox{PReLU}(x) = + \left\{ \begin{array}{ll} +x, & \mbox{ if } x \geq 0 \\ +ax, & \mbox{ otherwise } +\end{array} +\right. +$$

    +
    + +
    nn_prelu(num_parameters = 1, init = 0.25)
    + +

    Arguments

    + + + + + + + + + + +
    num_parameters

    (int): number of \(a\) to learn. +Although it takes an int as input, only two values are legitimate: +1, or the number of channels of the input. Default: 1

    init

    (float): the initial value of \(a\). Default: 0.25

    + +

    Details

    + +

    Here \(a\) is a learnable parameter. When called without arguments, nn_prelu() uses a single +parameter \(a\) across all input channels. If called with nn_prelu(nChannels), +a separate \(a\) is used for each input channel.

    +

    Note

    + +

    weight decay should not be used when learning \(a\) for good performance.

    +

    The channel dim is the 2nd dim of the input. When the input has fewer than 2 dims, there is +no channel dim and the number of channels is 1.

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + +

    Attributes

    + + + +
      +
    • weight (Tensor): the learnable weights of shape (num_parameters).

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_prelu() +input <- torch_randn(2) +output <- m(input) + +} +
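    A sketch of the per-channel variant (the channel count of 3 and the input shape are assumptions). The weight shape reflects the num_parameters argument, as described in the Attributes section:

```r
library(torch)

# a single shared slope a (the default)
m1 <- nn_prelu()
m1$weight$size()  # 1

# one slope per channel; the channel dim is the 2nd dim of the input
m3 <- nn_prelu(num_parameters = 3)
x <- torch_randn(4, 3, 8, 8)  # batch 4, 3 channels (assumed shape)
out <- m3(x)
m3$weight$size()  # 3
```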
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_relu.html b/static/docs/reference/nn_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..5fa42bd92f8737ea2555e749e5e928a31f5bc9c1 --- /dev/null +++ b/static/docs/reference/nn_relu.html @@ -0,0 +1,260 @@ + + + + + + + + +ReLU module — nn_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the rectified linear unit function element-wise +$$\mbox{ReLU}(x) = (x)^+ = \max(0, x)$$

    +
    + +
    nn_relu(inplace = FALSE)
    + +

    Arguments

    + + + + + + +
    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_relu() +input <- torch_randn(2) +m(input) + +} +
    #> torch_tensor +#> 0.1347 +#> 0.1303 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_relu6.html b/static/docs/reference/nn_relu6.html new file mode 100644 index 0000000000000000000000000000000000000000..df28ccbf895939c836e0ee6978bb72c747ea0ad8 --- /dev/null +++ b/static/docs/reference/nn_relu6.html @@ -0,0 +1,260 @@ + + + + + + + + +ReLu6 module — nn_relu6 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_relu6(inplace = FALSE)
    + +

    Arguments

    + + + + + + +
    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{ReLU6}(x) = \min(\max(0,x), 6) +$$

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_relu6() +input <- torch_randn(2) +output <- m(input) + +} +
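    The clamping behaviour of the formula above can be seen directly on a fixed input (the particular values are an illustration): negatives are clipped to 0 and values above 6 are clipped to 6.

```r
library(torch)

m <- nn_relu6()
x <- torch_tensor(c(-3, 0.5, 8))
m(x)  # 0.0  0.5  6.0
```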
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_rnn.html b/static/docs/reference/nn_rnn.html new file mode 100644 index 0000000000000000000000000000000000000000..c8aa82720f3b51c15e2c2b428b7be2fac7f05435 --- /dev/null +++ b/static/docs/reference/nn_rnn.html @@ -0,0 +1,479 @@ + + + + + + + + +RNN module — nn_rnn • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a multi-layer Elman RNN with \(\tanh\) or \(\mbox{ReLU}\) non-linearity +to an input sequence.

    +
    + +
    nn_rnn(
    +  input_size,
    +  hidden_size,
    +  num_layers = 1,
    +  nonlinearity = NULL,
    +  bias = TRUE,
    +  batch_first = FALSE,
    +  dropout = 0,
    +  bidirectional = FALSE,
    +  ...
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input_size

    The number of expected features in the input x

    hidden_size

    The number of features in the hidden state h

    num_layers

    Number of recurrent layers. E.g., setting num_layers=2 +would mean stacking two RNNs together to form a stacked RNN, +with the second RNN taking in outputs of the first RNN and +computing the final results. Default: 1

    nonlinearity

    The non-linearity to use. Can be either 'tanh' or +'relu'. Default: 'tanh'

    bias

    If FALSE, then the layer does not use bias weights b_ih and +b_hh. Default: TRUE

    batch_first

    If TRUE, then the input and output tensors are provided +as (batch, seq, feature). Default: FALSE

    dropout

    If non-zero, introduces a Dropout layer on the outputs of each +RNN layer except the last layer, with dropout probability equal to +dropout. Default: 0

    bidirectional

    If TRUE, becomes a bidirectional RNN. Default: FALSE

    ...

    other arguments that can be passed to the super class.

    + +

    Details

    + +

    For each element in the input sequence, each layer computes the following +function:

    +

    $$ +h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh}) +$$

    +

    where \(h_t\) is the hidden state at time t, \(x_t\) is +the input at time t, and \(h_{(t-1)}\) is the hidden state of the +previous layer at time t-1 or the initial hidden state at time 0. +If nonlinearity is 'relu', then \(\mbox{ReLU}\) is used instead of +\(\tanh\).

    +

    Inputs

    + + + +
      +
    • input of shape (seq_len, batch, input_size): tensor containing the features +of the input sequence. The input can also be a packed variable length +sequence.

    • +
    • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor +containing the initial hidden state for each element in the batch. +Defaults to zero if not provided. If the RNN is bidirectional, +num_directions should be 2, else it should be 1.

    • +
    + +

    Outputs

    + + + +
      +
    • output of shape (seq_len, batch, num_directions * hidden_size): tensor +containing the output features (h_t) from the last layer of the RNN, +for each t. If an nn_packed_sequence has +been given as the input, the output will also be a packed sequence. +For the unpacked case, the directions can be separated +using output$view(seq_len, batch, num_directions, hidden_size), +with forward and backward being direction 0 and 1 respectively. +Similarly, the directions can be separated in the packed case.

    • +
    • h_n of shape (num_layers * num_directions, batch, hidden_size): tensor +containing the hidden state for t = seq_len. +Like output, the layers can be separated using +h_n$view(num_layers, num_directions, batch, hidden_size).

    • +
    + +

    Shape

    + + + +
      +
    • Input1: \((L, N, H_{in})\) tensor containing input features where +\(H_{in}=\mbox{input\_size}\) and L represents a sequence length.

    • +
    • Input2: \((S, N, H_{out})\) tensor +containing the initial hidden state for each element in the batch, +where \(S=\mbox{num\_layers} * \mbox{num\_directions}\) and +\(H_{out}=\mbox{hidden\_size}\). Defaults to zero if not provided. +If the RNN is bidirectional, num_directions should be 2, else it should be 1.

    • +
    • Output1: \((L, N, H_{all})\) where \(H_{all}=\mbox{num\_directions} * \mbox{hidden\_size}\)

    • +
    • Output2: \((S, N, H_{out})\) tensor containing the next hidden state +for each element in the batch

    • +
    + +

    Attributes

    + + + +
      +
    • weight_ih_l[k]: the learnable input-hidden weights of the k-th layer, +of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is +(hidden_size, num_directions * hidden_size)

    • +
    • weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer, +of shape (hidden_size, hidden_size)

    • +
    • bias_ih_l[k]: the learnable input-hidden bias of the k-th layer, +of shape (hidden_size)

    • +
    • bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer, +of shape (hidden_size)

    • +
    + +

    Note

    + + + + +

    All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) +where \(k = \frac{1}{\mbox{hidden\_size}}\)

    + +

    Examples

    +
    if (torch_is_installed()) { +rnn <- nn_rnn(10, 20, 2) +input <- torch_randn(5, 3, 10) +h0 <- torch_randn(2, 3, 20) +rnn(input, h0) + +} +
    #> [[1]] +#> torch_tensor +#> (1,.,.) = +#> Columns 1 to 9 0.0589 0.6076 0.5733 0.4325 -0.7804 -0.5884 -0.7990 -0.5201 -0.5453 +#> -0.3423 -0.1028 0.8203 -0.4395 -0.8638 -0.2453 -0.7419 -0.5734 -0.9229 +#> -0.0001 0.2485 -0.1473 0.5797 -0.1626 -0.1213 -0.3884 -0.5131 0.2432 +#> +#> Columns 10 to 18 0.2049 0.6141 0.0159 -0.5202 -0.0390 0.8628 0.0559 -0.6653 0.5929 +#> 0.1484 -0.3955 0.4318 -0.6923 0.6768 0.9427 -0.5935 -0.1124 0.8008 +#> -0.1115 0.3922 -0.0407 0.7891 -0.1390 0.3705 -0.2486 0.8745 -0.3595 +#> +#> Columns 19 to 20 0.0777 -0.9009 +#> -0.0200 -0.2583 +#> -0.1381 -0.4595 +#> +#> (2,.,.) = +#> Columns 1 to 9 -0.5747 -0.2374 0.3985 0.6116 -0.7816 0.0973 -0.2544 -0.2252 -0.2584 +#> -0.2011 -0.3030 0.4559 0.2132 -0.6755 0.1609 -0.8239 -0.2612 -0.6095 +#> -0.5134 0.3075 -0.2427 0.1875 -0.3368 0.1697 -0.3024 -0.5115 0.0946 +#> +#> Columns 10 to 18 0.5029 -0.4663 -0.6517 -0.3916 -0.3694 -0.3079 0.0697 0.4641 0.4321 +#> 0.8177 0.0184 -0.5879 -0.3717 -0.0954 0.1382 -0.1993 0.5621 -0.1939 +#> 0.3419 -0.0541 0.0665 0.0764 0.6821 0.0040 0.0504 0.6075 0.0703 +#> +#> Columns 19 to 20 -0.1393 -0.0169 +#> -0.2283 0.0531 +#> -0.0904 0.2892 +#> +#> (3,.,.) = +#> Columns 1 to 9 0.4901 0.5564 0.4157 -0.1239 -0.3616 -0.3069 -0.7718 0.2739 0.1213 +#> 0.4148 0.1776 0.2237 0.0323 -0.5166 -0.3952 -0.6464 0.1560 0.1433 +#> 0.3964 0.0038 0.3253 0.0715 0.1306 -0.0327 -0.2969 0.1245 -0.0192 +#> +#> Columns 10 to 18 0.7138 -0.0042 0.1386 -0.0459 -0.4418 0.2562 -0.1983 0.0256 0.4770 +#> 0.7310 0.4867 -0.2056 -0.4410 -0.1978 0.4279 -0.1295 0.2951 0.4745 +#> 0.6958 0.5029 0.0787 -0.2960 0.3043 0.3281 -0.3449 0.3081 0.4197 +#> +#> Columns 19 to 20 0.1745 -0.1350 +#> 0.0668 -0.4015 +#> 0.5455 0.3373 +#> +#> (4,.,.) 
= +#> Columns 1 to 9 -0.1242 -0.1825 0.0703 0.3368 -0.1879 -0.6188 -0.3606 0.2400 -0.4979 +#> -0.2285 -0.1238 0.2816 0.7158 -0.3795 -0.1800 -0.0903 0.0951 -0.4055 +#> -0.4161 0.1562 0.1320 0.5917 -0.2003 0.5784 -0.1302 -0.1113 -0.7354 +#> +#> Columns 10 to 18 0.6470 0.0346 -0.6063 -0.0664 0.3145 0.3909 -0.3163 0.0040 0.6242 +#> 0.3401 -0.2107 -0.5455 -0.4844 -0.0769 0.0620 -0.1112 0.5133 0.6606 +#> 0.4108 -0.3172 -0.3713 -0.5535 0.3277 0.0108 -0.0800 0.5410 0.1190 +#> +#> Columns 19 to 20 -0.1511 -0.0390 +#> 0.2056 0.1060 +#> 0.1649 0.4299 +#> +#> (5,.,.) = +#> Columns 1 to 9 -0.0957 0.3769 0.1599 0.2209 -0.5452 -0.2272 -0.5200 -0.0675 -0.3636 +#> 0.3314 0.3243 0.5617 0.3386 -0.1621 0.4163 -0.6332 -0.0253 -0.0027 +#> 0.1690 0.3272 0.5195 -0.1727 -0.1203 -0.0701 -0.5596 -0.2452 0.1866 +#> +#> Columns 10 to 18 0.7646 0.0595 -0.1395 -0.1890 0.1511 0.3273 -0.2851 0.1429 0.4489 +#> 0.7122 0.0548 -0.1359 -0.2292 -0.1363 -0.1930 -0.3282 0.5265 0.3602 +#> 0.7579 0.1487 0.1720 -0.0712 -0.0081 0.3472 -0.1236 0.3269 0.3627 +#> +#> Columns 19 to 20 -0.3324 0.0199 +#> 0.4319 0.0719 +#> 0.0541 -0.1996 +#> [ CPUFloatType{5,3,20} ] +#> +#> [[2]] +#> torch_tensor +#> (1,.,.) = +#> Columns 1 to 9 -0.0859 0.2441 -0.3785 0.4773 -0.6194 0.7348 -0.2389 0.3932 -0.5751 +#> -0.3461 -0.4299 -0.5382 -0.0556 0.2770 -0.4997 0.3906 -0.4267 -0.2962 +#> 0.0901 -0.0487 -0.2221 0.7402 -0.3091 0.1068 0.2309 0.0844 -0.8371 +#> +#> Columns 10 to 18 -0.7465 -0.0237 -0.5216 0.2091 0.7810 0.0142 -0.0300 0.0860 -0.4565 +#> 0.4484 -0.2549 0.6987 -0.4158 -0.6085 -0.0974 0.0892 -0.3520 0.0667 +#> -0.1097 -0.0399 0.2584 -0.1982 0.0520 -0.0103 -0.0936 0.3060 0.0546 +#> +#> Columns 19 to 20 0.1902 0.0056 +#> 0.3507 0.1104 +#> -0.0194 0.3957 +#> +#> (2,.,.) 
= +#> Columns 1 to 9 -0.0957 0.3769 0.1599 0.2209 -0.5452 -0.2272 -0.5200 -0.0675 -0.3636 +#> 0.3314 0.3243 0.5617 0.3386 -0.1621 0.4163 -0.6332 -0.0253 -0.0027 +#> 0.1690 0.3272 0.5195 -0.1727 -0.1203 -0.0701 -0.5596 -0.2452 0.1866 +#> +#> Columns 10 to 18 0.7646 0.0595 -0.1395 -0.1890 0.1511 0.3273 -0.2851 0.1429 0.4489 +#> 0.7122 0.0548 -0.1359 -0.2292 -0.1363 -0.1930 -0.3282 0.5265 0.3602 +#> 0.7579 0.1487 0.1720 -0.0712 -0.0081 0.3472 -0.1236 0.3269 0.3627 +#> +#> Columns 19 to 20 -0.3324 0.0199 +#> 0.4319 0.0719 +#> 0.0541 -0.1996 +#> [ CPUFloatType{2,3,20} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_rrelu.html b/static/docs/reference/nn_rrelu.html new file mode 100644 index 0000000000000000000000000000000000000000..366c47be2d9b48be5db032ea09b0a0ca4e645f42 --- /dev/null +++ b/static/docs/reference/nn_rrelu.html @@ -0,0 +1,283 @@ + + + + + + + + +RReLU module — nn_rrelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the randomized leaky rectified linear unit function, element-wise, +as described in the paper:

    +
    + +
    nn_rrelu(lower = 1/8, upper = 1/3, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    lower

    lower bound of the uniform distribution. Default: \(\frac{1}{8}\)

    upper

    upper bound of the uniform distribution. Default: \(\frac{1}{3}\)

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    Empirical Evaluation of Rectified Activations in Convolutional Network.

    +

    The function is defined as:

    +

    $$ +\mbox{RReLU}(x) = +\left\{ \begin{array}{ll} +x & \mbox{if } x \geq 0 \\ +ax & \mbox{ otherwise } +\end{array} +\right. +$$

    +

    where \(a\) is randomly sampled from the uniform distribution +\(\mathcal{U}(\mbox{lower}, \mbox{upper})\). +See: https://arxiv.org/pdf/1505.00853.pdf

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_rrelu(0.1, 0.3) +input <- torch_randn(2) +m(input) + +} +
    #> torch_tensor +#> 0.5388 +#> 1.3588 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_selu.html b/static/docs/reference/nn_selu.html new file mode 100644 index 0000000000000000000000000000000000000000..4d98aa57f17feef3c04754863f62ac723179e557 --- /dev/null +++ b/static/docs/reference/nn_selu.html @@ -0,0 +1,264 @@ + + + + + + + + +SELU module — nn_selu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applied element-wise, as:

    +
    + +
    nn_selu(inplace = FALSE)
    + +

    Arguments

    + + + + + + +
    inplace

    (bool, optional): can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ + \mbox{SELU}(x) = \mbox{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))) +$$

    +

    with \(\alpha = 1.6732632423543772848170429916717\) and +\(\mbox{scale} = 1.0507009873554804934193349852946\).

    +

    More details can be found in the paper +Self-Normalizing Neural Networks.

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_selu() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_sequential.html b/static/docs/reference/nn_sequential.html new file mode 100644 index 0000000000000000000000000000000000000000..c63f5fe0dd973649b2ecb9c3a6a41654f40f176a --- /dev/null +++ b/static/docs/reference/nn_sequential.html @@ -0,0 +1,259 @@ + + + + + + + + +A sequential container — nn_sequential • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A sequential container. +Modules will be added to it in the order they are passed in the constructor. +See examples.

    +
    + +
    nn_sequential(..., name = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    ...

    sequence of modules to be added

    name

    optional name for the generated module.

    + + +

    Examples

    +
    if (torch_is_installed()) { + +model <- nn_sequential( + nn_conv2d(1, 20, 5), + nn_relu(), + nn_conv2d(20, 64, 5), + nn_relu() +) +input <- torch_randn(32, 1, 28, 28) +output <- model(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_sigmoid.html b/static/docs/reference/nn_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..5819e7b867890f4e945a66a082acbd3490749dc3 --- /dev/null +++ b/static/docs/reference/nn_sigmoid.html @@ -0,0 +1,252 @@ + + + + + + + + +Sigmoid module — nn_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_sigmoid()
    + + +

    Details

    + +

    $$ + \mbox{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)} +$$

    +

    Shape

    + + + +
      +
    • Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_sigmoid() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_softmax.html b/static/docs/reference/nn_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..74bfe04e3daf5d08971988bdbba9aef42188c49e --- /dev/null +++ b/static/docs/reference/nn_softmax.html @@ -0,0 +1,279 @@ + + + + + + + + +Softmax module — nn_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the Softmax function to an n-dimensional input Tensor +rescaling them so that the elements of the n-dimensional output Tensor +lie in the range [0,1] and sum to 1. +Softmax is defined as:

    +
    + +
    nn_softmax(dim)
    + +

    Arguments

    + + + + + + +
    dim

    (int): A dimension along which Softmax will be computed (so every slice +along dim will sum to 1).

    + +

    Value

    + +

    a Tensor of the same dimension and shape as the input, with +values in the range [0, 1]

    +

    Details

    + +

    $$ + \mbox{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} +$$

    +

    When the input Tensor is a sparse tensor, the unspecified +values are treated as -Inf.

    +

    Note

    + +

    This module doesn't work directly with NLLLoss, +which expects the Log to be computed between the Softmax and itself. +Use LogSoftmax instead (it's faster and has better numerical properties).

    +

    Shape

    + + + +
      +
    • Input: \((*)\) where * means any number of additional +dimensions

    • +
    • Output: \((*)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softmax(1) +input <- torch_randn(2, 3) +output <- m(input) + +} +
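    The defining property — every slice along dim sums to 1 — can be checked directly. This sketch uses dim = 2 (an assumption; R torch dimensions are 1-based, so dim = 2 normalizes across the columns of each row):

```r
library(torch)

m <- nn_softmax(dim = 2)  # normalize across the columns of each row
x <- torch_randn(2, 3)
p <- m(x)
torch_sum(p, dim = 2)     # both row sums are 1 (up to float precision)
```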
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_softmax2d.html b/static/docs/reference/nn_softmax2d.html new file mode 100644 index 0000000000000000000000000000000000000000..1d7bb6f0bd137f0d3a1979ee1f406d6bc47d343f --- /dev/null +++ b/static/docs/reference/nn_softmax2d.html @@ -0,0 +1,254 @@ + + + + + + + + +Softmax2d module — nn_softmax2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies SoftMax over features to each spatial location. +When given an image of Channels x Height x Width, it will +apply Softmax to each location \((Channels, h_i, w_j)\)

    +
    + +
    nn_softmax2d()
    + + +

    Value

    + +

    a Tensor of the same dimension and shape as the input with +values in the range [0, 1]

    +

    Shape

    + + + +
      +
    • Input: \((N, C, H, W)\)

    • +
    • Output: \((N, C, H, W)\) (same shape as input)

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softmax2d() +input <- torch_randn(2, 3, 12, 13) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_softmin.html b/static/docs/reference/nn_softmin.html new file mode 100644 index 0000000000000000000000000000000000000000..a22302383128bab89ecb0d6c488dec24b7b6b984 --- /dev/null +++ b/static/docs/reference/nn_softmin.html @@ -0,0 +1,271 @@ + + + + + + + + +Softmin — nn_softmin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies the Softmin function to an n-dimensional input Tensor, +rescaling it so that the elements of the n-dimensional output Tensor +lie in the range [0, 1] and sum to 1. +Softmin is defined as:

    +
    + +
    nn_softmin(dim)
    + +

    Arguments

    + + + + + + +
    dim

    (int): A dimension along which Softmin will be computed (so every slice +along dim will sum to 1).

    + +

    Value

    + +

    a Tensor of the same dimension and shape as the input, with +values in the range [0, 1].

    +

    Details

    + +

    $$ + \mbox{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)} +$$

    +

    Shape

    + + + +
      +
• Input: \((*)\) where * means any number of additional +dimensions

    • +
    • Output: \((*)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softmin(dim = 1) +input <- torch_randn(2, 2) +output <- m(input) + +} +
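Since Softmin is just Softmax applied to the negated input, the formula can be illustrated in base R without torch (the `softmin` helper below is illustrative only):

```r
# Base R illustration of Softmin: exp(-x_i) / sum_j exp(-x_j) along each row.
softmin <- function(x) {
  z <- exp(-(x - apply(x, 1, min)))  # shift by the row min for stability
  z / rowSums(z)
}

x <- matrix(c(0, 1, 2, 3), nrow = 2, byrow = TRUE)
p <- softmin(x)
p  # smaller inputs receive larger weights; each row sums to 1
```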
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_softplus.html b/static/docs/reference/nn_softplus.html new file mode 100644 index 0000000000000000000000000000000000000000..3b4b7e58ee2bd077bb343036d8085bbf5fdb0846 --- /dev/null +++ b/static/docs/reference/nn_softplus.html @@ -0,0 +1,271 @@ + + + + + + + + +Softplus module — nn_softplus • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)) +$$

    +
    + +
    nn_softplus(beta = 1, threshold = 20)
    + +

    Arguments

    + + + + + + + + + + +
    beta

    the \(\beta\) value for the Softplus formulation. Default: 1

    threshold

    values above this revert to a linear function. Default: 20

    + +

    Details

    + +

    SoftPlus is a smooth approximation to the ReLU function and can be used +to constrain the output of a machine to always be positive. +For numerical stability the implementation reverts to the linear function +when \(input \times \beta > threshold\).

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softplus() +input <- torch_randn(2) +output <- m(input) + +} +
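The formula and the linear fallback described in Details can be sketched in base R (the `softplus` helper is illustrative, not the package implementation):

```r
# Base R sketch of Softplus, including the numerical-stability switch:
# for beta * x > threshold the function is treated as linear.
softplus <- function(x, beta = 1, threshold = 20) {
  ifelse(beta * x > threshold,
         x,                              # revert to the linear function
         log1p(exp(beta * x)) / beta)    # log(1 + exp(beta * x)) / beta
}

softplus(c(-1, 0, 1, 100))  # always positive; softplus(0) = log(2)
```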
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_softshrink.html b/static/docs/reference/nn_softshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..7cc3d6d88fa69944ea21923aaad84cfaf6c03013 --- /dev/null +++ b/static/docs/reference/nn_softshrink.html @@ -0,0 +1,266 @@ + + + + + + + + +Softshrink module — nn_softshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the soft shrinkage function elementwise:

    +
    + +
    nn_softshrink(lambd = 0.5)
    + +

    Arguments

    + + + + + + +
    lambd

the \(\lambda\) value (must be non-negative) for the Softshrink formulation. Default: 0.5

    + +

    Details

    + +

    $$ + \mbox{SoftShrinkage}(x) = + \left\{ \begin{array}{ll} +x - \lambda, & \mbox{ if } x > \lambda \\ +x + \lambda, & \mbox{ if } x < -\lambda \\ +0, & \mbox{ otherwise } +\end{array} +\right. +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softshrink() +input <- torch_randn(2) +output <- m(input) + +} +
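The piecewise definition above collapses to a one-liner in base R (illustrative helper, not the package code):

```r
# Base R sketch of soft shrinkage: shrink |x| towards zero by lambda,
# zeroing everything in the band [-lambda, lambda].
softshrink <- function(x, lambd = 0.5) {
  sign(x) * pmax(abs(x) - lambd, 0)
}

softshrink(c(-1, -0.2, 0, 0.3, 2))  # values with |x| <= 0.5 become 0
```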
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_softsign.html b/static/docs/reference/nn_softsign.html new file mode 100644 index 0000000000000000000000000000000000000000..d9ab5542b4fe651e3f5c979f3acd8dd660a7ecf0 --- /dev/null +++ b/static/docs/reference/nn_softsign.html @@ -0,0 +1,253 @@ + + + + + + + + +Softsign module — nn_softsign • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function: +$$ + \mbox{SoftSign}(x) = \frac{x}{ 1 + |x|} +$$

    +
    + +
    nn_softsign()
    + + +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_softsign() +input <- torch_randn(2) +output <- m(input) + +} +
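As a quick base R illustration of the formula (no torch needed), note that the output always lies strictly inside (-1, 1):

```r
# Base R sketch of SoftSign: x / (1 + |x|), a smooth, bounded squashing function.
softsign <- function(x) x / (1 + abs(x))

softsign(c(-10, -1, 0, 1, 10))  # saturates towards -1 and 1 for large |x|
```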
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_tanh.html b/static/docs/reference/nn_tanh.html new file mode 100644 index 0000000000000000000000000000000000000000..402f9d21e79a94a64e560e08b57d017c0f10a35e --- /dev/null +++ b/static/docs/reference/nn_tanh.html @@ -0,0 +1,252 @@ + + + + + + + + +Tanh module — nn_tanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_tanh()
    + + +

    Details

    + +

    $$ + \mbox{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) + \exp(-x)} +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_tanh() +input <- torch_randn(2) +output <- m(input) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_tanhshrink.html b/static/docs/reference/nn_tanhshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..9471893b57d42a0cecf5a502cd24bad23b9e4ab5 --- /dev/null +++ b/static/docs/reference/nn_tanhshrink.html @@ -0,0 +1,252 @@ + + + + + + + + +Tanhshrink module — nn_tanhshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function:

    +
    + +
    nn_tanhshrink()
    + + +

    Details

    + +

    $$ + \mbox{Tanhshrink}(x) = x - \tanh(x) +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_tanhshrink() +input <- torch_randn(2) +output <- m(input) + +} +
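The formula is simple enough to illustrate directly in base R (illustrative helper only):

```r
# Base R sketch of Tanhshrink: the input minus its tanh.
tanhshrink <- function(x) x - tanh(x)

tanhshrink(c(-2, 0, 2))  # ~0 near the origin; approaches x - sign(x) for large |x|
```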
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
+ + + + + + + + diff --git a/static/docs/reference/nn_threshold.html new file mode 100644 index 0000000000000000000000000000000000000000..97abffbbfc11e59d622032bcd02b845a9ebd6a4c --- /dev/null +++ b/static/docs/reference/nn_threshold.html @@ -0,0 +1,274 @@ + + + + + + + + +Threshold module — nn_threshold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Thresholds each element of the input Tensor.

    +
    + +
    nn_threshold(threshold, value, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    threshold

    The value to threshold at

    value

    The value to replace with

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    Threshold is defined as: +$$ + y = + \left\{ \begin{array}{ll} + x, &\mbox{ if } x > \mbox{threshold} \\ + \mbox{value}, &\mbox{ otherwise } + \end{array} + \right. +$$

    +

    Shape

    + + + +
      +
• Input: \((N, *)\) where * means any number of additional +dimensions

    • +
    • Output: \((N, *)\), same shape as the input

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { +m <- nn_threshold(0.1, 20) +input <- torch_randn(2) +output <- m(input) + +} +
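The rule in Details maps directly onto `ifelse()` in base R (illustrative sketch, not the module's implementation):

```r
# Base R sketch of thresholding: keep x where x > threshold, else substitute value.
threshold_fn <- function(x, threshold, value) {
  ifelse(x > threshold, x, value)
}

threshold_fn(c(-1, 0.05, 0.5, 2), threshold = 0.1, value = 20)
```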
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_utils_rnn_pack_padded_sequence.html b/static/docs/reference/nn_utils_rnn_pack_padded_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..f63ff359ed15ab29db3d4ea093b5e81dab174767 --- /dev/null +++ b/static/docs/reference/nn_utils_rnn_pack_padded_sequence.html @@ -0,0 +1,278 @@ + + + + + + + + +Packs a Tensor containing padded sequences of variable length. — nn_utils_rnn_pack_padded_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    input can be of size T x B x * where T is the length of the +longest sequence (equal to lengths[1]), B is the batch size, and +* is any number of dimensions (including 0). If batch_first is +TRUE, B x T x * input is expected.

    +
    + +
    nn_utils_rnn_pack_padded_sequence(
    +  input,
    +  lengths,
    +  batch_first = FALSE,
    +  enforce_sorted = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    (Tensor): padded batch of variable length sequences.

    lengths

    (Tensor): list of sequences lengths of each batch element.

    batch_first

    (bool, optional): if TRUE, the input is expected in B x T x * +format.

    enforce_sorted

    (bool, optional): if TRUE, the input is expected to +contain sequences sorted by length in a decreasing order. If +FALSE, the input will get sorted unconditionally. Default: TRUE.

    + +

    Value

    + +

    a PackedSequence object

    +

    Details

    + +

    For unsorted sequences, use enforce_sorted = FALSE. If enforce_sorted is +TRUE, the sequences should be sorted by length in a decreasing order, i.e. +input[,1] should be the longest sequence, and input[,B] the shortest +one. enforce_sorted = TRUE is only necessary for ONNX export.

    +

    Note

    + +

    This function accepts any input that has at least two dimensions. You +can apply it to pack the labels, and use the output of the RNN with +them to compute the loss directly. A Tensor can be retrieved from +a PackedSequence object by accessing its .data attribute.
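To see what packing stores, here is a conceptual base R sketch (the `pack_padded` helper and its list layout are illustrative only; the real PackedSequence is produced by nn_utils_rnn_pack_padded_sequence()):

```r
# Conceptual sketch: from a T x B padded matrix and decreasing lengths, keep only
# the entries that belong to still-active sequences at each time step.
pack_padded <- function(padded, lengths) {
  stopifnot(!is.unsorted(rev(lengths)))  # expects lengths sorted decreasingly
  batch_sizes <- sapply(seq_len(max(lengths)), function(t) sum(lengths >= t))
  data <- unlist(lapply(seq_along(batch_sizes),
                        function(t) padded[t, seq_len(batch_sizes[t])]))
  list(data = data, batch_sizes = batch_sizes)
}

# Three sequences of lengths 3, 2 and 1, zero-padded into a 3 x 3 matrix
# (one sequence per column, time running down the rows):
padded <- rbind(c(1, 4, 6),
                c(2, 5, 0),
                c(3, 0, 0))
p <- pack_padded(padded, lengths = c(3, 2, 1))
p$batch_sizes  # 3 2 1: how many sequences are still active at each time step
```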

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_utils_rnn_pack_sequence.html b/static/docs/reference/nn_utils_rnn_pack_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..5609d4356fd838af9b8f7cf316d4823d7f769e18 --- /dev/null +++ b/static/docs/reference/nn_utils_rnn_pack_sequence.html @@ -0,0 +1,265 @@ + + + + + + + + +Packs a list of variable length Tensors — nn_utils_rnn_pack_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    sequences should be a list of Tensors of size L x *, where L is +the length of a sequence and * is any number of trailing dimensions, +including zero.

    +
    + +
    nn_utils_rnn_pack_sequence(sequences, enforce_sorted = TRUE)
    + +

    Arguments

    + + + + + + + + + + +
    sequences

    (list[Tensor]): A list of sequences of decreasing length.

    enforce_sorted

    (bool, optional): if TRUE, checks that the input +contains sequences sorted by length in a decreasing order. If +FALSE, this condition is not checked. Default: TRUE.

    + +

    Value

    + +

    a PackedSequence object

    +

    Details

    + +

    For unsorted sequences, use enforce_sorted = FALSE. If enforce_sorted +is TRUE, the sequences should be sorted in the order of decreasing length. +enforce_sorted = TRUE is only necessary for ONNX export.

    + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_tensor(c(1,2,3), dtype = torch_long()) +y <- torch_tensor(c(4, 5), dtype = torch_long()) +z <- torch_tensor(c(6), dtype = torch_long()) + +p <- nn_utils_rnn_pack_sequence(list(x, y, z)) + +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_utils_rnn_pad_packed_sequence.html b/static/docs/reference/nn_utils_rnn_pad_packed_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..e6caa589e6208beeee5879bdeea18b584ee94bb5 --- /dev/null +++ b/static/docs/reference/nn_utils_rnn_pad_packed_sequence.html @@ -0,0 +1,299 @@ + + + + + + + + +Pads a packed batch of variable length sequences. — nn_utils_rnn_pad_packed_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    It is an inverse operation to nn_utils_rnn_pack_padded_sequence().

    +
    + +
    nn_utils_rnn_pad_packed_sequence(
    +  sequence,
    +  batch_first = FALSE,
    +  padding_value = 0,
    +  total_length = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    sequence

    (PackedSequence): batch to pad

    batch_first

(bool, optional): if TRUE, the output will be in B x T x * +format.

    padding_value

    (float, optional): values for padded elements.

    total_length

(int, optional): if not NULL, the output will be padded to +have length total_length. This method will throw an error +if total_length is less than the max sequence length in +sequence.

    + +

    Value

    + +

    Tuple of Tensor containing the padded sequence, and a Tensor +containing the list of lengths of each sequence in the batch. +Batch elements will be re-ordered as they were ordered originally when +the batch was passed to nn_utils_rnn_pack_padded_sequence() or +nn_utils_rnn_pack_sequence().

    +

    Details

    + +

    The returned Tensor's data will be of size T x B x *, where T is the length +of the longest sequence and B is the batch size. If batch_first is TRUE, +the data will be transposed into B x T x * format.

    +

    Note

    + +

total_length is useful to implement the +pack sequence -> recurrent network -> unpack sequence pattern in an +nn_module wrapped in torch.nn.DataParallel.

    + +

    Examples

    +
    if (torch_is_installed()) { +seq <- torch_tensor(rbind(c(1,2,0), c(3,0,0), c(4,5,6))) +lens <- c(2,1,3) +packed <- nn_utils_rnn_pack_padded_sequence(seq, lens, batch_first = TRUE, + enforce_sorted = FALSE) +packed +nn_utils_rnn_pad_packed_sequence(packed, batch_first=TRUE) + +} +
    #> [[1]] +#> torch_tensor +#> 1 2 0 +#> 3 0 0 +#> 4 5 6 +#> [ CPUFloatType{3,3} ] +#> +#> [[2]] +#> torch_tensor +#> 2 +#> 1 +#> 3 +#> [ CPULongType{3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nn_utils_rnn_pad_sequence.html b/static/docs/reference/nn_utils_rnn_pad_sequence.html new file mode 100644 index 0000000000000000000000000000000000000000..cc3e8b10e2ec8ff167e2571357eeb3995318de4c --- /dev/null +++ b/static/docs/reference/nn_utils_rnn_pad_sequence.html @@ -0,0 +1,276 @@ + + + + + + + + +Pad a list of variable length Tensors with padding_value — nn_utils_rnn_pad_sequence • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

pad_sequence stacks a list of Tensors along a new dimension, +and pads them to equal length. For example, if the input is a list of +sequences of size L x *, the output is of size T x B x * if batch_first is FALSE, +and B x T x * otherwise.

    +
    + +
    nn_utils_rnn_pad_sequence(sequences, batch_first = FALSE, padding_value = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    sequences

    (list[Tensor]): list of variable length sequences.

    batch_first

    (bool, optional): output will be in B x T x * if TRUE, +or in T x B x * otherwise

    padding_value

    (float, optional): value for padded elements. Default: 0.

    + +

    Value

    + +

    Tensor of size T x B x * if batch_first is FALSE. +Tensor of size B x T x * otherwise

    +

    Details

    + +

B is the batch size, equal to the number of elements in sequences. +T is the length of the longest sequence. +L is the length of a given sequence. +* is any number of trailing dimensions, including none.

    +

    Note

    + +

This function returns a Tensor of size T x B x * or B x T x *, +where T is the length of the longest sequence. It assumes that the +trailing dimensions and type of all the Tensors in sequences are the same.

    + +

    Examples

    +
    if (torch_is_installed()) { +a <- torch_ones(25, 300) +b <- torch_ones(22, 300) +c <- torch_ones(15, 300) +nn_utils_rnn_pad_sequence(list(a, b, c))$size() + +} +
    #> [1] 25 3 300
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_adaptive_avg_pool1d.html b/static/docs/reference/nnf_adaptive_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..ec2ccdf8cb3ad6d0ffd1b772ff17eae8aae93a0e --- /dev/null +++ b/static/docs/reference/nnf_adaptive_avg_pool1d.html @@ -0,0 +1,243 @@ + + + + + + + + +Adaptive_avg_pool1d — nnf_adaptive_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D adaptive average pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_avg_pool1d(input, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    output_size

    the target output size (single integer)

    + + +
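One common way to form the adaptive bins (the scheme PyTorch's implementation uses; shown here as a base R illustration, not the package's code path) is to take bin i as running from floor(i * L / output_size) to ceiling((i + 1) * L / output_size):

```r
# Base R sketch of adaptive average pooling on a single 1D channel:
# each of the `output_size` bins covers roughly L / output_size input positions.
adaptive_avg_pool1d_sketch <- function(x, output_size) {
  L <- length(x)
  sapply(seq_len(output_size) - 1, function(i) {
    start <- floor(i * L / output_size) + 1      # 1-based, inclusive
    end   <- ceiling((i + 1) * L / output_size)  # inclusive
    mean(x[start:end])
  })
}

adaptive_avg_pool1d_sketch(c(1, 2, 3, 4, 5, 6), output_size = 3)  # 1.5 3.5 5.5
```

When output_size equals the input length, the bins have width one and the input is returned unchanged.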
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_adaptive_avg_pool2d.html b/static/docs/reference/nnf_adaptive_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..e2fbfbb5ce50d55d5a8e4ef48a599e2c3e7eb463 --- /dev/null +++ b/static/docs/reference/nnf_adaptive_avg_pool2d.html @@ -0,0 +1,243 @@ + + + + + + + + +Adaptive_avg_pool2d — nnf_adaptive_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D adaptive average pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_avg_pool2d(input, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    output_size

    the target output size (single integer or double-integer tuple)

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_adaptive_avg_pool3d.html b/static/docs/reference/nnf_adaptive_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..e919dbe4ec6b1fdf0d6c0f498b93bb7655cd4f56 --- /dev/null +++ b/static/docs/reference/nnf_adaptive_avg_pool3d.html @@ -0,0 +1,243 @@ + + + + + + + + +Adaptive_avg_pool3d — nnf_adaptive_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D adaptive average pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_avg_pool3d(input, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    input

input tensor (minibatch, in_channels, iT, iH, iW)

    output_size

    the target output size (single integer or triple-integer tuple)

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_adaptive_max_pool1d.html b/static/docs/reference/nnf_adaptive_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..19b488653c75760223de0a51bd5432dd7826a303 --- /dev/null +++ b/static/docs/reference/nnf_adaptive_max_pool1d.html @@ -0,0 +1,247 @@ + + + + + + + + +Adaptive_max_pool1d — nnf_adaptive_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D adaptive max pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_max_pool1d(input, output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    output_size

    the target output size (single integer)

    return_indices

    whether to return pooling indices. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_adaptive_max_pool2d.html b/static/docs/reference/nnf_adaptive_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..6e389c087fdcae182fc3db06898757ecc0763fed --- /dev/null +++ b/static/docs/reference/nnf_adaptive_max_pool2d.html @@ -0,0 +1,247 @@ + + + + + + + + +Adaptive_max_pool2d — nnf_adaptive_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D adaptive max pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_max_pool2d(input, output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    output_size

    the target output size (single integer or double-integer tuple)

    return_indices

    whether to return pooling indices. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_adaptive_max_pool3d.html b/static/docs/reference/nnf_adaptive_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..66721efd9e86794bb0efcdfdd1e119e9851c7e87 --- /dev/null +++ b/static/docs/reference/nnf_adaptive_max_pool3d.html @@ -0,0 +1,247 @@ + + + + + + + + +Adaptive_max_pool3d — nnf_adaptive_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D adaptive max pooling over an input signal composed of +several input planes.

    +
    + +
    nnf_adaptive_max_pool3d(input, output_size, return_indices = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

input tensor (minibatch, in_channels, iT, iH, iW)

    output_size

    the target output size (single integer or triple-integer tuple)

    return_indices

whether to return pooling indices. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_affine_grid.html b/static/docs/reference/nnf_affine_grid.html new file mode 100644 index 0000000000000000000000000000000000000000..3ba7b5443fa5e0c46af130d67a3ab1f42eb49109 --- /dev/null +++ b/static/docs/reference/nnf_affine_grid.html @@ -0,0 +1,261 @@ + + + + + + + + +Affine_grid — nnf_affine_grid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Generates a 2D or 3D flow field (sampling grid), given a batch of +affine matrices theta.

    +
    + +
    nnf_affine_grid(theta, size, align_corners = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    theta

    (Tensor) input batch of affine matrices with shape +(\(N \times 2 \times 3\)) for 2D or (\(N \times 3 \times 4\)) for 3D

    size

    (torch.Size) the target output image size. (\(N \times C \times H \times W\) +for 2D or \(N \times C \times D \times H \times W\) for 3D) +Example: torch.Size((32, 3, 24, 24))

    align_corners

(bool, optional) if TRUE, consider -1 and 1 +to refer to the centers of the corner pixels rather than the image corners. +Refer to nnf_grid_sample() for a more complete description. A grid generated by +nnf_affine_grid() should be passed to nnf_grid_sample() with the same setting for +this option. Default: FALSE

    + +

    Note

    + + + + +

This function is often used in conjunction with nnf_grid_sample() +to build Spatial Transformer Networks.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_alpha_dropout.html b/static/docs/reference/nnf_alpha_dropout.html new file mode 100644 index 0000000000000000000000000000000000000000..00d715796e62289e81d24dd4ea6b9f9534c4d113 --- /dev/null +++ b/static/docs/reference/nnf_alpha_dropout.html @@ -0,0 +1,250 @@ + + + + + + + + +Alpha_dropout — nnf_alpha_dropout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies alpha dropout to the input.

    +
    + +
    nnf_alpha_dropout(input, p = 0.5, training = FALSE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of an element to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: FALSE

    inplace

    If set to TRUE, will do this operation in-place. +Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_avg_pool1d.html b/static/docs/reference/nnf_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..cc914d070aed48d18bc064a67a1d11d68cdf678d --- /dev/null +++ b/static/docs/reference/nnf_avg_pool1d.html @@ -0,0 +1,271 @@ + + + + + + + + +Avg_pool1d — nnf_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D average pooling over an input signal composed of several +input planes.

    +
    + +
    nnf_avg_pool1d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    kernel_size

    the size of the window. Can be a single number or a +tuple (kW,).

    stride

    the stride of the window. Can be a single number or a tuple +(sW,). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padW,). Default: 0

    ceil_mode

when TRUE, will use ceil instead of floor to compute the +output shape. Default: FALSE

    count_include_pad

when TRUE, will include the zero-padding in the +averaging calculation. Default: TRUE

    + + +
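For intuition, plain 1D average pooling (without padding or ceil_mode) can be sketched in base R; the `avg_pool1d_sketch` helper below is illustrative only:

```r
# Base R sketch of 1D average pooling on a single channel (no padding):
# slide a window of `kernel_size` over x in steps of `stride` and average it.
avg_pool1d_sketch <- function(x, kernel_size, stride = kernel_size) {
  starts <- seq(1, length(x) - kernel_size + 1, by = stride)
  sapply(starts, function(s) mean(x[s:(s + kernel_size - 1)]))
}

avg_pool1d_sketch(c(1, 2, 3, 4, 5, 6), kernel_size = 2)  # 1.5 3.5 5.5
```

As in the function documented above, the stride defaults to the kernel size, giving non-overlapping windows.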
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_avg_pool2d.html b/static/docs/reference/nnf_avg_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..934d38fa89e8860406abd53eb65c09ecb706aa83 --- /dev/null +++ b/static/docs/reference/nnf_avg_pool2d.html @@ -0,0 +1,279 @@ + + + + + + + + +Avg_pool2d — nnf_avg_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 2D average-pooling operation in \(kH * kW\) regions by step size +\(sH * sW\) steps. The number of output features is equal to the number of +input planes.

    +
    + +
    nnf_avg_pool2d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

    ceil_mode

when TRUE, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

    count_include_pad

when TRUE, will include the zero-padding in the +averaging calculation. Default: TRUE

    divisor_override

    if specified, it will be used as divisor, otherwise +size of the pooling region will be used. Default: NULL

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_avg_pool3d.html b/static/docs/reference/nnf_avg_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..c064049a46aecf112afb65256bea4a5247f82923 --- /dev/null +++ b/static/docs/reference/nnf_avg_pool3d.html @@ -0,0 +1,279 @@ + + + + + + + + +Avg_pool3d — nnf_avg_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 3D average-pooling operation in \(kT * kH * kW\) regions by step +size \(sT * sH * sW\) steps. The number of output features is equal to +\(\lfloor \frac{ \mbox{input planes} }{sT} \rfloor\).

    +
    + +
    nnf_avg_pool3d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE,
    +  divisor_override = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

input tensor (minibatch, in_channels, iT, iH, iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kT, kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sT, sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW), Default: 0

    ceil_mode

    when True, will use ceil instead of floor in the formula +to compute the output shape

    count_include_pad

    when True, will include the zero-padding in the +averaging calculation

    divisor_override

    NA if specified, it will be used as divisor, otherwise +size of the pooling region will be used. Default: NULL

    + + +
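As a usage sketch (shapes chosen purely for illustration), pooling a 2x2x2 region halves every spatial dimension of a small volume:

```r
if (torch_is_installed()) {
  # one sample, two channels, 4x4x4 volume
  x <- torch_randn(1, 2, 4, 4, 4)
  out <- nnf_avg_pool3d(x, kernel_size = 2)
  out$shape  # 1 2 2 2 2
}
```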
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_batch_norm.html b/static/docs/reference/nnf_batch_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..b691600350ef65b4cb732e3e7d67c4155d80962d --- /dev/null +++ b/static/docs/reference/nnf_batch_norm.html @@ -0,0 +1,275 @@ + + + + + + + + +Batch_norm — nnf_batch_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Batch Normalization for each channel across a batch of data.

    +
    + +
    nnf_batch_norm(
  input,
  running_mean,
  running_var,
  weight = NULL,
  bias = NULL,
  training = FALSE,
  momentum = 0.1,
  eps = 1e-05
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor

    running_mean

    the running_mean tensor

    running_var

    the running_var tensor

    weight

    the weight tensor

    bias

    the bias tensor

    training

bool; whether it is in training mode. Default: FALSE

    momentum

the value used for the running_mean and running_var computation. Can be set to NULL for cumulative moving average (i.e. simple average). Default: 0.1

    eps

    a value added to the denominator for numerical stability. Default: 1e-5

    + + +
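A minimal sketch (shapes and statistics chosen for illustration), normalizing a batch of 3-feature observations against fresh running statistics:

```r
if (torch_is_installed()) {
  x <- torch_randn(8, 3)  # batch of 8 observations, 3 channels
  out <- nnf_batch_norm(
    x,
    running_mean = torch_zeros(3),
    running_var = torch_ones(3),
    training = TRUE
  )
  out$shape  # 8 3
}
```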
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_bilinear.html b/static/docs/reference/nnf_bilinear.html new file mode 100644 index 0000000000000000000000000000000000000000..72032616ecfaea323a40aba71635c5fe09deb4a5 --- /dev/null +++ b/static/docs/reference/nnf_bilinear.html @@ -0,0 +1,258 @@ + + + + + + + + +Bilinear — nnf_bilinear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a bilinear transformation to the incoming data: \(y = x_1 A x_2 + b\)

    +
    + +
    nnf_bilinear(input1, input2, weight, bias = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input1

\((N, *, H_{in1})\) where \(H_{in1}=\mbox{in1\_features}\) and \(*\) means any number of additional dimensions. All but the last dimension of the inputs should be the same.

    input2

    \((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\)

    weight

\((\mbox{out\_features}, \mbox{in1\_features}, \mbox{in2\_features})\)

    bias

    \((\mbox{out\_features})\)

    + +

    Value

    + +

output \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) and all but the last dimension are the same shape as the input.

    + +
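A usage sketch (dimensions chosen for illustration): with in1_features = 4, in2_features = 6 and out_features = 3, the output has one row per batch element and one column per output feature.

```r
if (torch_is_installed()) {
  input1 <- torch_randn(5, 4)     # (N, in1_features)
  input2 <- torch_randn(5, 6)     # (N, in2_features)
  weight <- torch_randn(3, 4, 6)  # (out_features, in1_features, in2_features)
  out <- nnf_bilinear(input1, input2, weight)
  out$shape  # 5 3
}
```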
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_binary_cross_entropy.html b/static/docs/reference/nnf_binary_cross_entropy.html new file mode 100644 index 0000000000000000000000000000000000000000..d4b1c2b7eebab354330e4f2ad812a1062444eecd --- /dev/null +++ b/static/docs/reference/nnf_binary_cross_entropy.html @@ -0,0 +1,259 @@ + + + + + + + + +Binary_cross_entropy — nnf_binary_cross_entropy • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Function that measures the Binary Cross Entropy between the target and the output.

    +
    + +
    nnf_binary_cross_entropy(
  input,
  target,
  weight = NULL,
  reduction = c("mean", "sum", "none")
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    weight

    (tensor) weight for each value.

    reduction

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

    + + +
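A minimal sketch (values chosen for illustration). Note that `input` must already contain probabilities in \([0, 1]\), e.g. the output of a sigmoid:

```r
if (torch_is_installed()) {
  input <- torch_sigmoid(torch_randn(4))  # probabilities in [0, 1]
  target <- torch_tensor(c(0, 1, 1, 0))
  nnf_binary_cross_entropy(input, target)  # scalar mean loss
}
```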
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_binary_cross_entropy_with_logits.html b/static/docs/reference/nnf_binary_cross_entropy_with_logits.html new file mode 100644 index 0000000000000000000000000000000000000000..7aedf2ba0eb1cf53b77140b020ad0b716a4d72da --- /dev/null +++ b/static/docs/reference/nnf_binary_cross_entropy_with_logits.html @@ -0,0 +1,266 @@ + + + + + + + + +Binary_cross_entropy_with_logits — nnf_binary_cross_entropy_with_logits • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Function that measures Binary Cross Entropy between target and output logits.

    +
    + +
    nnf_binary_cross_entropy_with_logits(
  input,
  target,
  weight = NULL,
  reduction = c("mean", "sum", "none"),
  pos_weight = NULL
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    Tensor of arbitrary shape

    target

    Tensor of the same shape as input

    weight

(Tensor, optional) a manual rescaling weight. If provided, it's repeated to match the input tensor shape.

    reduction

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

    pos_weight

(Tensor, optional) a weight of positive examples. Must be a vector with length equal to the number of classes.

    + + +
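A minimal sketch (values chosen for illustration). Unlike `nnf_binary_cross_entropy`, this variant takes raw scores; the sigmoid is applied internally:

```r
if (torch_is_installed()) {
  input <- torch_randn(4)  # raw logits, any real values
  target <- torch_tensor(c(0, 1, 1, 0))
  nnf_binary_cross_entropy_with_logits(input, target)  # scalar mean loss
}
```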
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_celu.html b/static/docs/reference/nnf_celu.html new file mode 100644 index 0000000000000000000000000000000000000000..71ae09d124c266ad9421154e77ed7980499d1dc0 --- /dev/null +++ b/static/docs/reference/nnf_celu.html @@ -0,0 +1,248 @@ + + + + + + + + +Celu — nnf_celu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies element-wise, \(CELU(x) = max(0,x) + min(0, \alpha * (exp(x / \alpha) - 1))\).

    +
    + +
    nnf_celu(input, alpha = 1, inplace = FALSE)

nnf_celu_(input, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional dimensions

    alpha

    the alpha value for the CELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
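A usage sketch (alpha chosen for illustration), contrasting the out-of-place and in-place variants:

```r
if (torch_is_installed()) {
  x <- torch_randn(2, 3)
  nnf_celu(x, alpha = 0.5)   # returns a new tensor, x unchanged
  nnf_celu_(x, alpha = 0.5)  # modifies x in place
}
```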
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv1d.html b/static/docs/reference/nnf_conv1d.html new file mode 100644 index 0000000000000000000000000000000000000000..5d76c0be8c7a71264971871caa69377ecbb50275 --- /dev/null +++ b/static/docs/reference/nnf_conv1d.html @@ -0,0 +1,275 @@ + + + + + + + + +Conv1d — nnf_conv1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 1D convolution over an input signal composed of several input planes.

    +
    + +
    nnf_conv1d(
  input,
  weight,
  bias = NULL,
  stride = 1,
  padding = 0,
  dilation = 1,
  groups = 1
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
input

input tensor of shape (minibatch, in_channels, iW)

weight

filters of shape (out_channels, in_channels/groups, kW)

bias

optional bias of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0

dilation

the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1

groups

split input into groups; in_channels should be divisible by the number of groups. Default: 1

    + + +
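A usage sketch (shapes chosen for illustration): with padding = 1 and a width-3 kernel the output length matches the input length.

```r
if (torch_is_installed()) {
  input <- torch_randn(1, 4, 10)  # (minibatch, in_channels, iW)
  weight <- torch_randn(8, 4, 3)  # (out_channels, in_channels/groups, kW)
  out <- nnf_conv1d(input, weight, padding = 1)
  out$shape  # 1 8 10
}
```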
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv2d.html b/static/docs/reference/nnf_conv2d.html new file mode 100644 index 0000000000000000000000000000000000000000..e53d6faefa4a94ab8c0cd2cb25303c38e2ac08f7 --- /dev/null +++ b/static/docs/reference/nnf_conv2d.html @@ -0,0 +1,275 @@ + + + + + + + + +Conv2d — nnf_conv2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 2D convolution over an input image composed of several input planes.

    +
    + +
    nnf_conv2d(
  input,
  weight,
  bias = NULL,
  stride = 1,
  padding = 0,
  dilation = 1,
  groups = 1
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
input

input tensor of shape (minibatch, in_channels, iH, iW)

weight

filters of shape (out_channels, in_channels/groups, kH, kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

dilation

the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

groups

split input into groups; in_channels should be divisible by the number of groups. Default: 1

    + + +
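A usage sketch (shapes chosen for illustration): a 3x3 kernel with padding = 1 is the common "same"-size configuration at stride 1.

```r
if (torch_is_installed()) {
  input <- torch_randn(1, 3, 8, 8)    # (minibatch, in_channels, iH, iW)
  weight <- torch_randn(16, 3, 3, 3)  # (out_channels, in_channels/groups, kH, kW)
  out <- nnf_conv2d(input, weight, padding = 1)
  out$shape  # 1 16 8 8
}
```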
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv3d.html b/static/docs/reference/nnf_conv3d.html new file mode 100644 index 0000000000000000000000000000000000000000..9a872f5b9530dbc952824b5599f67863c8f99cec --- /dev/null +++ b/static/docs/reference/nnf_conv3d.html @@ -0,0 +1,275 @@ + + + + + + + + +Conv3d — nnf_conv3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 3D convolution over an input image composed of several input planes.

    +
    + +
    nnf_conv3d(
  input,
  weight,
  bias = NULL,
  stride = 1,
  padding = 0,
  dilation = 1,
  groups = 1
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
input

input tensor of shape (minibatch, in_channels, iT, iH, iW)

weight

filters of shape (out_channels, in_channels/groups, kT, kH, kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

dilation

the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

groups

split input into groups; in_channels should be divisible by the number of groups. Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv_tbc.html b/static/docs/reference/nnf_conv_tbc.html new file mode 100644 index 0000000000000000000000000000000000000000..8c08e93eb105aee39ba1fd9f6f001e92ab754370 --- /dev/null +++ b/static/docs/reference/nnf_conv_tbc.html @@ -0,0 +1,253 @@ + + + + + + + + +Conv_tbc — nnf_conv_tbc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 1-dimensional sequence convolution over an input sequence. Input and output dimensions are (Time, Batch, Channels) - hence TBC.

    +
    + +
    nnf_conv_tbc(input, weight, bias, pad = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

input tensor of shape \((\mbox{sequence length} \times \mbox{batch} \times \mbox{in\_channels})\)

    weight

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} \times \mbox{out\_channels}\))

    bias

    bias of shape (\(\mbox{out\_channels}\))

    pad

    number of timesteps to pad. Default: 0

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv_transpose1d.html b/static/docs/reference/nnf_conv_transpose1d.html new file mode 100644 index 0000000000000000000000000000000000000000..c199dc33628f6dccc2dd55f8c8ec1c5825575b2a --- /dev/null +++ b/static/docs/reference/nnf_conv_transpose1d.html @@ -0,0 +1,280 @@ + + + + + + + + +Conv_transpose1d — nnf_conv_transpose1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

    +
    + +
    nnf_conv_transpose1d(
  input,
  weight,
  bias = NULL,
  stride = 1,
  padding = 0,
  output_padding = 0,
  groups = 1,
  dilation = 1
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
input

input tensor of shape (minibatch, in_channels, iW)

weight

filters of shape (in_channels, out_channels/groups, kW)

bias

optional bias of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0

output_padding

additional size added to one side of the output shape. Default: 0

groups

split input into groups; in_channels should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv_transpose2d.html b/static/docs/reference/nnf_conv_transpose2d.html new file mode 100644 index 0000000000000000000000000000000000000000..368764e354d9aabfc61da6c833812c90a07e63f6 --- /dev/null +++ b/static/docs/reference/nnf_conv_transpose2d.html @@ -0,0 +1,280 @@ + + + + + + + + +Conv_transpose2d — nnf_conv_transpose2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

    +
    + +
    nnf_conv_transpose2d(
  input,
  weight,
  bias = NULL,
  stride = 1,
  padding = 0,
  output_padding = 0,
  groups = 1,
  dilation = 1
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
input

input tensor of shape (minibatch, in_channels, iH, iW)

weight

filters of shape (in_channels, out_channels/groups, kH, kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

output_padding

additional size added to one side of the output shape. Default: 0

groups

split input into groups; in_channels should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

    + + +
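A usage sketch (shapes chosen for illustration). Note that for transposed convolutions the weight's first dimension is in_channels, and stride > 1 upsamples the spatial dimensions:

```r
if (torch_is_installed()) {
  input <- torch_randn(1, 4, 5, 5)   # (minibatch, in_channels, iH, iW)
  weight <- torch_randn(4, 8, 3, 3)  # (in_channels, out_channels/groups, kH, kW)
  out <- nnf_conv_transpose2d(input, weight, stride = 2)
  out$shape  # 1 8 11 11
}
```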
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_conv_transpose3d.html b/static/docs/reference/nnf_conv_transpose3d.html new file mode 100644 index 0000000000000000000000000000000000000000..c18b0b1242574b0b53ebb899fb1c13fd83ce4884 --- /dev/null +++ b/static/docs/reference/nnf_conv_transpose3d.html @@ -0,0 +1,280 @@ + + + + + + + + +Conv_transpose3d — nnf_conv_transpose3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

    +
    + +
    nnf_conv_transpose3d(
  input,
  weight,
  bias = NULL,
  stride = 1,
  padding = 0,
  output_padding = 0,
  groups = 1,
  dilation = 1
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
input

input tensor of shape (minibatch, in_channels, iT, iH, iW)

weight

filters of shape (in_channels, out_channels/groups, kT, kH, kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

output_padding

additional size added to one side of the output shape. Default: 0

groups

split input into groups; in_channels should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_cosine_embedding_loss.html b/static/docs/reference/nnf_cosine_embedding_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..f812a34b379e0c106d8ca3748b99ee56b1ce3aad --- /dev/null +++ b/static/docs/reference/nnf_cosine_embedding_loss.html @@ -0,0 +1,269 @@ + + + + + + + + +Cosine_embedding_loss — nnf_cosine_embedding_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.

    +
    + +
    nnf_cosine_embedding_loss(
  input1,
  input2,
  target,
  margin = 0,
  reduction = c("mean", "sum", "none")
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input1

    the input x_1 tensor

    input2

    the input x_2 tensor

    target

    the target tensor

    margin

Should be a number from -1 to 1; 0 to 0.5 is suggested. If margin is missing, the default value is 0.

    reduction

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

    + + +
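A minimal sketch (shapes and labels chosen for illustration): targets of 1 pull embeddings together, targets of -1 push them apart.

```r
if (torch_is_installed()) {
  input1 <- torch_randn(4, 8)
  input2 <- torch_randn(4, 8)
  target <- torch_tensor(c(1, -1, 1, -1))
  nnf_cosine_embedding_loss(input1, input2, target)  # scalar mean loss
}
```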
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_cosine_similarity.html b/static/docs/reference/nnf_cosine_similarity.html new file mode 100644 index 0000000000000000000000000000000000000000..32359212d4cc5de23383c8c654627fd8971a3f9f --- /dev/null +++ b/static/docs/reference/nnf_cosine_similarity.html @@ -0,0 +1,255 @@ + + + + + + + + +Cosine_similarity — nnf_cosine_similarity • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns cosine similarity between x1 and x2, computed along dim.

    +
    + +
    nnf_cosine_similarity(x1, x2, dim = 1, eps = 1e-08)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) First input.

    x2

    (Tensor) Second input (of size matching x1).

    dim

    (int, optional) Dimension of vectors. Default: 1

    eps

(float, optional) Small value to avoid division by zero. Default: 1e-8

    + +

    Details

    + +

$$\mbox{similarity} = \frac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}$$

    + +
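A usage sketch (shapes chosen for illustration; note that dimensions are 1-based in the R interface, so `dim = 2` selects the feature dimension of a matrix):

```r
if (torch_is_installed()) {
  x1 <- torch_randn(3, 5)
  x2 <- torch_randn(3, 5)
  # row-wise similarity, one value in [-1, 1] per row
  nnf_cosine_similarity(x1, x2, dim = 2)
}
```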
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_cross_entropy.html b/static/docs/reference/nnf_cross_entropy.html new file mode 100644 index 0000000000000000000000000000000000000000..fec6ab54d2785e08f30d0d4a82b4443466fc7261 --- /dev/null +++ b/static/docs/reference/nnf_cross_entropy.html @@ -0,0 +1,269 @@ + + + + + + + + +Cross_entropy — nnf_cross_entropy • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

This criterion combines log_softmax and nll_loss in a single function.

    +
    + +
    nnf_cross_entropy(
  input,
  target,
  weight = NULL,
  ignore_index = -100,
  reduction = c("mean", "sum", "none")
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) \((N, C)\) where C = number of classes or \((N, C, H, W)\) +in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) +in the case of K-dimensional loss.

    target

    (Tensor) \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), +or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

    weight

    (Tensor, optional) a manual rescaling weight given to each class. If +given, has to be a Tensor of size C

    ignore_index

    (int, optional) Specifies a target value that is ignored +and does not contribute to the input gradient.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
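A minimal sketch (shapes and labels chosen for illustration; this assumes the R interface's 1-based class indices for `target`):

```r
if (torch_is_installed()) {
  input <- torch_randn(3, 5)             # (N, C) unnormalized class scores
  target <- torch_tensor(c(1L, 3L, 5L))  # one class index per sample
  nnf_cross_entropy(input, target)       # scalar mean loss
}
```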
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_ctc_loss.html b/static/docs/reference/nnf_ctc_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..78800fb870356b59f4928cf496fd11c1b124d661 --- /dev/null +++ b/static/docs/reference/nnf_ctc_loss.html @@ -0,0 +1,277 @@ + + + + + + + + +Ctc_loss — nnf_ctc_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The Connectionist Temporal Classification loss.

    +
    + +
    nnf_ctc_loss(
  log_probs,
  targets,
  input_lengths,
  target_lengths,
  blank = 0,
  reduction = c("mean", "sum", "none"),
  zero_infinity = FALSE
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
log_probs

\((T, N, C)\) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of the outputs (e.g. obtained with nnf_log_softmax).

targets

\((N, S)\) or (sum(target_lengths)). Targets cannot be blank. In the second form, the targets are assumed to be concatenated.

input_lengths

\((N)\). Lengths of the inputs (must each be \(\leq T\))

target_lengths

\((N)\). Lengths of the targets

blank

(int, optional) Blank label. Default \(0\).

reduction

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

zero_infinity

(bool, optional) Whether to zero infinite losses and the associated gradients. Default: FALSE. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_dropout.html b/static/docs/reference/nnf_dropout.html new file mode 100644 index 0000000000000000000000000000000000000000..dac8fcc91f822158847f84b09d6c355b72a2d4fb --- /dev/null +++ b/static/docs/reference/nnf_dropout.html @@ -0,0 +1,254 @@ + + + + + + + + +Dropout — nnf_dropout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution.

    +
    + +
    nnf_dropout(input, p = 0.5, training = TRUE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of an element to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: TRUE

    inplace

If set to TRUE, will do this operation in-place. Default: FALSE

    + + +
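A usage sketch (values chosen for illustration): during training roughly half the elements are zeroed and the survivors are rescaled; with `training = FALSE` the input passes through unchanged.

```r
if (torch_is_installed()) {
  x <- torch_ones(2, 4)
  nnf_dropout(x, p = 0.5)                    # survivors scaled by 1 / (1 - p)
  nnf_dropout(x, p = 0.5, training = FALSE)  # identity at evaluation time
}
```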
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_dropout2d.html b/static/docs/reference/nnf_dropout2d.html new file mode 100644 index 0000000000000000000000000000000000000000..0a008f3b084a289a3db4e0a00c3cd97486e636fb --- /dev/null +++ b/static/docs/reference/nnf_dropout2d.html @@ -0,0 +1,258 @@ + + + + + + + + +Dropout2d — nnf_dropout2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Randomly zero out entire channels (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(input[i, j]\)) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

    +
    + +
    nnf_dropout2d(input, p = 0.5, training = TRUE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of a channel to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: TRUE.

    inplace

If set to TRUE, will do this operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_dropout3d.html b/static/docs/reference/nnf_dropout3d.html new file mode 100644 index 0000000000000000000000000000000000000000..417bbb3fab9e56f43740da938435d87ad387e50d --- /dev/null +++ b/static/docs/reference/nnf_dropout3d.html @@ -0,0 +1,258 @@ + + + + + + + + +Dropout3d — nnf_dropout3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Randomly zero out entire channels (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(input[i, j]\)) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

    +
    + +
    nnf_dropout3d(input, p = 0.5, training = TRUE, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    p

    probability of a channel to be zeroed. Default: 0.5

    training

apply dropout if TRUE. Default: TRUE.

    inplace

If set to TRUE, will do this operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_elu.html b/static/docs/reference/nnf_elu.html new file mode 100644 index 0000000000000000000000000000000000000000..6f7d757ee1395b9dfbf92aeb3bf79ccde7d7933b --- /dev/null +++ b/static/docs/reference/nnf_elu.html @@ -0,0 +1,259 @@ + + + + + + + + +Elu — nnf_elu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Applies element-wise, $$ELU(x) = max(0,x) + min(0, \alpha * (exp(x) - 1))$$.

    +
    + +
    nnf_elu(input, alpha = 1, inplace = FALSE)

nnf_elu_(input, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional dimensions

    alpha

    the alpha value for the ELU formulation. Default: 1.0

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +

    Examples

    +
if (torch_is_installed()) {
x <- torch_randn(2, 2)
y <- nnf_elu(x, alpha = 1)
nnf_elu_(x, alpha = 1)
torch_equal(x, y)

}
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_embedding.html b/static/docs/reference/nnf_embedding.html new file mode 100644 index 0000000000000000000000000000000000000000..30ced172d9d4fee5188c52f926964f5929f91dd0 --- /dev/null +++ b/static/docs/reference/nnf_embedding.html @@ -0,0 +1,282 @@ + + + + + + + + +Embedding — nnf_embedding • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A simple lookup table that looks up embeddings in a fixed dictionary and size.

    +
    + +
    nnf_embedding(
  input,
  weight,
  padding_idx = NULL,
  max_norm = NULL,
  norm_type = 2,
  scale_grad_by_freq = FALSE,
  sparse = FALSE
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (LongTensor) Tensor containing indices into the embedding matrix

    weight

    (Tensor) The embedding matrix with number of rows equal to the +maximum possible index + 1, and number of columns equal to the embedding size

    padding_idx

    (int, optional) If given, pads the output with the embedding +vector at padding_idx (initialized to zeros) whenever it encounters the index.

    max_norm

    (float, optional) If given, each embedding vector with norm larger +than max_norm is renormalized to have norm max_norm. Note: this will modify +weight in-place.

    norm_type

    (float, optional) The p of the p-norm to compute for the max_norm +option. Default 2.

    scale_grad_by_freq

    (boolean, optional) If given, this will scale gradients +by the inverse of frequency of the words in the mini-batch. Default FALSE.

    sparse

    (bool, optional) If TRUE, gradient w.r.t. weight will be a +sparse tensor. See Notes under nn_embedding for more details regarding +sparse gradients.

    + +

    Details

    + +

    This module is often used to retrieve word embeddings using indices. +The input to the module is a list of indices, and the embedding matrix, +and the output is the corresponding word embeddings.
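As a minimal sketch of the lookup (assuming torch is installed; the weight matrix and indices below are made up for illustration, and indices follow the R package's 1-based convention):

```r
library(torch)

# hypothetical embedding matrix: 10 rows (dictionary size), 3 columns (embedding size)
weight <- torch_randn(10, 3)

# a batch of 2 sequences of 4 indices each
input <- torch_tensor(rbind(c(1, 2, 4, 5), c(4, 3, 2, 9)), dtype = torch_long())

out <- nnf_embedding(input, weight)
out$shape  # one 3-dim embedding per index: 2 x 4 x 3
```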

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_embedding_bag.html b/static/docs/reference/nnf_embedding_bag.html new file mode 100644 index 0000000000000000000000000000000000000000..fb3d738e25a34540946d1486e20fc0463da19fef --- /dev/null +++ b/static/docs/reference/nnf_embedding_bag.html @@ -0,0 +1,299 @@ + + + + + + + + +Embedding_bag — nnf_embedding_bag • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Computes sums, means, or maxes of bags of embeddings, without instantiating the +intermediate embeddings.

    +
    + +
    nnf_embedding_bag(
    +  input,
    +  weight,
    +  offsets = NULL,
    +  max_norm = NULL,
    +  norm_type = 2,
    +  scale_grad_by_freq = FALSE,
    +  mode = "mean",
    +  sparse = FALSE,
    +  per_sample_weights = NULL,
    +  include_last_offset = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (LongTensor) Tensor containing bags of indices into the embedding matrix

    weight

    (Tensor) The embedding matrix with number of rows equal to the +maximum possible index + 1, and number of columns equal to the embedding size

    offsets

    (LongTensor, optional) Only used when input is 1D. offsets +determines the starting index position of each bag (sequence) in input.

    max_norm

    (float, optional) If given, each embedding vector with norm +larger than max_norm is renormalized to have norm max_norm. +Note: this will modify weight in-place.

    norm_type

    (float, optional) The p in the p-norm to compute for the +max_norm option. Default 2.

    scale_grad_by_freq

    (boolean, optional) if given, this will scale gradients +by the inverse of frequency of the words in the mini-batch. Default FALSE. Note: this option is not supported when mode="max".

    mode

    (string, optional) "sum", "mean" or "max". Specifies +the way to reduce the bag. Default: 'mean'

    sparse

    (bool, optional) if TRUE, gradient w.r.t. weight will be a +sparse tensor. See Notes under nn_embedding for more details regarding +sparse gradients. Note: this option is not supported when mode="max".

    per_sample_weights

    (Tensor, optional) a tensor of float / double weights, +or NULL to indicate all weights should be taken to be 1. If specified, +per_sample_weights must have exactly the same shape as input and is treated +as having the same offsets, if those are not NULL.

    include_last_offset

    (bool, optional) if TRUE, the size of offsets is +equal to the number of bags + 1.

    + + +
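A minimal sketch of bagged lookup with a 1-D input (assuming torch is installed; the values are made up, and the offset convention here mirrors the Python interface, so treat it as an illustration rather than a definitive call):

```r
library(torch)

weight <- torch_randn(10, 3)  # dictionary of 10 embeddings of size 3

# 8 indices forming two bags of 4, marked by their starting positions
input <- torch_tensor(c(1, 2, 4, 5, 4, 3, 2, 9), dtype = torch_long())
offsets <- torch_tensor(c(0, 4), dtype = torch_long())

out <- nnf_embedding_bag(input, weight, offsets = offsets, mode = "mean")
out$shape  # one mean-reduced embedding per bag: 2 x 3
```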
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_fold.html b/static/docs/reference/nnf_fold.html new file mode 100644 index 0000000000000000000000000000000000000000..6ac9488780e7f176d1d754256380933d48a4f6ab --- /dev/null +++ b/static/docs/reference/nnf_fold.html @@ -0,0 +1,277 @@ + + + + + + + + +Fold — nnf_fold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Combines an array of sliding local blocks into a large containing +tensor.

    +
    + +
    nnf_fold(
    +  input,
    +  output_size,
    +  kernel_size,
    +  dilation = 1,
    +  padding = 0,
    +  stride = 1
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    output_size

    the shape of the spatial dimensions of the output (i.e., +output$sizes()[-c(1,2)])

    kernel_size

    the size of the sliding blocks

    dilation

    a parameter that controls the stride of elements within the +neighborhood. Default: 1

    padding

    implicit zero padding to be added on both sides of input. +Default: 0

    stride

    the stride of the sliding blocks in the input spatial dimensions. +Default: 1

    + +

    Warning

    + + + + +

    Currently, only 4-D output tensors (batched image-like tensors) are +supported.
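To make the block layout concrete, here is a sketch (assuming torch is installed) folding twelve 2x2 blocks back into a 3-channel 4x5 image; with stride 1, the number of blocks is L = (4 - 2 + 1) * (5 - 2 + 1) = 12:

```r
library(torch)

# blocks flattened to shape (N, C * prod(kernel_size), L) = (1, 12, 12)
blocks <- torch_randn(1, 3 * 2 * 2, 12)

out <- nnf_fold(blocks, output_size = c(4, 5), kernel_size = c(2, 2))
out$shape  # 1 x 3 x 4 x 5
```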

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_fractional_max_pool2d.html b/static/docs/reference/nnf_fractional_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..a7afbdf5b6b9cd0e3889ecd671f0b0cba6dd2bf2 --- /dev/null +++ b/static/docs/reference/nnf_fractional_max_pool2d.html @@ -0,0 +1,274 @@ + + + + + + + + +Fractional_max_pool2d — nnf_fractional_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 2D fractional max pooling over an input signal composed of several input planes.

    +
    + +
    nnf_fractional_max_pool2d(
    +  input,
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE,
    +  random_samples = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    kernel_size

    the size of the window to take a max over. Can be a +single number \(k\) (for a square kernel of \(k * k\)) or +a tuple (kH, kW)

    output_size

    the target output size of the image of the form \(oH * oW\). +Can be a tuple (oH, oW) or a single number \(oH\) for a square image \(oH * oH\)

    output_ratio

    If one wants to have an output size as a ratio of the input size, +this option can be given. This has to be a number or tuple in the range (0, 1)

    return_indices

if TRUE, will return the indices along with the outputs.

    random_samples

    optional random samples.

    + +

    Details

    + +

Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham.

    +

    The max-pooling operation is applied in \(kH * kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.
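A quick sketch of both ways to request the output size (assuming torch is installed; input values are random):

```r
library(torch)

x <- torch_randn(1, 1, 8, 8)

# halve the spatial size via an explicit output_size ...
y1 <- nnf_fractional_max_pool2d(x, kernel_size = 2, output_size = c(4, 4))

# ... or via a ratio of the input size
y2 <- nnf_fractional_max_pool2d(x, kernel_size = 2, output_ratio = 0.5)

y1$shape  # 1 x 1 x 4 x 4
```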

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_fractional_max_pool3d.html b/static/docs/reference/nnf_fractional_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..09dfbc31b162667c3f87c2438e6925bf88c444bc --- /dev/null +++ b/static/docs/reference/nnf_fractional_max_pool3d.html @@ -0,0 +1,275 @@ + + + + + + + + +Fractional_max_pool3d — nnf_fractional_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies 3D fractional max pooling over an input signal composed of several input planes.

    +
    + +
    nnf_fractional_max_pool3d(
    +  input,
    +  kernel_size,
    +  output_size = NULL,
    +  output_ratio = NULL,
    +  return_indices = FALSE,
    +  random_samples = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    kernel_size

    the size of the window to take a max over. Can be a single number \(k\) +(for a square kernel of \(k * k * k\)) or a tuple (kT, kH, kW)

    output_size

    the target output size of the form \(oT * oH * oW\). +Can be a tuple (oT, oH, oW) or a single number \(oH\) for a cubic output +\(oH * oH * oH\)

    output_ratio

    If one wants to have an output size as a ratio of the +input size, this option can be given. This has to be a number or tuple in the +range (0, 1)

    return_indices

if TRUE, will return the indices along with the outputs.

    random_samples

optional random samples.

    + +

    Details

    + +

Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham.

    +

    The max-pooling operation is applied in \(kT * kH * kW\) regions by a stochastic +step size determined by the target output size. +The number of output features is equal to the number of input planes.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_gelu.html b/static/docs/reference/nnf_gelu.html new file mode 100644 index 0000000000000000000000000000000000000000..78acd4728ad375b7d5b8b64243bebce0d55a1293 --- /dev/null +++ b/static/docs/reference/nnf_gelu.html @@ -0,0 +1,248 @@ + + + + + + + + +Gelu — nnf_gelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gelu

    +
    + +
    nnf_gelu(input)
    + +

    Arguments

    + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    + +

    gelu(input) -> Tensor

    + + + + +

    Applies element-wise the function +\(GELU(x) = x * \Phi(x)\)

    +

    where \(\Phi(x)\) is the Cumulative Distribution Function for +Gaussian Distribution.

    +

    See Gaussian Error Linear Units (GELUs).
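A one-line sketch (assuming torch is installed):

```r
library(torch)

x <- torch_tensor(c(-2, -1, 0, 1, 2))
nnf_gelu(x)  # negative inputs are shrunk towards 0; gelu(0) = 0
```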

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_glu.html b/static/docs/reference/nnf_glu.html new file mode 100644 index 0000000000000000000000000000000000000000..b9bba4760fd62848a997e0ce70d093711bb90718 --- /dev/null +++ b/static/docs/reference/nnf_glu.html @@ -0,0 +1,248 @@ + + + + + + + + +Glu — nnf_glu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The gated linear unit. Computes:

    +
    + +
    nnf_glu(input, dim = -1)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (Tensor) input tensor

    dim

    (int) dimension on which to split the input. Default: -1

    + +

    Details

    + +

    $$GLU(a, b) = a \otimes \sigma(b)$$

    +

    where input is split in half along dim to form a and b, \(\sigma\) +is the sigmoid function and \(\otimes\) is the element-wise product +between matrices.

    +

    See Language Modeling with Gated Convolutional Networks.
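A minimal sketch (assuming torch is installed); the split dimension must have even size, since it is halved into a and b:

```r
library(torch)

x <- torch_randn(4, 6)
y <- nnf_glu(x, dim = -1)  # splits the last dimension (size 6) into halves a, b
y$shape                    # 4 x 3
```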

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_grid_sample.html b/static/docs/reference/nnf_grid_sample.html new file mode 100644 index 0000000000000000000000000000000000000000..c1de2a986eb54b89a719355d3eb743bd5fb02493 --- /dev/null +++ b/static/docs/reference/nnf_grid_sample.html @@ -0,0 +1,309 @@ + + + + + + + + +Grid_sample — nnf_grid_sample • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Given an input and a flow-field grid, computes the +output using input values and pixel locations from grid.

    +
    + +
    nnf_grid_sample(
    +  input,
    +  grid,
    +  mode = c("bilinear", "nearest"),
    +  padding_mode = c("zeros", "border", "reflection"),
    +  align_corners = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) input of shape \((N, C, H_{\mbox{in}}, W_{\mbox{in}})\) (4-D case) or \((N, C, D_{\mbox{in}}, H_{\mbox{in}}, W_{\mbox{in}})\) (5-D case)

    grid

    (Tensor) flow-field of shape \((N, H_{\mbox{out}}, W_{\mbox{out}}, 2)\) (4-D case) or \((N, D_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}}, 3)\) (5-D case)

    mode

    (str) interpolation mode to calculate output values 'bilinear' | 'nearest'. +Default: 'bilinear'

    padding_mode

    (str) padding mode for outside grid values 'zeros' | 'border' +| 'reflection'. Default: 'zeros'

    align_corners

(bool, optional) Geometrically, we consider the pixels of the +input as squares rather than points. If set to TRUE, the extrema (-1 and +1) are considered as referring to the center points of the input's corner pixels. +If set to FALSE, they are instead considered as referring to the corner +points of the input's corner pixels, making the sampling more resolution +agnostic. This option parallels the align_corners option in nnf_interpolate(), and +so whichever option is used here should also be used there to resize the input +image before grid sampling. Default: FALSE

    + +

    Details

    + +

    Currently, only spatial (4-D) and volumetric (5-D) input are +supported.

    +

    In the spatial (4-D) case, for input with shape +\((N, C, H_{\mbox{in}}, W_{\mbox{in}})\) and grid with shape +\((N, H_{\mbox{out}}, W_{\mbox{out}}, 2)\), the output will have shape +\((N, C, H_{\mbox{out}}, W_{\mbox{out}})\).

    +

For each output location output[n, :, h, w], the size-2 vector +grid[n, h, w] specifies input pixel locations x and y, +which are used to interpolate the output value output[n, :, h, w]. +In the case of 5D inputs, grid[n, d, h, w] specifies the +x, y, z pixel locations for interpolating +output[n, :, d, h, w]. The mode argument specifies the nearest or +bilinear interpolation method used to sample the input pixels.

    +

    grid specifies the sampling pixel locations normalized by the +input spatial dimensions. Therefore, it should have most values in +the range of [-1, 1]. For example, values x = -1, y = -1 is the +left-top pixel of input, and values x = 1, y = 1 is the +right-bottom pixel of input.

    +

    If grid has values outside the range of [-1, 1], the corresponding +outputs are handled as defined by padding_mode. Options are

      +
    • padding_mode="zeros": use 0 for out-of-bound grid locations,

    • +
    • padding_mode="border": use border values for out-of-bound grid locations,

    • +
    • padding_mode="reflection": use values at locations reflected by +the border for out-of-bound grid locations. For location far away +from the border, it will keep being reflected until becoming in bound, +e.g., (normalized) pixel location x = -3.5 reflects by border -1 +and becomes x' = 1.5, then reflects by border 1 and becomes +x'' = -0.5.

    • +
    + +

    Note

    + + + + +

This function is often used in conjunction with nnf_affine_grid() +to build Spatial Transformer Networks.
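The pairing with nnf_affine_grid() can be sketched as follows (assuming torch is installed), using an identity affine transform so that the sampling grid lands exactly on the input pixels:

```r
library(torch)

input <- torch_randn(1, 1, 4, 4)

# identity affine transform: a (N, 2, 3) batch of [I | 0] matrices
theta <- torch_eye(2, 3)$unsqueeze(1)
grid <- nnf_affine_grid(theta, size = c(1, 1, 4, 4), align_corners = FALSE)

out <- nnf_grid_sample(input, grid, align_corners = FALSE)
# with the identity grid, out reproduces input (up to floating point error)
```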

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_group_norm.html b/static/docs/reference/nnf_group_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..16e20f9e1ad26c4efa125d78788de6fab95b2a94 --- /dev/null +++ b/static/docs/reference/nnf_group_norm.html @@ -0,0 +1,253 @@ + + + + + + + + +Group_norm — nnf_group_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Group Normalization for last certain number of dimensions.

    +
    + +
    nnf_group_norm(input, num_groups, weight = NULL, bias = NULL, eps = 1e-05)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    num_groups

    number of groups to separate the channels into

    weight

    the weight tensor

    bias

    the bias tensor

    eps

    a value added to the denominator for numerical stability. Default: 1e-5

    + + +
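A minimal sketch (assuming torch is installed); num_groups must divide the channel dimension:

```r
library(torch)

x <- torch_randn(2, 6, 4, 4)            # (N, C, H, W) with C = 6 channels
y <- nnf_group_norm(x, num_groups = 3)  # normalize over 3 groups of 2 channels each
y$shape                                 # unchanged: 2 x 6 x 4 x 4
```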
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_gumbel_softmax.html b/static/docs/reference/nnf_gumbel_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..dabf6530357f007707fb948a8920820222b85376 --- /dev/null +++ b/static/docs/reference/nnf_gumbel_softmax.html @@ -0,0 +1,251 @@ + + + + + + + + +Gumbel_softmax — nnf_gumbel_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Samples from the Gumbel-Softmax distribution and +optionally discretizes.

    +
    + +
    nnf_gumbel_softmax(logits, tau = 1, hard = FALSE, dim = -1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    logits

    [..., num_features] unnormalized log probabilities

    tau

    non-negative scalar temperature

    hard

if TRUE, the returned samples will be discretized as one-hot vectors, but will be differentiated as if they were the soft samples in autograd

    dim

    (int) A dimension along which softmax will be computed. Default: -1.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_hardshrink.html b/static/docs/reference/nnf_hardshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..633de3e9334c129b346d8a9b3c4cd540950b65de --- /dev/null +++ b/static/docs/reference/nnf_hardshrink.html @@ -0,0 +1,242 @@ + + + + + + + + +Hardshrink — nnf_hardshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hard shrinkage function element-wise

    +
    + +
    nnf_hardshrink(input, lambd = 0.5)
    + +

    Arguments

    + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    lambd

    the lambda value for the Hardshrink formulation. Default: 0.5

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_hardsigmoid.html b/static/docs/reference/nnf_hardsigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..9c3a747596644594bc4a25549534c39ae7335de5 --- /dev/null +++ b/static/docs/reference/nnf_hardsigmoid.html @@ -0,0 +1,242 @@ + + + + + + + + +Hardsigmoid — nnf_hardsigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function \(\mbox{Hardsigmoid}(x) = \frac{ReLU6(x + 3)}{6}\)

    +
    + +
    nnf_hardsigmoid(input, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    inplace

if TRUE, will do this operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_hardswish.html b/static/docs/reference/nnf_hardswish.html new file mode 100644 index 0000000000000000000000000000000000000000..3d969fa2a09b4f4e5e511e4a5ec4e14d8c949db2 --- /dev/null +++ b/static/docs/reference/nnf_hardswish.html @@ -0,0 +1,253 @@ + + + + + + + + +Hardswish — nnf_hardswish • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the hardswish function, element-wise, as described in the paper: +Searching for MobileNetV3.

    +
    + +
    nnf_hardswish(input, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + +

    Details

    + +

    $$ \mbox{Hardswish}(x) = \left\{ + \begin{array}{ll} + 0 & \mbox{if } x \le -3, \\ + x & \mbox{if } x \ge +3, \\ + x \cdot (x + 3)/6 & \mbox{otherwise} + \end{array} + \right. $$
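The piecewise definition above can be checked directly (assuming torch is installed):

```r
library(torch)

x <- torch_tensor(c(-4, -1, 0, 1, 4))
nnf_hardswish(x)  # 0 for x <= -3, x for x >= 3, x * (x + 3) / 6 in between
```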

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_hardtanh.html b/static/docs/reference/nnf_hardtanh.html new file mode 100644 index 0000000000000000000000000000000000000000..cc6e8c7089de4158e083159674c09eb7bd2ba678 --- /dev/null +++ b/static/docs/reference/nnf_hardtanh.html @@ -0,0 +1,252 @@ + + + + + + + + +Hardtanh — nnf_hardtanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the HardTanh function element-wise.

    +
    + +
    nnf_hardtanh(input, min_val = -1, max_val = 1, inplace = FALSE)
    +
    +nnf_hardtanh_(input, min_val = -1, max_val = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    min_val

    minimum value of the linear region range. Default: -1

    max_val

    maximum value of the linear region range. Default: 1

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_hinge_embedding_loss.html b/static/docs/reference/nnf_hinge_embedding_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..e7c6a0e6e6c180c49cb324a7ae4acf8e606aa195 --- /dev/null +++ b/static/docs/reference/nnf_hinge_embedding_loss.html @@ -0,0 +1,258 @@ + + + + + + + + +Hinge_embedding_loss — nnf_hinge_embedding_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). +This is usually used for measuring whether two inputs are similar or dissimilar, e.g. +using the L1 pairwise distance as x, and is typically used for learning nonlinear +embeddings or in semi-supervised learning.

    +
    + +
    nnf_hinge_embedding_loss(input, target, margin = 1, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    margin

    Has a default value of 1.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
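A minimal sketch (assuming torch is installed; the inputs below are random stand-ins, with the target restricted to the expected {-1, 1} labels):

```r
library(torch)

input <- torch_randn(3, 5)               # e.g. pairwise distances
target <- torch_sign(torch_randn(3, 5))  # labels in {-1, 1}

loss <- nnf_hinge_embedding_loss(input, target, margin = 1, reduction = "mean")
loss  # a scalar tensor
```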
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_instance_norm.html b/static/docs/reference/nnf_instance_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..3b30820f5aa5d294b177cd6e24fb61840ec9ad41 --- /dev/null +++ b/static/docs/reference/nnf_instance_norm.html @@ -0,0 +1,276 @@ + + + + + + + + +Instance_norm — nnf_instance_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Instance Normalization for each channel in each data sample in a +batch.

    +
    + +
    nnf_instance_norm(
    +  input,
    +  running_mean = NULL,
    +  running_var = NULL,
    +  weight = NULL,
    +  bias = NULL,
    +  use_input_stats = TRUE,
    +  momentum = 0.1,
    +  eps = 1e-05
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    running_mean

    the running_mean tensor

    running_var

    the running var tensor

    weight

    the weight tensor

    bias

    the bias tensor

    use_input_stats

    whether to use input stats

    momentum

    a double for the momentum

    eps

    an eps double for numerical stability

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_interpolate.html b/static/docs/reference/nnf_interpolate.html new file mode 100644 index 0000000000000000000000000000000000000000..2606ca3be1737d593a8799e5855beee58f0da01f --- /dev/null +++ b/static/docs/reference/nnf_interpolate.html @@ -0,0 +1,295 @@ + + + + + + + + +Interpolate — nnf_interpolate • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Down/up samples the input to either the given size or the given +scale_factor

    +
    + +
    nnf_interpolate(
    +  input,
    +  size = NULL,
    +  scale_factor = NULL,
    +  mode = "nearest",
    +  align_corners = FALSE,
    +  recompute_scale_factor = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the input tensor

    size

    (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) +output spatial size.

    scale_factor

    (float or Tuple[float]) multiplier for spatial size. +Has to match input size if it is a tuple.

    mode

    (str) algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' +| 'bicubic' | 'trilinear' | 'area' Default: 'nearest'

    align_corners

(bool, optional) Geometrically, we consider the pixels +of the input and output as squares rather than points. If set to TRUE, +the input and output tensors are aligned by the center points of their corner +pixels, preserving the values at the corner pixels. If set to FALSE, the +input and output tensors are aligned by the corner points of their corner pixels, +and the interpolation uses edge value padding for out-of-boundary values, +making this operation independent of input size when scale_factor is kept +the same. This only has an effect when mode is 'linear', 'bilinear', +'bicubic' or 'trilinear'. Default: FALSE

    recompute_scale_factor

(bool, optional) recompute the scale_factor +for use in the interpolation calculation. When scale_factor is passed +as a parameter, it is used to compute the output_size. If recompute_scale_factor +is TRUE or not specified, a new scale_factor will be computed based on +the output and input sizes for use in the interpolation computation (i.e. the +computation will be identical to passing the computed output_size +explicitly). Otherwise, the passed-in scale_factor will be used in the +interpolation computation. Note that when scale_factor is floating-point, +the recomputed scale_factor may differ from the one passed in due to rounding +and precision issues.

    + +

    Details

    + +

    The algorithm used for interpolation is determined by mode.

    +

    Currently temporal, spatial and volumetric sampling are supported, i.e. +expected inputs are 3-D, 4-D or 5-D in shape.

    +

    The input dimensions are interpreted in the form: +mini-batch x channels x [optional depth] x [optional height] x width.

    +

    The modes available for resizing are: nearest, linear (3D-only), +bilinear, bicubic (4D-only), trilinear (5D-only), area
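Both ways of specifying the target size can be sketched as follows (assuming torch is installed; input values are random):

```r
library(torch)

x <- torch_randn(1, 1, 4, 4)

# upsample to a fixed size ...
y1 <- nnf_interpolate(x, size = c(8, 8), mode = "nearest")

# ... or by a scale factor, with a smoother mode
y2 <- nnf_interpolate(x, scale_factor = 2, mode = "bilinear", align_corners = FALSE)

y1$shape  # 1 x 1 x 8 x 8
```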

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_kl_div.html b/static/docs/reference/nnf_kl_div.html new file mode 100644 index 0000000000000000000000000000000000000000..d90a75f01950c05519cbb2d3e2593dba04f2df03 --- /dev/null +++ b/static/docs/reference/nnf_kl_div.html @@ -0,0 +1,248 @@ + + + + + + + + +Kl_div — nnf_kl_div • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The Kullback-Leibler divergence Loss.

    +
    + +
    nnf_kl_div(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_l1_loss.html b/static/docs/reference/nnf_l1_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..790dee1b25112d3abb3d5779c0821f1fe7f3226d --- /dev/null +++ b/static/docs/reference/nnf_l1_loss.html @@ -0,0 +1,248 @@ + + + + + + + + +L1_loss — nnf_l1_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Function that takes the mean element-wise absolute value difference.

    +
    + +
    nnf_l1_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_layer_norm.html b/static/docs/reference/nnf_layer_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..b9ba41fab5a41faae6f3805f23e100975b6511cd --- /dev/null +++ b/static/docs/reference/nnf_layer_norm.html @@ -0,0 +1,261 @@ + + + + + + + + +Layer_norm — nnf_layer_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies Layer Normalization for last certain number of dimensions.

    +
    + +
    nnf_layer_norm(
    +  input,
    +  normalized_shape,
    +  weight = NULL,
    +  bias = NULL,
    +  eps = 1e-05
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    normalized_shape

the input shape over which to normalize, taken from the trailing dimensions of the +expected input. If a single integer is used, it is treated as a singleton list, and this module will normalize +over the last dimension, which is expected to be of that specific size.

    weight

    the weight tensor

    bias

    the bias tensor

    eps

    a value added to the denominator for numerical stability. Default: 1e-5

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_leaky_relu.html b/static/docs/reference/nnf_leaky_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..f5bcbf2540119ed121a7e7ef773fc8076fc7feea --- /dev/null +++ b/static/docs/reference/nnf_leaky_relu.html @@ -0,0 +1,248 @@ + + + + + + + + +Leaky_relu — nnf_leaky_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, +\(LeakyReLU(x) = max(0, x) + negative_slope * min(0, x)\)

    +
    + +
    nnf_leaky_relu(input, negative_slope = 0.01, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    negative_slope

    Controls the angle of the negative slope. Default: 1e-2

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_linear.html b/static/docs/reference/nnf_linear.html new file mode 100644 index 0000000000000000000000000000000000000000..45d222cf24ca2062b45dc55d51ed473bf87ab012 --- /dev/null +++ b/static/docs/reference/nnf_linear.html @@ -0,0 +1,246 @@ + + + + + + + + +Linear — nnf_linear • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a linear transformation to the incoming data: \(y = xA^T + b\).

    +
    + +
    nnf_linear(input, weight, bias = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    \((N, *, in\_features)\) where * means any number of +additional dimensions

    weight

    \((out\_features, in\_features)\) the weights tensor.

    bias

    optional tensor \((out\_features)\)

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_local_response_norm.html b/static/docs/reference/nnf_local_response_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..612a9ee528a415957a74af154118a17e273d9517 --- /dev/null +++ b/static/docs/reference/nnf_local_response_norm.html @@ -0,0 +1,257 @@ + + + + + + + + +Local_response_norm — nnf_local_response_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies local response normalization over an input signal composed of +several input planes, where channels occupy the second dimension. +Applies normalization across channels.

    +
    + +
    nnf_local_response_norm(input, size, alpha = 1e-04, beta = 0.75, k = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    size

    amount of neighbouring channels used for normalization

    alpha

    multiplicative factor. Default: 0.0001

    beta

    exponent. Default: 0.75

    k

    additive factor. Default: 1

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_log_softmax.html b/static/docs/reference/nnf_log_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..f29faf0e33d702bbd95ac785460a104da764a52b --- /dev/null +++ b/static/docs/reference/nnf_log_softmax.html @@ -0,0 +1,253 @@ + + + + + + + + +Log_softmax — nnf_log_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a softmax followed by a logarithm.

    +
    + +
    nnf_log_softmax(input, dim = NULL, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (Tensor) input

    dim

    (int) A dimension along which log_softmax will be computed.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. +If specified, the input tensor is casted to dtype before the operation +is performed. This is useful for preventing data type overflows. +Default: NULL.

    + +

    Details

    + +

    While mathematically equivalent to log(softmax(x)), doing these two +operations separately is slower, and numerically unstable. This function +uses an alternative formulation to compute the output and gradient correctly.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_logsigmoid.html b/static/docs/reference/nnf_logsigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..5c2b14712c9e45efb3348278b38f497943ed0af7 --- /dev/null +++ b/static/docs/reference/nnf_logsigmoid.html @@ -0,0 +1,238 @@ + + + + + + + + +Logsigmoid — nnf_logsigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise \(LogSigmoid(x_i) = log(\frac{1}{1 + exp(-x_i)})\)

    +
    + +
    nnf_logsigmoid(input)
    + +

    Arguments

    + + + + + + +
    input

(N,*) tensor, where * means any number of additional +dimensions

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_lp_pool1d.html b/static/docs/reference/nnf_lp_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..70abd1fc226766b65d5852fddfa31732f549fc45 --- /dev/null +++ b/static/docs/reference/nnf_lp_pool1d.html @@ -0,0 +1,258 @@ + + + + + + + + +Lp_pool1d — nnf_lp_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D power-average pooling over an input signal composed of +several input planes. If the sum of all inputs to the power of p is +zero, the gradient is set to zero as well.

    +
    + +
    nnf_lp_pool1d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling (which is +proportional to average pooling)

    kernel_size

    a single int, the size of the window

    stride

    a single int, the stride of the window. Default value is kernel_size

    ceil_mode

    when True, will use ceil instead of floor to compute the output shape

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_lp_pool2d.html b/static/docs/reference/nnf_lp_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..d8f53a49dd5e18174d1f9ccd4a2ba931f8bc9028 --- /dev/null +++ b/static/docs/reference/nnf_lp_pool2d.html @@ -0,0 +1,258 @@ + + + + + + + + +Lp_pool2d — nnf_lp_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D power-average pooling over an input signal composed of +several input planes. If the sum of all inputs to the power of p is +zero, the gradient is set to zero as well.

    +
    + +
    nnf_lp_pool2d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    norm_type

if inf, one gets max pooling; if 1, one gets sum pooling (which is +proportional to average pooling)

    kernel_size

    a single int, the size of the window

    stride

    a single int, the stride of the window. Default value is kernel_size

    ceil_mode

    when True, will use ceil instead of floor to compute the output shape

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_margin_ranking_loss.html b/static/docs/reference/nnf_margin_ranking_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..02a37006266f37bdec4f9fb63b2369cdeff24039 --- /dev/null +++ b/static/docs/reference/nnf_margin_ranking_loss.html @@ -0,0 +1,258 @@ + + + + + + + + +Margin_ranking_loss — nnf_margin_ranking_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the loss given inputs x1 , x2 , two 1D +mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).

    +
    + +
    nnf_margin_ranking_loss(input1, input2, target, margin = 0, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input1

    the first tensor

    input2

    the second input tensor

    target

    the target tensor

    margin

Has a default value of 0.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_max_pool1d.html b/static/docs/reference/nnf_max_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..0663c6e52165761f26f855b36b8eb1a930f6cedb --- /dev/null +++ b/static/docs/reference/nnf_max_pool1d.html @@ -0,0 +1,276 @@ + + + + + + + + +Max_pool1d — nnf_max_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 1D max pooling over an input signal composed of several input +planes.

    +
    + +
    nnf_max_pool1d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  ceil_mode = FALSE,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape (minibatch , in_channels , iW)

    kernel_size

    the size of the window. Can be a single number or a +tuple (kW,).

    stride

    the stride of the window. Can be a single number or a tuple +(sW,). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padW,). Default: 0

    dilation

    controls the spacing between the kernel points; also known as +the à trous algorithm.

    ceil_mode

    when True, will use ceil instead of floor to compute the +output shape. Default: FALSE

    return_indices

    whether to return the indices where the max occurs.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_max_pool2d.html b/static/docs/reference/nnf_max_pool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..f7dafcb091493869816e01003c6834cc0d8758e6 --- /dev/null +++ b/static/docs/reference/nnf_max_pool2d.html @@ -0,0 +1,276 @@ + + + + + + + + +Max_pool2d — nnf_max_pool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 2D max pooling over an input signal composed of several input +planes.

    +
    + +
    nnf_max_pool2d(
    +  input,
    +  kernel_size,
    +  stride = kernel_size,
    +  padding = 0,
    +  dilation = 1,
    +  ceil_mode = FALSE,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor (minibatch, in_channels , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

    dilation

    controls the spacing between the kernel points; also known as +the à trous algorithm.

    ceil_mode

    when True, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

    return_indices

    whether to return the indices where the max occurs.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_max_pool3d.html b/static/docs/reference/nnf_max_pool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..840ffcf2e5643c2255f91dadd882a29c62887deb --- /dev/null +++ b/static/docs/reference/nnf_max_pool3d.html @@ -0,0 +1,276 @@ + + + + + + + + +Max_pool3d — nnf_max_pool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a 3D max pooling over an input signal composed of several input +planes.

    +
    + +
    nnf_max_pool3d(
    +  input,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  dilation = 1,
    +  ceil_mode = FALSE,
    +  return_indices = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

input tensor (minibatch, in_channels , iT , iH , iW)

    kernel_size

    size of the pooling region. Can be a single number or a +tuple (kT, kH, kW)

    stride

    stride of the pooling operation. Can be a single number or a +tuple (sT, sH, sW). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW), Default: 0

    dilation

    controls the spacing between the kernel points; also known as +the à trous algorithm.

    ceil_mode

    when True, will use ceil instead of floor in the formula +to compute the output shape

    return_indices

    whether to return the indices where the max occurs.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_max_unpool1d.html b/static/docs/reference/nnf_max_unpool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..721c3111118510bf7cc50716ba8ffdccea377c5e --- /dev/null +++ b/static/docs/reference/nnf_max_unpool1d.html @@ -0,0 +1,264 @@ + + + + + + + + +Max_unpool1d — nnf_max_unpool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes a partial inverse of MaxPool1d.

    +
    + +
    nnf_max_unpool1d(
    +  input,
    +  indices,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  output_size = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input Tensor to invert

    indices

    the indices given out by max pool

    kernel_size

    Size of the max pooling window.

    stride

    Stride of the max pooling window. It is set to kernel_size by default.

    padding

    Padding that was added to the input

    output_size

    the targeted output size

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_max_unpool2d.html b/static/docs/reference/nnf_max_unpool2d.html new file mode 100644 index 0000000000000000000000000000000000000000..eaeb2fe6cc68d64491c71e2e8b4d9e7501e8e320 --- /dev/null +++ b/static/docs/reference/nnf_max_unpool2d.html @@ -0,0 +1,264 @@ + + + + + + + + +Max_unpool2d — nnf_max_unpool2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes a partial inverse of MaxPool2d.

    +
    + +
    nnf_max_unpool2d(
    +  input,
    +  indices,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  output_size = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input Tensor to invert

    indices

    the indices given out by max pool

    kernel_size

    Size of the max pooling window.

    stride

    Stride of the max pooling window. It is set to kernel_size by default.

    padding

    Padding that was added to the input

    output_size

    the targeted output size

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_max_unpool3d.html b/static/docs/reference/nnf_max_unpool3d.html new file mode 100644 index 0000000000000000000000000000000000000000..65bb334cc989265d5df54d5587e1bb8997932c93 --- /dev/null +++ b/static/docs/reference/nnf_max_unpool3d.html @@ -0,0 +1,264 @@ + + + + + + + + +Max_unpool3d — nnf_max_unpool3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes a partial inverse of MaxPool3d.

    +
    + +
    nnf_max_unpool3d(
    +  input,
    +  indices,
    +  kernel_size,
    +  stride = NULL,
    +  padding = 0,
    +  output_size = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input Tensor to invert

    indices

    the indices given out by max pool

    kernel_size

    Size of the max pooling window.

    stride

    Stride of the max pooling window. It is set to kernel_size by default.

    padding

    Padding that was added to the input

    output_size

    the targeted output size

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_mse_loss.html b/static/docs/reference/nnf_mse_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..39a803971108ab68cef82fef1f90cbdff7a9386e --- /dev/null +++ b/static/docs/reference/nnf_mse_loss.html @@ -0,0 +1,248 @@ + + + + + + + + +Mse_loss — nnf_mse_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Measures the element-wise mean squared error.

    +
    + +
    nnf_mse_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_multi_head_attention_forward.html b/static/docs/reference/nnf_multi_head_attention_forward.html new file mode 100644 index 0000000000000000000000000000000000000000..f4e6e342867559a3cfa4199aad63a793c7307bc9 --- /dev/null +++ b/static/docs/reference/nnf_multi_head_attention_forward.html @@ -0,0 +1,366 @@ + + + + + + + + +Multi head attention forward — nnf_multi_head_attention_forward • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Allows the model to jointly attend to information from different representation +subspaces. See reference: Attention Is All You Need

    +
    + +
    nnf_multi_head_attention_forward(
    +  query,
    +  key,
    +  value,
    +  embed_dim_to_check,
    +  num_heads,
    +  in_proj_weight,
    +  in_proj_bias,
    +  bias_k,
    +  bias_v,
    +  add_zero_attn,
    +  dropout_p,
    +  out_proj_weight,
    +  out_proj_bias,
    +  training = TRUE,
    +  key_padding_mask = NULL,
    +  need_weights = TRUE,
    +  attn_mask = NULL,
    +  use_separate_proj_weight = FALSE,
    +  q_proj_weight = NULL,
    +  k_proj_weight = NULL,
    +  v_proj_weight = NULL,
    +  static_k = NULL,
    +  static_v = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    query

    \((L, N, E)\) where L is the target sequence length, N is the batch size, E is +the embedding dimension.

    key

    \((S, N, E)\), where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    value

    \((S, N, E)\) where S is the source sequence length, N is the batch size, E is +the embedding dimension.

    embed_dim_to_check

    total dimension of the model.

    num_heads

    parallel attention heads.

    in_proj_weight

    input projection weight and bias.

    in_proj_bias

    currently undocumented.

    bias_k

    bias of the key and value sequences to be added at dim=0.

    bias_v

    currently undocumented.

    add_zero_attn

    add a new batch of zeros to the key and +value sequences at dim=1.

    dropout_p

    probability of an element to be zeroed.

    out_proj_weight

    the output projection weight and bias.

    out_proj_bias

    currently undocumented.

    training

    apply dropout if is TRUE.

    key_padding_mask

\((N, S)\) where N is the batch size, S is the source sequence length. +If a ByteTensor is provided, the non-zero positions will be ignored while +the zero positions will be unchanged. If a BoolTensor is provided, the positions with the +value of True will be ignored while the positions with the value of False will be unchanged.

    need_weights

    output attn_output_weights.

    attn_mask

2D mask \((L, S)\) where L is the target sequence length, S is the source sequence length. +3D mask \((N*num_heads, L, S)\) where N is the batch size, L is the target sequence length, +S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked +positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend +while the zero positions will be unchanged. If a BoolTensor is provided, positions with True +are not allowed to attend while positions with False will be unchanged. If a FloatTensor +is provided, it will be added to the attention weight.

    use_separate_proj_weight

the function accepts the projection weights for +query, key, and value in different forms. If FALSE, in_proj_weight will be used, +which is a combination of q_proj_weight, k_proj_weight, and v_proj_weight.

    q_proj_weight

    input projection weight and bias.

    k_proj_weight

    currently undocumented.

    v_proj_weight

    currently undocumented.

    static_k

    static key and value used for attention operators.

    static_v

    currently undocumented.

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_multi_margin_loss.html b/static/docs/reference/nnf_multi_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..11526ddf479a37a3b395a81ea432e6a0c41d118c --- /dev/null +++ b/static/docs/reference/nnf_multi_margin_loss.html @@ -0,0 +1,272 @@ + + + + + + + + +Multi_margin_loss — nnf_multi_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a multi-class classification hinge loss +(margin-based loss) between input x (a 2D mini-batch Tensor) and output y +(which is a 1D tensor of target class indices, 0 <= y <= x$size(2) - 1 ).

    +
    + +
    nnf_multi_margin_loss(
    +  input,
    +  target,
    +  p = 1,
    +  margin = 1,
    +  weight = NULL,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    p

    Has a default value of 1. 1 and 2 are the only supported values.

    margin

    Has a default value of 1.

    weight

    a manual rescaling weight given to each class. If given, it has to +be a Tensor of size C. Otherwise, it is treated as if having all ones.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_multilabel_margin_loss.html b/static/docs/reference/nnf_multilabel_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..8281d5e2fffd5f9f59decb6acea3da6145e3dbf4 --- /dev/null +++ b/static/docs/reference/nnf_multilabel_margin_loss.html @@ -0,0 +1,252 @@ + + + + + + + + +Multilabel_margin_loss — nnf_multilabel_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a multi-class multi-classification hinge loss +(margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which +is a 2D Tensor of target class indices).

    +
    + +
    nnf_multilabel_margin_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_multilabel_soft_margin_loss.html b/static/docs/reference/nnf_multilabel_soft_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..0150c00f4f0bd00a51fec1e317d8fa21c9d81692 --- /dev/null +++ b/static/docs/reference/nnf_multilabel_soft_margin_loss.html @@ -0,0 +1,254 @@ + + + + + + + + +Multilabel_soft_margin_loss — nnf_multilabel_soft_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a multi-label one-versus-all loss based on +max-entropy, between input x and target y of size (N, C).

    +
    + +
    nnf_multilabel_soft_margin_loss(input, target, weight, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    weight

    weight tensor to apply on the loss.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_nll_loss.html b/static/docs/reference/nnf_nll_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..96ba14ba4939e7753a32c116be8c4defb4da84ea --- /dev/null +++ b/static/docs/reference/nnf_nll_loss.html @@ -0,0 +1,267 @@ + + + + + + + + +Nll_loss — nnf_nll_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    The negative log likelihood loss.

    +
    + +
    nnf_nll_loss(
    +  input,
    +  target,
    +  weight = NULL,
    +  ignore_index = -100,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    \((N, C)\) where C = number of classes or \((N, C, H, W)\) in +case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in +the case of K-dimensional loss.

    target

    \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), +or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

    weight

    (Tensor, optional) a manual rescaling weight given to each class. +If given, has to be a Tensor of size C

    ignore_index

    (int, optional) Specifies a target value that is ignored and +does not contribute to the input gradient.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_normalize.html b/static/docs/reference/nnf_normalize.html new file mode 100644 index 0000000000000000000000000000000000000000..28efa2debbfebcd9dee8670c7bdfa9cb39f45867 --- /dev/null +++ b/static/docs/reference/nnf_normalize.html @@ -0,0 +1,262 @@ + + + + + + + + +Normalize — nnf_normalize • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Performs \(L_p\) normalization of inputs over specified dimension.

    +
    + +
    nnf_normalize(input, p = 2, dim = 1, eps = 1e-12, out = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of any shape

    p

    (float) the exponent value in the norm formulation. Default: 2

    dim

    (int) the dimension to reduce. Default: 1

    eps

    (float) small value to avoid division by zero. Default: 1e-12

    out

    (Tensor, optional) the output tensor. If out is used, this operation won't be differentiable.

    + +

    Details

    + +

    For a tensor input of sizes \((n_0, ..., n_{dim}, ..., n_k)\), each +\(n_{dim}\) -element vector \(v\) along dimension dim is transformed as

    +

    $$ + v = \frac{v}{\max(\Vert v \Vert_p, \epsilon)}. +$$

    +

    With the default arguments it uses the Euclidean norm over vectors along +dimension \(1\) for normalization.

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_one_hot.html b/static/docs/reference/nnf_one_hot.html new file mode 100644 index 0000000000000000000000000000000000000000..7ffec6383e43b5bac16b16dafe8bf89a23ba4ed2 --- /dev/null +++ b/static/docs/reference/nnf_one_hot.html @@ -0,0 +1,252 @@ + + + + + + + + +One_hot — nnf_one_hot • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Takes LongTensor with index values of shape (*) and returns a tensor +of shape (*, num_classes) that have zeros everywhere except where the +index of last dimension matches the corresponding value of the input tensor, +in which case it will be 1.

    +
    + +
    nnf_one_hot(tensor, num_classes = -1)
    + +

    Arguments

    + + + + + + + + + + +
    tensor

    (LongTensor) class values of any shape.

    num_classes

    (int) Total number of classes. If set to -1, the number +of classes will be inferred as one greater than the largest class value in +the input tensor.

    + +

    Details

    + +

    One-hot on Wikipedia: https://en.wikipedia.org/wiki/One-hot
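The inference rule for `num_classes = -1` can be sketched in plain Python (illustrative helper, not the torch implementation):

```python
def one_hot(indices, num_classes=-1):
    # num_classes = -1 infers one more than the largest class value
    if num_classes == -1:
        num_classes = max(indices) + 1
    # a 1 at each index position, 0 elsewhere
    return [[1 if j == i else 0 for j in range(num_classes)] for i in indices]
```

For example, `one_hot([0, 2])` infers 3 classes and returns `[[1, 0, 0], [0, 0, 1]]`.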

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_pad.html b/static/docs/reference/nnf_pad.html new file mode 100644 index 0000000000000000000000000000000000000000..aeb442200552b3a79bac15ed7e7115698aa4d60a --- /dev/null +++ b/static/docs/reference/nnf_pad.html @@ -0,0 +1,280 @@ + + + + + + + + +Pad — nnf_pad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pads tensor.

    +
    + +
    nnf_pad(input, pad, mode = "constant", value = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) N-dimensional tensor

    pad

    (tuple) an m-element tuple, where \(\frac{m}{2} \leq\) the number of input dimensions and \(m\) is even.

    mode

    'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'

    value

    fill value for 'constant' padding. Default: 0.

    + +

    Padding size

    + + + + +

    The padding sizes by which to pad some dimensions of input are described starting from the last dimension and moving forward. \(\left\lfloor\frac{\mbox{len(pad)}}{2}\right\rfloor\) dimensions of input will be padded. For example, to pad only the last dimension of the input tensor, pad has the form \((\mbox{padding\_left}, \mbox{padding\_right})\); to pad the last 2 dimensions of the input tensor, use \((\mbox{padding\_left}, \mbox{padding\_right}, \mbox{padding\_top}, \mbox{padding\_bottom})\); to pad the last 3 dimensions, use \((\mbox{padding\_left}, \mbox{padding\_right}, \mbox{padding\_top}, \mbox{padding\_bottom}, \mbox{padding\_front}, \mbox{padding\_back})\).
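As an illustration of the `(padding_left, padding_right)` form, here is a minimal plain-Python sketch of constant padding over the last dimension (the helper name is hypothetical; this is not the torch implementation):

```python
def constant_pad_last_dim(row, padding_left, padding_right, value=0):
    # pad = (padding_left, padding_right): only the last dimension is padded
    return [value] * padding_left + list(row) + [value] * padding_right
```

For example, `constant_pad_last_dim([1, 2, 3], 1, 2)` yields `[0, 1, 2, 3, 0, 0]`.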

    +

    Padding mode

    + + + + +

    See nn_constant_pad_2d, nn_reflection_pad_2d, and +nn_replication_pad_2d for concrete examples on how each of the +padding modes works. Constant padding is implemented for arbitrary dimensions. +Replicate padding is implemented for padding the last 3 dimensions of a 5D input +tensor, the last 2 dimensions of a 4D input tensor, or the last dimension of a +3D input tensor. Reflect padding is only implemented for padding the last 2 +dimensions of a 4D input tensor, or the last dimension of a 3D input tensor.

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_pairwise_distance.html b/static/docs/reference/nnf_pairwise_distance.html new file mode 100644 index 0000000000000000000000000000000000000000..2c20427272fe88b57c573a79605421732182c05b --- /dev/null +++ b/static/docs/reference/nnf_pairwise_distance.html @@ -0,0 +1,254 @@ + + + + + + + + +Pairwise_distance — nnf_pairwise_distance • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes the batchwise pairwise distance between vectors using the p-norm.

    +
    + +
    nnf_pairwise_distance(x1, x2, p = 2, eps = 1e-06, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) First input.

    x2

    (Tensor) Second input (of size matching x1).

    p

    the norm degree. Default: 2

    eps

    (float, optional) Small value to avoid division by zero. +Default: 1e-6

    keepdim

    Determines whether or not to keep the vector dimension. Default: FALSE
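For a single pair of vectors the computation can be sketched in plain Python (hypothetical helper; following the upstream definition, eps is added to each element-wise difference to keep the norm away from zero):

```python
def pairwise_distance(x1, x2, p=2.0, eps=1e-6):
    # p-norm of the element-wise difference, with eps added to each term
    return sum(abs(a - b + eps) ** p for a, b in zip(x1, x2)) ** (1.0 / p)
```

For example, `pairwise_distance([0, 0], [3, 4])` is approximately 5.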

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_pdist.html b/static/docs/reference/nnf_pdist.html new file mode 100644 index 0000000000000000000000000000000000000000..b383fc008beac11d3852e6b3d4bb3c94d525436a --- /dev/null +++ b/static/docs/reference/nnf_pdist.html @@ -0,0 +1,252 @@ + + + + + + + + +Pdist — nnf_pdist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes the p-norm distance between every pair of row vectors in the input. +This is identical to the upper triangular portion, excluding the diagonal, of +torch_norm(input[:, None] - input, dim=2, p=p). This function will be faster +if the rows are contiguous.

    +
    + +
    nnf_pdist(input, p = 2)
    + +

    Arguments

    + + + + + + + + + + +
    input

    input tensor of shape \(N \times M\).

    p

    p value for the p-norm distance to calculate between each vector pair +\(\in [0, \infty]\).

    + +

    Details

    + +

    If input has shape \(N \times M\) then the output will have shape +\(\frac{1}{2} N (N - 1)\).
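The condensed output (upper triangle, excluding the diagonal) can be sketched in plain Python (hypothetical helper, not the torch implementation):

```python
def pdist(rows, p=2.0):
    # distances for every row pair (i, j) with i < j, row-major order
    out = []
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            out.append(sum(abs(a - b) ** p
                           for a, b in zip(rows[i], rows[j])) ** (1.0 / p))
    return out
```

For \(N\) rows the result has \(\frac{1}{2} N (N - 1)\) entries, matching the shape stated above.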

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_pixel_shuffle.html b/static/docs/reference/nnf_pixel_shuffle.html new file mode 100644 index 0000000000000000000000000000000000000000..2024598b701e408be14f8cb4a886acd2faddb979 --- /dev/null +++ b/static/docs/reference/nnf_pixel_shuffle.html @@ -0,0 +1,243 @@ + + + + + + + + +Pixel_shuffle — nnf_pixel_shuffle • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a +tensor of shape \((*, C, H \times r, W \times r)\).

    +
    + +
    nnf_pixel_shuffle(input, upscale_factor)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (Tensor) the input tensor

    upscale_factor

    (int) factor to increase spatial resolution by
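The index mapping behind the shape change can be sketched in plain Python on nested lists (single image, no batch dimension; the helper name is hypothetical):

```python
def pixel_shuffle(x, r):
    # x has shape [C*r*r][H][W]; output has shape [C][H*r][W*r]
    crr, H, W = len(x), len(x[0]), len(x[0][0])
    C = crr // (r * r)
    out = [[[0] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for c in range(C):
        for i in range(r):
            for j in range(r):
                for h in range(H):
                    for w in range(W):
                        # channel block (i, j) fills the (i, j) sub-pixel
                        out[c][h * r + i][w * r + j] = x[c * r * r + i * r + j][h][w]
    return out
```

With `r = 2`, four 1x1 channels `[[[1]], [[2]], [[3]], [[4]]]` become one 2x2 channel `[[[1, 2], [3, 4]]]`.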

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_poisson_nll_loss.html b/static/docs/reference/nnf_poisson_nll_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..4212a07df8b6c504f0178390131bba3d1aed9f85 --- /dev/null +++ b/static/docs/reference/nnf_poisson_nll_loss.html @@ -0,0 +1,271 @@ + + + + + + + + +Poisson_nll_loss — nnf_poisson_nll_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Poisson negative log likelihood loss.

    +
    + +
    nnf_poisson_nll_loss(
    +  input,
    +  target,
    +  log_input = TRUE,
    +  full = FALSE,
    +  eps = 1e-08,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    log_input

    if TRUE the loss is computed as \(\exp(\mbox{input}) - \mbox{target} * \mbox{input}\), +if FALSE then loss is \(\mbox{input} - \mbox{target} * \log(\mbox{input}+\mbox{eps})\). +Default: TRUE.

    full

    whether to compute the full loss, i.e., to add the Stirling approximation +term. Default: FALSE.

    eps

    (float, optional) Small value to avoid evaluation of \(\log(0)\) when +log_input=FALSE. Default: 1e-8

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'
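The two `log_input` branches and the reductions can be sketched in plain Python (hypothetical helper; `full = FALSE` assumed, so the Stirling term is omitted):

```python
import math

def poisson_nll_loss(input, target, log_input=True, eps=1e-8, reduction="mean"):
    if log_input:
        # exp(input) - target * input
        losses = [math.exp(x) - t * x for x, t in zip(input, target)]
    else:
        # input - target * log(input + eps)
        losses = [x - t * math.log(x + eps) for x, t in zip(input, target)]
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total
```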

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_prelu.html b/static/docs/reference/nnf_prelu.html new file mode 100644 index 0000000000000000000000000000000000000000..6b6f6c6ff033eddb6373367b139f7c1da3e2e895 --- /dev/null +++ b/static/docs/reference/nnf_prelu.html @@ -0,0 +1,246 @@ + + + + + + + + +Prelu — nnf_prelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise the function +\(PReLU(x) = max(0,x) + weight * min(0,x)\) +where weight is a learnable parameter.

    +
    + +
    nnf_prelu(input, weight)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    weight

    (Tensor) the learnable weights
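For a single element and a scalar weight, the formula reads as a plain-Python sketch (in torch the weight is a learnable tensor, possibly per channel; the helper name is hypothetical):

```python
def prelu(x, weight):
    # PReLU(x) = max(0, x) + weight * min(0, x)
    return max(0.0, x) + weight * min(0.0, x)
```

Positive inputs pass through unchanged; negative inputs are scaled by the weight.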

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_relu.html b/static/docs/reference/nnf_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..529454defd9bcd3c89d9404023d718e2e54beec1 --- /dev/null +++ b/static/docs/reference/nnf_relu.html @@ -0,0 +1,244 @@ + + + + + + + + +Relu — nnf_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the rectified linear unit function element-wise.

    +
    + +
    nnf_relu(input, inplace = FALSE)
    +
    +nnf_relu_(input)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_relu6.html b/static/docs/reference/nnf_relu6.html new file mode 100644 index 0000000000000000000000000000000000000000..e8dd073f92bb0624fdbae50de8d8ffbc9855e738 --- /dev/null +++ b/static/docs/reference/nnf_relu6.html @@ -0,0 +1,242 @@ + + + + + + + + +Relu6 — nnf_relu6 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the element-wise function \(ReLU6(x) = min(max(0,x), 6)\).

    +
    + +
    nnf_relu6(input, inplace = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_rrelu.html b/static/docs/reference/nnf_rrelu.html new file mode 100644 index 0000000000000000000000000000000000000000..f1826a3c16c2c984f1cfb6bb25ddb30d8463d2cc --- /dev/null +++ b/static/docs/reference/nnf_rrelu.html @@ -0,0 +1,256 @@ + + + + + + + + +Rrelu — nnf_rrelu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randomized leaky ReLU.

    +
    + +
    nnf_rrelu(input, lower = 1/8, upper = 1/3, training = FALSE, inplace = FALSE)
    +
    +nnf_rrelu_(input, lower = 1/8, upper = 1/3, training = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    lower

    lower bound of the uniform distribution. Default: 1/8

    upper

    upper bound of the uniform distribution. Default: 1/3

    training

    (bool) whether this is a training pass. Default: FALSE

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_selu.html b/static/docs/reference/nnf_selu.html new file mode 100644 index 0000000000000000000000000000000000000000..caa020d5ceee9a4250c102287e95584941c8eaa5 --- /dev/null +++ b/static/docs/reference/nnf_selu.html @@ -0,0 +1,259 @@ + + + + + + + + +Selu — nnf_selu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, +$$SELU(x) = scale * (max(0,x) + min(0, \alpha * (exp(x) - 1)))$$, +with \(\alpha=1.6732632423543772848170429916717\) and +\(scale=1.0507009873554804934193349852946\).

    +
    + +
    nnf_selu(input, inplace = FALSE)
    +
    +nnf_selu_(input)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    inplace

    can optionally do the operation in-place. Default: FALSE

    + + +

    Examples

    +
    if (torch_is_installed()) { +x <- torch_randn(2, 2) +y <- nnf_selu(x) +nnf_selu_(x) +torch_equal(x, y) + +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_sigmoid.html b/static/docs/reference/nnf_sigmoid.html new file mode 100644 index 0000000000000000000000000000000000000000..b1110569e2a21a3b8b8a0ec77023d77f4b5acf42 --- /dev/null +++ b/static/docs/reference/nnf_sigmoid.html @@ -0,0 +1,238 @@ + + + + + + + + +Sigmoid — nnf_sigmoid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise \(Sigmoid(x_i) = \frac{1}{1 + exp(-x_i)}\)

    +
    + +
    nnf_sigmoid(input)
    + +

    Arguments

    + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_smooth_l1_loss.html b/static/docs/reference/nnf_smooth_l1_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..a027812c94b517e89f87829dba510ccd74ca2f10 --- /dev/null +++ b/static/docs/reference/nnf_smooth_l1_loss.html @@ -0,0 +1,250 @@ + + + + + + + + +Smooth_l1_loss — nnf_smooth_l1_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Function that uses a squared term if the absolute +element-wise error falls below 1 and an L1 term otherwise.

    +
    + +
    nnf_smooth_l1_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'
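The piecewise rule (squared term below 1, L1 term otherwise) can be sketched in plain Python (hypothetical helper, not the torch implementation):

```python
def smooth_l1_loss(input, target, reduction="mean"):
    losses = []
    for x, t in zip(input, target):
        d = abs(x - t)
        # squared term for small errors, L1 term for large ones
        losses.append(0.5 * d * d if d < 1.0 else d - 0.5)
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total
```

The two branches meet smoothly at \(|x - t| = 1\), where both give 0.5.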

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_soft_margin_loss.html b/static/docs/reference/nnf_soft_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..677426bae16647f16e761fa5b5f3ab9d9d44fdc3 --- /dev/null +++ b/static/docs/reference/nnf_soft_margin_loss.html @@ -0,0 +1,250 @@ + + + + + + + + +Soft_margin_loss — nnf_soft_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that optimizes a two-class classification logistic loss +between input tensor x and target tensor y (containing 1 or -1).

    +
    + +
    nnf_soft_margin_loss(input, target, reduction = "mean")
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    tensor (N,*) where * means any number of additional dimensions

    target

    tensor (N,*) , same shape as the input

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'
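The per-element logistic loss \(\log(1 + \exp(-y \cdot x))\), with \(y \in \{-1, 1\}\), can be sketched in plain Python (hypothetical helper, not the torch implementation):

```python
import math

def soft_margin_loss(input, target, reduction="mean"):
    # per-element logistic loss: log(1 + exp(-y * x)), y in {-1, 1}
    losses = [math.log(1.0 + math.exp(-y * x)) for x, y in zip(input, target)]
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total
```

At \(x = 0\) each element contributes \(\log 2\), regardless of the target sign.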

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_softmax.html b/static/docs/reference/nnf_softmax.html new file mode 100644 index 0000000000000000000000000000000000000000..7cd9ec6f90ae6b96b2487db793efb9d3d611b9c3 --- /dev/null +++ b/static/docs/reference/nnf_softmax.html @@ -0,0 +1,252 @@ + + + + + + + + +Softmax — nnf_softmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a softmax function.

    +
    + +
    nnf_softmax(input, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (Tensor) input

    dim

    (int) A dimension along which softmax will be computed.

    dtype

    (torch.dtype, optional) the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. +Default: NULL.

    + +

    Details

    + +

    Softmax is defined as:

    +

    $$Softmax(x_{i}) = exp(x_i)/\sum_j exp(x_j)$$

    +

    It is applied to all slices along dim, and will re-scale them so that the elements +lie in the range [0, 1] and sum to 1.
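The formula can be sketched for a single slice in plain Python (illustrative only; subtracting the maximum leaves the result unchanged but is the usual numerical-stability trick):

```python
import math

def softmax(xs):
    # subtracting the max avoids overflow in exp without changing the result
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs lie in [0, 1] and sum to 1; equal inputs give equal probabilities.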

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_softmin.html b/static/docs/reference/nnf_softmin.html new file mode 100644 index 0000000000000000000000000000000000000000..493bef654388eb2128f7451dcf8ebfe743d9b8f6 --- /dev/null +++ b/static/docs/reference/nnf_softmin.html @@ -0,0 +1,252 @@ + + + + + + + + +Softmin — nnf_softmin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies a softmin function.

    +
    + +
    nnf_softmin(input, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (Tensor) input

    dim

    (int) A dimension along which softmin will be computed +(so every slice along dim will sum to 1).

    dtype

    (torch.dtype, optional) the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. +This is useful for preventing data type overflows. Default: NULL.

    + +

    Details

    + +

    Note that

    +

    $$Softmin(x) = Softmax(-x)$$.

    +

    See nnf_softmax definition for mathematical formula.

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_softplus.html b/static/docs/reference/nnf_softplus.html new file mode 100644 index 0000000000000000000000000000000000000000..425d517a159241d5193af49b8ed262c6481095cf --- /dev/null +++ b/static/docs/reference/nnf_softplus.html @@ -0,0 +1,250 @@ + + + + + + + + +Softplus — nnf_softplus • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, the function \(Softplus(x) = 1/\beta * log(1 + exp(\beta * x))\).

    +
    + +
    nnf_softplus(input, beta = 1, threshold = 20)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    beta

    the beta value for the Softplus formulation. Default: 1

    threshold

    values above this revert to a linear function. Default: 20

    + +

    Details

    + +

    For numerical stability the implementation reverts to the linear function +when \(input * \beta > threshold\).
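The formula and the linear revert can be sketched together in plain Python (hypothetical helper, not the torch implementation):

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # revert to the identity when beta * x > threshold (numerical stability;
    # at that point log(1 + exp(beta*x))/beta is indistinguishable from x)
    if beta * x > threshold:
        return x
    return math.log1p(math.exp(beta * x)) / beta
```

At 0 the function returns \(\log 2\); for large inputs it returns the input itself instead of overflowing in `exp`.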

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_softshrink.html b/static/docs/reference/nnf_softshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..65b9433028ac4567dce861255e24fb554fcace76 --- /dev/null +++ b/static/docs/reference/nnf_softshrink.html @@ -0,0 +1,243 @@ + + + + + + + + +Softshrink — nnf_softshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies the soft shrinkage function elementwise

    +
    + +
    nnf_softshrink(input, lambd = 0.5)
    + +

    Arguments

    + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    lambd

    the lambda (must be no less than zero) value for the Softshrink +formulation. Default: 0.5
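The page does not spell out the formula, so as a sketch of the standard soft-shrinkage rule in plain Python (hypothetical helper):

```python
def softshrink(x, lambd=0.5):
    # shrink toward zero by lambd; exactly zero inside [-lambd, lambd]
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0
```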

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_softsign.html b/static/docs/reference/nnf_softsign.html new file mode 100644 index 0000000000000000000000000000000000000000..7553e69fd514daf79bbfe75e576f2e138e7e3d52 --- /dev/null +++ b/static/docs/reference/nnf_softsign.html @@ -0,0 +1,238 @@ + + + + + + + + +Softsign — nnf_softsign • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise the function \(SoftSign(x) = x/(1 + |x|)\)

    +
    + +
    nnf_softsign(input)
    + +

    Arguments

    + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_tanhshrink.html b/static/docs/reference/nnf_tanhshrink.html new file mode 100644 index 0000000000000000000000000000000000000000..69950243ac92cc790f4d80b180a79608c3ddb4ff --- /dev/null +++ b/static/docs/reference/nnf_tanhshrink.html @@ -0,0 +1,238 @@ + + + + + + + + +Tanhshrink — nnf_tanhshrink • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Applies element-wise, \(Tanhshrink(x) = x - Tanh(x)\)

    +
    + +
    nnf_tanhshrink(input)
    + +

    Arguments

    + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_threshold.html b/static/docs/reference/nnf_threshold.html new file mode 100644 index 0000000000000000000000000000000000000000..37401c5561d7aaa164970fb212e53a7d83936bf1 --- /dev/null +++ b/static/docs/reference/nnf_threshold.html @@ -0,0 +1,252 @@ + + + + + + + + +Threshold — nnf_threshold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Thresholds each element of the input Tensor.

    +
    + +
    nnf_threshold(input, threshold, value, inplace = FALSE)
    +
    +nnf_threshold_(input, threshold, value)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    input

    (N,*) tensor, where * means any number of additional dimensions

    threshold

    The value to threshold at

    value

    The value to replace with

    inplace

    can optionally do the operation in-place. Default: FALSE
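Per element, the thresholding rule reads as a one-line plain-Python sketch (elements are kept only where they strictly exceed the threshold; helper name hypothetical):

```python
def threshold(x, threshold_value, value):
    # keep x where it exceeds the threshold, otherwise substitute `value`
    return x if x > threshold_value else value
```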

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_triplet_margin_loss.html b/static/docs/reference/nnf_triplet_margin_loss.html new file mode 100644 index 0000000000000000000000000000000000000000..045165d165f00e352af925bf249f6f8ea6dddfef --- /dev/null +++ b/static/docs/reference/nnf_triplet_margin_loss.html @@ -0,0 +1,287 @@ + + + + + + + + +Triplet_margin_loss — nnf_triplet_margin_loss • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates a criterion that measures the triplet loss given input tensors x1, +x2, x3 and a margin with a value greater than 0. This is used for measuring +a relative similarity between samples. A triplet is composed of a, p and n (i.e., +anchor, positive example and negative example, respectively). The shapes of all +input tensors should be (N, D).

    +
    + +
    nnf_triplet_margin_loss(
    +  anchor,
    +  positive,
    +  negative,
    +  margin = 1,
    +  p = 2,
    +  eps = 1e-06,
    +  swap = FALSE,
    +  reduction = "mean"
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    anchor

    the anchor input tensor

    positive

    the positive input tensor

    negative

    the negative input tensor

    margin

    Default: 1.

    p

    The norm degree for pairwise distance. Default: 2.

    eps

    (float, optional) Small value to avoid division by zero. +Default: 1e-6.

    swap

    The distance swap is described in detail in the paper Learning shallow +convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. +Default: FALSE.

    reduction

    (string, optional) – Specifies the reduction to apply to the +output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': +the sum of the output will be divided by the number of elements in the output, +'sum': the output will be summed. Default: 'mean'
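For a single triplet (no reduction, `swap = FALSE`), the hinge on the positive/negative distance gap can be sketched in plain Python (hypothetical helper, not the torch implementation):

```python
def triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2.0):
    def dist(a, b):
        # p-norm distance between two vectors
        return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)
    # loss is zero once the negative is at least `margin` farther than the positive
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)
```

A triplet where the negative is already far enough away contributes zero loss.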

    + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/nnf_unfold.html b/static/docs/reference/nnf_unfold.html new file mode 100644 index 0000000000000000000000000000000000000000..e431a19a71dc715025ea4e5e5f82021c5d5ed397 --- /dev/null +++ b/static/docs/reference/nnf_unfold.html @@ -0,0 +1,269 @@ + + + + + + + + +Unfold — nnf_unfold • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Extracts sliding local blocks from a batched input tensor.

    +
    + +
    nnf_unfold(input, kernel_size, dilation = 1, padding = 0, stride = 1)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    input

    the input tensor

    kernel_size

    the size of the sliding blocks

    dilation

    a parameter that controls the stride of elements within the +neighborhood. Default: 1

    padding

    implicit zero padding to be added on both sides of input. +Default: 0

    stride

    the stride of the sliding blocks in the input spatial dimensions. +Default: 1

    + +

    Warning

    + + + + +

    Currently, only 4-D input tensors (batched image-like tensors) are +supported.

    + + +

    More than one element of the unfolded tensor may refer to a single +memory location. As a result, in-place operations (especially ones that +are vectorized) may result in incorrect behavior. If you need to write +to the tensor, please clone it first.

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/optim_adam.html b/static/docs/reference/optim_adam.html new file mode 100644 index 0000000000000000000000000000000000000000..c32c93f39f30c40fac274ab4358669d869eaf500 --- /dev/null +++ b/static/docs/reference/optim_adam.html @@ -0,0 +1,280 @@ + + + + + + + + +Implements Adam algorithm. — optim_adam • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    It has been proposed in Adam: A Method for Stochastic Optimization.

    +
    + +
    optim_adam(
    +  params,
    +  lr = 0.001,
    +  betas = c(0.9, 0.999),
    +  eps = 1e-08,
    +  weight_decay = 0,
    +  amsgrad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    params

    (iterable): iterable of parameters to optimize or dicts defining +parameter groups

    lr

    (float, optional): learning rate (default: 1e-3)

    betas

    (Tuple[float, float], optional): coefficients used for computing +running averages of gradient and its square (default: (0.9, 0.999))

    eps

    (float, optional): term added to the denominator to improve +numerical stability (default: 1e-8)

    weight_decay

    (float, optional): weight decay (L2 penalty) (default: 0)

    amsgrad

    (boolean, optional): whether to use the AMSGrad variant of this +algorithm from the paper On the Convergence of Adam and Beyond +(default: FALSE)

    + + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +optimizer <- optim_adam(model$parameters(), lr=0.1) +optimizer$zero_grad() +loss_fn(model(input), target)$backward() +optimizer$step() +} + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/optim_required.html b/static/docs/reference/optim_required.html new file mode 100644 index 0000000000000000000000000000000000000000..8b916e3de48f018ae4dda85c43b3677596184fa9 --- /dev/null +++ b/static/docs/reference/optim_required.html @@ -0,0 +1,229 @@ + + + + + + + + +Dummy value indicating a required value. — optim_required • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +


    +
    + +
    optim_required()
    + + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/optim_sgd.html b/static/docs/reference/optim_sgd.html new file mode 100644 index 0000000000000000000000000000000000000000..46a631c5376f242fe5c67abd7093a1c0b0d96f4c --- /dev/null +++ b/static/docs/reference/optim_sgd.html @@ -0,0 +1,305 @@ + + + + + + + + +SGD optimizer — optim_sgd • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Implements stochastic gradient descent (optionally with momentum). +Nesterov momentum is based on the formula from +On the importance of initialization and momentum in deep learning.

    +
    + +
    optim_sgd(
    +  params,
    +  lr = optim_required(),
    +  momentum = 0,
    +  dampening = 0,
    +  weight_decay = 0,
    +  nesterov = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    params

    (iterable): iterable of parameters to optimize or dicts defining +parameter groups

    lr

    (float): learning rate

    momentum

    (float, optional): momentum factor (default: 0)

    dampening

    (float, optional): dampening for momentum (default: 0)

    weight_decay

    (float, optional): weight decay (L2 penalty) (default: 0)

    nesterov

    (bool, optional): enables Nesterov momentum (default: FALSE)

    + +

    Note

    + + + + +

    The implementation of SGD with Momentum-Nesterov subtly differs from +Sutskever et. al. and implementations in some other frameworks.

    +

    Considering the specific case of Momentum, the update can be written as +$$ + \begin{array}{ll} +v_{t+1} & = \mu * v_{t} + g_{t+1}, \\ +p_{t+1} & = p_{t} - \mbox{lr} * v_{t+1}, +\end{array} +$$

    +

    where \(p\), \(g\), \(v\) and \(\mu\) denote the +parameters, gradient, velocity, and momentum respectively.

    +

    This is in contrast to Sutskever et. al. and +other frameworks which employ an update of the form

    +

    $$ + \begin{array}{ll} +v_{t+1} & = \mu * v_{t} + \mbox{lr} * g_{t+1}, \\ +p_{t+1} & = p_{t} - v_{t+1}. +\end{array} +$$ +The Nesterov version is analogously modified.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +optimizer <- optim_sgd(model$parameters(), lr=0.1, momentum=0.9) +optimizer$zero_grad() +loss_fn(model(input), target)$backward() +optimizer$step() +} + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/pipe.html b/static/docs/reference/pipe.html new file mode 100644 index 0000000000000000000000000000000000000000..000f09547ee10807f0d6bfcb03a6cfd192d3b412 --- /dev/null +++ b/static/docs/reference/pipe.html @@ -0,0 +1,229 @@ + + + + + + + + +Pipe operator — %>% • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    See magrittr::%>% for details.
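A minimal usage sketch (the chained torch calls are illustrative assumptions, not part of this page):

```r
library(torch)

# The pipe passes its left-hand side as the first argument of the
# right-hand side, so tensor transformations read left to right.
torch_tensor(c(-1, -2, 3)) %>%
  torch_abs() %>%
  torch_sum()
```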

    +
    + +
    lhs %>% rhs
    + + + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/tensor_dataset.html b/static/docs/reference/tensor_dataset.html new file mode 100644 index 0000000000000000000000000000000000000000..24d21ae1cfa6ea9ad2fb6b76cc02ecec7a0dbd05 --- /dev/null +++ b/static/docs/reference/tensor_dataset.html @@ -0,0 +1,237 @@ + + + + + + + + +Dataset wrapping tensors. — tensor_dataset • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Each sample will be retrieved by indexing tensors along the first dimension.

    +
    + +
    tensor_dataset(...)
    + +

    Arguments

    + + + + + + +
    ...

    tensors that have the same size of the first dimension.

    + + +
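This page has no examples section; as a hedged sketch (the tensors x and y are illustrative assumptions):

```r
library(torch)

# Both tensors share size 100 along the first dimension,
# so the dataset yields 100 (feature, target) pairs.
x <- torch_randn(c(100, 10))
y <- torch_randn(c(100, 1))
ds <- tensor_dataset(x, y)

ds[1]  # a list holding the first slice of x and the first slice of y
```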
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_abs.html b/static/docs/reference/torch_abs.html new file mode 100644 index 0000000000000000000000000000000000000000..7df9e69596b29ea42ad9719edac7f9aaf53bbcdb --- /dev/null +++ b/static/docs/reference/torch_abs.html @@ -0,0 +1,256 @@ + + + + + + + + +Abs — torch_abs • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Abs

    +
    + +
    torch_abs(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    abs(input) -> Tensor

    + + + + +

    Computes the element-wise absolute value of the given input tensor.

    +

    $$ + \mbox{out}_{i} = |\mbox{input}_{i}| +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_abs(torch_tensor(c(-1, -2, 3))) +} +
    #> torch_tensor +#> 1 +#> 2 +#> 3 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_acos.html b/static/docs/reference/torch_acos.html new file mode 100644 index 0000000000000000000000000000000000000000..3d472e52d2dc0ba1d43ea92478db31fa761138ad --- /dev/null +++ b/static/docs/reference/torch_acos.html @@ -0,0 +1,259 @@ + + + + + + + + +Acos — torch_acos • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Acos

    +
    + +
    torch_acos(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    acos(input) -> Tensor

    + + + + +

    Returns a new tensor with the arccosine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \cos^{-1}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_acos(a) +} +
    #> torch_tensor +#> 0.7871 +#> 1.0800 +#> 1.4084 +#> 1.7142 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_adaptive_avg_pool1d.html b/static/docs/reference/torch_adaptive_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..d1fba1b0806b95ae1ff3cf94fc9d7de90d3573b8 --- /dev/null +++ b/static/docs/reference/torch_adaptive_avg_pool1d.html @@ -0,0 +1,249 @@ + + + + + + + + +Adaptive_avg_pool1d — torch_adaptive_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Adaptive_avg_pool1d

    +
    + +
    torch_adaptive_avg_pool1d(self, output_size)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    output_size

    the target output size (single integer)

    + +

    adaptive_avg_pool1d(input, output_size) -> Tensor

    + + + + +

    Applies a 1D adaptive average pooling over an input signal composed of +several input planes.

    +

    See nn_adaptive_avg_pool1d() for details and output shape.
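A hedged usage sketch (the input shape is an illustrative assumption; a 1D signal is laid out as batch, channels, length):

```r
library(torch)

x <- torch_randn(c(1, 4, 8))            # one sample, 4 channels, length 8
out <- torch_adaptive_avg_pool1d(x, 4)  # adaptively average length 8 down to 4
out$shape                               # 1 4 4
```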

    + +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_add.html b/static/docs/reference/torch_add.html new file mode 100644 index 0000000000000000000000000000000000000000..4baa586525c0ba59cac262037f1b2fa62266085a --- /dev/null +++ b/static/docs/reference/torch_add.html @@ -0,0 +1,292 @@ + + + + + + + + +Add — torch_add • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Add

    +
    + +
    torch_add(self, other, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor/Number) the second input tensor/number.

    alpha

    (Number) the scalar multiplier for other

    + +

    add(input, other, out=NULL)

    + + + + +

Adds the scalar other to each element of the input tensor +and returns a new resulting tensor.

    +

    $$ + \mbox{out} = \mbox{input} + \mbox{other} +$$ +If input is of type FloatTensor or DoubleTensor, other must be +a real number, otherwise it should be an integer.

    +

    add(input, other, *, alpha=1, out=NULL)

    + + + + +

    Each element of the tensor other is multiplied by the scalar +alpha and added to each element of the tensor input. +The resulting tensor is returned.

    +

    The shapes of input and other must be +broadcastable .

    +

    $$ + \mbox{out} = \mbox{input} + \mbox{alpha} \times \mbox{other} +$$ +If other is of type FloatTensor or DoubleTensor, alpha must be +a real number, otherwise it should be an integer.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_add(a, 20) + + +a = torch_randn(c(4)) +a +b = torch_randn(c(4, 1)) +b +torch_add(a, b) +} +
    #> torch_tensor +#> -0.1740 -0.4454 0.7719 0.2179 +#> 0.1897 -0.0817 1.1355 0.5816 +#> -0.2508 -0.5222 0.6951 0.1411 +#> -1.4621 -1.7335 -0.5163 -1.0702 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_addbmm.html b/static/docs/reference/torch_addbmm.html new file mode 100644 index 0000000000000000000000000000000000000000..d1e36d5bfeb504fda0b79881aded208bf1edc6fc --- /dev/null +++ b/static/docs/reference/torch_addbmm.html @@ -0,0 +1,287 @@ + + + + + + + + +Addbmm — torch_addbmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addbmm

    +
    + +
    torch_addbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) matrix to be added

    batch1

    (Tensor) the first batch of matrices to be multiplied

    batch2

    (Tensor) the second batch of matrices to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for batch1 @ batch2 (\(\alpha\))

    + +

    addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a batch matrix-matrix product of matrices stored +in batch1 and batch2, +with a reduced add step (all matrix multiplications get accumulated +along the first dimension). +input is added to the final result.

    +

    batch1 and batch2 must be 3-D tensors each containing the +same number of matrices.

    +

    If batch1 is a \((b \times n \times m)\) tensor, batch2 is a +\((b \times m \times p)\) tensor, input must be +broadcastable with a \((n \times p)\) tensor +and out will be a \((n \times p)\) tensor.

    +

    $$ + out = \beta\ \mbox{input} + \alpha\ (\sum_{i=0}^{b-1} \mbox{batch1}_i \mathbin{@} \mbox{batch2}_i) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha +must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(3, 5)) +batch1 = torch_randn(c(10, 3, 4)) +batch2 = torch_randn(c(10, 4, 5)) +torch_addbmm(M, batch1, batch2) +} +
    #> torch_tensor +#> 4.3623 -13.1864 1.5644 6.3273 -2.3118 +#> 1.5691 -5.3031 -0.9555 2.9742 2.8950 +#> -7.7087 -2.5553 -4.0583 -2.0273 4.9884 +#> [ CPUFloatType{3,5} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_addcdiv.html b/static/docs/reference/torch_addcdiv.html new file mode 100644 index 0000000000000000000000000000000000000000..37ad2cf009653e6c570607b85f964969b7381e5a --- /dev/null +++ b/static/docs/reference/torch_addcdiv.html @@ -0,0 +1,290 @@ + + + + + + + + +Addcdiv — torch_addcdiv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addcdiv

    +
    + +
    torch_addcdiv(self, tensor1, tensor2, value = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to be added

    tensor1

    (Tensor) the numerator tensor

    tensor2

    (Tensor) the denominator tensor

    value

    (Number, optional) multiplier for \(\mbox{tensor1} / \mbox{tensor2}\)

    + +

    addcdiv(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

    + + + + +

Performs the element-wise division of tensor1 by tensor2, +multiplies the result by the scalar value and adds it to input.

    +

    Warning

    + + + +

Integer division with addcdiv is deprecated, and in a future release +addcdiv will perform a true division of tensor1 and tensor2. +The current addcdiv behavior can be replicated using torch_floor_divide() +for integral inputs +(input + value * tensor1 // tensor2) +and torch_div() for float inputs +(input + value * tensor1 / tensor2). +The new addcdiv behavior can be implemented with torch_true_divide() +(input + value * torch_true_divide(tensor1, tensor2)).

    +

    $$ + \mbox{out}_i = \mbox{input}_i + \mbox{value} \times \frac{\mbox{tensor1}_i}{\mbox{tensor2}_i} +$$

    +

    The shapes of input, tensor1, and tensor2 must be +broadcastable .

    +

    For inputs of type FloatTensor or DoubleTensor, value must be +a real number, otherwise an integer.
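The replacement forms described in the warning above can be sketched as follows (the input tensors and value are illustrative assumptions):

```r
library(torch)

input   <- torch_tensor(c(1, 2, 3))
tensor1 <- torch_tensor(c(7, 8, 9))
tensor2 <- torch_tensor(c(2, 2, 2))
value   <- 0.5

# float inputs: true division, matching torch_addcdiv()
input + value * torch_div(tensor1, tensor2)

# integral inputs: the current (deprecated) floor-division behavior
input + value * torch_floor_divide(tensor1, tensor2)
```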

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_randn(c(1, 3)) +t1 = torch_randn(c(3, 1)) +t2 = torch_randn(c(1, 3)) +torch_addcdiv(t, t1, t2, 0.1) +} +
    #> torch_tensor +#> 0.2466 0.6010 1.8962 +#> 0.0637 0.7913 2.4221 +#> 0.8083 0.0164 0.2814 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_addcmul.html b/static/docs/reference/torch_addcmul.html new file mode 100644 index 0000000000000000000000000000000000000000..b2e2037940bff73a95b9af5f289ab9b53d19d8da --- /dev/null +++ b/static/docs/reference/torch_addcmul.html @@ -0,0 +1,277 @@ + + + + + + + + +Addcmul — torch_addcmul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addcmul

    +
    + +
    torch_addcmul(self, tensor1, tensor2, value = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to be added

    tensor1

    (Tensor) the tensor to be multiplied

    tensor2

    (Tensor) the tensor to be multiplied

    value

    (Number, optional) multiplier for \(tensor1 .* tensor2\)

    + +

    addcmul(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

    + + + + +

Performs the element-wise multiplication of tensor1 +by tensor2, multiplies the result by the scalar value +and adds it to input.

    +

$$ + \mbox{out}_i = \mbox{input}_i + \mbox{value} \times \mbox{tensor1}_i \times \mbox{tensor2}_i +$$ +The shapes of input, tensor1, and tensor2 must be +broadcastable .

    +

    For inputs of type FloatTensor or DoubleTensor, value must be +a real number, otherwise an integer.

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_randn(c(1, 3)) +t1 = torch_randn(c(3, 1)) +t2 = torch_randn(c(1, 3)) +torch_addcmul(t, t1, t2, 0.1) +} +
    #> torch_tensor +#> -0.9987 -0.1486 1.1388 +#> -1.0322 -0.0828 0.9679 +#> -0.9683 -0.2083 1.2937 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_addmm.html b/static/docs/reference/torch_addmm.html new file mode 100644 index 0000000000000000000000000000000000000000..37f25247e9eccb9d1f37520d30dda24ef58ea474 --- /dev/null +++ b/static/docs/reference/torch_addmm.html @@ -0,0 +1,283 @@ + + + + + + + + +Addmm — torch_addmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addmm

    +
    + +
    torch_addmm(self, mat1, mat2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) matrix to be added

    mat1

    (Tensor) the first matrix to be multiplied

    mat2

    (Tensor) the second matrix to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(mat1 @ mat2\) (\(\alpha\))

    + +

    addmm(input, mat1, mat2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a matrix multiplication of the matrices mat1 and mat2. +The matrix input is added to the final result.

    +

    If mat1 is a \((n \times m)\) tensor, mat2 is a +\((m \times p)\) tensor, then input must be +broadcastable with a \((n \times p)\) tensor +and out will be a \((n \times p)\) tensor.

    +

alpha and beta are scaling factors on the matrix-matrix product between +mat1 and mat2 and the added matrix input respectively.

    +

$$ + \mbox{out} = \beta\ \mbox{input} + \alpha\ (\mbox{mat1} \mathbin{@} \mbox{mat2}) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(2, 3)) +mat1 = torch_randn(c(2, 3)) +mat2 = torch_randn(c(3, 3)) +torch_addmm(M, mat1, mat2) +} +
    #> torch_tensor +#> 0.0521 1.5207 0.4070 +#> 1.0992 -2.9412 0.1886 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_addmv.html b/static/docs/reference/torch_addmv.html new file mode 100644 index 0000000000000000000000000000000000000000..b7aa0a186c88aac1ae66f4a8bc09aafcb7ee8458 --- /dev/null +++ b/static/docs/reference/torch_addmv.html @@ -0,0 +1,284 @@ + + + + + + + + +Addmv — torch_addmv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addmv

    +
    + +
    torch_addmv(self, mat, vec, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) vector to be added

    mat

    (Tensor) matrix to be multiplied

    vec

    (Tensor) vector to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(mat @ vec\) (\(\alpha\))

    + +

    addmv(input, mat, vec, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a matrix-vector product of the matrix mat and +the vector vec. +The vector input is added to the final result.

    +

If mat is a \((n \times m)\) tensor, vec is a 1-D tensor of +size m, then input must be +broadcastable with a 1-D tensor of size n and +out will be a 1-D tensor of size n.

    +

    alpha and beta are scaling factors on matrix-vector product between +mat and vec and the added tensor input respectively.

    +

$$ + \mbox{out} = \beta\ \mbox{input} + \alpha\ (\mbox{mat} \mathbin{@} \mbox{vec}) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(2)) +mat = torch_randn(c(2, 3)) +vec = torch_randn(c(3)) +torch_addmv(M, mat, vec) +} +
    #> torch_tensor +#> 2.8130 +#> 1.3307 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_addr.html b/static/docs/reference/torch_addr.html new file mode 100644 index 0000000000000000000000000000000000000000..bc7ddf9d3892b189a41c974384f5cf3a0431fda6 --- /dev/null +++ b/static/docs/reference/torch_addr.html @@ -0,0 +1,286 @@ + + + + + + + + +Addr — torch_addr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Addr

    +
    + +
    torch_addr(self, vec1, vec2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) matrix to be added

    vec1

    (Tensor) the first vector of the outer product

    vec2

    (Tensor) the second vector of the outer product

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(\mbox{vec1} \otimes \mbox{vec2}\) (\(\alpha\))

    + +

    addr(input, vec1, vec2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs the outer-product of vectors vec1 and vec2 +and adds it to the matrix input.

    +

    Optional values beta and alpha are scaling factors on the +outer product between vec1 and vec2 and the added matrix +input respectively.

    +

    $$ + \mbox{out} = \beta\ \mbox{input} + \alpha\ (\mbox{vec1} \otimes \mbox{vec2}) +$$ +If vec1 is a vector of size n and vec2 is a vector +of size m, then input must be +broadcastable with a matrix of size +\((n \times m)\) and out will be a matrix of size +\((n \times m)\).

    +

For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +vec1 = torch_arange(1., 4.) +vec2 = torch_arange(1., 3.) +M = torch_zeros(c(3, 2)) +torch_addr(M, vec1, vec2) +} +
    #> torch_tensor +#> 1 2 +#> 2 4 +#> 3 6 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_allclose.html b/static/docs/reference/torch_allclose.html new file mode 100644 index 0000000000000000000000000000000000000000..035b5c3a53d02d29b13f556971997fd6db3f4333 --- /dev/null +++ b/static/docs/reference/torch_allclose.html @@ -0,0 +1,273 @@ + + + + + + + + +Allclose — torch_allclose • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Allclose

    +
    + +
    torch_allclose(self, other, rtol = 1e-05, atol = 0, equal_nan = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) first tensor to compare

    other

    (Tensor) second tensor to compare

    rtol

    (float, optional) relative tolerance. Default: 1e-05

    atol

    (float, optional) absolute tolerance. Default: 1e-08

    equal_nan

    (bool, optional) if TRUE, then two NaN s will be compared as equal. Default: FALSE

    + +

    allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool

    + + + + +

    This function checks if all input and other satisfy the condition:

    +

$$ + \vert \mbox{input} - \mbox{other} \vert \leq \mbox{atol} + \mbox{rtol} \times \vert \mbox{other} \vert +$$ +elementwise, for all elements of input and other. The behaviour of this function is analogous to +numpy.allclose (https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html).

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_allclose(torch_tensor(c(10000., 1e-07)), torch_tensor(c(10000.1, 1e-08))) +torch_allclose(torch_tensor(c(10000., 1e-08)), torch_tensor(c(10000.1, 1e-09))) +torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN))) +torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN)), equal_nan=TRUE) +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_angle.html b/static/docs/reference/torch_angle.html new file mode 100644 index 0000000000000000000000000000000000000000..501c1b0c78e12732d122d355e47673ff08d082cd --- /dev/null +++ b/static/docs/reference/torch_angle.html @@ -0,0 +1,254 @@ + + + + + + + + +Angle — torch_angle • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Angle

    +
    + +
    torch_angle(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    angle(input) -> Tensor

    + + + + +

    Computes the element-wise angle (in radians) of the given input tensor.

    +

    $$ + \mbox{out}_{i} = angle(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_angle(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))*180/3.14159 +} + +} +
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_arange.html b/static/docs/reference/torch_arange.html new file mode 100644 index 0000000000000000000000000000000000000000..89ee0b1150430fe7937066cf3ca0636c81ac5797 --- /dev/null +++ b/static/docs/reference/torch_arange.html @@ -0,0 +1,295 @@ + + + + + + + + +Arange — torch_arange • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Arange

    +
    + +
    torch_arange(
    +  start,
    +  end,
    +  step = 1,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (Number) the starting value for the set of points. Default: 0.

    end

    (Number) the ending value for the set of points

    step

    (Number) the gap between each pair of adjacent points. Default: 1.

    dtype

(torch.dtype, optional) the desired data type of the returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, the data type is inferred from the other input arguments: if any of start, end, or step are floating-point, the dtype is inferred to be the default dtype (see torch_get_default_dtype); otherwise, the dtype is inferred to be torch.int64.

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    arange(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a 1-D tensor of size \(\left\lceil \frac{\mbox{end} - \mbox{start}}{\mbox{step}} \right\rceil\) +with values from the interval [start, end) taken with common difference +step beginning from start.

    +

    Note that non-integer step is subject to floating point rounding errors when +comparing against end; to avoid inconsistency, we advise adding a small epsilon to end +in such cases.

    +

    $$ + \mbox{out}_{{i+1}} = \mbox{out}_{i} + \mbox{step} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_arange(start = 0, end = 5) +torch_arange(1, 4) +torch_arange(1, 2.5, 0.5) +} +
    #> torch_tensor +#> 1.0000 +#> 1.5000 +#> 2.0000 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_argmax.html b/static/docs/reference/torch_argmax.html new file mode 100644 index 0000000000000000000000000000000000000000..fad8cad656c5e41dc0ce25de2f5aa2377bd56a80 --- /dev/null +++ b/static/docs/reference/torch_argmax.html @@ -0,0 +1,281 @@ + + + + + + + + +Argmax — torch_argmax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Argmax

    +
    + +
    torch_argmax(self, dim = NULL, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce. If NULL, the argmax of the flattened input is returned.

    keepdim

    (bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

    + +

    argmax(input) -> LongTensor

    + + + + +

    Returns the indices of the maximum value of all elements in the input tensor.

    +

    This is the second value returned by torch_max. See its +documentation for the exact semantics of this method.

    +

    argmax(input, dim, keepdim=False) -> LongTensor

    + + + + +

    Returns the indices of the maximum values of a tensor across a dimension.

    +

    This is the second value returned by torch_max. See its +documentation for the exact semantics of this method.

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +a = torch_randn(c(4, 4)) +a +torch_argmax(a) +} + + +a = torch_randn(c(4, 4)) +a +torch_argmax(a, dim=1) +} +
    #> torch_tensor +#> 2 +#> 0 +#> 1 +#> 1 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_argmin.html b/static/docs/reference/torch_argmin.html new file mode 100644 index 0000000000000000000000000000000000000000..de87cd26e914a0250bcdf75fefdd76ffaac6b40a --- /dev/null +++ b/static/docs/reference/torch_argmin.html @@ -0,0 +1,279 @@ + + + + + + + + +Argmin — torch_argmin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Argmin

    +
    + +
    torch_argmin(self, dim = NULL, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce. If NULL, the argmin of the flattened input is returned.

    keepdim

    (bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

    + +

    argmin(input) -> LongTensor

    + + + + +

    Returns the indices of the minimum value of all elements in the input tensor.

    +

    This is the second value returned by torch_min. See its +documentation for the exact semantics of this method.

    +

    argmin(input, dim, keepdim=False, out=NULL) -> LongTensor

    + + + + +

    Returns the indices of the minimum values of a tensor across a dimension.

    +

    This is the second value returned by torch_min. See its +documentation for the exact semantics of this method.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4, 4)) +a +torch_argmin(a) + + +a = torch_randn(c(4, 4)) +a +torch_argmin(a, dim=1) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 1 +#> 1 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +


    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_argsort.html b/static/docs/reference/torch_argsort.html new file mode 100644 index 0000000000000000000000000000000000000000..40263634b13441b00aac9465a51e391bd833b584 --- /dev/null +++ b/static/docs/reference/torch_argsort.html @@ -0,0 +1,267 @@ + + + + + + + + +Argsort — torch_argsort • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Argsort

    +
    + +
    torch_argsort(self, dim = -1L, descending = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int, optional) the dimension to sort along

    descending

    (bool, optional) controls the sorting order (ascending or descending)

    + +

    argsort(input, dim=-1, descending=False) -> LongTensor

    + + + + +

    Returns the indices that sort a tensor along a given dimension in ascending +order by value.

    +

    This is the second value returned by torch_sort. See its documentation +for the exact semantics of this method.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4, 4)) +a +torch_argsort(a, dim=1) +} +
    #> torch_tensor +#> 3 0 0 2 +#> 2 1 1 3 +#> 1 2 3 1 +#> 0 3 2 0 +#> [ CPULongType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_as_strided.html b/static/docs/reference/torch_as_strided.html new file mode 100644 index 0000000000000000000000000000000000000000..f63b078d1c40439fdbb9ac72a45e57779f6c8887 --- /dev/null +++ b/static/docs/reference/torch_as_strided.html @@ -0,0 +1,283 @@ + + + + + + + + +As_strided — torch_as_strided • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    As_strided

    +
    + +
    torch_as_strided(self, size, stride, storage_offset = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    size

    (tuple or ints) the shape of the output tensor

    stride

    (tuple or ints) the stride of the output tensor

    storage_offset

    (int, optional) the offset in the underlying storage of the output tensor

    + +

    as_strided(input, size, stride, storage_offset=0) -> Tensor

    + + + + +

    Create a view of an existing torch_Tensor input with specified +size, stride and storage_offset.

    +

    Warning

    + + + +

    More than one element of a created tensor may refer to a single memory +location. As a result, in-place operations (especially ones that are +vectorized) may result in incorrect behavior. If you need to write to +the tensors, please clone them first.

    Many PyTorch functions that return a view of a tensor are internally
    +implemented with this function. Those functions, like
    +`torch_Tensor.expand`, are easier to read and are therefore
    +preferable to use.
    +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 3)) +x +t = torch_as_strided(x, list(2, 2), list(1, 2)) +t +t = torch_as_strided(x, list(2, 2), list(1, 2), 1) +t +} +
    #> torch_tensor +#> 2.2728 1.0136 +#> -0.1885 1.2535 +#> [ CPUFloatType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_asin.html b/static/docs/reference/torch_asin.html new file mode 100644 index 0000000000000000000000000000000000000000..6c6f8f162853b0c94691d04866f53fdddd8f4e73 --- /dev/null +++ b/static/docs/reference/torch_asin.html @@ -0,0 +1,260 @@ + + + + + + + + +Asin — torch_asin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Asin

    +
    + +
    torch_asin(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    asin(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the arcsine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \sin^{-1}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_asin(a) +} +
    #> torch_tensor +#> 0.01 * +#> nan +#> 7.9841 +#> nan +#> -30.8534 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_atan.html b/static/docs/reference/torch_atan.html new file mode 100644 index 0000000000000000000000000000000000000000..872eb7bdc5001cf06803141373689cd974a897ee --- /dev/null +++ b/static/docs/reference/torch_atan.html @@ -0,0 +1,259 @@ + + + + + + + + +Atan — torch_atan • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Atan

    +
    + +
    torch_atan(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    atan(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the arctangent of the elements of input.

    +

    $$ + \mbox{out}_{i} = \tan^{-1}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_atan(a) +} +
    #> torch_tensor +#> -0.5608 +#> 0.9247 +#> 1.1369 +#> 0.8835 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_atan2.html b/static/docs/reference/torch_atan2.html new file mode 100644 index 0000000000000000000000000000000000000000..8bf4ac4838c703fc07447f2b5b3538b950903083 --- /dev/null +++ b/static/docs/reference/torch_atan2.html @@ -0,0 +1,267 @@ + + + + + + + + +Atan2 — torch_atan2 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Atan2

    +
    + +
    torch_atan2(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first input tensor

    other

    (Tensor) the second input tensor

    + +

    atan2(input, other, out=NULL) -> Tensor

    + + + + +

    Element-wise arctangent of \(\mbox{input}_{i} / \mbox{other}_{i}\) +with consideration of the quadrant. Returns a new tensor with the signed angles +in radians between vector \((\mbox{other}_{i}, \mbox{input}_{i})\) +and vector \((1, 0)\). (Note that \(\mbox{other}_{i}\), the second +parameter, is the x-coordinate, while \(\mbox{input}_{i}\), the first +parameter, is the y-coordinate.)

    +

    The shapes of input and other must be +broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_atan2(a, torch_randn(c(4))) +} +
    #> torch_tensor +#> -0.5145 +#> 0.3180 +#> 1.6062 +#> 2.3995 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_avg_pool1d.html b/static/docs/reference/torch_avg_pool1d.html new file mode 100644 index 0000000000000000000000000000000000000000..88d095acb1faa12f7683e95e3274bfbcc460521a --- /dev/null +++ b/static/docs/reference/torch_avg_pool1d.html @@ -0,0 +1,272 @@ + + + + + + + + +Avg_pool1d — torch_avg_pool1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Avg_pool1d

    +
    + +
    torch_avg_pool1d(
    +  self,
    +  kernel_size,
    +  stride = list(),
    +  padding = 0L,
    +  ceil_mode = FALSE,
    +  count_include_pad = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    self

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

    kernel_size

    the size of the window. Can be a single number or a tuple (kW,)

    stride

    the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size

    padding

    implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0

    ceil_mode

    when TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE

    count_include_pad

    when TRUE, will include the zero-padding in the averaging calculation. Default: TRUE

    + +

    avg_pool1d(input, kernel_size, stride=NULL, padding=0, ceil_mode=FALSE, count_include_pad=TRUE) -> Tensor

    + + + + +

    Applies a 1D average pooling over an input signal composed of several +input planes.

    +

    See nn_avg_pool1d() for details and output shape.

    + +
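This page has no Examples section; a minimal usage sketch (not generated from the package docs; the shapes and values below are illustrative assumptions) might be:

```r
if (torch_is_installed()) {
  # input of shape (minibatch, in_channels, iW)
  x <- torch_randn(c(1, 1, 8))
  # average non-overlapping windows of width 2; output width is 4
  torch_avg_pool1d(x, kernel_size = 2)
}
```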
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_baddbmm.html b/static/docs/reference/torch_baddbmm.html new file mode 100644 index 0000000000000000000000000000000000000000..ec281a4708f71c058c695275294b3ec77f053a56 --- /dev/null +++ b/static/docs/reference/torch_baddbmm.html @@ -0,0 +1,333 @@ + + + + + + + + +Baddbmm — torch_baddbmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Baddbmm

    +
    + +
    torch_baddbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to be added

    batch1

    (Tensor) the first batch of matrices to be multiplied

    batch2

    (Tensor) the second batch of matrices to be multiplied

    beta

    (Number, optional) multiplier for input (\(\beta\))

    alpha

    (Number, optional) multiplier for \(\mbox{batch1} \mathbin{@} \mbox{batch2}\) (\(\alpha\))

    + +

    baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

    + + + + +

    Performs a batch matrix-matrix product of matrices in batch1 +and batch2. +input is added to the final result.

    +

    batch1 and batch2 must be 3-D tensors each containing the same +number of matrices.

    +

    If batch1 is a \((b \times n \times m)\) tensor, batch2 is a +\((b \times m \times p)\) tensor, then input must be +broadcastable with a +\((b \times n \times p)\) tensor and out will be a +\((b \times n \times p)\) tensor. Both alpha and beta mean the +same as the scaling factors used in torch_addbmm.

    +

    $$ + \mbox{out}_i = \beta\ \mbox{input}_i + \alpha\ (\mbox{batch1}_i \mathbin{@} \mbox{batch2}_i) +$$ +For inputs of type FloatTensor or DoubleTensor, arguments beta and +alpha must be real numbers, otherwise they should be integers.

    + +

    Examples

    +
    if (torch_is_installed()) { + +M = torch_randn(c(10, 3, 5)) +batch1 = torch_randn(c(10, 3, 4)) +batch2 = torch_randn(c(10, 4, 5)) +torch_baddbmm(M, batch1, batch2) +} +
    #> torch_tensor +#> (1,.,.) = +#> 3.2950 1.1934 1.0951 -0.1641 -0.1625 +#> 0.0234 0.6164 1.1209 -1.4287 -1.4717 +#> 1.4437 0.0008 0.1610 -0.6639 -3.1524 +#> +#> (2,.,.) = +#> -1.1120 -1.6823 4.3323 -4.6368 5.3289 +#> -0.0067 0.9474 -2.1810 -2.4339 -1.4058 +#> -0.0811 1.0781 0.5114 -0.6792 5.5863 +#> +#> (3,.,.) = +#> -1.0254 1.4248 2.5140 0.1094 0.8005 +#> -0.0712 3.0454 -0.8717 -1.3384 -1.1721 +#> -1.0192 -0.3714 0.0774 -0.0318 1.3226 +#> +#> (4,.,.) = +#> 0.7244 1.0012 0.7026 -0.1657 -0.5903 +#> -1.1023 0.7397 -2.5617 -2.8854 -0.4641 +#> -1.8027 1.8865 -5.9998 -2.7661 -4.2857 +#> +#> (5,.,.) = +#> 0.7827 0.5095 -4.7454 -0.9276 0.9389 +#> 0.2047 -0.1286 -0.0545 0.0190 2.7191 +#> -0.1130 -1.0268 0.5965 0.4682 -1.9710 +#> +#> (6,.,.) = +#> 0.3016 -0.8588 0.4652 2.3788 -3.3351 +#> 0.6153 1.9567 -0.4531 -3.5487 -0.0359 +#> -1.0311 1.7841 -1.4333 -1.4575 -5.2221 +#> +#> (7,.,.) = +#> 0.6391 1.4053 1.4987 -1.3632 1.9467 +#> -0.3214 1.5380 -1.9825 5.0689 -1.2355 +#> -2.0131 0.5735 -0.3492 1.2934 -0.4853 +#> +#> (8,.,.) = +#> -3.1747 0.3503 -0.8512 -0.8386 1.8389 +#> -0.4581 -0.1452 -0.2862 -2.5819 0.0945 +#> 1.9201 -3.9626 -1.0644 1.4395 -0.9784 +#> +#> (9,.,.) = +#> 2.0265 -1.0222 0.4194 3.9349 -1.2036 +#> 0.2550 3.1147 -1.8770 -1.3894 0.1003 +#> -0.7578 -3.4052 -1.9434 -5.4013 -1.7570 +#> +#> (10,.,.) = +#> 3.5818 2.3017 -0.6280 -1.3274 -2.3944 +#> -1.8303 1.0979 -0.9950 -1.9600 -1.0145 +#> -0.5720 -0.0300 -1.1251 4.3175 -0.5519 +#> [ CPUFloatType{10,3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bartlett_window.html b/static/docs/reference/torch_bartlett_window.html new file mode 100644 index 0000000000000000000000000000000000000000..9154bd553e8790136efe9df0ee051c7730391f45 --- /dev/null +++ b/static/docs/reference/torch_bartlett_window.html @@ -0,0 +1,292 @@ + + + + + + + + +Bartlett_window — torch_bartlett_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bartlett_window

    +
    + +
    torch_bartlett_window(
    +  window_length,
    +  periodic = TRUE,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

    (bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
    If `window_length` = 1, the returned window contains a single value 1.
    +
    + +

    bartlett_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Bartlett window function.

    +

    $$ + w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \left\{ \begin{array}{ll} + \frac{2n}{N - 1} & \mbox{if } 0 \leq n \leq \frac{N - 1}{2} \\ + 2 - \frac{2n}{N - 1} & \mbox{if } \frac{N - 1}{2} < n < N \\ + \end{array} + \right. , +$$ +where \(N\) is the full window size.

    +

    The input window_length is a positive integer controlling the +returned window size. The periodic flag determines whether the returned +window trims off the last duplicate value from the symmetric window and is +ready to be used as a periodic window with functions like +torch_stft. Therefore, if periodic is TRUE, the \(N\) in the +above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have +torch_bartlett_window(L, periodic=TRUE) equal to +torch_bartlett_window(L + 1, periodic=FALSE)[:-1].

    + +
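No Examples section is generated for this page; a minimal sketch (not from the package docs; the window length is an illustrative assumption) might be:

```r
if (torch_is_installed()) {
  # periodic Bartlett window of length 5, suitable for use with torch_stft
  torch_bartlett_window(5)
  # symmetric variant of the same window
  torch_bartlett_window(5, periodic = FALSE)
}
```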
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bernoulli.html b/static/docs/reference/torch_bernoulli.html new file mode 100644 index 0000000000000000000000000000000000000000..75e3f5c07e3a14c3de8d9aae5c66b649b2be3bbf --- /dev/null +++ b/static/docs/reference/torch_bernoulli.html @@ -0,0 +1,283 @@ + + + + + + + + +Bernoulli — torch_bernoulli • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bernoulli

    +
    + +
    torch_bernoulli(self, p, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of probability values for the Bernoulli +distribution

    p

    (Number) a probability value. If p is passed then it is used instead of +the values in the self tensor.

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

    bernoulli(input, *, generator=NULL, out=NULL) -> Tensor

    + + + + +

    Draws binary random numbers (0 or 1) from a Bernoulli distribution.

    +

    The input tensor should be a tensor containing probabilities +to be used for drawing the binary random number. +Hence, all values in input have to be in the range: +\(0 \leq \mbox{input}_i \leq 1\).

    +

    The \(\mbox{i}^{th}\) element of the output tensor will draw a +value \(1\) according to the \(\mbox{i}^{th}\) probability value given +in input.

    +

    $$ + \mbox{out}_{i} \sim \mathrm{Bernoulli}(p = \mbox{input}_{i}) +$$ +The returned out tensor only has values 0 or 1 and is of the same +shape as input.

    +

    out can have integral dtype, but input must have floating +point dtype.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_empty(c(3, 3))$uniform_(0, 1) # generate a uniform random matrix with range c(0, 1) +a +torch_bernoulli(a) +a = torch_ones(c(3, 3)) # probability of drawing "1" is 1 +torch_bernoulli(a) +a = torch_zeros(c(3, 3)) # probability of drawing "1" is 0 +torch_bernoulli(a) +} +
    #> torch_tensor +#> 0 0 0 +#> 0 0 0 +#> 0 0 0 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bincount.html b/static/docs/reference/torch_bincount.html new file mode 100644 index 0000000000000000000000000000000000000000..fa08a32bddcce9600cfb352085bdd33a74dd026e --- /dev/null +++ b/static/docs/reference/torch_bincount.html @@ -0,0 +1,279 @@ + + + + + + + + +Bincount — torch_bincount • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bincount

    +
    + +
    torch_bincount(self, weights = list(), minlength = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) 1-d int tensor

    weights

    (Tensor) optional, weight for each value in the input tensor. Should be of same size as input tensor.

    minlength

    (int) optional, minimum number of bins. Should be non-negative.

    + +

    bincount(input, weights=NULL, minlength=0) -> Tensor

    + + + + +

    Count the frequency of each value in an array of non-negative ints.

    +

    The number of bins (size 1) is one larger than the largest value in +input unless input is empty, in which case the result is a +tensor of size 0. If minlength is specified, the number of bins is at least +minlength and if input is empty, then the result is a tensor of size +minlength filled with zeros. If n is the value at position i, +out[n] += weights[i] if weights is specified else +out[n] += 1.

    +

    This operation may produce nondeterministic results when given tensors on a CUDA device.

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randint(0, 8, list(5), dtype=torch_int64()) +weights = torch_linspace(0, 1, steps=5) +input +weights +torch_bincount(input, weights) +input$bincount(weights) +} +
    #> torch_tensor +#> 0.0000 +#> 0.0000 +#> 0.7500 +#> 1.0000 +#> 0.2500 +#> 0.0000 +#> 0.0000 +#> 0.5000 +#> [ CPUFloatType{8} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bitwise_and.html b/static/docs/reference/torch_bitwise_and.html new file mode 100644 index 0000000000000000000000000000000000000000..fe67716a2b7ad64a013c9820608302773f97752b --- /dev/null +++ b/static/docs/reference/torch_bitwise_and.html @@ -0,0 +1,248 @@ + + + + + + + + +Bitwise_and — torch_bitwise_and • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_and

    +
    + +
    torch_bitwise_and(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first input tensor

    other

    (Tensor) the second input tensor

    + +

    bitwise_and(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise AND of input and other. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical AND.

    + +
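No Examples section is generated for this page; a minimal sketch (not from the package docs; the dtypes and values are illustrative assumptions) might be:

```r
if (torch_is_installed()) {
  a <- torch_tensor(c(-1L, -2L, 3L), dtype = torch_int8())
  b <- torch_tensor(c(1L, 0L, 3L), dtype = torch_int8())
  torch_bitwise_and(a, b)  # bitwise AND on int8 values
  # for bool tensors this computes the logical AND
  torch_bitwise_and(torch_tensor(c(TRUE, TRUE, FALSE)),
                    torch_tensor(c(TRUE, FALSE, FALSE)))
}
```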
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bitwise_not.html b/static/docs/reference/torch_bitwise_not.html new file mode 100644 index 0000000000000000000000000000000000000000..723dfc1950c1d8e9ae77b6894b1f8ce2a7b3983d --- /dev/null +++ b/static/docs/reference/torch_bitwise_not.html @@ -0,0 +1,244 @@ + + + + + + + + +Bitwise_not — torch_bitwise_not • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_not

    +
    + +
    torch_bitwise_not(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    bitwise_not(input, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise NOT of the given input tensor. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical NOT.

    + +
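No Examples section is generated for this page; a minimal sketch (not from the package docs; the dtype and values are illustrative assumptions) might be:

```r
if (torch_is_installed()) {
  # two's-complement NOT: ~x equals -x - 1, so -1 -> 0, -2 -> 1, 3 -> -4
  torch_bitwise_not(torch_tensor(c(-1L, -2L, 3L), dtype = torch_int8()))
}
```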
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bitwise_or.html b/static/docs/reference/torch_bitwise_or.html new file mode 100644 index 0000000000000000000000000000000000000000..3f56126d5268e4541fc88e5818417c0d959e1069 --- /dev/null +++ b/static/docs/reference/torch_bitwise_or.html @@ -0,0 +1,248 @@ + + + + + + + + +Bitwise_or — torch_bitwise_or • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_or

    +
    + +
    torch_bitwise_or(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first input tensor

    other

    (Tensor) the second input tensor

    + +

    bitwise_or(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise OR of input and other. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical OR.

    + +
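No Examples section is generated for this page; a minimal sketch (not from the package docs; the dtypes and values are illustrative assumptions) might be:

```r
if (torch_is_installed()) {
  a <- torch_tensor(c(-1L, -2L, 3L), dtype = torch_int8())
  b <- torch_tensor(c(1L, 0L, 3L), dtype = torch_int8())
  torch_bitwise_or(a, b)  # bitwise OR on int8 values
  # for bool tensors this computes the logical OR
  torch_bitwise_or(torch_tensor(c(TRUE, TRUE, FALSE)),
                   torch_tensor(c(TRUE, FALSE, FALSE)))
}
```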
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bitwise_xor.html b/static/docs/reference/torch_bitwise_xor.html new file mode 100644 index 0000000000000000000000000000000000000000..66ac3aa011385b60a61d8f4f707428005b272763 --- /dev/null +++ b/static/docs/reference/torch_bitwise_xor.html @@ -0,0 +1,248 @@ + + + + + + + + +Bitwise_xor — torch_bitwise_xor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bitwise_xor

    +
    + +
    torch_bitwise_xor(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first input tensor

    other

    (Tensor) the second input tensor

    + +

    bitwise_xor(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the bitwise XOR of input and other. The input tensor must be of +integral or Boolean types. For bool tensors, it computes the logical XOR.

    + +
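No Examples section is generated for this page; a minimal sketch (not from the package docs; the dtypes and values are illustrative assumptions) might be:

```r
if (torch_is_installed()) {
  a <- torch_tensor(c(-1L, -2L, 3L), dtype = torch_int8())
  b <- torch_tensor(c(1L, 0L, 3L), dtype = torch_int8())
  torch_bitwise_xor(a, b)  # bitwise XOR on int8 values
  # for bool tensors this computes the logical XOR
  torch_bitwise_xor(torch_tensor(c(TRUE, TRUE, FALSE)),
                    torch_tensor(c(TRUE, FALSE, FALSE)))
}
```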
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_blackman_window.html b/static/docs/reference/torch_blackman_window.html new file mode 100644 index 0000000000000000000000000000000000000000..780331de260536b730cb5976c2bce0699058c3de --- /dev/null +++ b/static/docs/reference/torch_blackman_window.html @@ -0,0 +1,288 @@ + + + + + + + + +Blackman_window — torch_blackman_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Blackman_window

    +
    + +
    torch_blackman_window(
    +  window_length,
    +  periodic = TRUE,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

    (bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
    If `window_length` = 1, the returned window contains a single value 1.
    +
    + +

    blackman_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Blackman window function.

    +

    $$ + w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right) +$$ +where \(N\) is the full window size.

    +

    The input window_length is a positive integer controlling the +returned window size. The periodic flag determines whether the returned +window trims off the last duplicate value from the symmetric window and is +ready to be used as a periodic window with functions like +torch_stft. Therefore, if periodic is TRUE, the \(N\) in the +above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have +torch_blackman_window(L, periodic=TRUE) equal to +torch_blackman_window(L + 1, periodic=FALSE)[:-1].

    + +
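No Examples section is generated for this page; a minimal sketch (not from the package docs; the window length is an illustrative assumption) might be:

```r
if (torch_is_installed()) {
  # periodic Blackman window of length 5, suitable for use with torch_stft
  torch_blackman_window(5)
  # symmetric variant of the same window
  torch_blackman_window(5, periodic = FALSE)
}
```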
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_bmm.html b/static/docs/reference/torch_bmm.html new file mode 100644 index 0000000000000000000000000000000000000000..b4dc3f38e8c9799148970edf152633daa3cfc5d0 --- /dev/null +++ b/static/docs/reference/torch_bmm.html @@ -0,0 +1,319 @@ + + + + + + + + +Bmm — torch_bmm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Bmm

    +
    + +
    torch_bmm(self, mat2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first batch of matrices to be multiplied

    mat2

    (Tensor) the second batch of matrices to be multiplied

    + +

    Note

    + +

    This function does not broadcast . +For broadcasting matrix products, see torch_matmul.

    +

    bmm(input, mat2, out=NULL) -> Tensor

    + + + + +

    Performs a batch matrix-matrix product of matrices stored in input +and mat2.

    +

    input and mat2 must be 3-D tensors each containing +the same number of matrices.

    +

    If input is a \((b \times n \times m)\) tensor, mat2 is a +\((b \times m \times p)\) tensor, out will be a +\((b \times n \times p)\) tensor.

    +

    $$ + \mbox{out}_i = \mbox{input}_i \mathbin{@} \mbox{mat2}_i +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randn(c(10, 3, 4)) +mat2 = torch_randn(c(10, 4, 5)) +res = torch_bmm(input, mat2) +res +} +
    #> torch_tensor +#> (1,.,.) = +#> 1.0892 1.3748 0.8459 -2.0545 -1.4386 +#> 0.3660 -0.1655 0.0205 -0.7982 -1.6220 +#> 0.4613 0.7003 0.2010 1.5919 1.4269 +#> +#> (2,.,.) = +#> -2.2132 -1.0830 0.8680 -0.3013 0.5952 +#> -1.7194 1.7257 -1.8858 0.5295 0.0113 +#> -1.9067 -3.6883 3.4484 -0.6959 1.6596 +#> +#> (3,.,.) = +#> 0.4246 -0.4291 0.5495 0.0139 -2.8828 +#> 0.8869 -1.7168 0.9370 2.8693 -0.8925 +#> -0.1330 -0.6530 0.2961 -0.4661 0.2099 +#> +#> (4,.,.) = +#> 3.2950 1.2318 -0.2358 3.7370 7.2076 +#> 0.5021 -7.1274 -0.6974 2.2166 -1.0051 +#> 0.5575 -0.9316 -0.4947 1.2719 -0.3476 +#> +#> (5,.,.) = +#> -0.7364 -1.5675 1.2250 0.6341 2.4889 +#> -3.3237 2.9684 3.2533 0.5785 -1.9284 +#> -3.7940 1.4235 3.7042 -0.9522 -1.3960 +#> +#> (6,.,.) = +#> -3.5668 -1.3443 5.1221 -1.1451 0.9565 +#> -1.3876 1.5025 -3.9358 1.3078 0.9739 +#> -1.9256 -2.0872 7.0016 -1.1434 0.6967 +#> +#> (7,.,.) = +#> -0.2430 -1.1612 0.2267 -0.4137 0.3275 +#> -1.0699 1.5163 1.1168 0.4205 1.8671 +#> -1.1855 5.5310 1.8558 4.2721 1.0853 +#> +#> (8,.,.) = +#> -1.0895 -2.0372 -2.1277 0.7614 -1.5783 +#> -0.6140 0.0763 1.5264 -0.7559 -0.3574 +#> 0.2487 -1.0749 0.2471 1.2148 0.5383 +#> +#> (9,.,.) = +#> -0.7402 0.6009 1.3001 0.8246 -0.4576 +#> -0.1567 -0.6394 0.0932 -0.1400 -0.7153 +#> -0.5724 -0.2588 -1.4616 -1.0371 -0.5793 +#> +#> (10,.,.) = +#> 7.8231 2.6005 -0.8253 -0.6163 -4.4321 +#> 3.5758 0.7528 0.7691 3.3599 -1.3278 +#> -3.2284 -0.5208 -1.5541 -2.5006 2.0721 +#> [ CPUFloatType{10,3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_broadcast_tensors.html b/static/docs/reference/torch_broadcast_tensors.html new file mode 100644 index 0000000000000000000000000000000000000000..fe409f130dd1ef695bda3ed34762637d1dad942c --- /dev/null +++ b/static/docs/reference/torch_broadcast_tensors.html @@ -0,0 +1,255 @@ + + + + + + + + +Broadcast_tensors — torch_broadcast_tensors • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Broadcast_tensors

    +
    + +
    torch_broadcast_tensors(tensors)
    + +

    Arguments

    + + + + + + +
    tensors

    a list containing any number of tensors of the same type

    + +

    broadcast_tensors(tensors) -> List of Tensors

    + + + + +

    Broadcasts the given tensors according to broadcasting-semantics.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(0, 3)$view(c(1, 3)) +y = torch_arange(0, 2)$view(c(2, 1)) +out = torch_broadcast_tensors(list(x, y)) +out[[1]] +} +
    #> torch_tensor +#> 0 1 2 +#> 0 1 2 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_can_cast.html b/static/docs/reference/torch_can_cast.html new file mode 100644 index 0000000000000000000000000000000000000000..8716b900191c8f147feae28be2262c3c329e42f0 --- /dev/null +++ b/static/docs/reference/torch_can_cast.html @@ -0,0 +1,255 @@ + + + + + + + + +Can_cast — torch_can_cast • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Can_cast

    +
    + +
    torch_can_cast(from, to)
    + +

    Arguments

    + + + + + + + + + + +
    from

    (dtype) The original torch_dtype.

    to

    (dtype) The target torch_dtype.

    + +

    can_cast(from, to) -> bool

    + + + + +

    Determines if a type conversion is allowed under PyTorch casting rules +described in the type promotion documentation .

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_can_cast(torch_double(), torch_float()) +torch_can_cast(torch_float(), torch_int()) +} +
    #> [1] FALSE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cartesian_prod.html b/static/docs/reference/torch_cartesian_prod.html new file mode 100644 index 0000000000000000000000000000000000000000..41d5300cb93f85d08730983bcf21381cb4d5ad61 --- /dev/null +++ b/static/docs/reference/torch_cartesian_prod.html @@ -0,0 +1,254 @@ + + + + + + + + +Cartesian_prod — torch_cartesian_prod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Do cartesian product of the given sequence of tensors.

    +
    + +
    torch_cartesian_prod(tensors)
    + +

    Arguments

    + + + + + + +
    tensors

a list containing any number of 1-dimensional tensors.

    + + +

    Examples

    +
    if (torch_is_installed()) { + +a = c(1, 2, 3) +b = c(4, 5) +tensor_a = torch_tensor(a) +tensor_b = torch_tensor(b) +torch_cartesian_prod(list(tensor_a, tensor_b)) +} +
    #> torch_tensor +#> 1 4 +#> 1 5 +#> 2 4 +#> 2 5 +#> 3 4 +#> 3 5 +#> [ CPUFloatType{6,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cat.html b/static/docs/reference/torch_cat.html new file mode 100644 index 0000000000000000000000000000000000000000..2841ea8b9ee5c79193035de3eb86c6d5b5ef2533 --- /dev/null +++ b/static/docs/reference/torch_cat.html @@ -0,0 +1,264 @@ + + + + + + + + +Cat — torch_cat • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cat

    +
    + +
    torch_cat(tensors, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + +
    tensors

(sequence of Tensors) a list of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

    dim

    (int, optional) the dimension over which the tensors are concatenated

    + +

    cat(tensors, dim=0, out=NULL) -> Tensor

    + + + + +

Concatenates the given sequence of tensors in the given dimension. +All tensors must either have the same shape (except in the concatenating +dimension) or be empty.

    +

torch_cat can be seen as an inverse operation for torch_split() +and torch_chunk().

    +

    torch_cat can be best understood via examples.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(2, 3)) +x +torch_cat(list(x, x, x), 1) +torch_cat(list(x, x, x), 2) +} +
    #> torch_tensor +#> 0.4974 -0.0008 -0.8116 0.4974 -0.0008 -0.8116 0.4974 -0.0008 -0.8116 +#> 0.0269 -0.0270 0.1641 0.0269 -0.0270 0.1641 0.0269 -0.0270 0.1641 +#> [ CPUFloatType{2,9} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cdist.html b/static/docs/reference/torch_cdist.html new file mode 100644 index 0000000000000000000000000000000000000000..5c7145b15e8f49b9c916b36e4baf7a376e6e7770 --- /dev/null +++ b/static/docs/reference/torch_cdist.html @@ -0,0 +1,255 @@ + + + + + + + + +Cdist — torch_cdist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cdist

    +
    + +
    torch_cdist(x1, x2, p = 2L, compute_mode = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) input tensor of shape \(B \times P \times M\).

    x2

    (Tensor) input tensor of shape \(B \times R \times M\).

    p

(numeric) p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\). Default: 2.

    compute_mode

(string, optional) one of: 'use_mm_for_euclid_dist_if_necessary' - will use the matrix multiplication approach to calculate Euclidean distance (p = 2) if P > 25 or R > 25; 'use_mm_for_euclid_dist' - will always use the matrix multiplication approach to calculate Euclidean distance (p = 2); 'donot_use_mm_for_euclid_dist' - will never use the matrix multiplication approach to calculate Euclidean distance (p = 2). Default: 'use_mm_for_euclid_dist_if_necessary'.

    + +

cdist(x1, x2, p=2, compute_mode='use_mm_for_euclid_dist_if_necessary') -> Tensor

    + + + + +

Computes the batched p-norm distance between each pair of row vectors in the two collections.
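This page ships without an Examples section, so the following minimal sketch may help; it assumes torch is installed and mirrors the shape convention above, with x1 of shape \(P \times M\) and x2 of shape \(R \times M\):

```r
if (torch_is_installed()) {
# two collections of 2-D row vectors: 3 x 2 and 2 x 2
a = torch_tensor(matrix(c(0.9, 0.0, -0.3, -2.4, -0.5, 1.1), ncol = 2, byrow = TRUE))
b = torch_tensor(matrix(c(-2.2, -0.5, -0.7, 1.4), ncol = 2, byrow = TRUE))
# pairwise Euclidean distances; the result has shape 3 x 2
torch_cdist(a, b, p = 2)
}
```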

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ceil.html b/static/docs/reference/torch_ceil.html new file mode 100644 index 0000000000000000000000000000000000000000..b8c9c7c85530ef5677743fd9d1d2ceaaae3300db --- /dev/null +++ b/static/docs/reference/torch_ceil.html @@ -0,0 +1,260 @@ + + + + + + + + +Ceil — torch_ceil • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ceil

    +
    + +
    torch_ceil(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    ceil(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the ceil of the elements of input, +the smallest integer greater than or equal to each element.

    +

    $$ + \mbox{out}_{i} = \left\lceil \mbox{input}_{i} \right\rceil = \left\lfloor \mbox{input}_{i} \right\rfloor + 1 +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_ceil(a) +} +
    #> torch_tensor +#> 1 +#> -1 +#> -0 +#> 1 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_celu.html b/static/docs/reference/torch_celu.html new file mode 100644 index 0000000000000000000000000000000000000000..8f7fece55f401346f9176cdcf684c76a5505e6fc --- /dev/null +++ b/static/docs/reference/torch_celu.html @@ -0,0 +1,247 @@ + + + + + + + + +Celu — torch_celu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Celu

    +
    + +
    torch_celu(self, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    alpha

    the alpha value for the CELU formulation. Default: 1.0

    + +

    celu(input, alpha=1.) -> Tensor

    + + + + +

    See nnf_celu() for more info.
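As a quick illustration (a sketch assuming torch is installed), CELU maps negative inputs through \(\alpha (\exp(x/\alpha) - 1)\) while leaving non-negative inputs unchanged:

```r
if (torch_is_installed()) {
x = torch_tensor(c(-2, -1, 0, 1, 2))
# negative entries saturate smoothly toward -alpha; positives pass through
torch_celu(x, alpha = 1)
}
```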

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_celu_.html b/static/docs/reference/torch_celu_.html new file mode 100644 index 0000000000000000000000000000000000000000..60df6e872c3508934b28fcb31c777b83ab155d6a --- /dev/null +++ b/static/docs/reference/torch_celu_.html @@ -0,0 +1,247 @@ + + + + + + + + +Celu_ — torch_celu_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Celu_

    +
    + +
    torch_celu_(self, alpha = 1)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    alpha

    the alpha value for the CELU formulation. Default: 1.0

    + +

    celu_(input, alpha=1.) -> Tensor

    + + + + +

    In-place version of torch_celu().

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_chain_matmul.html b/static/docs/reference/torch_chain_matmul.html new file mode 100644 index 0000000000000000000000000000000000000000..205cdbf91c227cf2e0904d15fdfe21b57c12b587 --- /dev/null +++ b/static/docs/reference/torch_chain_matmul.html @@ -0,0 +1,261 @@ + + + + + + + + +Chain_matmul — torch_chain_matmul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Chain_matmul

    +
    + +
    torch_chain_matmul(matrices)
    + +

    Arguments

    + + + + + + +
    matrices

    (Tensors...) a sequence of 2 or more 2-D tensors whose product is to be determined.

    + +

chain_matmul(matrices) -> Tensor

    + + + + +

Returns the matrix product of the \(N\) 2-D tensors. This product is efficiently computed +using the matrix chain order algorithm, which selects the order that incurs the lowest cost in terms +of arithmetic operations (see [CLRS]). Note that since this is a function to compute the product, \(N\) +needs to be greater than or equal to 2; if equal to 2 then a trivial matrix-matrix product is returned. +If \(N\) is 1, then this is a no-op - the original matrix is returned as is.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 4)) +b = torch_randn(c(4, 5)) +c = torch_randn(c(5, 6)) +d = torch_randn(c(6, 7)) +torch_chain_matmul(list(a, b, c, d)) +} +
    #> torch_tensor +#> -2.9763 5.1541 0.4802 4.2000 -0.3407 -3.9994 -9.5460 +#> 9.4136 -12.4751 -16.3055 -13.4903 0.4411 12.5776 21.3514 +#> 3.9498 -8.2069 -5.7414 1.2695 6.5496 18.3628 12.8464 +#> [ CPUFloatType{3,7} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cholesky.html b/static/docs/reference/torch_cholesky.html new file mode 100644 index 0000000000000000000000000000000000000000..01dc171f62a4dff90e8779d54224a0c5f07e4fed --- /dev/null +++ b/static/docs/reference/torch_cholesky.html @@ -0,0 +1,283 @@ + + + + + + + + +Cholesky — torch_cholesky • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cholesky

    +
    + +
    torch_cholesky(self, upper = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor \(A\) of size \((*, n, n)\) where * is zero or more +batch dimensions consisting of symmetric positive-definite matrices.

    upper

(bool, optional) flag that indicates whether to return an +upper or lower triangular matrix. Default: FALSE

    + +

    cholesky(input, upper=False, out=NULL) -> Tensor

    + + + + +

    Computes the Cholesky decomposition of a symmetric positive-definite +matrix \(A\) or for batches of symmetric positive-definite matrices.

    +

    If upper is TRUE, the returned matrix U is upper-triangular, and +the decomposition has the form:

    +

    $$ + A = U^TU +$$ +If upper is FALSE, the returned matrix L is lower-triangular, and +the decomposition has the form:

    +

    $$ + A = LL^T +$$ +If upper is TRUE, and \(A\) is a batch of symmetric positive-definite +matrices, then the returned tensor will be composed of upper-triangular Cholesky factors +of each of the individual matrices. Similarly, when upper is FALSE, the returned +tensor will be composed of lower-triangular Cholesky factors of each of the individual +matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a = torch_mm(a, a$t()) # make symmetric positive-definite +l = torch_cholesky(a) +a +l +torch_mm(l, l$t()) +a = torch_randn(c(3, 2, 2)) +if (FALSE) { +a = torch_matmul(a, a$transpose(-1, -2)) + 1e-03 # make symmetric positive-definite +l = torch_cholesky(a) +z = torch_matmul(l, l$transpose(-1, -2)) +torch_max(torch_abs(z - a)) # Max non-zero +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cholesky_inverse.html b/static/docs/reference/torch_cholesky_inverse.html new file mode 100644 index 0000000000000000000000000000000000000000..79cb4211c31659908bd7095daf1aa1902a4c4368 --- /dev/null +++ b/static/docs/reference/torch_cholesky_inverse.html @@ -0,0 +1,272 @@ + + + + + + + + +Cholesky_inverse — torch_cholesky_inverse • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cholesky_inverse

    +
    + +
    torch_cholesky_inverse(self, upper = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) the input 2-D tensor \(u\), an upper or lower triangular Cholesky factor

    upper

    (bool, optional) whether to return a lower (default) or upper triangular matrix

    + +

    cholesky_inverse(input, upper=False, out=NULL) -> Tensor

    + + + + +

    Computes the inverse of a symmetric positive-definite matrix \(A\) using its +Cholesky factor \(u\): returns matrix inv. The inverse is computed using +LAPACK routines dpotri and spotri (and the corresponding MAGMA routines).

    +

    If upper is FALSE, \(u\) is lower triangular +such that the returned tensor is

    +

$$ + inv = (uu^T)^{-1} +$$ +If upper is TRUE or not provided, \(u\) is upper +triangular such that the returned tensor is

    +

$$ + inv = (u^T u)^{-1} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +a = torch_randn(c(3, 3)) +a = torch_mm(a, a$t()) + 1e-05 * torch_eye(3) # make symmetric positive definite +u = torch_cholesky(a) +a +torch_cholesky_inverse(u) +a$inverse() +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cholesky_solve.html b/static/docs/reference/torch_cholesky_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..d8470114ed5643f31d5c7e0ba2507be3230b1f96 --- /dev/null +++ b/static/docs/reference/torch_cholesky_solve.html @@ -0,0 +1,282 @@ + + + + + + + + +Cholesky_solve — torch_cholesky_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cholesky_solve

    +
    + +
    torch_cholesky_solve(self, input2, upper = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) input matrix \(b\) of size \((*, m, k)\), where \(*\) is zero or more batch dimensions

    input2

(Tensor) input matrix \(u\) of size \((*, m, m)\), where \(*\) is zero or more batch dimensions composed of upper or lower triangular Cholesky factors

    upper

    (bool, optional) whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: FALSE.

    + +

    cholesky_solve(input, input2, upper=False, out=NULL) -> Tensor

    + + + + +

    Solves a linear system of equations with a positive semidefinite +matrix to be inverted given its Cholesky factor matrix \(u\).

    +

If upper is FALSE, \(u\) is lower triangular and c is +returned such that:

    +

$$ + c = (u u^T)^{-1} b +$$ +If upper is TRUE or not provided, \(u\) is upper triangular +and c is returned such that:

    +

$$ + c = (u^T u)^{-1} b +$$ +torch_cholesky_solve(b, u) can take in 2D inputs b, u or inputs that are +batches of 2D matrices. If the inputs are batches, then batched +outputs c are returned.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a = torch_mm(a, a$t()) # make symmetric positive definite +u = torch_cholesky(a) +a +b = torch_randn(c(3, 2)) +b +torch_cholesky_solve(b, u) +torch_mm(a$inverse(), b) +} +
    #> torch_tensor +#> -4.4143 1.6973 +#> 3.7547 -2.0436 +#> 1.5229 -2.0137 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_chunk.html b/static/docs/reference/torch_chunk.html new file mode 100644 index 0000000000000000000000000000000000000000..d67eb1abc9a741ebf76d520c09239d56770a82d7 --- /dev/null +++ b/static/docs/reference/torch_chunk.html @@ -0,0 +1,254 @@ + + + + + + + + +Chunk — torch_chunk • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Chunk

    +
    + +
    torch_chunk(self, chunks, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to split

    chunks

    (int) number of chunks to return

    dim

    (int) dimension along which to split the tensor

    + +

    chunk(input, chunks, dim=0) -> List of Tensors

    + + + + +

    Splits a tensor into a specific number of chunks. Each chunk is a view of +the input tensor.

    +

    Last chunk will be smaller if the tensor size along the given dimension +dim is not divisible by chunks.
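A minimal sketch (assuming torch is installed, and that torch_arange is end-exclusive here as in the other examples in these docs) showing the smaller last chunk when the dimension size is not divisible by chunks:

```r
if (torch_is_installed()) {
x = torch_arange(0, 11)  # 11 elements: 0 .. 10
# 3 chunks along dim 1: sizes 4, 4 and 3
torch_chunk(x, chunks = 3, dim = 1)
}
```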

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_clamp.html b/static/docs/reference/torch_clamp.html new file mode 100644 index 0000000000000000000000000000000000000000..87460e824a26344b69ca4f0e27ce0ea274f7eac5 --- /dev/null +++ b/static/docs/reference/torch_clamp.html @@ -0,0 +1,301 @@ + + + + + + + + +Clamp — torch_clamp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Clamp

    +
    + +
    torch_clamp(self, min = NULL, max = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    min

    (Number) lower-bound of the range to be clamped to

    max

    (Number) upper-bound of the range to be clamped to

    + +

    clamp(input, min, max, out=NULL) -> Tensor

    + + + + +

Clamps all elements in input into the range [ min, max ] and returns +a resulting tensor:

    +

    $$ + y_i = \left\{ \begin{array}{ll} + \mbox{min} & \mbox{if } x_i < \mbox{min} \\ + x_i & \mbox{if } \mbox{min} \leq x_i \leq \mbox{max} \\ + \mbox{max} & \mbox{if } x_i > \mbox{max} + \end{array} + \right. +$$ +If input is of type FloatTensor or DoubleTensor, args min +and max must be real numbers, otherwise they should be integers.

    +

    clamp(input, *, min, out=NULL) -> Tensor

    + + + + +

Clamps all elements in input to be larger than or equal to min.

    +

    If input is of type FloatTensor or DoubleTensor, value +should be a real number, otherwise it should be an integer.

    +

    clamp(input, *, max, out=NULL) -> Tensor

    + + + + +

Clamps all elements in input to be smaller than or equal to max.

    +

    If input is of type FloatTensor or DoubleTensor, value +should be a real number, otherwise it should be an integer.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_clamp(a, min=-0.5, max=0.5) + + +a = torch_randn(c(4)) +a +torch_clamp(a, min=0.5) + + +a = torch_randn(c(4)) +a +torch_clamp(a, max=0.5) +} +
    #> torch_tensor +#> -0.1812 +#> -0.9782 +#> 0.5000 +#> 0.2475 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_combinations.html b/static/docs/reference/torch_combinations.html new file mode 100644 index 0000000000000000000000000000000000000000..572aa5cb943299a910b890fb404122cabe3c7b0c --- /dev/null +++ b/static/docs/reference/torch_combinations.html @@ -0,0 +1,270 @@ + + + + + + + + +Combinations — torch_combinations • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Combinations

    +
    + +
    torch_combinations(self, r = 2L, with_replacement = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) 1D vector.

    r

    (int, optional) number of elements to combine

    with_replacement

    (boolean, optional) whether to allow duplication in combination

    + +

    combinations(input, r=2, with_replacement=False) -> seq

    + + + + +

Computes combinations of length \(r\) of the given tensor. The behavior is similar to +python's itertools.combinations when with_replacement is set to FALSE, and +itertools.combinations_with_replacement when with_replacement is set to TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = c(1, 2, 3) +tensor_a = torch_tensor(a) +torch_combinations(tensor_a) +torch_combinations(tensor_a, r=3) +torch_combinations(tensor_a, with_replacement=TRUE) +} +
    #> torch_tensor +#> 1 1 +#> 1 2 +#> 1 3 +#> 2 2 +#> 2 3 +#> 3 3 +#> [ CPUFloatType{6,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_conj.html b/static/docs/reference/torch_conj.html new file mode 100644 index 0000000000000000000000000000000000000000..a3560f399486d82a05666fc34b9516b2002b5259 --- /dev/null +++ b/static/docs/reference/torch_conj.html @@ -0,0 +1,253 @@ + + + + + + + + +Conj — torch_conj • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conj

    +
    + +
    torch_conj(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    conj(input) -> Tensor

    + + + + +

    Computes the element-wise conjugate of the given input tensor.

    +

    $$ + \mbox{out}_{i} = conj(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_conj(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i))) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_conv1d.html b/static/docs/reference/torch_conv1d.html new file mode 100644 index 0000000000000000000000000000000000000000..297f1ce91436f37de0accfbc501788e082a08f6c --- /dev/null +++ b/static/docs/reference/torch_conv1d.html @@ -0,0 +1,4657 @@ + + + + + + + + +Conv1d — torch_conv1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv1d

    +
    + +
    torch_conv1d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  dilation = 1L,
    +  groups = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

    weight

    filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1

    groups

    split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    + +

    conv1d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

    + + + + +

    Applies a 1D convolution over an input signal composed of several input +planes.

    +

    See nn_conv1d() for details and output shape.

    + +

    Examples

    +
    if (torch_is_installed()) { + +filters = torch_randn(c(33, 16, 3)) +inputs = torch_randn(c(20, 16, 50)) +nnf_conv1d(inputs, filters) +} +
    #> torch_tensor +#> (1,.,.) = +#> Columns 1 to 8 -9.2155 -13.3788 -4.8815 -5.9509 1.0559 11.3390 4.1760 -7.3364 +#> 0.9687 3.5362 6.5343 9.7170 3.5029 -0.5239 11.8401 -3.6896 +#> 9.2053 -0.8704 2.5167 5.3901 5.4175 -10.4469 -1.7450 -8.4401 +#> 17.3697 8.0818 -6.5866 -5.1253 -1.3999 -14.3850 -0.6795 6.7809 +#> 7.6453 6.5674 2.8867 6.4723 7.2736 -9.4473 -1.8036 -1.8636 +#> -10.4610 0.7640 -11.0805 7.2157 -3.6219 -11.6838 -7.2940 6.4684 +#> -5.6155 -2.4741 4.2673 -2.6692 -2.5511 5.6617 8.3692 11.8541 +#> 2.9653 10.0184 2.3498 10.2008 -0.0906 -8.3016 -7.6074 -8.9669 +#> -0.1892 -1.4338 -8.1527 6.1579 14.1115 6.3555 -5.8888 1.3635 +#> -0.8715 3.2313 -5.0396 -8.8492 -2.6872 2.5312 1.5977 4.7325 +#> -0.6426 6.7878 10.6713 4.4664 15.5926 -12.9809 8.5686 -14.1252 +#> -9.4297 2.5895 6.8971 -1.2187 -8.5115 2.7068 -3.6668 3.7638 +#> -4.8841 1.5752 -8.0881 -0.7977 -13.0366 1.6083 8.0490 -1.1757 +#> 9.2181 -18.6418 5.3897 -7.0765 10.9595 -1.2353 11.2700 5.9045 +#> -3.9796 18.5287 -6.2100 2.2016 -4.9045 13.3548 4.2783 7.5602 +#> 6.4389 -3.1190 4.0417 3.9676 5.6056 -6.5717 -6.5705 1.6651 +#> 7.8152 -2.1107 -8.4163 3.4218 -11.4875 -1.3201 10.1588 4.6279 +#> 3.1470 -0.6810 -4.7713 2.0391 -4.2066 -20.4188 -4.7113 8.9348 +#> -3.2366 -6.0332 2.2069 8.2774 12.2769 14.3749 -12.7319 -0.7708 +#> 0.6335 -1.3701 -9.1195 11.4271 -5.1396 5.6112 0.7465 0.8958 +#> -4.4243 3.2009 3.4101 0.8854 -6.3913 -5.3965 -5.5959 6.9523 +#> -2.0583 3.5878 2.1682 1.0807 2.3449 -7.7811 5.4776 -2.1308 +#> -14.8754 -3.5022 -4.9228 2.9605 5.1781 -0.4115 1.6601 6.8168 +#> -8.0577 -3.7750 -12.4450 -7.1238 -3.8977 14.8252 8.1138 3.9659 +#> -8.2239 3.0827 1.9788 -3.2824 1.4724 -0.2935 9.8999 4.8447 +#> 5.2614 -0.9639 -11.0854 -1.7399 -8.8694 3.1894 -9.9635 -0.2191 +#> -6.9460 2.0936 -4.6144 2.4964 -2.3149 -4.9224 5.6279 17.6400 +#> 2.7239 -7.1132 -3.2270 -4.4261 -0.1128 2.0319 5.7296 -8.8630 +#> 1.2327 2.0651 8.5595 -5.4172 13.7335 3.8594 13.4662 10.1180 +#> -10.8110 -5.3874 1.2913 -6.1393 -10.3391 6.3426 20.4897 
-13.8959 +#> -1.9465 -9.7411 -2.5889 -3.1008 -4.2539 6.0369 -6.8057 4.0780 +#> 2.4637 0.0831 -4.2466 -2.7181 1.9345 -4.0950 -4.5429 -2.3028 +#> -11.2792 -0.2996 1.2558 -2.7901 3.7011 10.1111 3.7003 -1.7684 +#> +#> Columns 9 to 16 -1.2297 14.3283 2.2631 1.3546 -1.5108 -7.3943 8.3483 6.9639 +#> -0.8142 -4.1600 0.6429 3.5323 1.7201 2.6613 -5.5547 -6.9555 +#> -0.4468 -9.6767 3.4243 -5.5370 7.9330 -3.7124 -8.0854 2.8903 +#> -0.8646 3.8281 -11.0271 3.5653 6.2605 -8.7873 -19.2922 1.3006 +#> 1.9641 -0.2966 1.1310 -8.4550 12.6014 -0.7149 -4.5330 3.5843 +#> 1.9168 17.5806 -0.1329 5.5879 8.6691 -11.9588 3.0880 -8.4299 +#> 7.6088 -4.8286 -7.4288 -2.1821 -10.3345 2.2468 -1.3855 6.6098 +#> 3.9023 2.7969 -6.6740 -2.6189 3.4172 -7.3487 4.3757 20.1543 +#> -9.2841 -5.3292 7.3305 -6.0223 6.0400 4.6032 -4.5086 1.9101 +#> 2.6685 -11.7018 -10.9301 7.0899 5.2995 10.9822 -4.8582 -10.7569 +#> 3.1771 -2.5365 17.9825 -12.2625 9.7687 -3.4043 -8.8135 4.3696 +#> 5.2823 0.7356 -4.1298 -0.8445 -1.4917 6.5445 13.9097 -1.9308 +#> 1.4821 4.8847 -0.5632 -3.9153 0.0182 -6.5668 3.3724 -0.0089 +#> 2.0266 -4.8865 10.2491 1.0754 -5.8866 -3.6270 -8.2653 6.7061 +#> -2.9120 2.2089 -9.9280 -5.7288 -0.5021 3.7785 2.3965 2.2677 +#> 2.3213 -1.8852 6.9190 1.6314 -2.5556 1.8858 -11.7685 -8.4369 +#> 9.6492 -2.9285 12.2038 -0.3930 -8.1063 2.2913 4.9456 10.8998 +#> -0.6015 -0.4647 -5.5746 15.3407 -13.0841 6.5545 -4.9857 -2.0244 +#> 4.8784 -8.7558 6.3537 4.8576 4.3489 -6.7662 -8.7465 2.3861 +#> -6.2488 11.2704 -1.0825 6.4878 -7.8461 5.2779 5.0536 4.8804 +#> 7.4775 -4.1008 -3.6440 -0.9562 1.8611 3.2053 3.7267 6.6024 +#> 1.7077 16.4328 1.1904 11.2724 -1.9391 -12.8586 3.8936 3.2751 +#> -12.2966 -2.3560 -0.6443 3.9921 -7.5230 0.9491 14.3577 -4.5637 +#> -0.1001 -11.4522 -0.7943 -1.2022 5.1980 -8.8641 -13.6163 -11.5659 +#> 3.8534 1.4053 4.2852 -9.7152 -4.8377 5.4733 11.9647 -3.4902 +#> 5.6387 12.2535 -4.6256 9.8812 -10.5789 3.1894 12.6462 2.0144 +#> 12.5536 -5.3916 -6.8821 -7.8555 0.9683 -10.4353 -7.7733 -3.1686 +#> 3.2834 
-13.1777 0.7912 7.4581 7.8920 15.8003 -5.9017 6.8834 +#> 6.0552 1.3924 6.7256 -5.4646 -5.5976 -8.2639 0.7305 4.5903 +#> -3.8023 -2.4783 6.5691 -2.8264 5.2719 20.0680 3.1247 8.2646 +#> 5.6692 -8.6198 -15.6985 11.5830 -14.8205 10.7908 14.9619 17.0967 +#> -8.9506 -2.7822 1.0922 -10.7574 -0.3966 -10.7521 14.5444 -18.7963 +#> -6.0050 6.5820 -2.7543 4.5226 1.5246 9.3887 2.3188 -16.2012 +#> +#> Columns 17 to 24 -4.8695 3.7966 2.5417 6.4239 1.1281 -12.0619 0.3447 -1.8495 +#> -1.4736 -17.1540 3.4796 0.8374 9.6773 -2.4723 3.6881 1.1677 +#> -14.2464 0.4473 -4.9651 0.1220 -3.6831 6.5208 2.0460 -4.2853 +#> -1.5036 4.6976 -3.8827 -16.1917 -1.0532 -6.9277 -9.8995 6.4730 +#> 0.7103 -1.6257 3.9315 -10.7942 4.8641 -4.6957 3.4183 2.0056 +#> -5.2118 3.0820 14.3102 -12.4407 -4.1198 -3.3126 -12.5885 -0.2005 +#> -2.3870 7.6276 5.7137 5.6988 -1.1870 -2.0965 1.6126 2.9134 +#> 14.2791 9.6267 9.3154 0.1505 13.1650 -2.0966 7.8122 -0.5654 +#> -12.7522 -0.3538 7.9037 10.9583 -2.5624 12.9610 0.0900 -2.0126 +#> 4.2564 -0.3686 -12.5285 3.6130 0.7694 -9.3640 -11.3119 4.9255 +#> -6.6944 -12.5297 7.5991 -9.9898 -5.4475 -1.1485 2.1881 -6.5117 +#> 6.7062 -5.6601 -0.6736 4.1640 -6.3789 -4.6947 3.3107 0.1270 +#> 1.4707 1.9069 7.0601 -11.8777 0.2879 -8.3140 10.3291 5.2538 +#> 4.4303 -6.6440 -10.1144 -3.8942 -15.6688 10.0768 -4.0495 3.0229 +#> 0.7410 3.9392 -5.8770 -9.6422 -4.4338 7.3714 1.1163 1.7708 +#> 9.1297 2.9569 -12.4715 3.4630 2.7644 -0.6531 -16.8944 1.5479 +#> -1.2994 8.4404 0.0242 6.1673 -5.8023 1.5684 14.3514 -0.2133 +#> -3.3769 6.3468 -2.9930 -3.8928 -14.3513 -1.1740 -16.5160 3.8613 +#> -2.0130 9.6563 3.1024 12.3306 0.5746 5.1057 0.9603 0.5509 +#> 3.8197 15.7484 0.7000 -5.7188 -3.2259 11.2313 -2.1890 2.5566 +#> -6.0698 5.0667 -0.8852 2.6291 4.4607 -2.9877 -0.4767 -2.3615 +#> 3.3213 2.0417 -7.6848 2.5958 3.6713 3.4101 2.9736 6.2485 +#> 6.5813 -5.7292 -4.1808 -4.0592 -2.1550 -1.4666 -10.1642 -4.0215 +#> -2.6416 1.1199 -1.7281 -1.9130 -22.5373 -15.1305 -2.5205 8.0506 +#> -5.9875 -3.0936 2.9764 
6.2160 -1.3205 11.5670 1.5781 -3.7485 +#> 7.4648 16.2491 -5.4137 6.2873 13.0638 15.2053 0.0131 0.5840 +#> 6.0694 2.7495 15.8616 -9.4167 -2.1686 0.9645 7.1023 3.1712 +#> -4.9406 -1.4991 -8.2263 -11.4416 -7.9785 -4.3864 -0.9767 6.5426 +#> -0.4732 -1.7360 -5.7882 1.7442 -9.5797 -1.2339 2.6533 1.0732 +#> 2.7094 -7.0719 0.8703 13.5209 14.0761 -16.0680 4.5148 7.2220 +#> 5.4432 14.7888 -6.7092 17.5242 -11.2132 -9.3064 -1.8499 -3.9303 +#> 17.9627 -11.4976 9.9461 -0.3440 -4.5030 -1.2401 -0.3820 -8.0093 +#> -14.7131 -13.0150 -0.9647 -3.1112 -12.8724 -8.7595 -10.1707 -8.1402 +#> +#> Columns 25 to 32 1.8547 10.9740 -4.0183 17.7692 9.3797 -5.4057 -1.2404 3.7908 +#> 0.2848 5.8314 3.8718 8.4746 9.3938 -0.3301 -3.7456 -1.8790 +#> -7.1713 0.7990 -0.0932 4.3337 6.5727 1.5852 -3.6544 -5.7062 +#> -1.1724 -10.0680 -15.0025 -6.9689 -9.6004 7.3883 1.7708 7.3982 +#> -0.5131 8.0941 -0.3890 1.9520 -1.9299 3.8167 3.7566 1.6492 +#> 7.9042 -2.2868 8.7217 5.5398 -6.5718 2.3075 -18.9907 1.2899 +#> 2.5883 -1.2177 -12.2389 3.3108 -1.5127 -6.1679 -3.8565 13.5627 +#> -5.1450 5.9179 -3.6925 -10.1028 -7.9639 3.4012 4.1468 2.0707 +#> -9.5785 -2.3109 1.5583 -3.7295 7.2034 4.7588 -15.4971 -11.6062 +#> 3.6796 13.0790 2.7777 -2.5593 4.2404 6.4327 2.8832 -7.3154 +#> -8.5260 4.1016 -11.3898 3.9401 1.4674 -13.3377 0.7232 0.7624 +#> 6.6091 0.1427 -1.6529 -1.2078 -0.8534 -1.6778 7.6013 -0.3548 +#> 5.6827 -5.8490 3.1977 8.0379 -7.3799 4.6825 -1.2672 7.2958 +#> -1.9738 3.6717 -10.3418 -12.5794 9.9656 -5.8658 7.0017 -3.5312 +#> 2.2719 -1.7249 -9.5627 7.5573 -12.4113 10.4928 7.8016 0.9843 +#> 8.1371 6.7174 2.2147 0.4939 8.8809 -7.9415 5.5854 15.3170 +#> -11.3932 -1.0371 4.5466 -5.9271 -3.3451 4.6784 -2.8040 8.8742 +#> 2.4757 -1.5845 -1.1163 -0.1127 3.7453 -1.4919 3.8370 16.2731 +#> 5.6458 4.9561 7.9401 -3.1958 3.8495 -2.7730 -12.7283 -2.1418 +#> 3.3611 7.1985 7.2372 -11.4225 -0.6149 -0.4715 -1.4611 4.9257 +#> -9.7007 -0.9226 -4.5642 -5.6105 -7.3980 3.1887 -2.8203 1.5539 +#> 2.0339 -7.3502 -4.2530 0.4539 -2.2738 
-7.3877 3.7860 1.1196 +#> 1.9136 -2.9839 -6.5101 9.0191 0.3577 -2.4797 -0.3782 5.8724 +#> 5.7521 3.2082 16.4502 0.2664 -4.4580 8.6922 -6.5666 -15.9535 +#> -18.4346 -10.0809 3.6438 2.8954 -10.2978 6.2441 9.7118 -1.3140 +#> -0.3073 -8.0157 -2.1891 -3.7042 -4.4942 -1.6702 3.9804 1.5787 +#> -1.2848 0.6467 -5.3219 -3.3679 -3.5367 4.1371 -11.7713 20.9084 +#> 4.5120 9.6014 -1.7913 -0.7869 14.9937 13.7954 0.0611 -12.0633 +#> -3.4014 -2.2428 -3.0984 -1.5958 -3.6751 -7.7030 14.9411 10.2175 +#> -1.6834 8.8222 -4.5406 12.8437 7.8638 0.8852 4.3863 -19.6971 +#> 0.6685 -5.6872 -17.9757 -7.1754 -1.7314 0.7158 9.9081 8.7799 +#> 1.5819 -4.4604 15.8837 -4.2586 -3.4666 4.1039 0.2826 -8.2974 +#> 1.1024 10.2194 4.3061 9.3984 10.0735 2.7659 -1.0396 -15.0943 +#> +#> Columns 33 to 40 3.5032 1.6151 13.6459 -6.1995 -9.8488 12.3514 2.2729 2.7821 +#> -10.2329 4.8449 1.8473 -11.9907 -1.5774 4.7261 -4.6764 3.4796 +#> 3.8205 -5.8217 0.2988 -1.5748 1.5653 -11.6889 1.7686 1.9090 +#> 13.4749 -8.6482 13.3443 4.5522 -1.9011 0.3399 -5.3454 -9.5580 +#> 7.1295 -9.1016 -0.1034 2.5529 4.4182 7.2676 8.8859 0.2998 +#> 13.1355 -0.0824 -3.8959 2.2275 1.8356 -5.7940 2.1526 -2.6223 +#> -4.7088 -2.6130 3.9118 -0.8039 -0.8296 7.5224 0.0166 7.7747 +#> -0.1367 10.0605 -3.1626 -4.9235 9.0574 8.3608 -0.6431 1.6203 +#> 3.9886 -7.4451 1.0348 -6.2363 2.4075 -16.8698 7.9134 13.1190 +#> -5.1753 -10.0095 4.1602 -4.6037 7.9717 2.1947 -3.1838 1.4414 +#> -5.7036 17.6836 0.8175 -12.0724 10.1267 -12.3621 -6.8936 12.6102 +#> 0.2967 -3.6497 -0.3328 -0.5213 2.7169 4.6599 -0.7938 -6.6801 +#> -4.6639 10.4488 1.7619 8.8252 4.8165 2.1116 -14.7942 0.6958 +#> 10.5806 -5.0217 10.8438 15.9300 3.0902 -4.6161 7.6645 -3.7093 +#> -1.9758 8.7825 5.5702 -1.1931 20.7325 -1.6902 -6.5710 8.5370 +#> -3.4315 -3.9690 -2.7181 3.9475 -5.2950 -4.0683 5.1271 -8.8019 +#> -2.4889 7.4373 -1.2045 4.6231 6.4070 1.5636 1.8816 2.0895 +#> -2.4631 8.3441 2.7115 12.9789 11.4943 -4.0801 -2.7654 -14.1363 +#> 2.3585 -3.1852 -4.5072 3.1720 2.2918 -3.6508 -0.0585 4.2550 
+#> -7.1227 14.0587 -9.2695 2.0586 8.1077 -8.0613 -0.0276 -2.7233 +#> 11.5503 -1.0402 -0.7068 7.1030 9.6921 4.0622 1.5635 -0.8581 +#> -12.0199 3.8998 -3.4075 -9.5343 -9.1043 -7.0554 -2.9911 -6.6626 +#> -5.6089 1.6227 6.6670 2.8772 2.9687 -11.4097 11.1582 2.4165 +#> 6.3019 3.1575 7.2511 8.8503 20.9788 -13.1751 -7.8800 5.9546 +#> 6.2591 -1.8061 0.9736 -4.3092 10.0755 3.7018 1.4896 5.1624 +#> 2.0282 5.3408 -1.1668 2.4538 0.4726 9.4383 3.5097 -6.0822 +#> -9.5976 -1.6313 10.0339 0.4870 8.2388 2.8234 -7.8079 11.4701 +#> -1.7225 9.5804 6.9717 17.5916 14.5329 2.5304 -11.2523 1.8387 +#> -2.7241 0.4796 2.8995 -2.0427 5.6780 2.6961 1.2184 1.7330 +#> -14.0950 14.8797 6.8516 -4.5487 -0.0471 19.4064 -9.8698 2.2338 +#> -3.9601 2.5363 8.8149 5.9725 3.4330 12.5849 5.8831 -0.3196 +#> 6.8494 -7.2550 1.9249 -1.6378 -4.1263 -2.4659 -3.9709 -8.6884 +#> 3.5879 -0.8758 7.2625 -4.9020 -4.2377 -0.3593 -5.0603 1.3574 +#> +#> Columns 41 to 48 4.7544 -3.8532 1.3537 18.0648 10.9368 -3.5256 3.2463 -11.0885 +#> -11.3760 -8.4545 -6.2563 5.3757 8.7233 -3.5149 6.3589 -8.7134 +#> -6.3385 3.0080 -2.7377 -11.4562 -1.4043 -11.9691 4.4455 -10.3454 +#> -1.7988 12.7498 3.4409 -11.4334 -3.6944 3.1703 -18.5209 13.9670 +#> -7.0067 0.2700 0.6592 -2.8205 -2.1231 2.6654 4.0859 -0.1331 +#> -0.1821 10.3581 -1.0482 18.0136 5.8157 0.6943 5.0057 -9.2490 +#> 16.3705 -3.4258 0.1996 2.7516 -2.5854 3.3346 -5.4813 10.1746 +#> -4.4923 10.9602 -2.0492 -6.0832 3.6700 -0.7903 10.9270 6.1423 +#> -3.9708 2.4995 -6.0148 3.8555 8.7296 7.2791 -1.8554 -6.3309 +#> 1.3187 -1.9468 -4.6737 7.0641 -0.5110 3.2483 5.6464 11.6303 +#> -12.5985 -1.3315 12.5610 -5.4499 -1.1348 -1.6720 -4.7975 -6.8854 +#> 2.6793 0.8099 -0.8208 2.0605 0.2827 -2.0339 6.4241 0.4315 +#> 7.9504 9.4412 11.9954 -4.2733 -6.7321 -10.7687 5.7605 1.0645 +#> 10.1457 -0.1423 -4.4749 3.1614 0.5063 13.8041 0.4637 -2.7746 +#> -8.8189 13.9607 6.1141 -4.7353 -4.7814 -11.4741 15.6631 3.3705 +#> -12.9460 3.0432 22.2455 5.7695 -7.4932 4.4910 -7.6372 1.0687 +#> 7.6966 7.9218 6.8505 
-3.2164 -2.5481 -1.6493 0.3228 11.9375 +#> -5.3288 6.7775 21.9513 -2.1556 -16.4669 3.2823 -15.1787 -1.6592 +#> -5.5416 1.3512 11.4356 13.0597 8.0178 -9.0771 7.9728 -10.5826 +#> 4.3661 8.3695 7.9370 4.1265 -1.1032 3.1672 8.7553 7.8063 +#> 0.4222 2.5522 -11.8432 1.0525 6.8551 -9.1025 0.2412 4.1610 +#> -1.5518 12.8515 13.8811 -0.2850 6.1632 3.1550 -3.3247 -6.2828 +#> -0.3344 -8.8364 6.2271 3.0440 -13.1455 8.6795 -3.1012 -0.8562 +#> 1.7786 9.6441 -1.1621 -0.8870 1.6428 -6.4055 8.3717 14.0249 +#> -4.4673 1.8278 0.4781 0.6198 -1.5879 -7.5952 -3.1377 -3.3075 +#> 4.9676 6.9847 -2.2823 8.5192 4.7167 2.9328 -1.9337 2.2208 +#> 5.1345 4.7293 6.2842 6.9622 -3.0074 2.0482 -8.6324 11.0786 +#> 9.9139 -5.6082 1.8029 -2.0476 -3.7845 -0.8135 2.5508 7.4146 +#> -1.6769 8.3854 17.1547 3.6791 1.2201 -11.0141 1.3322 -1.4765 +#> 12.4462 -1.2692 -3.4418 -0.4186 10.4377 5.5881 5.5438 4.7403 +#> 16.0646 6.3947 -3.8445 -5.2718 2.5817 4.1743 -0.7499 2.3548 +#> 9.2805 0.4435 -12.8332 2.4571 -14.3137 -8.1492 19.0161 -4.3424 +#> 1.7508 -5.0976 -9.9643 10.7083 4.8640 0.9441 -0.6708 -10.0837 +#> +#> (2,.,.) 
= +#> Columns 1 to 6 1.1040e+01 2.2389e+00 4.4628e+00 5.7466e+00 -2.6627e-01 -8.1387e+00 +#> 4.2474e+00 6.3469e+00 1.3473e+01 6.8586e+00 -3.1554e+00 -1.1583e+01 +#> -1.1430e+01 -3.9404e+00 6.4832e+00 8.8420e-01 -3.3189e+00 -7.1515e-01 +#> -5.3851e+00 -1.9954e+01 -3.6777e+00 -3.9789e+00 6.8651e+00 -1.2576e+01 +#> -1.2494e-01 4.7613e+00 8.2518e+00 5.1161e+00 2.6879e+00 -1.1478e+00 +#> 8.1778e+00 1.4736e+01 8.8339e+00 -5.1749e+00 1.2643e+01 -1.4105e+01 +#> 8.0566e+00 -1.2687e+01 -9.9194e-01 4.1174e+00 -2.6348e+00 -4.0748e+00 +#> -8.4652e+00 3.9182e-01 8.5976e+00 -1.7796e+00 1.9045e+00 2.7233e+00 +#> -4.8983e+00 -5.3415e+00 1.3234e+01 -2.1544e+00 5.6411e+00 1.3019e+01 +#> 9.0287e+00 1.1770e+01 1.4281e-01 8.8434e+00 -7.3315e+00 -1.2302e+01 +#> -4.0297e-01 -5.8028e+00 -2.7611e+00 -2.3790e+00 -9.7937e+00 1.5359e+01 +#> 6.2538e+00 4.3550e+00 -8.2056e+00 2.8763e+00 -1.0728e+00 -1.0987e+00 +#> 2.8934e+00 -4.6745e+00 -9.3801e+00 -3.2556e+00 -1.2905e+00 1.5976e+00 +#> 6.5937e+00 -1.0203e+01 -4.2327e+00 3.3749e+00 5.8875e+00 9.6595e+00 +#> 1.0829e+01 -1.6179e+01 1.3427e+01 4.3272e+00 -4.3343e+00 1.1981e+00 +#> -1.6241e+00 -5.2897e+00 4.4671e-01 3.6279e+00 -1.6293e+01 -9.8559e+00 +#> -9.2592e-01 1.3693e+01 1.5238e+00 1.2777e+01 -1.5975e+00 2.5551e-01 +#> 2.8383e+00 -9.2113e+00 2.6174e+00 -8.6557e+00 -1.3164e+01 3.6949e+00 +#> -1.8526e+00 6.1922e+00 3.9519e+00 -1.0871e+00 -1.0004e+01 2.8450e+00 +#> 3.3199e+00 5.9430e+00 -1.6988e+00 3.6933e+00 -1.6695e+00 9.7463e-01 +#> 6.8643e-01 -1.8230e-01 2.7946e+00 -9.0175e-01 7.5062e+00 2.9221e+00 +#> -1.0841e+01 -7.1443e+00 3.9800e-01 -4.6693e+00 -1.6905e-01 -9.1411e-01 +#> 8.9760e+00 6.5289e+00 2.0646e+00 2.0664e+00 -7.4163e+00 3.1503e+00 +#> 3.2287e-02 9.4439e+00 -1.1551e+01 -5.0678e+00 1.9310e+00 3.4262e+00 +#> -1.2702e+00 3.0408e+00 2.1001e+00 -3.5340e+00 -3.2536e+00 6.8281e+00 +#> -7.6196e+00 5.7684e+00 3.2679e-01 5.5921e+00 9.2952e+00 -4.7166e+00 +#> 1.3529e+00 3.3966e-01 -1.8293e+00 1.5309e+01 -1.0006e+00 -7.7701e+00 +#> 1.0581e+01 
7.6868e+00 -1.2268e+01 8.1392e+00 4.6540e+00 9.6224e+00 +#> -5.1705e+00 -1.1192e+01 9.5458e+00 -8.6510e-04 -1.1793e+01 8.4559e+00 +#> 1.8310e+01 1.4111e+01 1.5774e+00 -2.1493e+00 -5.2412e-01 9.6496e+00 +#> -2.8237e+00 2.5916e+00 2.1321e+00 -5.9730e+00 -3.1469e+00 2.4400e+00 +#> -1.5843e+01 1.3855e+01 -1.3872e+01 -1.8868e+00 1.0080e+01 -8.2378e+00 +#> 8.8551e+00 4.9966e+00 -6.3457e+00 -1.0323e+01 5.0205e+00 -9.3470e-01 +#> +#> Columns 7 to 12 6.8024e+00 6.8142e+00 4.5797e+00 6.9012e+00 1.0459e+01 1.0657e+00 +#> -6.9185e-01 -6.4943e+00 5.4717e+00 -3.6463e+00 4.3075e+00 7.7041e+00 +#> -4.0044e+00 4.6633e+00 5.0276e+00 -1.4530e+00 6.8511e+00 1.2912e+00 +#> -1.3504e+01 -1.1171e+01 -5.0406e+00 -7.0066e+00 -1.7130e+01 -2.3064e-01 +#> -2.8190e+00 4.1526e+00 7.7969e+00 -6.6416e+00 3.5648e-01 -4.1884e+00 +#> 2.6731e+00 -9.3361e-01 -1.9250e-01 7.6096e-01 -4.5198e+00 -4.9756e+00 +#> -7.7441e+00 3.3261e+00 -1.1134e+01 1.9768e+00 3.2081e+00 -9.4453e+00 +#> -1.4551e+00 1.6707e+01 1.1992e+01 -1.2095e+01 -8.7680e+00 -5.7492e+00 +#> -2.0924e+01 9.0712e+00 4.3001e+00 2.3030e+00 1.5413e+00 1.7113e+00 +#> -2.4857e+00 -8.2347e+00 -1.2573e+01 -1.0862e+01 -4.1379e+00 -6.6089e+00 +#> -1.5728e+01 2.3377e+00 -3.2312e+00 -8.4338e+00 1.4856e+01 -1.8265e+01 +#> 5.0071e+00 -6.7687e+00 -3.5333e+00 -6.7396e+00 2.0115e+00 -5.8410e+00 +#> 1.7353e+01 -9.7737e+00 5.5499e+00 1.0203e+01 -3.5550e+00 -8.5098e+00 +#> 1.4913e+00 7.1976e+00 5.9653e+00 -9.6813e+00 5.5417e+00 2.8466e+00 +#> 3.6951e+00 1.7859e-02 -5.5736e+00 2.3252e+01 -6.3235e+00 -1.6749e+01 +#> 1.2651e+00 4.7339e-01 1.5961e+00 9.3233e+00 1.4838e+00 -4.3081e+00 +#> 1.2277e+00 1.7227e+01 -6.4262e-01 -2.7803e+00 2.6236e+00 -1.0040e+00 +#> 1.9608e+00 3.1339e+00 -4.8171e+00 5.3754e+00 -4.6124e+00 -6.1681e+00 +#> -3.4816e-01 2.3533e+00 1.2979e+01 -3.9980e+00 1.0758e+01 1.9073e+00 +#> 1.0177e+01 1.0798e+01 -8.6815e+00 7.7241e+00 -1.2984e+01 -5.2720e+00 +#> -2.7988e+00 7.2102e+00 -4.4924e+00 -6.3182e+00 -1.2623e-01 1.3144e+00 +#> -3.5231e+00 
4.8029e+00 8.3415e-01 1.5933e+00 -4.7331e+00 6.3149e-01 +#> 5.2782e+00 -4.3263e+00 -1.2520e+01 4.0108e+00 3.0807e+00 -8.3008e+00 +#> 3.2957e+00 -9.9090e+00 -7.1363e+00 8.8062e-02 -8.3463e+00 -7.5363e+00 +#> -5.6222e-03 -2.9559e+00 -7.3561e+00 -9.5151e-01 1.2968e+01 -1.3217e-01 +#> 6.8911e+00 9.1908e+00 -3.3467e+00 1.7311e+01 -3.0840e+00 1.2651e+01 +#> -7.3550e+00 6.4477e+00 -4.7388e+00 -1.8323e+00 7.4196e+00 -1.2512e+01 +#> 8.4276e+00 -7.1850e+00 -1.0022e+00 -5.9997e+00 6.7695e+00 5.1475e+00 +#> 7.7367e-03 7.5006e+00 5.3158e+00 5.5656e+00 5.8394e+00 -4.7957e+00 +#> 8.5397e+00 8.4526e+00 2.3535e+00 7.2155e+00 7.2376e+00 3.4551e-01 +#> -3.7541e+00 1.3644e+01 -1.1201e+01 -9.4936e+00 -1.4025e+01 4.1256e+00 +#> 1.5837e+01 -1.2085e+01 1.8337e+01 -7.5301e-03 7.2932e-01 -3.3146e+00 +#> -8.1402e+00 -1.4226e+01 -1.6781e+01 -2.6737e+00 -5.7033e-01 4.2115e+00 +#> +#> Columns 13 to 18 -7.6116e+00 -2.2572e+01 -7.3696e+00 1.2489e+01 -3.6996e+00 -8.4814e-01 +#> -6.5330e+00 -1.4115e+01 -1.6207e+01 4.8898e+00 -3.3812e+00 -2.8668e+00 +#> 1.0369e+01 8.3727e+00 2.5078e+00 -4.5453e+00 -9.8145e+00 5.5830e+00 +#> 4.2195e+00 3.3645e+00 4.0064e+00 -8.8580e+00 4.7633e+00 2.1263e+01 +#> 1.0394e+00 -4.8507e+00 -1.1569e+00 3.6901e+00 -4.9992e-01 -2.2131e+00 +#> -1.2750e+01 -3.0266e+01 -1.2321e+01 -1.3099e+01 6.0695e+00 9.5454e+00 +#> -8.8311e-02 -6.3177e+00 3.9969e+00 3.2729e+00 1.2270e+01 1.4457e+01 +#> 5.8119e+00 -5.0032e+00 -4.5845e+00 5.5784e+00 7.0958e+00 -4.4555e+00 +#> 1.0096e+01 8.4955e+00 -6.9884e+00 -1.1447e+01 -6.7327e+00 -5.6462e+00 +#> -5.5768e+00 -2.3684e+00 1.2694e+00 4.3510e+00 7.8375e-01 5.9263e+00 +#> 8.8960e+00 -3.4571e+00 -1.0240e+01 6.3798e+00 -7.5809e+00 -1.7999e+00 +#> -8.4793e+00 9.5572e-01 -1.8657e+00 8.0276e+00 -1.9531e+00 1.1821e+00 +#> -6.2658e+00 -2.4873e+00 6.7244e+00 -2.7642e+00 4.8987e+00 1.1314e+01 +#> 5.8645e+00 8.9250e+00 -2.0626e-01 2.6696e+00 5.9454e+00 -4.5350e+00 +#> 3.3258e+00 -2.1198e+00 1.3239e+01 6.2403e+00 -1.0381e+01 9.9492e+00 +#> -7.3888e+00 
1.7891e+00 -6.5434e-01 -2.0986e+00 5.7303e+00 3.8639e+00 +#> 4.8493e-01 -1.2327e+01 -1.0240e+01 6.6167e-02 8.3635e+00 5.0293e+00 +#> -5.1651e-01 -4.9654e+00 5.2045e+00 -1.3461e+01 7.8285e+00 1.7121e+01 +#> -1.0272e+01 9.2964e+00 -5.5075e+00 -2.4449e+00 2.7975e+00 2.6272e+00 +#> -4.2984e-01 -9.9611e+00 6.5585e+00 3.3719e+00 2.8657e+00 -1.7146e+00 +#> -3.3866e+00 5.4789e-03 6.0831e+00 -6.4697e+00 4.7145e+00 -2.1051e+00 +#> 1.2717e+01 7.5293e-01 -1.4031e+00 -4.8171e+00 -1.0169e+01 3.8149e+00 +#> 9.2181e+00 -1.5667e+00 7.8714e+00 -3.2686e+00 -1.9517e+00 -2.9886e+00 +#> -7.0515e+00 4.5613e+00 -5.9774e+00 1.5230e+00 4.9189e+00 1.2735e+01 +#> -1.5374e-01 1.9978e+00 7.7595e+00 -1.8426e+00 -4.5077e+00 -4.7388e+00 +#> -2.7195e+00 -4.0337e+00 5.6005e+00 -1.1810e+00 1.2080e+00 -6.5539e+00 +#> -8.3179e+00 -1.5172e+01 -6.6159e+00 4.3008e+00 1.7152e+01 1.1670e+01 +#> 9.5280e+00 1.1548e+01 -7.2594e+00 -3.2168e+00 3.3255e+00 -2.1210e+00 +#> -1.5480e+00 3.5520e+00 8.4671e+00 2.8069e+00 -4.7788e+00 6.6099e+00 +#> -3.6551e+00 -6.4310e+00 -7.5838e+00 3.0752e+00 1.7736e+00 -1.5723e+01 +#> 2.2205e+00 1.1095e+01 8.1703e+00 1.4510e+00 1.1793e+01 1.3099e+01 +#> -7.7663e+00 6.1615e+00 -8.8113e+00 -1.3853e+00 5.6379e+00 -3.9343e-01 +#> -5.7712e+00 -1.0588e+01 -1.5045e+01 -6.2267e-01 -5.5895e+00 2.6833e+00 +#> +#> Columns 19 to 24 -5.2464e+00 -7.0874e+00 -4.7609e+00 -3.7282e+00 1.0706e+01 1.5297e+00 +#> -6.9263e+00 1.1783e+01 -1.1422e+01 -4.0469e+00 6.4235e+00 2.0727e-01 +#> -4.9231e+00 -1.5128e+01 6.5018e+00 2.1611e+00 2.1524e+00 -1.9323e-01 +#> 4.3420e+00 -1.3789e+01 -6.0091e+00 2.4006e-01 -9.3553e+00 -9.8307e+00 +#> 3.4428e+00 7.1870e+00 -1.1922e+00 1.1430e+00 5.4847e-01 5.8617e+00 +#> 5.1685e+00 6.3463e+00 4.5243e+00 -2.6326e+00 -3.9794e+00 -3.6855e+00 +#> -2.6699e+00 5.1123e-01 4.6196e+00 -4.7047e+00 4.5384e+00 -1.8685e+01 +#> 5.0849e-01 1.6162e+01 4.2410e+00 -4.2715e+00 3.1371e+00 -4.5112e+00 +#> 1.6873e+01 -1.2267e+01 -4.0977e-01 8.0116e+00 -7.4305e+00 5.1819e+00 +#> 2.2561e+00 
9.6738e-01 -2.3555e-01 -1.0170e+01 -4.5446e+00 5.6410e+00 +#> 6.0245e+00 6.3343e+00 -6.5556e+00 -2.2797e+00 2.3705e+00 -3.5569e+00 +#> 1.8994e+00 8.7867e+00 3.5606e+00 -8.5007e-01 2.7260e+00 1.3389e+00 +#> -1.1418e+01 -9.5140e-01 1.0049e+01 -6.1606e+00 9.7031e+00 -1.6093e+01 +#> 9.8920e+00 1.4330e+01 -6.2582e+00 7.7795e-01 -5.7016e+00 8.8303e+00 +#> 2.4353e+00 -4.5353e+00 7.7747e+00 -8.8086e+00 3.7741e+00 -1.5807e+01 +#> -8.1247e+00 -8.7499e+00 -2.1982e+00 1.5405e+00 -1.4039e+00 2.3166e+00 +#> 5.2253e+00 -3.7718e+00 8.5709e+00 -5.2631e+00 -7.7831e-02 -6.2829e+00 +#> -5.4685e+00 -4.3472e+00 3.7960e+00 -2.9034e+00 1.7978e+01 -1.5647e+01 +#> -3.9083e+00 -4.4565e+00 2.4797e+00 -3.7251e+00 4.1443e+00 -5.5959e-01 +#> 6.9433e+00 1.5254e+00 1.3598e+01 3.9247e-01 6.8856e+00 -1.0771e+01 +#> 8.9114e+00 6.2955e+00 5.4177e+00 9.8818e+00 -5.2334e+00 2.4367e-01 +#> 1.5399e+00 2.3911e-01 2.7749e+00 -4.2119e+00 -5.4261e+00 -1.6565e+00 +#> 2.0196e+00 -5.7851e+00 -1.3748e+00 5.4834e+00 -1.1966e+00 -6.5270e+00 +#> 7.0821e+00 -3.0755e+00 1.2174e+01 -9.4236e-01 -5.6663e+00 -2.5944e+00 +#> 3.3522e+00 -5.0099e+00 -2.1456e+00 1.4415e+00 -6.1322e+00 1.2063e+01 +#> -4.2653e+00 -6.8232e+00 -1.5760e+00 1.0697e+00 -5.0486e+00 7.7071e+00 +#> -2.7145e-01 4.4730e+00 -6.8754e+00 -9.2159e-01 -4.1075e+00 -8.8420e+00 +#> 2.1405e+00 4.0768e+00 2.8086e+00 -7.6287e+00 8.4893e+00 -1.3900e+00 +#> 8.6783e+00 -2.5042e+00 1.1771e+00 1.9851e+00 -1.3014e+01 -2.3765e+00 +#> 1.0708e+00 8.6690e+00 2.8070e+00 -3.4836e+00 -2.7329e+00 1.0127e+01 +#> 1.8867e+00 -2.3190e+00 1.6788e+00 3.5833e+00 -7.0983e+00 1.2037e+00 +#> -8.4881e+00 -4.3573e+00 1.2775e+01 1.5797e+00 -6.7303e+00 2.0272e+01 +#> 8.9483e+00 -8.3223e-01 -6.2088e+00 -2.7979e+00 1.4796e+00 8.8738e+00 +#> +#> Columns 25 to 30 4.7605e+00 -1.4757e+00 1.5414e+01 -3.8196e+00 -1.6951e+01 -1.2996e+01 +#> -1.9973e+00 -4.6919e-01 8.8073e+00 -2.2466e+00 4.9008e+00 1.9037e+00 +#> -5.7509e-01 2.7547e+00 -1.0311e+01 -2.8828e+00 6.6137e+00 8.7601e+00 +#> 6.2616e+00 
-5.0881e-01 1.0574e+00 9.8757e+00 1.5003e+01 6.7349e+00 +#> 4.1598e+00 -6.4312e+00 -1.6850e+01 -1.8725e+00 -8.4253e+00 -5.4590e+00 +#> -7.0131e+00 9.9630e+00 -4.2980e+00 -2.8688e+00 -4.8960e+00 -1.5651e+01 +#> 1.0108e+01 6.2334e-01 5.0382e+00 -7.0535e+00 -7.1729e+00 -1.6165e+00 +#> 6.7001e+00 -1.1708e+01 -1.5331e+01 -4.2425e+00 -1.5112e+00 6.2536e+00 +#> -2.0866e+00 6.0834e+00 -9.9227e-01 -1.0545e+01 1.6857e+00 -6.7036e+00 +#> 2.2843e+00 -3.1801e+00 5.8044e+00 5.1553e+00 4.9275e+00 -1.0224e+01 +#> -4.4992e-01 8.7138e+00 2.1283e-01 1.3802e+00 1.5857e+00 1.1157e+01 +#> -1.2385e+00 -2.4477e+00 7.9737e+00 -4.4014e+00 -4.9261e+00 -3.1951e+00 +#> -2.3980e+00 1.3084e+00 -7.7907e+00 7.2450e-01 -1.1586e+01 1.0383e+01 +#> -1.8079e+01 -9.0459e+00 1.4397e+01 -6.2526e+00 6.3617e+00 -7.8238e+00 +#> -3.8272e+00 -1.9251e+00 1.0159e+01 -1.6060e+01 -2.2783e+00 -2.2136e-01 +#> 9.4213e-01 -2.9319e+00 2.3522e+00 -1.1946e+01 -5.9928e+00 -8.4409e-01 +#> -4.5177e+00 -9.0236e+00 3.4303e+00 1.8347e+00 -1.2845e+01 3.2762e+00 +#> -5.9002e+00 -1.3134e+01 1.5702e+00 -1.1459e+01 -2.4262e+00 1.1036e+01 +#> 7.1187e+00 -4.8343e+00 -1.0063e+01 -4.6802e+00 -7.7001e+00 1.7641e+00 +#> -6.5346e+00 1.9943e+00 5.7655e+00 -7.6248e+00 -5.8852e+00 -3.1929e+00 +#> 3.2667e+00 3.4035e-01 -1.3065e+01 1.6010e+01 7.5781e+00 -9.4680e+00 +#> -2.7541e+00 9.0126e-03 3.2973e+00 -8.4961e+00 -3.8408e+00 2.5491e+00 +#> -8.2020e+00 1.2686e+01 1.0638e+01 1.8329e+01 -2.1860e+00 6.4469e+00 +#> -2.0596e+01 9.4246e+00 1.9946e+00 9.0939e+00 3.2005e+00 3.8513e+00 +#> -4.8925e+00 7.0237e-01 5.3233e+00 3.8712e+00 3.7284e+00 -9.4912e+00 +#> 4.7726e+00 -6.0445e-01 6.0763e-02 -1.1063e+00 4.0299e-01 -5.1322e+00 +#> -1.4589e+01 3.5644e+00 -1.2131e+01 -2.8696e+00 -2.1631e+00 6.4978e-01 +#> -2.0117e+01 -4.2851e+00 -2.0913e+00 8.8746e+00 -6.0943e+00 6.6141e+00 +#> -3.0863e+00 -4.9088e+00 -9.5859e-01 -8.5561e+00 -9.7085e+00 -6.5426e+00 +#> -9.3064e+00 3.8217e+00 4.1449e+00 -1.3908e+00 -1.4476e+01 -1.2525e+01 +#> 2.5078e+00 -2.1096e+00 
-2.8901e+00 -5.3604e+00 -3.8419e+00 -1.0655e+00 +#> -1.2512e+01 -4.7244e+00 1.2641e+00 1.1913e+01 8.4535e-01 9.1668e+00 +#> -8.5001e+00 1.2013e+01 1.9289e+01 5.0782e+00 7.5338e+00 -9.1519e+00 +#> +#> Columns 31 to 36 -1.5825e+01 -6.0525e+00 -3.6059e-01 -3.5320e+00 -1.0963e+01 9.0893e+00 +#> -7.6471e+00 3.8245e+00 7.4362e+00 9.0340e+00 -3.2226e+00 8.3234e+00 +#> 1.0664e+01 3.1036e+00 7.8870e+00 5.2952e+00 -5.6097e+00 1.1988e+01 +#> 7.5233e+00 -3.8525e-01 -1.0644e+00 2.8296e+00 -8.0137e+00 -1.5562e+01 +#> -3.9163e+00 2.3552e-01 2.7444e+00 1.1502e+01 2.2503e+00 -1.2105e+00 +#> -7.3549e+00 -1.0166e+01 1.9247e+00 -1.8355e-01 -8.8643e+00 -2.6318e+00 +#> -4.0218e+00 -3.7994e+00 -3.6637e+00 -9.0932e+00 8.2074e+00 -1.3125e+00 +#> -8.5332e+00 -1.0384e+01 -2.4280e+00 4.7237e+00 -2.0530e+00 6.6218e+00 +#> 1.9824e+01 -3.3879e+00 -4.7558e+00 6.3653e+00 -7.8356e+00 2.3336e+00 +#> -2.3862e+00 4.6184e+00 -5.4476e+00 -1.4052e+00 -3.0102e+00 3.4202e-01 +#> -4.5635e+00 -9.0183e+00 -9.8438e-01 7.6168e+00 -5.9883e+00 9.1950e+00 +#> -1.1475e+01 7.0631e+00 -6.4333e+00 -1.1149e-01 6.0743e+00 -2.4352e+00 +#> -3.1736e+00 2.1220e+00 -6.6557e+00 -3.4379e+00 -2.7073e+00 1.0520e+00 +#> 1.3549e+00 -7.3758e+00 7.0423e+00 -5.1716e+00 -6.6967e+00 1.9914e+00 +#> -5.8276e-01 1.3280e+00 -4.0232e+00 6.2002e+00 -1.1908e+01 1.5772e+01 +#> -6.6587e+00 -5.5256e+00 3.1339e+00 -5.5551e-01 -5.4036e+00 -9.0692e+00 +#> -7.6355e+00 -1.5741e+01 3.9675e-01 3.0156e+00 -2.0665e+00 3.9257e+00 +#> 2.0522e+00 -2.8533e+00 8.6485e+00 -7.8702e+00 4.1215e+00 -3.2170e-01 +#> -8.2318e-01 3.2963e+00 -1.1312e+01 -8.2619e-01 -4.2630e+00 6.1051e-01 +#> -1.3481e+01 -1.0202e+01 3.7420e+00 -9.4366e+00 -8.2541e+00 4.4941e+00 +#> 4.6272e+00 1.1581e+01 -4.7529e+00 -4.3113e+00 8.0156e+00 3.4779e+00 +#> 1.7016e+00 -1.1712e+01 7.2312e-01 -7.1206e+00 -9.3145e+00 9.2114e+00 +#> 5.9762e+00 -5.2956e+00 -3.9463e+00 -1.1333e+01 -1.3987e+00 -4.9694e+00 +#> 7.4142e+00 -8.1344e+00 -1.3305e+01 -1.7310e+00 -7.5089e+00 -4.8229e+00 +#> 7.3922e+00 
8.3139e+00 -5.2840e+00 -1.4562e+00 -3.4649e+00 1.2898e+01 +#> -9.4360e+00 -1.3312e+00 1.1261e+01 -5.3233e+00 -3.2232e+00 4.1879e+00 +#> 5.3539e+00 3.0263e+00 -3.2155e+00 3.4591e+00 -3.5687e+00 -6.5521e+00 +#> -3.5393e+00 -1.3799e+01 -3.3037e+00 1.9697e+00 2.8058e-01 -4.6900e+00 +#> 4.9514e+00 4.1531e-01 -8.0968e+00 -3.9034e+00 -4.8970e+00 1.0481e+01 +#> -6.1879e+00 -9.2681e+00 -6.2762e+00 -2.2763e+00 4.9857e-02 2.0032e+00 +#> -3.8395e+00 -1.0061e+01 -7.0415e+00 -1.5563e+01 7.8947e+00 -1.0889e+01 +#> 4.6596e+00 -3.8205e+00 -1.4254e+00 9.8757e+00 -8.8425e+00 1.1036e+00 +#> -5.7265e+00 -1.0946e+01 -3.0999e+00 5.1787e-02 -4.1085e+00 1.6805e+00 +#> +#> Columns 37 to 42 -1.1352e+01 1.1675e+00 4.1326e+00 2.0810e+01 -3.1170e+00 -5.3886e+00 +#> -6.3405e+00 9.9968e+00 4.5593e+00 4.3253e+00 4.3078e+00 -5.0758e+00 +#> -1.8509e+00 -2.5069e-01 -1.1716e+01 -6.5717e+00 3.8582e+00 4.9188e+00 +#> 5.6229e+00 -2.9105e+00 -2.3605e+01 8.7952e+00 -3.9078e+00 4.0413e+00 +#> 6.1460e+00 7.2539e+00 -4.2070e+00 -3.4539e+00 -3.6857e-01 -4.5547e+00 +#> -6.1082e+00 3.7526e+00 8.7095e+00 1.6753e+01 -5.0704e+00 -3.2953e+00 +#> -5.7529e+00 -3.3428e+00 1.3160e+00 1.1378e+01 -8.3256e+00 9.5483e-02 +#> -1.5073e+00 6.0489e+00 2.0360e+00 1.3820e+00 -2.4588e+00 -1.2680e+00 +#> -4.2967e+00 -9.2732e-01 -3.1906e+00 -7.1769e+00 4.9066e+00 -2.2283e+00 +#> -1.2227e+01 8.5371e+00 -2.6248e+00 2.4014e+00 -5.6499e+00 1.0123e+01 +#> -8.5863e+00 9.5664e+00 -5.9350e+00 -1.6468e+01 1.0391e+01 -1.0179e+01 +#> 3.1844e+00 1.2364e-01 1.3085e+01 -5.8785e-01 2.3136e+00 -9.7390e+00 +#> 4.6407e+00 4.9859e+00 -1.2739e+00 2.1925e-01 2.5094e-01 3.3886e-01 +#> 1.1058e+01 -1.0747e+01 8.3456e+00 -2.2283e+00 4.7523e+00 -7.7536e+00 +#> -1.3527e+01 1.4885e+01 -5.2081e+00 9.1698e+00 -7.9931e+00 -8.4733e-01 +#> -1.9517e+00 -3.4732e+00 5.4558e+00 -2.4234e+00 3.1985e+00 -6.9968e-01 +#> 8.7000e+00 -3.4409e+00 -1.1334e+00 1.4169e+01 -1.9912e+00 -2.1136e+00 +#> -7.1622e+00 1.2549e+00 4.2199e-01 3.1096e+00 2.5984e+00 4.3691e+00 +#> 3.0234e+00 
-2.0207e+00 1.4672e+01 -1.1174e+01 5.1304e+00 7.1506e+00 +#> -1.1437e+01 3.4149e+00 7.5767e+00 5.9337e+00 -1.2918e+01 3.3527e+00 +#> 1.2484e+00 4.7930e-01 -7.1958e+00 2.7164e+00 2.0207e+00 7.7346e+00 +#> 1.3582e+00 2.8437e+00 -7.3608e-01 2.2923e+00 -7.7159e+00 -3.2784e+00 +#> -8.8959e+00 -3.6378e+00 -2.6879e+00 5.3815e-01 8.6462e+00 1.2901e+00 +#> -1.0886e+01 -9.1296e-01 -1.3696e+00 4.0285e+00 1.1338e+01 -3.0686e+00 +#> -7.4435e+00 -1.6110e+00 -6.7849e+00 -3.5437e+00 6.4861e+00 3.2415e+00 +#> 2.0525e+00 -1.4578e+00 -2.7559e+00 5.7021e+00 -9.3438e+00 9.2374e+00 +#> -2.7167e+00 1.1972e+01 1.5159e+00 8.5275e+00 5.1403e+00 -4.5702e+00 +#> 8.4622e+00 -1.0705e+00 -2.7975e+00 1.3748e-01 6.0727e+00 9.2312e+00 +#> -1.2966e+00 1.4843e+00 1.3079e+01 -4.8325e+00 -2.5283e+00 -1.0658e+01 +#> 9.5890e-01 8.5580e+00 8.0375e-01 6.0536e+00 -2.3652e+00 1.9715e+00 +#> -8.4933e+00 -6.1661e+00 -2.8968e+00 8.8597e+00 -7.1364e+00 2.2233e+00 +#> 6.3513e+00 -1.3840e+01 9.8739e+00 -1.0290e+01 1.4279e+01 -6.2812e+00 +#> -1.7568e+01 5.2109e+00 -1.6603e+00 5.6085e+00 -3.3452e+00 -2.7954e+00 +#> +#> Columns 43 to 48 -1.4316e+01 -1.7188e+00 3.2444e-01 8.9797e+00 -3.2084e+00 7.8822e+00 +#> 3.1460e+00 -6.4460e+00 5.2454e+00 -1.9715e+00 1.6432e+00 1.1744e+01 +#> 8.6926e+00 -2.2753e+00 -4.4450e-01 -3.1107e+00 2.9401e+00 3.6147e+00 +#> 6.4619e+00 3.0811e+00 -9.3294e+00 3.1618e-02 -5.4628e-01 -1.5545e+01 +#> 3.2211e+00 2.7079e+00 -1.7153e+00 6.0905e-01 9.2878e+00 7.0190e+00 +#> -1.1072e+01 -1.9385e+00 1.8828e+00 1.5127e+00 2.6904e+00 -1.7966e+00 +#> -9.0222e+00 9.6423e-01 5.8570e+00 2.7499e+00 -3.3381e+00 -9.7036e+00 +#> -1.7245e+00 -4.4181e-01 -3.4086e-01 6.2400e+00 1.5419e-01 1.2574e+01 +#> 9.0976e+00 -3.2707e+00 -1.0903e+00 -3.3040e+00 9.1338e+00 -1.2215e+01 +#> -3.1234e+00 -2.8812e-01 -2.1735e+00 -1.0646e+00 -1.2786e+01 -1.5235e+01 +#> 6.6138e+00 -4.3299e+00 8.6403e+00 9.3438e-01 -1.1749e+01 1.8446e+00 +#> 6.2344e-01 9.8338e-01 8.2563e+00 -8.2448e+00 -4.5447e+00 6.5609e+00 +#> -1.4375e+00 5.1289e+00 
5.4863e+00 4.6983e+00 -5.3126e+00 6.0145e+00 +#> 5.5840e+00 -1.5232e-01 1.0648e+01 -8.1426e+00 9.6469e+00 7.1755e+00 +#> -3.7829e+00 3.3464e+00 1.6952e+01 4.4535e+00 -2.9354e+00 1.5281e+01 +#> -1.0402e+01 -3.4845e+00 1.1573e+01 2.1074e+00 -8.8245e+00 1.1603e+01 +#> -1.1021e-01 -6.7476e+00 5.0775e+00 1.6184e+01 -3.9077e+00 8.0282e+00 +#> -6.1079e+00 -4.1911e+00 1.3648e+01 6.0187e+00 -7.1858e+00 9.6539e+00 +#> 1.7198e+00 -1.1251e+01 3.8847e+00 1.0030e+01 -7.0925e+00 -4.5950e+00 +#> -8.4614e+00 -9.8298e-01 4.4841e+00 2.8005e+00 -7.3674e+00 1.0545e+01 +#> 5.7187e+00 2.9945e+00 -5.2711e+00 -3.7479e+00 1.5054e+01 -8.7987e+00 +#> -5.5156e+00 1.2471e+00 -4.1211e+00 3.1218e+00 -6.8213e+00 3.6497e+00 +#> -6.2814e+00 6.7489e-01 4.7816e-01 5.8086e+00 -7.7224e+00 -1.1090e+01 +#> -2.7705e+00 -5.7763e+00 4.7694e+00 -1.5708e+00 -9.1193e+00 -1.5565e+01 +#> 9.1999e+00 -3.0123e+00 -2.4817e-01 6.1060e+00 4.1478e+00 1.1212e+00 +#> -2.9603e+00 -2.1796e+00 -2.0766e-01 7.8280e+00 3.1220e+00 1.4492e+01 +#> 3.6359e+00 -2.0703e+00 4.0872e+00 4.5950e+00 4.9332e+00 -5.6241e+00 +#> 5.1562e+00 -8.1316e+00 4.7583e+00 1.3220e+01 -5.4718e+00 -3.3530e+00 +#> 4.5500e-01 2.9386e-01 7.0324e+00 1.1036e+01 7.7246e+00 -4.6796e+00 +#> -8.1972e+00 4.4841e+00 4.2891e+00 5.8185e+00 -2.5228e+00 5.5887e-01 +#> -2.3574e+00 1.5458e+00 3.8627e+00 1.2930e+00 -2.1494e+00 -1.3772e+01 +#> 3.3417e+00 -4.2553e+00 -1.0949e+00 5.9260e+00 -1.1277e+01 4.3272e+00 +#> -3.4267e+00 9.3572e-01 -1.9961e+00 -4.4621e+00 -5.6221e+00 -1.7291e+01 +#> +#> (3,.,.) 
= +#> Columns 1 to 8 4.9562 -3.7582 5.3943 -3.9742 -5.9963 -12.7356 -12.3488 -0.0281 +#> 11.2481 1.8273 4.5276 3.2045 -7.3007 4.8452 -2.3510 2.6769 +#> -4.4985 -1.0768 8.5073 -11.6589 4.7996 16.7321 -3.0965 -0.3856 +#> 0.6945 -3.6676 6.7533 -4.5750 4.0977 13.4447 -10.3970 12.6614 +#> -5.4403 -5.4518 1.6155 -8.4963 -5.9472 14.6116 2.3946 -1.3451 +#> -5.3157 -2.3617 2.7643 -2.6659 2.6909 -6.9647 -15.1995 5.5772 +#> 3.6503 5.7380 -0.8403 0.6548 -1.7302 -10.1615 5.2492 8.6817 +#> 7.2741 2.6035 -3.6202 -9.2834 -5.8462 8.7866 -2.8505 -1.4817 +#> -19.9435 -4.4367 -1.1491 11.0581 -7.7709 -0.3405 0.1401 9.1089 +#> 3.6538 -6.3743 -1.6704 -4.4341 11.5087 7.0363 1.0560 -3.5037 +#> 15.8845 -1.4452 2.1056 -6.1563 -17.7140 8.9579 -12.1511 0.6144 +#> 2.2474 -4.6580 0.3898 -1.0848 -1.7472 -14.7547 13.3952 -8.3100 +#> 11.9129 0.8427 9.8857 5.7461 3.1181 -4.4163 -0.3108 -5.2917 +#> -6.2343 1.1551 4.5635 11.1697 2.7615 2.6141 -7.0863 4.4991 +#> 15.9765 1.4206 5.0062 -2.9111 1.0533 6.9639 -11.3675 11.6931 +#> 1.9163 4.4746 -2.2195 -6.6301 4.6677 12.5423 4.5925 -1.8943 +#> 6.9265 8.7154 4.7434 -9.8159 -4.0058 1.6135 3.6586 -20.6469 +#> 8.3534 3.6571 -8.5923 -1.9266 18.2730 3.6534 -1.9278 3.3406 +#> -0.0040 10.9221 4.4209 -6.5203 6.8133 10.0404 0.2141 -2.6464 +#> 7.6266 -0.4131 -13.4900 -2.6130 5.9332 -1.1798 -11.2315 5.4657 +#> -0.3001 3.4722 -2.9582 -10.0573 4.1344 -1.6925 0.8711 14.5282 +#> 1.7112 4.8549 14.1942 2.7680 0.3115 -1.2966 5.9259 0.5383 +#> 0.7748 2.5614 -1.0904 -2.4082 9.1032 -11.0332 -3.9074 1.4275 +#> 2.6380 -1.4095 8.9158 10.1162 18.3666 3.2091 -6.9841 -6.9002 +#> 12.0612 3.2341 -4.0510 -10.9244 -1.5035 4.1921 -1.7979 1.8640 +#> -3.7546 6.1095 -7.6240 -7.1370 1.2163 -7.0363 -6.3017 0.9980 +#> -1.8646 7.0013 12.8986 3.9962 4.5674 16.9755 -0.0370 -3.8269 +#> 2.7722 -7.3149 5.8819 4.5695 14.4691 5.7079 -11.5897 -10.8310 +#> 8.9845 13.7087 -0.9190 2.1477 2.1493 9.3396 7.7154 12.4460 +#> 4.9266 -4.6779 -1.5574 12.9128 -12.8062 -12.4111 -8.2159 -14.7549 +#> -4.2986 -0.1995 
-10.0720 -9.3908 4.2700 -8.7040 6.4533 -11.1542 +#> -0.6362 -0.8948 -10.9689 8.4188 0.0568 -4.8182 3.3533 -5.8279 +#> 2.6255 -7.6898 -7.6644 5.9319 -8.0142 -11.2382 -11.3271 -0.3779 +#> +#> Columns 9 to 16 -9.7360 -6.4222 -5.7229 7.4474 -6.9485 -18.3427 -2.7103 3.7782 +#> 1.2750 10.4092 0.1234 9.9826 -7.2135 -5.7552 -1.9998 -1.9475 +#> 1.5161 -2.5462 -1.0372 2.6637 5.9223 6.5293 -17.3755 10.4248 +#> -11.9943 1.9396 -10.8378 -27.0799 24.8994 3.4319 0.5319 -13.5652 +#> -2.1803 -3.0364 -3.0484 -11.1355 1.9524 -6.2876 -0.3480 6.6803 +#> -12.3258 -14.2578 -11.3267 -2.5336 -18.2894 -4.2594 -1.9660 12.5184 +#> -6.8200 -8.1209 -9.5000 1.3338 1.5568 -5.8006 14.6046 -7.5704 +#> 4.2130 3.7024 -1.5168 2.1240 -12.3614 -7.4998 -5.9788 -5.3655 +#> -1.4886 -14.2426 8.8877 3.6402 -13.4771 7.7371 -8.9763 -0.4561 +#> -13.1842 -2.8242 -4.3459 -3.8975 -7.4657 8.8410 4.0576 3.8440 +#> 7.6390 3.3561 0.9564 10.3876 -6.1527 -1.8119 1.3460 5.9928 +#> 8.8839 -0.3631 -3.6710 0.2649 4.8643 -5.2106 14.0710 -2.3089 +#> -4.5459 2.2723 -1.4164 -5.1818 11.6005 6.0302 -0.6840 -1.0466 +#> -2.4372 -3.9876 -7.3022 16.0805 28.7031 -6.6200 1.3126 15.1169 +#> -8.7492 -17.0470 -3.5868 9.9832 -13.2698 14.2807 7.7293 -0.8965 +#> -8.0431 6.6999 -8.5087 -17.4936 15.0480 -1.8982 -5.8062 0.4042 +#> -1.0442 1.5605 -3.1818 -10.4155 1.5044 4.2108 -10.3067 2.6241 +#> -14.2217 8.4671 -2.2625 -13.4366 4.9955 11.3371 3.7654 -5.4279 +#> 5.1664 0.4539 11.9847 -5.2858 7.8836 -3.6860 -7.9037 4.0934 +#> 4.9691 2.4076 9.0446 0.2401 -17.3001 13.3294 -13.2066 4.6766 +#> 10.6484 -4.1710 -4.8643 8.5792 0.4483 -4.1667 2.6186 2.1822 +#> -4.8166 4.1797 -2.3908 14.4425 -8.0795 2.8944 2.9143 0.6577 +#> -5.5388 -2.0295 -0.4797 5.0858 -11.7329 4.8734 8.9627 2.9415 +#> -9.7544 -5.7999 -0.3570 -1.3464 4.3133 23.6557 -4.5242 7.1379 +#> 3.3175 -1.3137 -7.1024 9.0726 -9.1605 4.0476 -3.9804 4.8644 +#> 1.4142 -4.5650 -6.4153 3.6048 -13.3332 -6.1364 -11.7781 1.3500 +#> -1.1517 -4.6129 -19.7663 -6.7512 17.9404 -8.0486 4.8501 1.3591 +#> -5.8542 
0.8734 3.2384 -1.5739 21.9474 -3.7593 -0.9735 12.2523 +#> -2.3665 -7.0931 0.6589 16.3679 -4.9314 10.7504 9.8569 0.4346 +#> -5.0901 -1.9335 -2.1915 7.1066 -6.8045 -7.4043 3.2150 4.0988 +#> -8.1054 -8.8176 -11.3086 -7.5997 1.9523 2.5381 7.7193 -4.2398 +#> 3.0093 5.9653 4.1393 0.3365 -1.2716 11.9673 -15.8381 -0.9102 +#> -1.5938 -3.4779 10.4909 12.1245 -15.0920 4.7935 3.1971 2.9640 +#> +#> Columns 17 to 24 -7.8446 10.8300 3.1495 -6.4143 -4.9418 1.5859 5.7119 4.1122 +#> -8.2818 -2.5108 0.3972 5.5391 2.6315 5.6176 4.3199 -1.2731 +#> -3.7133 -7.0713 -9.7593 3.5059 4.4236 4.6572 -5.9136 6.1746 +#> -11.5718 3.8795 -2.3085 -3.9470 1.1945 -1.6105 -1.4233 4.4411 +#> 0.0810 6.6555 3.7000 14.6576 11.7133 10.7315 2.1382 -2.0174 +#> 11.3570 3.8331 2.8234 12.0460 5.1824 -8.6710 -6.2794 0.8210 +#> 5.8555 3.8700 1.9546 -6.2020 -2.1532 -0.5159 3.9648 7.4020 +#> 5.3760 -0.2685 7.1646 -0.2744 3.5396 5.4370 -1.4105 -15.6596 +#> 11.7633 -10.5656 6.0806 0.3545 2.7078 0.2011 2.5850 -2.2982 +#> -1.9521 11.4095 9.7509 -2.7458 -1.9989 1.4426 3.1751 7.4327 +#> -0.7683 -11.2969 0.6807 1.4418 1.1541 1.8743 -6.6355 -2.5485 +#> 2.3949 11.4733 -4.1730 1.7488 -2.7655 1.3965 2.2042 -7.7285 +#> 2.0386 -3.8169 -5.1341 2.4189 -0.9858 -1.9761 -2.5394 -2.1106 +#> 2.8564 13.8817 3.5312 7.7483 2.4548 3.4022 -6.1050 14.8377 +#> 5.4632 -4.0690 8.5569 -0.5126 13.3378 2.3015 1.6362 4.0142 +#> -8.9410 1.1707 1.4379 -3.3880 1.1356 -3.8432 -0.2028 14.7388 +#> 9.5698 0.8845 3.6077 1.9991 5.9551 4.0725 -12.3633 -3.3834 +#> 5.4765 -10.8793 -1.3330 -0.4239 -0.2444 -4.3707 4.3048 -2.0963 +#> 7.7052 0.6048 -3.1704 1.7206 -6.3772 -2.8244 -1.9311 7.0662 +#> 13.3648 -1.6975 12.1434 5.2608 3.4101 -5.8776 -9.4342 -11.3617 +#> 4.4433 5.9981 -0.4610 3.9429 2.5325 -6.9121 -0.0131 -6.9599 +#> 4.6063 -0.7668 2.4319 -10.1652 -6.0697 -2.6817 -2.0906 -2.6583 +#> 2.4389 2.1851 -2.5737 4.2586 -0.7670 -10.1724 6.2627 7.3546 +#> 9.2102 12.2677 0.6505 0.8508 8.8404 -2.4235 -0.3184 13.9138 +#> 0.4299 -7.7316 -4.0262 -1.0616 -1.8941 -4.5296 
+#> ... [output truncated: large tensor value dump, slices (4,.,.) through (6,.,.), columns 1 to 48, omitted for brevity] ...
4.1629 -2.7187 -8.9195 +#> 6.0152 -4.7058 18.0248 13.2227 -1.2188 3.7254 -2.6734 -2.7519 +#> 1.6365 -4.4861 -5.3771 0.5365 2.0207 8.5161 -0.0189 4.6858 +#> 9.8376 0.5626 0.3699 2.3950 7.1727 8.5187 7.6857 6.0384 +#> -8.6334 -9.1464 8.0587 -0.2189 7.0526 4.0767 7.4329 5.8209 +#> -0.0165 -5.7914 6.4489 7.2829 13.7562 -5.0777 5.1799 -10.5214 +#> -3.0783 7.1251 -10.0074 -2.4632 -1.2107 14.3788 -4.4379 4.4232 +#> -1.1358 -1.6174 4.0920 16.6688 12.7050 -5.0852 -0.8548 -8.4431 +#> -7.3369 3.8981 1.5434 2.7106 8.9853 0.2032 0.9945 11.2380 +#> 5.3462 -8.5643 -4.6187 9.0935 -6.7361 -5.9773 0.6195 -10.5092 +#> 2.4468 -1.5816 -0.6258 3.5935 -4.3062 0.5268 -13.2230 0.6924 +#> +#> Columns 33 to 40 -7.6027 -2.2487 2.1900 11.9675 -3.3487 -9.6965 -2.9535 -0.7557 +#> -3.2157 4.8165 4.5860 -0.2468 -2.9938 -6.0932 -0.5876 13.6491 +#> 4.5272 -2.9818 -2.8203 6.5853 -5.0184 1.7838 0.4984 -1.9432 +#> 1.1660 -3.9935 -1.8290 5.3999 6.6793 2.8560 4.8642 6.5097 +#> 10.5755 3.2100 -3.6359 0.4694 -0.8818 -11.4375 -4.1460 -10.3099 +#> -5.6870 6.3213 8.4095 4.9459 -11.2726 -14.5157 -7.3464 -8.7629 +#> -8.5790 5.5107 6.3126 -5.6498 6.9198 -8.4822 -7.7633 -6.6192 +#> 7.5044 13.3505 9.8939 -4.0529 -5.3569 -12.1263 -5.1392 -12.6179 +#> 0.2750 -8.4532 12.4254 1.4368 5.1401 -3.1507 -4.9469 -3.5927 +#> -8.8588 1.4924 -2.4861 -10.4314 0.3319 5.8148 -4.7300 9.7827 +#> 5.0628 8.7692 -2.4614 2.7285 3.5396 0.4264 5.5486 -0.9388 +#> 7.9110 -0.0486 4.8019 -7.9210 0.9448 3.8109 -2.0076 -0.4768 +#> 4.4348 11.1488 -8.0820 5.5546 -1.5815 2.9275 6.0719 3.2779 +#> -2.9327 -8.0244 -2.7404 2.2129 -4.0796 -3.0339 3.3700 -13.2439 +#> 11.6239 -6.7035 -7.9736 4.2794 3.0072 -5.7825 -10.7079 2.9405 +#> -8.3416 -8.4063 0.1600 20.7802 -0.7024 -1.7364 3.3059 7.9594 +#> 3.2505 0.9716 -5.6477 3.5167 -1.7800 -9.7658 -6.5191 -3.1310 +#> -7.3276 -1.4290 7.9197 -0.0182 -0.8833 1.8832 8.2552 4.7918 +#> 3.2498 -2.9161 12.6821 6.7317 -6.3797 7.1543 7.1523 -0.3207 +#> -8.8900 7.2855 -0.7633 5.5332 4.4910 0.5948 4.8468 -6.2392 +#> 
-0.6436 4.6097 9.0892 -13.1945 4.9483 1.6102 -2.2883 -5.4042 +#> 1.6717 -2.0507 -4.9608 12.4468 -1.3210 4.8892 -0.3961 2.6101 +#> -0.7270 -17.2108 -5.1492 6.3074 11.6628 1.2956 4.2730 -0.8359 +#> -5.0182 -5.8871 8.5032 -2.6112 -6.5407 16.0788 -3.7708 8.1343 +#> 3.7324 -8.7810 -2.2099 2.1963 -2.1305 11.4818 -1.9305 0.0411 +#> -5.2839 -3.4533 1.1692 0.5344 -2.7929 -5.6060 -5.2001 -3.6439 +#> -2.6511 4.7609 19.7984 -9.0056 -6.3521 -8.7703 -17.5731 3.9446 +#> -9.1425 2.8850 -4.3467 -1.8818 3.7963 6.1533 3.1709 2.0593 +#> -4.9947 -8.8462 1.5875 8.7470 -1.5148 -0.5422 3.0933 -0.3313 +#> -10.4418 15.3400 -3.9069 1.0759 1.3927 -11.4530 -5.4137 1.2727 +#> -9.3344 -12.7449 22.2623 -4.2399 2.1708 -1.2364 -6.7290 -2.2009 +#> 4.0125 6.1803 -5.5008 4.6972 -5.7827 2.0033 4.9381 -7.6543 +#> -12.5544 -0.4399 9.3944 -5.7592 5.7587 11.4921 2.9829 7.2321 +#> +#> Columns 41 to 48 -5.2301 -8.3919 0.4745 12.1680 8.2690 0.7700 -1.3986 -15.0949 +#> -3.8473 5.5070 8.4038 13.3590 0.2387 -7.7313 -1.4964 -15.2668 +#> -0.8133 0.6185 8.4404 1.8471 -7.6965 9.7787 11.1924 0.3181 +#> 13.5075 5.0517 -10.7942 -10.8533 0.4656 -0.7475 7.3318 0.0555 +#> -9.7146 -4.7262 7.8232 2.9174 -0.7198 -1.2243 5.9117 -0.7655 +#> -5.3252 9.9646 10.1692 13.3378 3.5901 1.3333 -4.9229 -6.5326 +#> 2.8730 -7.6642 -10.3643 -0.7128 11.7228 2.5994 -8.3758 -8.6394 +#> -23.6110 -2.7063 6.4992 -3.1165 -1.9089 -10.5941 -7.0350 -4.2015 +#> -7.3903 -3.7530 -0.4041 14.3706 10.5003 6.9389 4.9652 4.0909 +#> 13.5233 16.0055 10.8386 2.8506 -2.9455 -4.1969 -7.9186 6.8712 +#> -12.5978 2.9476 19.6807 1.1032 -4.8529 2.4183 -1.7873 -10.5665 +#> 7.4521 5.0772 -0.3968 -5.8041 3.3329 -11.9797 -6.1260 3.4263 +#> 3.5739 -2.4945 1.2146 -18.5102 2.0377 0.5358 3.7605 -8.1046 +#> -5.1213 -5.6798 -18.7280 18.6834 3.4769 10.0587 9.7840 1.1688 +#> -5.1982 2.3021 -9.6020 1.8456 14.4900 -4.3919 -5.9182 -10.1780 +#> 1.4465 -3.4884 0.3757 15.2833 -4.7847 -6.8593 0.2916 -1.5028 +#> -10.1953 -5.9133 -0.5997 -6.4112 6.6287 3.9975 -8.3894 -9.4420 +#> 11.0555 
12.4552 -4.2484 -8.0617 -8.0408 -0.1720 -0.0028 -3.1412 +#> -2.9399 4.6805 12.8330 6.3452 -5.2365 -4.5358 -8.0995 2.9361 +#> -11.0020 5.1860 -9.4381 1.0695 -4.4030 5.9915 -11.9550 5.6597 +#> -1.6070 4.5199 6.4365 -12.4429 -1.7472 -3.1261 4.8034 6.6773 +#> -4.5037 -9.0047 -2.9427 4.7468 -3.2937 2.3801 5.7475 0.3027 +#> 10.0648 -1.4978 -4.0922 2.3945 -2.6983 4.7756 -5.0650 -9.1463 +#> 4.9168 2.3982 -0.2161 -12.9496 2.4841 20.3679 -3.3443 3.2126 +#> -1.2465 0.5778 5.6310 -9.3660 -2.0670 3.2506 -6.5215 -5.0134 +#> -13.8826 -3.2358 -5.5231 10.6802 1.2777 2.4405 2.5325 -0.5911 +#> -3.6412 -2.0257 10.3329 -9.5415 15.9104 -2.6404 -5.0439 -11.1058 +#> 6.9971 2.9543 1.2786 -5.9788 -1.6136 3.2090 0.8348 -4.1627 +#> -8.3324 -14.0545 -3.3994 3.9126 0.4258 4.7417 -1.5262 -6.5741 +#> -3.9503 -8.7550 10.9331 7.1293 -0.4960 -5.8905 -4.1936 -7.0265 +#> 1.8810 -5.1962 -6.0232 -0.7553 3.0264 -6.1715 -10.1147 -6.3372 +#> -4.4186 -1.0833 -7.4865 -0.2799 -8.6644 8.1144 3.3730 0.9093 +#> 5.1813 12.1565 5.8088 7.9778 2.0510 -3.9625 -5.2565 -4.9910 +#> +#> (7,.,.) 
= +#> Columns 1 to 8 -12.5637 -2.5921 7.1782 7.8186 12.1642 -5.9426 -14.8531 1.3177 +#> 2.1248 -5.9844 -4.4564 -4.5092 6.7187 4.8098 -7.4867 -1.2874 +#> 1.5000 -12.1292 8.3217 -5.3953 -9.2065 -5.9762 -2.2805 7.4073 +#> 0.2294 -16.4882 3.5766 2.2640 -5.6411 11.0628 -1.8238 1.7027 +#> 7.3974 -2.1059 0.7667 -1.1699 -13.5659 0.0552 -7.4127 -2.4198 +#> -5.3633 -3.2434 9.4996 0.8949 -7.2516 3.3304 -2.1083 6.2119 +#> -6.0371 0.9534 5.7248 0.9747 3.8753 -9.6830 1.2595 4.4478 +#> 4.3532 4.1490 1.4650 5.5049 -6.8201 0.3387 1.8417 -2.3344 +#> 5.4369 -3.8337 -12.7340 3.3148 -13.8978 -11.2822 -6.7416 -6.7818 +#> 6.7436 -0.0121 3.2164 -8.8355 2.4412 8.5234 0.2344 -11.6883 +#> 4.8147 -10.3121 -5.8911 7.6287 -3.2987 -9.7721 2.5174 7.5296 +#> 2.0386 4.1301 -1.2932 6.6238 0.4111 -3.3190 8.2287 -0.0189 +#> 0.6954 1.3338 5.9806 4.5208 14.4822 -2.7630 -2.8924 10.0675 +#> -8.4399 -1.8747 4.0748 -3.2390 -3.1167 11.0646 0.7673 2.3934 +#> 6.4320 -4.3108 4.2115 5.2095 -3.3331 -4.7085 0.3097 10.0811 +#> 1.0512 -2.9661 2.0181 5.1407 3.9240 1.8381 -7.2879 3.1933 +#> 6.5491 -4.1914 -1.2071 4.5877 7.1293 -2.2568 -3.5221 0.0526 +#> 2.4886 2.6609 0.6132 -5.0607 8.6768 0.5913 3.3273 4.5410 +#> -4.4869 4.9565 -0.4279 3.9000 -4.1421 -4.8384 -11.0614 -1.9895 +#> 4.3772 -0.2396 3.2004 3.2212 3.7800 5.5176 -7.6825 -2.1287 +#> -9.9844 3.5585 3.2863 -5.5902 -4.6006 3.6855 8.6799 1.5601 +#> 1.8843 0.4628 1.8192 0.8578 2.7260 -5.9450 3.4451 -7.6032 +#> 6.0221 -3.7656 -3.4224 -1.3820 9.8885 2.1165 5.3021 7.6218 +#> 7.4587 -8.7724 4.6601 3.4141 12.8465 3.5597 5.6033 5.5195 +#> -7.8602 4.5715 0.3401 -4.5892 -0.1992 -5.6559 6.1851 7.1525 +#> -2.7145 0.3003 8.5194 -4.0813 -7.0842 -0.4283 -10.4691 -2.9705 +#> 1.7503 -1.6545 7.4607 6.6784 -1.7863 -1.5975 7.0052 3.1186 +#> 7.2097 4.8913 1.6908 -8.0299 7.3147 13.8606 -6.3935 -0.7843 +#> -6.8344 8.7484 3.1693 3.3031 3.8666 -3.8317 4.2813 1.1445 +#> 5.0369 3.8190 -10.9977 3.7152 14.4262 3.8348 -17.4962 -0.2185 +#> -2.1229 -1.1563 3.0345 1.8111 -3.2802 -11.0495 3.9942 
0.6429 +#> 7.0119 3.9208 -0.6389 -1.0716 5.4505 14.2304 -1.4294 9.7966 +#> -3.1686 -0.6396 -6.7988 0.9020 4.8284 8.9844 6.6045 -1.4290 +#> +#> Columns 9 to 16 1.3615 6.7430 -9.0831 2.4756 3.7019 4.5959 -9.2389 2.4258 +#> -6.5283 4.2182 -14.7588 6.0469 -13.5433 0.6464 -14.0126 -0.1585 +#> -0.4165 -2.1957 9.4785 16.3252 -11.8770 -4.1678 -7.0924 3.0644 +#> 5.4771 11.0229 -6.5639 -17.6075 -7.7505 -4.6961 8.8753 6.9847 +#> -4.9094 3.5675 -1.0285 6.8008 -0.9435 -2.4753 -5.8107 -11.1148 +#> -1.3169 3.8941 6.4620 0.5569 -11.8652 1.5728 -10.2236 -9.3686 +#> 3.9423 -0.3688 5.2345 -9.8630 -3.5721 10.8983 -11.1943 -4.6832 +#> 0.7959 -1.5588 -8.7820 3.3694 -5.9139 6.2685 1.9696 -11.8555 +#> -0.0112 -8.3372 13.6339 3.5996 -6.6341 -3.1870 -2.4487 -17.8097 +#> -3.1110 -0.1339 -5.4582 3.0029 5.9676 1.6402 4.2996 -1.7442 +#> 3.0221 -7.6143 2.1927 12.5149 -17.3180 -4.4666 -6.2207 -5.8665 +#> -1.7066 -5.6932 -2.6972 -9.1662 10.9101 0.1779 2.6959 -7.8873 +#> -5.7650 2.6454 2.5285 -7.5337 1.6641 5.1793 -4.0750 9.8508 +#> 4.6218 -4.3656 -16.7795 -0.9792 -1.7471 -0.4700 0.4574 4.0469 +#> -0.6444 -3.1186 1.3270 -7.1979 -10.8263 2.7043 -9.5467 2.8626 +#> 0.1377 -1.6119 -4.6476 -1.0249 4.4360 -10.2520 -0.8962 7.3968 +#> 5.6244 -4.7285 1.2343 -6.5620 0.1256 9.5666 1.6124 -5.8117 +#> 9.5864 -12.8068 6.7174 -0.9492 -11.1470 2.9382 2.7348 7.2478 +#> -1.3516 -9.1671 -9.7949 4.4277 4.5211 -10.2511 -1.0271 -3.8912 +#> -3.4992 -9.1626 2.4570 -2.6582 -6.1909 16.2682 -2.3738 2.5297 +#> -4.0920 -8.7836 -0.3173 1.5961 -15.0735 2.7017 6.4111 -4.5299 +#> 0.8594 -4.6585 1.7853 3.0959 4.5689 1.6610 6.1223 1.7822 +#> 11.7365 -2.0738 6.6759 -0.0544 -4.5379 -2.9902 6.5058 -0.1362 +#> 1.2749 7.0817 3.4731 -4.2940 -1.4348 -4.4507 6.7489 -5.4719 +#> 4.1983 -13.3311 -3.0335 4.3130 -6.7434 -3.9381 7.5854 2.0121 +#> -1.6503 -2.0314 -6.4046 6.1112 3.9059 6.7474 3.3489 7.1325 +#> 5.0405 4.1070 -10.8778 -5.6209 -0.6011 -14.6451 -2.2682 -9.9665 +#> -0.4697 -4.2318 -14.3726 11.9567 7.8060 4.3527 10.3033 2.6359 +#> 4.1752 
-10.0265 -8.4880 -1.4998 -6.4839 3.2508 4.9882 0.2567 +#> -7.8847 0.6969 0.6589 9.8041 0.4428 12.1619 -8.5297 -7.3596 +#> 13.3403 1.4642 -7.9892 -7.5376 4.4967 7.2595 1.9894 -8.7621 +#> -3.1823 8.1989 -1.4497 0.5435 -1.0498 1.2208 0.1718 4.7092 +#> -3.5594 -1.3115 -3.3914 2.5498 -1.9613 1.6140 -5.5963 -1.0378 +#> +#> Columns 17 to 24 -6.1352 0.7855 5.2496 -10.7202 1.1006 6.4437 3.0041 2.3786 +#> -11.1167 10.4239 0.2350 8.9435 -3.6779 4.8471 0.7140 3.8333 +#> 9.4847 0.8165 -3.8158 6.3221 -10.7197 0.8852 -5.6699 9.1540 +#> 4.0945 -3.8234 -0.7001 -1.1315 0.4381 -3.8137 4.0863 6.7160 +#> 1.1146 -8.9878 2.3985 -0.9147 -0.8046 2.3567 -10.0703 -2.1854 +#> 1.8469 -2.6686 -8.5182 -5.4251 6.5746 -7.8745 3.0007 -3.6743 +#> -4.1822 -9.5143 -4.3121 7.2566 2.6041 0.5658 -2.4030 0.4245 +#> -11.7281 -0.9243 5.2365 -2.4933 0.3209 -1.5823 -7.3054 0.5439 +#> 1.9351 5.7115 -14.8850 5.1515 -9.2994 0.5744 -10.7098 2.5380 +#> 3.1209 -1.7344 6.0623 -0.9722 1.9104 -4.2705 -4.6320 2.4032 +#> 12.0585 4.4151 4.1575 -0.8302 -9.3978 3.5050 -6.6041 20.1393 +#> 5.0416 -3.7321 -0.9123 1.5950 8.2849 1.2795 3.4207 -7.8130 +#> 1.9756 -6.4791 7.8072 2.6787 4.0306 -9.3902 5.1214 1.6635 +#> -4.7427 9.7381 -14.8424 12.1369 -2.9461 -2.3381 -12.2591 -4.3829 +#> -6.2556 1.9847 -1.4314 6.6051 2.5084 -5.3805 -4.3812 0.1995 +#> -0.7258 -1.5201 -5.3685 5.4863 1.4409 -1.3454 -0.1501 -5.4024 +#> -4.7067 -4.7743 2.3876 2.4734 2.3301 -0.7725 -2.1543 -1.3893 +#> 1.5617 4.9953 -10.5440 10.1063 7.0310 -3.7266 -2.5219 -3.8014 +#> -3.4102 11.2320 -3.4485 3.0704 -6.8103 5.8674 -6.1492 3.8883 +#> -8.3267 1.2774 -1.3571 1.9461 1.9410 -9.3266 -4.4550 -0.7253 +#> 8.4757 -2.4299 -3.2148 4.9527 4.3251 7.9487 -3.1187 0.4195 +#> -1.0336 -5.3404 6.4247 -7.5989 1.9064 -6.2831 1.7634 -6.0175 +#> 10.4051 -10.9130 6.5344 -3.2940 1.8782 -6.1374 9.9976 -2.0857 +#> 1.6580 5.6035 1.0687 8.1476 -8.0305 -10.1498 -7.4877 0.9982 +#> 1.5491 5.0754 0.2085 -4.2722 6.8387 10.7773 5.8851 -0.3984 +#> -4.5910 4.0407 -3.8389 -8.2975 5.9267 1.9849 
8.1880 -9.6735 +#> 4.7822 -11.5824 0.7062 5.2624 8.5684 -6.3034 -4.5508 0.3868 +#> -2.6230 9.7855 1.0649 7.0056 -4.0832 -7.7116 -5.7604 2.8494 +#> -5.7510 -1.5148 1.2035 2.5692 2.1899 10.1006 -14.8982 -4.4679 +#> -9.8924 1.5419 14.2471 -0.8770 -0.2646 -5.9197 -6.6279 6.6351 +#> -6.1204 -2.9526 -3.7854 -0.1987 0.0084 -0.0103 -6.4451 -8.6840 +#> -4.4767 -1.5587 -0.5513 -7.6515 9.3017 -6.6286 6.1258 -8.2799 +#> 2.5453 10.2133 1.3841 -2.8850 2.5809 1.0825 1.8629 9.2428 +#> +#> Columns 25 to 32 2.0304 6.7152 -4.7547 12.5145 -4.5016 -4.0179 -1.2330 9.1559 +#> 14.2909 1.4769 0.6636 8.6500 -2.1221 1.7183 -5.6604 9.8726 +#> -2.7097 4.5264 4.3187 -7.1046 -0.9723 5.9788 -5.5353 -1.4749 +#> -8.6412 7.7463 -5.2986 16.9666 14.3188 21.4422 3.3956 1.1343 +#> -7.0317 6.6929 -6.0369 0.6207 -3.9455 1.8005 -0.5599 1.8508 +#> 2.0361 2.7921 -4.3200 -2.3661 2.9078 1.1786 5.4417 -8.1194 +#> -7.8485 -8.0762 3.2615 -5.6470 0.3354 5.4427 -2.8058 -4.3386 +#> 3.7953 3.5336 -4.9533 -5.2637 -10.4107 -12.4107 2.0646 9.5287 +#> -2.7825 -4.6040 9.5309 -2.1843 5.5711 2.9852 -1.9312 -6.5354 +#> -3.6551 -9.9503 -5.3903 3.4260 1.4111 -0.6740 2.9563 -2.8675 +#> -1.4813 3.6377 2.2588 2.0900 5.0809 5.0369 5.1234 9.0396 +#> 5.2403 -6.4151 6.7793 -3.0877 -12.6484 -1.7808 1.7761 -0.5235 +#> 1.3175 5.3689 -0.2347 -7.3404 -6.2206 -0.3964 -2.9847 -12.7738 +#> 0.5948 -2.7842 -9.6191 8.1294 0.9181 9.2779 -5.9096 8.0563 +#> -0.3378 2.3956 4.6235 -0.7978 -5.5830 -0.4862 -0.2478 -1.7941 +#> 0.0686 10.2917 -0.9817 5.5907 3.5988 -1.3564 2.4055 -0.9436 +#> 2.0567 6.2534 -2.2078 -6.1647 0.5606 -2.7191 3.2266 -11.3656 +#> 4.2932 -0.4873 8.3800 -2.3044 1.8511 8.4027 6.6752 -16.2451 +#> -2.3357 -2.6602 7.1631 0.3165 3.9131 -7.8905 -3.5363 -4.4079 +#> 5.4912 -7.6143 -2.0331 -2.5894 3.9076 -18.9411 3.6695 -13.5628 +#> 1.0256 0.8843 -1.2594 -5.4993 1.2796 5.1276 -2.8727 1.2982 +#> 0.8742 6.3731 5.7563 0.4166 -2.4657 -6.1710 5.1675 7.3948 +#> 1.8751 -3.1597 6.6459 5.1926 1.8573 -12.9828 -10.6121 -5.7677 +#> -3.8644 -11.0444 
-4.1877 -8.5450 1.3069 -0.6498 -4.9872 -11.6447 +#> -1.1720 6.4177 5.2251 4.5926 2.8544 5.3088 4.0496 0.8798 +#> 8.5778 2.0765 -3.0492 2.6605 5.9763 -1.0507 12.9023 2.8845 +#> -2.5537 6.7471 -2.3166 0.1698 7.3152 4.9079 -5.6434 -18.8992 +#> -2.4438 -2.8801 -10.5118 4.0220 -6.5804 0.2695 -3.0997 -9.9757 +#> -1.8960 0.7844 5.0251 5.0129 3.5039 -1.9246 -2.0302 11.9824 +#> 3.2717 6.6071 -7.0335 -4.5295 -23.8735 -9.4977 0.9193 2.7063 +#> -2.8423 -12.5636 8.4244 -2.9791 -3.4124 3.0636 1.3172 -1.1011 +#> 9.9325 0.5829 -6.6425 -2.1746 -5.1121 -9.9372 -3.6588 -1.7340 +#> 8.5402 -6.0357 -0.9178 7.4435 2.9151 8.6923 3.0402 3.1449 +#> +#> Columns 33 to 40 -17.0318 -20.0478 -11.5813 -0.8116 0.6021 -9.7061 -5.2541 -7.5142 +#> -10.3421 -3.6284 -8.4016 -2.5618 5.3978 -9.7860 10.5355 -5.2481 +#> -4.6916 1.1132 5.8259 4.2710 -0.0016 8.1836 5.5236 -10.1070 +#> 7.3221 6.5633 -3.7985 3.9131 10.9585 -17.2966 -5.5431 11.2191 +#> -11.6022 -1.6023 -4.5938 -1.9300 -0.4376 -7.2064 -7.8986 -0.3702 +#> -14.3030 2.5388 1.2509 -2.7564 -10.5961 -3.5264 -12.7500 -22.1331 +#> 12.6453 7.9274 -8.0327 -1.6566 5.7491 -5.4175 -3.5723 -1.2638 +#> -8.9598 -3.8361 5.2146 -4.4693 -8.4153 -8.1980 4.2895 -4.7867 +#> -3.0841 1.6341 -0.0632 4.4309 3.3442 1.7426 11.5374 -17.0239 +#> 3.0663 11.0731 -2.8260 -5.2820 5.2010 4.3655 0.5538 1.9090 +#> -3.2084 5.2607 -0.3004 3.4364 2.2088 1.5744 6.6938 -7.3395 +#> -6.4610 5.9028 -1.2363 -2.9217 0.4373 1.8325 -3.0021 3.0616 +#> 2.2177 -4.2643 7.7422 -2.1623 -5.7365 8.1062 -10.1076 5.0764 +#> 0.9173 8.6032 1.4039 4.1027 3.0920 -22.4057 7.6579 4.9080 +#> 0.8167 -10.3278 1.2432 -2.0476 5.4842 -10.7095 2.1405 -3.7050 +#> 1.7803 -22.8740 -1.0902 2.1489 9.6227 -4.6850 -5.7256 11.1705 +#> 14.4816 -9.7424 3.4600 3.2909 -16.3311 -5.0685 -5.5932 9.4876 +#> -5.1677 2.5401 5.2312 -6.5760 3.9057 -9.6523 2.2021 3.4456 +#> -6.3658 -2.7545 17.0195 -9.7690 1.8630 9.1328 3.1055 3.9391 +#> 3.1821 -9.0758 10.9002 -3.9246 -12.8605 1.5466 11.6374 0.1441 +#> -3.7770 18.8824 4.3626 -2.8148 
-4.7249 4.2569 8.2437 -7.0431 +#> 3.5508 -6.7707 0.8483 13.8608 -2.7573 -3.5297 1.8513 2.4026 +#> 7.9127 9.2728 -9.9353 3.2328 3.6214 7.5038 0.4667 9.2028 +#> 8.3492 6.9488 7.7495 -9.1828 -5.8187 11.7171 -7.3384 -3.0282 +#> -5.8212 9.4377 3.9020 0.9103 -9.7031 4.4426 17.3763 0.3510 +#> -3.7819 -11.6569 7.6439 2.5470 -7.9286 -3.5027 0.1709 7.4708 +#> 4.4982 15.0765 -2.6611 -7.6918 -1.7578 -0.8912 -7.4240 2.7192 +#> 5.5335 7.5726 8.4664 -5.9477 -3.5248 11.7820 0.5398 13.5853 +#> -5.4325 -2.7990 3.2807 8.9660 -0.7183 -13.0419 13.5544 2.4505 +#> 0.3033 -9.9993 -11.8875 8.8428 -9.5778 5.9503 3.4623 -12.5212 +#> -3.6057 8.8081 -1.8937 2.5456 -5.3832 -5.6471 1.6094 4.5927 +#> -10.4760 0.6477 3.8404 2.4465 -5.8038 7.6152 -6.2678 -5.6440 +#> -9.8051 10.2489 -6.0445 -3.1732 1.9759 6.9409 6.3362 -10.0141 +#> +#> Columns 41 to 48 -0.5757 -1.3542 7.1357 -2.2870 -2.8012 3.0657 5.0447 -15.8321 +#> 0.1535 0.3076 17.1299 8.4095 -2.6762 5.0797 1.7882 -4.3992 +#> 0.4592 -3.5133 -1.8708 0.8405 1.9575 -8.9528 7.9915 -4.7535 +#> -3.2901 -2.6491 -4.4144 9.6573 -10.8269 10.2468 -8.4412 -1.5340 +#> 0.9673 10.0159 4.7422 9.9742 -0.5255 -8.9552 2.3655 -1.5989 +#> 1.6231 -6.4483 10.5318 -4.9875 1.7577 9.2817 -2.5071 -4.0078 +#> 1.4573 -3.2758 -1.6678 5.4910 12.7946 5.9670 -6.0930 -4.4879 +#> -5.1614 -4.0308 0.7992 -6.0837 -0.3915 -9.4512 -1.2862 -0.5319 +#> 0.4310 -0.9557 -6.4016 2.8146 0.5966 -2.8999 1.7562 5.5143 +#> -5.1265 4.3655 4.9410 14.8872 -2.8013 -9.8312 7.9825 -4.1148 +#> 1.8815 5.4926 -7.3356 2.1816 3.0059 3.2300 6.5676 -2.0128 +#> 1.1780 8.0908 -12.3752 1.3859 6.4601 -2.0588 -7.2472 -5.7050 +#> 5.9115 -0.6530 0.9260 -8.3563 -0.4082 0.7834 -5.9661 -0.5926 +#> 11.0495 11.7859 3.0577 -4.4598 -4.8840 22.0590 12.1955 4.7068 +#> 20.0985 -4.0410 4.3649 10.0583 6.1918 -1.7880 -1.9070 -6.3624 +#> -5.8225 0.6477 0.3729 -0.1409 1.5204 7.7822 0.8015 -3.2746 +#> -2.1589 0.5456 11.5298 11.6881 -2.7155 -5.6491 0.4327 7.7103 +#> 1.0508 -0.0593 8.5939 -11.2689 -8.9750 2.3534 -7.5014 0.4162 +#> 
-2.2612 3.0069 6.5653 -7.6077 13.5167 -8.2681 7.5790 3.1596 +#> 5.2002 -9.1811 12.2834 -23.1594 -3.5273 -1.1709 12.7870 4.4330 +#> -0.0276 3.1669 -2.0981 -4.0901 4.8303 -8.5770 1.6487 8.6810 +#> 7.7900 -6.3951 -12.2464 2.8105 11.5417 14.2515 -2.9096 2.5551 +#> 1.0313 -10.4730 -10.2800 -12.7194 5.4654 -2.9958 4.4049 -11.4623 +#> -4.4022 3.6837 2.7533 -4.6619 -4.6065 -3.2298 21.5926 3.3128 +#> 5.3238 3.2650 1.3261 13.1070 -3.3967 -4.7740 -5.1533 3.8692 +#> 1.0264 -4.9872 7.8718 -5.9630 3.3917 -7.7953 -2.6791 2.2528 +#> -7.6961 2.1876 3.5483 16.0663 -10.7377 17.4609 -0.0595 0.9353 +#> -3.0517 3.1029 19.1180 -5.9878 -10.0726 0.6224 3.0131 4.7465 +#> 9.6907 7.8143 1.0314 12.2337 17.0781 7.3258 8.1910 2.9945 +#> -3.3594 -1.7302 7.7383 5.9785 -6.6533 -7.8406 -8.4122 -0.4406 +#> -2.0650 -9.0354 -5.8994 0.9212 3.4177 -13.0390 -11.4290 -11.9110 +#> -2.7836 -11.7873 -3.5815 -5.7332 -11.7829 -13.6638 8.5906 -4.0360 +#> 5.6294 -4.0216 2.0273 4.5414 -5.5946 3.0184 3.4674 -2.7942 +#> +#> (8,.,.) = +#> Columns 1 to 6 1.6767e+01 7.3242e+00 2.2688e+00 -1.1885e+01 -4.0734e+00 3.3318e+00 +#> 1.1166e+01 1.7645e+01 -6.4430e+00 -4.1363e+00 -2.6064e+00 -1.7742e+00 +#> -5.5551e+00 8.0560e-01 -4.0773e-01 6.0480e+00 2.0295e+01 -3.6027e+00 +#> -1.4078e+00 -2.3412e+01 1.2144e+01 5.2885e+00 7.3681e+00 2.1041e+00 +#> 2.5697e+00 -6.0522e+00 1.5430e+00 -1.5490e+00 4.9428e+00 7.4056e+00 +#> -1.9167e+00 -6.7456e+00 7.2300e+00 1.8173e+00 1.1700e+01 4.0283e+00 +#> -4.1001e+00 -1.3626e+01 7.4635e+00 -2.5361e+00 -1.2605e+01 4.6264e+00 +#> -3.6862e+00 -8.2448e+00 -1.7109e+00 -8.2455e+00 -6.6692e+00 -1.0588e+00 +#> -1.0147e+01 -5.4059e+00 -9.8364e+00 1.0415e+01 1.1184e+00 1.0907e+01 +#> 6.4281e+00 6.9009e+00 1.4842e+01 -2.3334e+00 -2.5984e+00 -9.1516e+00 +#> 6.5785e+00 -3.3714e+00 -1.7497e+01 4.7739e+00 7.6406e+00 1.7642e+01 +#> -5.3044e+00 2.2508e+00 -1.0266e+01 4.9423e+00 -1.1276e+01 5.0438e+00 +#> 5.4556e+00 4.8300e+00 -1.0827e+01 -8.4740e+00 2.4237e+00 2.5224e-02 +#> -1.2140e+01 4.3315e+00 1.0711e+01 
1.6573e+01 7.7428e-01 -8.4217e+00 +#> 4.2287e+00 -1.9173e+00 -5.8150e+00 -7.3568e+00 1.4070e+01 -4.5091e+00 +#> 5.3989e+00 6.2967e+00 2.2752e+00 -7.8721e+00 3.6903e+00 -4.5436e+00 +#> -3.9085e+00 -6.5628e+00 6.9691e+00 -1.7535e+01 2.0640e+00 2.8361e+00 +#> -1.1818e+00 -1.7437e+00 -7.5470e+00 1.0356e+00 3.6821e+00 -9.2207e+00 +#> 4.5794e+00 -5.4332e-01 -1.0079e+01 1.0920e+00 -7.0514e+00 -6.2585e+00 +#> -1.0217e+01 7.5117e+00 -1.0014e+01 -1.2400e+01 1.1899e+01 -6.6089e+00 +#> -2.2282e+01 -5.4358e+00 1.4162e+00 1.1134e+01 3.8861e+00 -6.9433e+00 +#> -4.4862e+00 -4.4780e+00 3.9681e+00 -8.0532e+00 -1.5911e+00 -2.8791e+00 +#> 3.1209e+00 2.8594e+00 4.0225e+00 -2.8797e+00 1.2730e+01 -3.6122e+00 +#> 8.4232e+00 -6.2612e+00 1.2269e+01 5.9408e+00 1.1985e+01 -5.5113e+00 +#> -4.0361e+00 -4.6405e+00 -1.3339e+01 6.2889e+00 9.8516e+00 -7.8917e+00 +#> -6.0944e+00 5.6074e-01 6.4615e+00 -2.0297e+01 4.6597e+00 -1.1273e+01 +#> -8.1253e+00 -1.5362e+01 1.7911e+01 1.8621e+00 5.5949e-01 2.8031e+00 +#> 2.0541e+01 2.3189e+00 6.7424e+00 -4.7597e+00 -6.9405e+00 -1.1465e+01 +#> -1.7620e+01 1.1792e+01 -7.7298e+00 5.4394e+00 -2.1601e+00 -4.3825e+00 +#> 1.9074e+01 1.3653e+01 -1.3568e+01 -1.2175e+01 -1.7947e+01 1.0558e+01 +#> -1.0617e+01 -1.4844e+01 -1.3629e+00 -3.0530e+00 -4.5964e+00 -8.6009e-01 +#> -6.2399e+00 1.8821e+01 -1.3541e+01 6.2537e+00 3.8780e+00 2.3052e+00 +#> 1.0811e+01 8.7568e+00 -6.4851e+00 1.1809e+01 -4.8284e+00 7.9604e+00 +#> +#> Columns 7 to 12 8.9290e+00 1.3536e+01 1.6096e+01 -7.1693e+00 1.2483e+01 1.1455e+01 +#> 5.2199e+00 5.6739e+00 1.1925e+01 6.1331e+00 3.9429e+00 -4.3705e-02 +#> -5.7627e+00 -6.5053e+00 -3.2291e+00 8.3001e+00 -7.4037e+00 -3.3170e+00 +#> -8.6931e+00 -1.4302e+01 -5.0089e+00 -8.2270e+00 -6.2948e+00 7.2968e+00 +#> -3.5095e+00 -5.2325e+00 7.1503e+00 1.1552e+01 -9.3108e-01 -7.8535e-01 +#> 9.4466e+00 4.1475e+00 6.0636e+00 9.1754e+00 6.2806e+00 2.5567e+00 +#> -1.3670e+00 -1.1705e+01 4.5845e-02 -9.9646e+00 9.0778e+00 1.5874e+01 +#> -8.5052e-01 -9.3265e+00 7.1684e+00 
1.2459e+01 -7.0863e+00 -1.5340e+00 +#> -1.0967e+00 -6.8202e+00 -4.1483e+00 4.5109e-01 7.2468e+00 -3.0846e+00 +#> 1.8900e+00 -2.9275e+00 1.2501e+01 -2.6666e+00 -1.0941e+01 7.5141e+00 +#> 1.2399e+00 -6.7585e+00 -3.3682e+00 9.8080e+00 -3.6451e+00 -5.7484e-01 +#> 2.3451e+00 3.7670e+00 -4.2033e-01 -5.0846e+00 4.7556e+00 -1.1006e+00 +#> 8.3667e+00 3.6975e+00 -1.0243e+01 -6.9477e+00 1.0840e+00 8.2389e-01 +#> -1.3075e+01 3.0193e+00 2.7116e+00 -1.9629e+00 5.1973e+00 -4.0322e+00 +#> 6.1567e+00 -2.2938e+00 9.9950e+00 -3.3615e+00 8.2121e+00 -7.6035e-01 +#> -2.1567e+00 4.1321e+00 2.0549e+00 -5.1987e-01 7.5985e-01 -2.0032e+00 +#> -2.2345e+00 3.1456e-01 7.0497e+00 3.9596e+00 4.0817e+00 -7.9889e-01 +#> 1.2495e+01 -6.3476e+00 -5.9824e-01 -5.4388e+00 -8.6041e+00 -4.8455e+00 +#> 1.6624e+01 -1.1884e+01 -2.9734e+00 1.5225e+01 -7.3970e+00 5.7698e+00 +#> 9.8040e+00 4.3454e+00 7.0386e+00 5.5264e+00 -8.3666e+00 3.2071e-01 +#> -2.4268e+00 -1.5725e+01 9.1409e+00 -4.2469e+00 -2.6802e+00 3.1987e+00 +#> -6.8343e+00 8.6813e+00 -9.9641e+00 1.4666e-01 6.9162e-01 -8.6080e+00 +#> 9.1200e+00 6.2205e+00 4.8410e+00 -2.9351e+00 4.7084e+00 -1.6139e+00 +#> 1.8724e+01 3.2264e+00 -6.2778e+00 6.9400e+00 -2.6209e+00 9.7244e+00 +#> -2.7610e+00 -5.3795e+00 1.8431e+00 -2.5759e+00 4.9165e+00 -7.0577e+00 +#> -3.9320e+00 -4.4469e+00 1.7978e+00 -2.6139e+00 -2.1787e+00 -2.3098e+00 +#> -9.8916e+00 -1.7884e+01 6.5494e+00 1.5247e+00 9.8584e+00 4.3776e+00 +#> 1.6337e+01 -9.0102e-01 -2.9395e-01 1.1285e+00 -1.3418e+01 -2.7005e+00 +#> -5.5151e+00 4.3336e+00 -3.2838e+00 1.0588e+00 6.9050e+00 -2.5221e+00 +#> 1.2281e+01 2.1200e+01 1.0232e+01 -9.2227e+00 3.8279e+00 3.2877e+00 +#> 6.6581e+00 -8.2416e+00 -1.6229e+00 -5.3015e+00 4.4514e+00 2.3358e+00 +#> -4.5829e+00 1.2662e+01 -1.1380e+01 2.7443e+00 4.3965e+00 -4.6810e+00 +#> 1.2336e+01 1.2162e+01 8.9652e-01 -1.9086e+00 1.8242e+00 -3.1326e-01 +#> +#> Columns 13 to 18 -6.5864e+00 -1.0712e+01 1.9184e+00 -7.9737e+00 8.0341e-01 -6.6428e+00 +#> 3.9817e+00 -4.6648e+00 -2.4271e+00 
1.6329e+00 3.4401e+00 5.3616e+00 +#> 2.2465e+00 -5.6628e+00 4.1712e-01 2.4878e+00 8.9113e+00 1.2079e+01 +#> -1.0932e-01 -1.2413e+00 -1.4202e-02 1.0870e+00 -3.4432e+00 1.7965e+01 +#> -1.1225e+00 -3.2036e+00 7.4699e+00 -1.8162e+00 3.0432e-01 -2.7530e+00 +#> 6.3094e+00 1.5723e+00 -4.2568e+00 -1.4982e+00 1.3841e+00 -1.3459e+01 +#> -1.0469e+00 -3.9008e+00 -1.3422e+01 1.2605e+01 -2.4243e-01 -6.4081e+00 +#> 4.0135e+00 5.0065e+00 7.7655e+00 -3.5427e+00 -7.3873e+00 1.6300e+00 +#> 5.7995e+00 1.1139e+01 -6.9868e+00 3.4718e+00 3.2083e+00 8.6517e+00 +#> 3.8380e-01 -4.6602e+00 -5.0067e-01 -4.6188e-01 -1.0016e+01 7.1506e+00 +#> 8.1005e+00 -8.0245e+00 1.1189e-01 2.7982e+00 1.0693e+01 2.4759e+00 +#> -4.8095e+00 4.7866e+00 -5.4752e+00 -2.1411e+00 4.1932e+00 -1.1467e+01 +#> 4.3658e-01 -5.7922e+00 -3.6600e+00 4.0172e+00 -4.1100e+00 -3.1343e+00 +#> -6.9230e+00 1.6722e+01 1.2365e+01 -1.2803e+01 4.0601e+00 -9.9373e+00 +#> -5.9905e+00 -4.2004e+00 -3.8107e+00 1.3444e+01 -6.3351e+00 1.8679e+00 +#> -7.2217e+00 1.9600e+00 1.8366e+00 -9.0836e-01 6.3407e+00 2.5531e+00 +#> 5.2419e+00 -1.9273e+00 2.6189e+00 -1.6239e+00 -5.9923e+00 -8.3155e+00 +#> -5.4557e+00 1.4535e+01 -1.5117e+01 7.2507e+00 -4.4490e+00 -1.6060e-01 +#> 1.4703e+00 9.8438e-01 4.8113e+00 -8.3850e+00 8.4060e+00 7.6364e+00 +#> 2.7511e+00 1.2935e+01 1.8571e+00 -2.6687e+00 -1.7088e+01 4.8235e+00 +#> -3.5270e+00 7.8432e+00 -3.4736e+00 2.9558e+00 -4.3210e+00 8.6346e+00 +#> 7.3180e+00 4.5550e+00 2.7375e+00 8.4454e-01 3.2313e+00 -8.0284e+00 +#> 3.5681e+00 4.2156e+00 -1.3016e+01 -2.6686e+00 7.8514e+00 -1.1472e+01 +#> 4.8989e+00 -3.6281e+00 -3.1882e+00 -9.6628e-01 -1.1654e+01 3.9920e+00 +#> -6.1137e+00 -1.6611e+00 7.9596e-01 -2.5879e+00 3.8247e+00 5.4991e+00 +#> -3.2835e+00 2.8457e+00 -4.2449e+00 5.1501e+00 -8.8733e+00 -4.2499e+00 +#> 2.7270e+00 3.2537e+00 -7.4448e+00 -3.9524e+00 6.0253e+00 -2.9487e+00 +#> 8.9148e+00 1.3280e+00 -4.6664e+00 -1.0784e+01 3.8554e+00 -8.9095e-01 +#> -5.5961e+00 6.6477e+00 8.5598e+00 2.1061e+00 6.4038e-01 
5.2681e+00 +#> 1.1707e+01 -1.2155e+01 -5.7153e+00 2.8509e+00 -1.4552e+00 -1.5719e+01 +#> 2.7364e+00 8.6112e+00 -1.3925e+01 -7.8262e-02 -3.3054e+00 -1.2692e+01 +#> 1.5652e+00 -1.6843e+00 1.4623e+01 -5.7382e+00 1.3702e+01 -6.5341e+00 +#> 8.5536e+00 -2.3798e+00 -1.0280e+01 6.6825e-01 7.5417e-01 4.4648e+00 +#> +#> Columns 19 to 24 -4.4853e+00 -2.1131e+00 2.3472e+00 1.5614e+00 -9.1763e+00 1.3149e+00 +#> 3.3663e+00 -1.1476e+00 -4.6271e+00 -6.8828e+00 -2.7274e+00 8.1809e+00 +#> -6.0512e+00 -1.9255e+00 -1.2470e+00 2.3085e+00 5.1944e+00 2.7115e+00 +#> 6.8703e+00 -9.9798e-01 5.7421e+00 1.2826e+01 1.3569e+01 -1.3485e+01 +#> -7.8852e+00 -5.3608e+00 -1.2194e+01 -2.2309e+00 7.4194e-01 3.8362e+00 +#> -7.9683e+00 -3.7411e+00 9.5716e-01 7.7865e+00 -1.2211e+00 1.3266e+01 +#> -8.6393e+00 1.0575e+01 1.8087e+01 -6.3996e+00 -2.0487e+00 -2.4887e+00 +#> -1.4954e+01 -2.8493e+00 -1.0947e-01 9.6807e+00 9.7672e+00 -4.4043e+00 +#> -1.3268e+01 2.1509e+00 -1.2294e+00 1.0422e-01 6.8833e+00 1.1067e+01 +#> -1.1917e+01 -5.0383e+00 6.8034e+00 -1.3399e+01 3.1128e+00 -1.1073e+00 +#> 1.5490e+00 1.5129e+01 -5.0927e+00 4.9878e+00 1.4499e+01 -2.5065e-01 +#> -2.5972e+00 7.2299e+00 7.2963e+00 -1.6446e+00 -9.7672e+00 2.2963e-01 +#> 3.6946e-01 -1.1741e+00 -5.4392e+00 -3.6667e+00 2.6146e+00 -4.2971e+00 +#> 6.6354e+00 7.4895e+00 -1.1072e+01 6.1214e+00 -2.4847e+01 -1.9569e+00 +#> -5.5859e+00 3.3497e-01 -5.0915e+00 -7.4545e+00 -1.7294e+00 6.5300e+00 +#> 6.9209e+00 -2.9821e+00 -1.0543e+01 1.0908e+01 7.0799e+00 -7.8464e-01 +#> -4.7602e+00 -1.8840e+00 -5.3847e+00 -5.1367e-01 -6.6393e+00 1.1736e+00 +#> -3.5408e-01 6.9321e+00 4.6526e+00 -3.6509e+00 -1.8282e-01 -8.6614e+00 +#> -1.3014e+01 9.7575e-01 -3.6201e+00 4.9557e+00 3.5110e+00 -1.1991e+00 +#> 4.2591e-01 -7.2595e+00 -8.4041e+00 -6.0745e+00 5.0202e+00 8.3158e+00 +#> -1.0734e+01 6.3550e+00 1.2172e+01 -6.8847e+00 -7.6938e+00 -1.1426e+01 +#> 1.0593e+00 -4.3771e+00 1.9166e+00 7.3604e+00 6.9675e+00 -2.2241e+00 +#> 9.0315e+00 2.5758e+00 -1.6054e+00 3.1866e+00 -3.8649e+00 
#> ... [large tensor output truncated; remaining slices and columns omitted for brevity] ...
7.7997e+00 2.3249e+00 2.7131e+00 +#> 6.4485e+00 -1.8380e-01 -2.9445e-01 2.0924e+00 7.4262e+00 -8.2395e+00 +#> -2.2302e+00 1.7512e+00 -3.4890e+00 -2.6745e+00 3.4653e-01 3.6789e+00 +#> 8.9544e+00 -6.1406e-01 9.0091e+00 -2.5289e+00 -6.6100e+00 -9.2875e+00 +#> -8.6285e+00 -7.6416e+00 -1.2472e+01 -3.9108e+00 1.2114e+01 -3.0555e+00 +#> -3.7552e+00 -1.0725e+01 -1.0359e+01 1.1363e+01 -4.7689e+00 -6.3023e+00 +#> -4.3483e+00 -2.4968e+00 -1.6712e+01 -2.3206e+00 9.0312e+00 -4.3267e+00 +#> 1.7978e+00 1.7474e-01 4.3791e+00 -5.9808e+00 4.6256e+00 7.4821e+00 +#> -3.0482e+00 -7.1479e+00 1.4285e+00 -4.9415e+00 -5.8376e+00 -5.3080e+00 +#> -1.7139e+01 -5.7809e+00 -8.0277e+00 -5.4555e+00 -2.2411e+00 1.5101e+00 +#> -1.6970e+00 -5.6225e+00 -1.0386e+01 -4.1191e+00 -4.6791e-01 1.1010e+00 +#> -2.1864e+01 1.1266e+00 1.2700e+00 -7.2086e-01 9.0356e+00 1.3241e+01 +#> 1.5384e+00 -1.5492e+01 -2.5981e+00 -2.3097e+00 -7.5876e+00 1.6444e+00 +#> -9.8640e+00 7.9875e+00 7.5978e+00 -5.3028e+00 7.6840e+00 9.5088e-01 +#> 4.4981e+00 2.9509e+00 -3.7114e+00 -5.6748e+00 -2.7754e+00 -9.2440e+00 +#> +#> Columns 43 to 48 7.5252e+00 -4.7240e+00 2.8171e+00 3.8666e+00 1.0257e+01 1.6760e-01 +#> 1.4654e+01 -1.1176e+01 6.4938e+00 -4.7155e+00 1.6589e+01 1.9941e+00 +#> 2.4068e+00 3.9986e+00 2.0345e+00 -5.9613e+00 1.1134e-01 1.2031e+01 +#> -4.1788e+00 -2.1450e+00 -1.4720e+01 1.2275e+01 -1.3673e+01 8.7555e-01 +#> 6.7612e+00 1.4690e+01 -7.0012e+00 -4.7893e+00 3.9739e+00 7.7816e+00 +#> 5.9188e+00 3.5156e+00 -8.0060e+00 -4.2263e+00 1.0632e+01 4.4382e+00 +#> -8.1016e-01 3.7534e+00 4.6450e-01 -2.3587e+00 -3.2603e+00 9.0285e+00 +#> 1.1181e+01 6.6565e+00 -1.0311e+00 -1.6246e-01 2.7070e+00 7.7686e-01 +#> 2.6272e+00 2.6926e+00 -2.1784e+00 -3.2185e+00 -2.2183e+00 1.5972e+00 +#> -5.3338e+00 -1.2750e+01 -4.8989e+00 2.2367e+00 -7.1979e+00 -5.7989e+00 +#> -8.5229e+00 -2.5259e-01 1.8121e+00 -1.6339e+01 1.4385e+01 3.1480e+00 +#> -5.5123e+00 -2.9095e+00 9.6932e+00 -7.3890e+00 -2.1089e+00 -5.4664e+00 +#> 5.6571e-01 2.2835e+00 2.0445e+00 
-3.1459e+00 -1.8582e+00 1.9847e+00 +#> 1.1227e+01 -3.7863e+00 2.6117e+00 -2.0254e-01 2.9419e+00 3.3584e+00 +#> 1.8669e+01 -2.5709e+00 -7.3639e+00 -1.0516e+01 3.2457e+00 1.2949e+01 +#> -1.4009e+01 -1.4490e+01 1.3553e+00 2.0864e+00 -3.2283e-01 1.0447e+01 +#> 6.4845e+00 1.0992e+01 -8.9764e+00 -4.2840e+00 -6.8353e+00 2.0669e+01 +#> -4.7237e+00 -1.5023e+01 5.2116e+00 2.4891e+00 -5.8575e+00 7.5567e+00 +#> -1.1441e+01 -7.2402e+00 1.1403e+01 3.5053e+00 -9.5773e+00 8.1672e-01 +#> 1.3615e+01 -6.2740e+00 -7.0639e+00 3.1193e+00 -4.7688e+00 -3.8634e+00 +#> 5.3512e+00 2.4257e+00 2.7007e+00 -7.2868e-01 -1.0831e+00 -1.0120e+01 +#> -1.9115e+00 -3.5195e+00 7.7107e-01 -1.0501e+01 -9.2194e+00 -2.2617e+00 +#> -1.2850e+01 -8.3718e+00 -2.8517e+00 -3.1152e+00 1.5854e+00 -4.3248e+00 +#> -7.8824e+00 1.0903e+01 1.5360e-01 4.4607e+00 -6.1502e+00 -5.7983e+00 +#> -4.9865e+00 -7.0381e+00 3.4789e+00 -6.8242e+00 6.5550e+00 1.0764e+00 +#> 1.3635e+01 1.7434e+00 -1.0626e+01 2.4001e+00 1.0004e+00 4.2354e+00 +#> 5.1739e+00 1.3836e+01 -7.8074e+00 -1.8637e+00 9.1099e+00 3.1453e+00 +#> -8.1692e+00 6.1423e-01 -4.5621e-01 1.4270e+00 -1.0939e+01 1.0901e+01 +#> 5.8934e+00 -7.4564e+00 4.9315e+00 -8.1992e+00 -4.8292e-01 -2.0732e+00 +#> -4.3536e+00 1.4776e+00 5.8203e+00 -8.0869e+00 6.1614e+00 9.3853e+00 +#> -1.3184e+01 -1.1708e+01 -2.4239e+00 -1.0928e+00 -7.8648e+00 7.4512e+00 +#> 3.7807e+00 4.5977e+00 1.4227e+01 4.8748e-01 5.5077e+00 -3.9448e+00 +#> -1.1252e+01 -1.0866e+01 3.1634e+00 4.5651e+00 1.2367e+01 -1.0874e+01 +#> +#> (11,.,.) 
= +#> Columns 1 to 8 -2.0044 4.3699 -2.4127 4.2359 4.1969 -0.2678 3.8926 7.5892 +#> 6.1867 0.1656 -4.8839 -1.8466 -4.4712 -5.9437 -10.9038 4.0003 +#> 8.4201 -8.1247 -1.8980 -4.0356 -6.8123 4.3460 5.1257 -9.9883 +#> 0.9669 8.8183 8.4483 -4.3425 0.9247 6.5621 12.3047 2.4185 +#> 6.7167 -2.4476 -1.9954 -3.5758 -6.7449 -8.1649 -5.5353 -1.2549 +#> -2.4296 2.2598 14.8066 4.2901 -8.1575 -0.0060 -10.8620 15.7208 +#> -3.4516 -10.6639 5.1614 -7.7433 2.2696 2.9458 3.3822 0.3909 +#> 6.6280 -11.6794 0.7623 7.4987 3.8300 1.4233 -13.8746 -11.0928 +#> 11.9090 -16.2859 -2.0074 -5.6302 -6.8406 -7.0040 -0.6425 -3.5537 +#> -8.4572 1.7583 -2.4892 1.3537 4.7256 -3.4651 3.8739 0.7713 +#> 9.7891 -11.9684 -0.2066 15.0137 -7.3798 -1.7238 11.3389 -16.5909 +#> -2.2045 8.7826 3.2816 0.7466 -5.2149 -2.3559 -3.3869 1.5777 +#> -2.5540 7.1872 11.5563 -2.1551 -0.1032 12.4840 6.1519 3.8442 +#> 1.0610 13.9169 -5.9530 -3.8892 -4.4112 -5.2429 9.1333 9.5078 +#> 5.8089 -4.8836 15.1818 -1.9421 -10.6217 6.9401 -3.8909 -1.3724 +#> -7.8908 5.5054 -8.3785 -8.3396 -3.5264 1.3435 18.2884 6.9249 +#> 6.6423 -7.1030 1.7219 7.1863 7.9274 -0.1598 -4.0641 -9.6481 +#> 5.8048 2.5447 14.5549 6.4658 -1.0197 6.5001 7.7019 -2.9925 +#> -2.3873 -14.1483 -8.2676 0.0137 7.7786 6.5895 6.4276 -6.6812 +#> -8.9539 -10.5986 8.0475 3.8106 -2.6754 0.6577 -6.8355 4.2199 +#> 9.1807 -1.5445 11.4027 3.9223 3.0354 -2.9551 -15.0646 0.2942 +#> -1.2010 3.2711 -18.3424 -2.1797 -5.7391 11.5237 13.7566 -7.5049 +#> -1.2460 -2.2535 1.8274 10.3330 4.5104 -2.5318 4.1912 -5.8933 +#> -2.9833 -2.7955 13.6504 4.9941 5.8426 3.9711 12.0044 3.4093 +#> 4.5080 0.0392 -7.4748 10.0702 2.1613 6.0438 -0.6968 -12.2627 +#> -5.8455 -4.4407 5.9451 -1.1177 7.1119 4.5139 -9.7403 2.9613 +#> 5.2632 2.9664 3.9723 2.8500 3.8921 4.1397 4.4269 -3.3525 +#> -4.6684 2.3957 6.1306 16.3635 10.9679 -0.5797 -0.5639 -2.7507 +#> 9.3602 -2.3128 -11.8529 3.9773 -1.8450 2.9816 14.0366 -6.2402 +#> -6.2591 -1.8565 -0.4631 14.7339 2.1109 -2.1565 -12.1677 6.6745 +#> -1.3769 -11.6554 5.5780 
5.5344 12.0577 7.3156 -3.7152 -3.5623 +#> -7.9856 10.3130 -1.8823 2.1432 -0.4400 -4.6726 -6.2517 1.1903 +#> -0.9517 -2.9278 1.0489 7.3569 0.5211 0.1772 -0.7285 13.3707 +#> +#> Columns 9 to 16 6.3823 3.0219 -2.8446 -4.6026 -3.7400 -4.1116 -7.1448 -7.5928 +#> -3.3294 -4.4999 -5.0383 -3.9010 -2.0349 -6.5806 -0.7017 0.5485 +#> 0.2619 0.9938 -6.3134 -3.0559 3.3365 1.7051 -0.3802 2.8995 +#> -8.0614 -0.8129 0.3033 20.1874 1.7606 0.4109 5.9526 10.6269 +#> -4.0922 -5.5850 -13.6079 -5.6689 -11.0693 -8.8210 -3.5062 -6.0664 +#> 5.8566 2.6684 -5.3008 -9.9224 0.8673 -11.0890 7.2668 5.1669 +#> -3.4305 4.8934 -13.3550 1.1970 -0.1186 4.9312 0.2421 3.2356 +#> -0.8893 -1.7562 -4.0922 -1.5024 -24.6047 1.4579 13.1380 -5.0338 +#> 0.1682 2.2131 1.6275 -7.8510 7.6663 -6.3077 -14.3870 22.1853 +#> 5.7356 -5.8148 2.0537 10.2324 2.7517 10.0743 -1.6487 -6.4380 +#> 12.8964 -4.1159 -4.4931 2.6539 3.5004 1.7731 12.6290 -4.6256 +#> 5.3116 -3.7887 4.4995 -14.8593 -4.2049 7.0560 -3.3320 -7.5902 +#> -4.0621 6.1609 -4.2993 -0.3039 -8.4894 -1.1391 14.2142 -10.8272 +#> -0.8747 -6.0277 1.8341 -16.5662 9.2631 9.6649 0.5672 -3.7222 +#> 6.7275 -13.1337 -7.5456 -12.3858 -4.7681 9.5557 -5.8113 -4.4115 +#> -1.1570 -7.7482 -1.9757 2.1171 -0.6067 -3.9330 0.0538 3.5047 +#> 1.0995 1.2456 0.8121 -7.7823 -8.1074 -3.2819 6.8381 -3.0345 +#> 2.5230 1.4343 -1.5845 4.0496 -7.0159 -7.2508 -0.9545 9.0573 +#> -9.3421 -1.2291 5.5133 4.4957 -4.7450 1.2507 -0.0840 3.7238 +#> 5.9721 0.4973 -0.8971 1.9492 -11.6507 -6.1942 -0.0588 3.0740 +#> 0.6651 8.9529 1.2361 -3.9383 -4.2929 6.1735 -1.6646 4.5894 +#> 9.7561 -8.0153 2.2952 1.5529 0.2461 -0.9306 7.2547 -2.9878 +#> 9.1697 4.8139 13.1205 0.1607 11.1355 -3.9690 -2.9050 9.5772 +#> 13.4256 7.8929 5.7852 -1.8729 -4.2819 12.1169 4.5292 -1.7870 +#> 7.7253 -4.2477 13.1721 -6.5561 10.9189 1.2953 1.1537 -0.6233 +#> -8.4916 -0.1692 3.4664 7.2991 -1.1835 3.4548 -1.7278 0.9493 +#> -2.8162 -4.2595 -1.5781 -6.4060 2.9334 -7.0955 12.3795 2.8014 +#> -10.3906 6.7465 3.1686 -0.3980 -2.3234 -0.3691 
9.6803 -14.8288 +#> 4.0592 -10.3295 -3.2436 -3.0166 2.3703 -2.3240 -4.1811 8.5929 +#> -1.2070 10.0707 -0.4461 -7.8677 -11.1716 -5.1939 14.7600 -11.8722 +#> -5.8241 4.7446 7.1372 -0.0313 -4.6634 1.8151 -5.5978 7.6861 +#> -3.1263 5.6811 13.5624 -14.0331 6.8948 -7.2957 8.0425 1.4875 +#> 6.3600 5.2668 0.5438 -0.3248 15.2348 4.9118 -5.3705 0.7440 +#> +#> Columns 17 to 24 0.5236 6.8693 -0.4034 -9.7509 -4.5863 18.9446 12.2359 -1.6541 +#> -2.9956 4.0989 -10.3061 1.1723 -1.5345 7.7239 14.5010 -0.1832 +#> 13.2186 0.7272 -8.6150 3.3184 16.5073 1.1210 1.6489 -2.2074 +#> -3.4138 -16.0043 17.6044 1.4957 -10.2243 -4.1157 -0.2742 5.9374 +#> -4.2704 0.2206 -4.0499 8.9308 1.4830 -1.8442 10.6688 -4.1980 +#> -0.2917 -13.9340 6.4268 12.3239 8.9711 7.4519 7.6545 1.0119 +#> -2.5605 -1.7786 5.2483 -10.9202 -9.1474 -3.8889 -4.1318 -4.5254 +#> -10.1291 -4.8287 -14.1620 -6.3614 -11.1534 -8.4878 12.9382 -1.4245 +#> 16.6430 -8.2832 -4.1078 14.2384 13.0596 1.6659 -0.0978 6.2452 +#> -9.2022 6.4704 15.6910 -2.6631 0.6269 1.7004 -1.7806 0.7099 +#> 3.6032 7.8300 -24.2912 12.1176 9.7138 -0.3804 15.4161 -10.8723 +#> -10.0752 9.4278 2.1156 -8.8559 -2.5048 6.6876 -4.4726 5.2917 +#> 4.8320 10.4034 0.9745 -12.8283 1.6226 -0.0745 -9.1379 -3.6713 +#> -5.8681 14.4700 12.2456 1.0197 -7.4787 0.7518 7.1122 3.2161 +#> 13.5318 1.5475 -6.2782 -10.4522 3.0410 8.6182 -2.5363 -7.1108 +#> -4.6763 0.1332 9.1238 -3.5555 -2.0220 -1.4984 7.1836 8.1871 +#> 1.5535 -12.9286 2.1660 -0.5209 -7.5615 -4.8321 -1.2277 -3.6797 +#> 0.9089 -10.6601 10.4946 -1.6091 6.9751 -8.0629 -4.4556 2.6121 +#> 6.1206 4.4565 -4.0688 -2.7858 5.5975 -1.0423 -3.8477 0.0219 +#> -2.9188 -6.3728 8.4461 -3.2103 -7.5373 5.3769 -0.4173 -6.6054 +#> 0.5667 -12.7742 2.3528 1.9002 1.5721 -12.6775 0.5521 -3.3713 +#> -0.7820 4.8531 -1.1454 -4.0201 8.4258 3.1378 5.1369 4.2380 +#> 14.7232 -0.8633 -7.0142 3.2225 -2.5383 -9.4545 -2.2383 -4.1793 +#> 4.2805 1.1091 9.2722 3.3933 -5.3543 0.2697 -3.0848 8.4100 +#> 10.8122 -9.1854 -14.0378 1.1658 8.9828 -7.5437 -14.8283 
-1.8276 +#> 0.2647 -4.6193 5.6080 -3.8388 -7.6890 3.8130 -2.9010 -7.1018 +#> 11.1065 -1.7242 2.5281 -2.9828 -4.6451 -11.2156 -1.1177 10.7715 +#> 4.6313 5.9269 6.4877 -5.0038 -7.0321 -7.7048 0.2439 -3.5283 +#> -3.2314 3.8181 -0.7659 0.2947 3.5189 -0.3271 -1.7176 -9.7372 +#> -7.1887 16.1351 -6.9254 -10.8912 -5.4141 3.3091 14.0301 -5.9519 +#> -1.1507 -7.1501 2.9040 -11.2942 -8.9464 -11.5441 -7.8685 8.8295 +#> -11.5279 10.8920 -2.8373 8.6184 -10.8648 6.1828 -5.2552 8.0270 +#> -6.4354 -0.0202 2.1212 10.6847 1.9121 10.6659 5.9482 3.1849 +#> +#> Columns 25 to 32 -5.0729 -4.2956 2.7086 6.4407 15.8604 -0.3588 3.0230 -8.7129 +#> -7.9630 -0.9580 5.0442 -2.5717 1.9893 5.3521 6.3859 5.3545 +#> 2.0941 1.9097 -12.0480 -7.8120 -1.1532 11.4927 -8.4720 0.4449 +#> 4.2617 -6.9544 -8.1263 -3.9371 -0.5503 -4.5696 19.1867 7.3988 +#> 5.5247 0.5356 0.7781 1.2835 1.0388 -1.3812 2.8845 9.4449 +#> -0.6169 10.7332 -4.6661 3.8535 -1.2422 -5.3036 2.2425 9.5974 +#> 4.5023 -8.9653 -4.7451 6.3774 -3.6699 -1.5664 1.6872 4.6048 +#> 3.5291 2.9527 8.3790 -1.0565 -9.1058 1.3912 -4.0224 12.2775 +#> 6.6163 11.7969 -0.0560 -6.3300 -6.5183 -7.4860 -8.6395 9.1613 +#> -4.0831 0.6745 -8.1844 3.2280 -3.4047 -1.9166 0.7233 4.1844 +#> 6.8935 3.1620 -0.2730 -8.7033 -3.1915 8.9787 -0.6695 3.8883 +#> -4.3835 -1.0841 5.7311 1.6674 2.7202 -10.2040 6.5731 -3.4295 +#> 6.2861 -4.3858 0.8793 5.7060 -0.1589 6.7821 -7.4484 -0.8987 +#> -11.4815 -1.8563 8.6766 -5.0828 -0.2160 -10.1367 16.3202 -15.2852 +#> -0.9740 4.4800 4.1621 11.5320 4.9337 2.3787 -13.8811 13.0770 +#> -2.3275 -6.4719 -5.6679 -0.2408 3.0308 2.7526 8.1827 -7.3714 +#> -7.6305 -1.1428 8.5583 6.8418 -1.1626 6.2628 2.1209 7.9571 +#> 1.3825 -4.8033 -1.8967 3.3390 -5.9439 1.8516 11.8852 4.2061 +#> 6.7047 -1.5728 1.9421 -6.5667 -3.1609 2.4729 0.5374 4.6061 +#> 3.9707 13.9025 12.1335 6.5156 -2.7861 0.3097 -8.4294 4.0610 +#> 1.8200 10.6741 -6.9567 -3.4479 6.3126 3.0632 -9.6933 5.1884 +#> -6.5620 1.9818 -3.6081 -0.1819 -10.5116 -3.7730 3.1047 -5.6763 +#> 11.4688 8.5353 
3.8160 13.8754 5.2576 -0.4056 -7.2265 -6.5845 +#> 6.4180 -1.8389 -0.7055 8.5180 -8.2785 7.0835 -19.0100 15.5229 +#> -6.8713 7.3240 -1.0499 -1.4583 10.3664 5.5127 -1.9181 -5.3814 +#> -10.7972 5.2820 -1.3866 2.9625 1.5788 10.2621 -6.9272 0.7044 +#> -9.1831 -3.3115 1.0362 -3.3940 -4.2613 5.9859 9.9480 3.7198 +#> -8.1137 3.6621 8.2821 1.7979 -1.2668 1.4004 4.5593 1.2181 +#> 1.0208 -0.7449 2.9976 5.4707 2.6559 -3.3447 1.3942 -0.3096 +#> -4.3125 10.7800 7.2531 9.5339 0.3425 -1.7535 -7.0209 -12.0271 +#> 1.3050 2.2791 6.5436 9.1550 -10.5545 -6.5303 4.5502 9.5093 +#> 8.7469 -4.9613 2.1701 1.9083 9.6481 -6.2135 -2.7381 -14.1108 +#> -4.4900 5.6005 -2.7029 -1.0956 2.2024 -1.2472 7.6779 1.9417 +#> +#> Columns 33 to 40 -0.0019 0.2802 11.2920 -9.1013 -5.6047 -0.0040 5.0107 6.8514 +#> 6.9784 -4.2512 -0.7175 -4.0739 8.1430 -6.1468 2.5069 8.5195 +#> 1.0547 -0.3367 -7.9803 -4.1151 -1.3741 11.1988 1.6343 -2.4667 +#> -8.8407 -7.3048 0.8817 1.6348 19.7482 -2.2385 -11.9681 -0.4096 +#> 14.3348 8.9444 6.1513 -3.8608 -4.9776 0.7029 4.0575 0.0101 +#> -1.0902 -10.9092 -6.6897 -5.8111 5.5139 9.2579 5.1977 9.8372 +#> 0.8237 -4.5022 10.2385 12.2472 -1.3605 5.4792 -4.8483 -9.0908 +#> 12.6671 3.5102 6.4837 5.3618 6.4916 -10.6185 -7.3910 6.6741 +#> 1.3334 -0.3487 -7.4179 1.1611 -6.2333 0.7965 3.6717 0.0761 +#> -6.9966 -4.8677 1.2862 -3.7916 0.1066 -16.2524 6.9257 9.2883 +#> 14.2929 -15.8576 0.3450 -4.3398 3.7223 11.3818 3.6238 -7.7689 +#> 0.7186 3.9697 -1.0347 8.9312 -9.0946 -4.7356 6.0722 -1.6076 +#> -6.7104 -2.8066 -1.2940 -9.2257 6.4690 11.5947 -4.0898 -7.5930 +#> 9.8058 13.1059 -4.8653 12.2930 -12.9543 -8.7284 2.0773 -2.7334 +#> 10.1541 -2.2709 -0.0826 -13.8402 13.5996 10.3662 -8.8221 -7.8720 +#> 3.1338 3.4794 -8.2790 -1.3139 8.1712 5.2529 0.6040 -7.0523 +#> -2.3543 9.7280 -3.3076 1.4279 0.8343 -0.5399 -0.3940 -7.1317 +#> 1.3395 -0.0886 -14.3154 -3.6316 4.7655 11.4265 -6.9424 -0.0195 +#> -7.2402 0.8065 -6.1732 5.3272 -10.5435 0.5907 7.7513 6.7939 +#> 0.6357 3.2251 -4.7789 -8.3201 4.2994 -3.6608 
-10.4161 14.7643 +#> 1.0850 5.2056 -1.1094 9.1666 -9.4409 -8.2777 0.9586 8.9043 +#> 9.5923 -4.4406 0.3631 -4.2734 7.5759 5.4691 -4.3900 -6.4848 +#> -4.8792 -13.1105 -9.9396 -6.0821 -4.2209 12.5849 -2.5196 -7.9711 +#> -23.4039 -9.2183 -8.1744 -8.4877 4.3403 -1.6354 1.4089 2.3033 +#> 4.3382 4.4030 1.3752 -6.4615 -2.0714 0.8318 4.8986 2.2077 +#> 2.5029 10.4185 1.9057 -7.3488 5.1091 -2.3736 -5.1177 8.5054 +#> -11.5271 1.1920 -1.4756 -4.4427 8.2370 5.6577 4.2550 -7.8401 +#> 3.8904 4.7149 -14.7242 -5.9000 -12.1250 -3.5753 9.7209 3.8768 +#> 11.7669 3.3826 7.0856 2.0755 -1.0698 3.9380 -8.9544 -1.4195 +#> 18.2892 -11.3806 0.7879 -2.0373 -8.2313 -9.0982 10.7094 0.3240 +#> -4.2813 1.9703 -0.1238 16.0385 -7.7890 -12.1289 -10.8846 4.2699 +#> -8.0051 -3.0747 4.6354 1.3537 6.1540 -0.7435 -3.5279 -8.5859 +#> -1.3912 -15.9602 1.5367 -5.4228 -1.3131 -4.3742 7.9185 9.7122 +#> +#> Columns 41 to 48 11.9842 -0.5593 -1.7805 -0.0418 2.9324 4.4272 10.7479 -5.3032 +#> 6.4636 -7.0260 -2.1067 -2.6697 -0.7586 -0.1127 -1.9880 4.2114 +#> -7.8123 4.6202 -1.8848 2.7081 7.1877 -24.3626 2.2723 5.8324 +#> -3.6324 2.1598 1.4967 2.0073 2.5986 1.4709 8.2080 8.4126 +#> -3.0173 -2.4682 1.8973 -0.4220 11.2470 -8.9623 -0.4299 1.1831 +#> -11.3260 0.8946 -11.9878 8.5653 -13.7869 2.4784 -3.6501 3.3985 +#> 0.7377 3.5998 1.3043 -4.0607 3.9259 5.5577 13.7190 -4.1788 +#> 3.7997 -7.6852 -2.1255 2.1282 16.8586 -0.1670 -0.5216 -5.6847 +#> 0.0417 0.4327 3.9841 3.7029 -3.3788 -1.5266 -3.1260 9.1985 +#> -7.9677 3.4836 2.0614 -9.5991 -4.0841 5.0656 12.3368 -3.2452 +#> 6.2654 -4.9748 4.1019 -9.4819 30.9802 -20.2084 1.4996 -1.2812 +#> -2.4100 -1.8367 -0.7233 1.9043 -4.4144 7.0083 -4.5828 -3.8193 +#> -1.2719 7.6519 -12.3358 -1.2329 2.1559 -0.8138 -0.6092 0.2082 +#> 1.6161 6.0475 0.1815 -5.6616 -2.7878 -1.0785 -2.3120 0.9993 +#> 5.1240 -1.1110 -8.6762 8.2783 -3.1611 -8.0870 3.4626 2.6431 +#> 0.5291 3.3617 0.6551 -8.7125 11.2056 5.7662 5.2713 -2.7741 +#> 11.6409 -2.7260 -4.5710 -3.0022 7.6567 -9.3038 -0.0798 5.8057 +#> 
-5.5851 3.2143 -1.6160 -3.7551 -2.0959 1.3541 5.3000 -3.7737 +#> -3.2166 -2.7181 7.8093 -9.5603 8.7075 7.0108 5.1225 -11.7553 +#> 6.5461 -7.2921 -1.2451 7.5864 6.0977 -1.2823 4.6137 -0.9353 +#> -13.1320 10.1831 2.5122 6.5948 -8.3315 -9.3659 -1.7092 1.8071 +#> 1.1201 -0.9914 -2.0848 4.5972 2.9202 -1.4931 -2.1520 3.0542 +#> 7.4057 5.9809 -4.3698 1.5064 -2.8317 -2.2117 2.1553 -0.2234 +#> 3.6103 7.1969 -12.8206 -7.6817 -0.5296 -3.9748 5.1967 -1.6486 +#> -5.1564 6.3437 4.0816 13.9357 -10.6232 -6.3037 -5.1245 4.8427 +#> -0.6854 1.5132 3.3787 9.7651 -1.6342 3.9888 1.7152 -3.4456 +#> -2.9518 -1.5337 -3.2749 -7.5476 -2.8363 0.7673 -4.6810 4.3637 +#> 3.4508 5.3307 0.0274 -18.9641 10.7559 0.6545 -2.6214 2.7683 +#> -0.3699 -2.1346 5.4093 -2.6155 -1.9389 -6.1243 9.4638 -2.6488 +#> 4.5731 9.7016 -3.4944 -14.8443 8.1975 8.4388 -7.5679 -0.7393 +#> 1.9754 2.5775 7.9926 3.1289 0.2847 18.3038 10.5872 -6.6662 +#> -1.1725 5.3218 -12.1124 0.4054 -11.2314 9.3301 -8.0784 0.4656 +#> 5.2721 1.0929 3.0305 0.9061 -9.7753 12.5669 2.3650 6.1578 +#> +#> (12,.,.) 
= +#> Columns 1 to 8 4.5001 -1.2295 -1.8164 0.6520 3.2763 5.5859 3.9245 -4.6731 +#> -3.8160 2.2018 5.1452 2.9829 7.7092 2.6458 -3.2690 1.8272 +#> 10.2061 5.9445 -5.4832 5.7572 -13.4972 -9.2555 -4.7143 1.7513 +#> -7.6837 8.4554 -6.2225 -7.9215 -15.1581 4.2338 -4.3397 -8.5616 +#> -1.4925 0.9310 0.1745 7.8605 -10.1225 -9.7224 -10.6454 -5.1328 +#> 5.1166 -5.5563 -3.4195 -7.5142 -6.0150 3.4729 -12.7218 -4.4432 +#> -0.6968 1.9179 -1.0473 -12.1094 1.7510 -3.5513 -10.3687 3.9037 +#> -8.8777 -8.4637 9.1328 7.5304 2.5435 -0.7738 -4.5382 7.4405 +#> 3.4902 -4.5657 -6.3298 -8.0692 6.9725 -9.0220 -4.4722 -9.1183 +#> 4.3632 7.8274 7.3080 1.6116 -3.2386 -2.5663 -2.4155 2.2608 +#> -6.8906 -7.7736 -3.5131 5.5850 -7.2101 -7.0828 -3.4648 -1.2636 +#> -4.9135 7.8486 6.0787 9.6393 6.3337 2.7853 -2.4782 -2.6175 +#> 12.2056 11.0097 -5.6749 6.6335 -0.6023 5.6280 -3.9017 10.6491 +#> -3.4025 -15.7746 -15.6751 -16.0670 1.3937 -4.1239 -5.8367 -6.8215 +#> 12.1459 3.2775 7.3165 -6.2589 -5.5058 -3.6283 -6.4149 -9.3527 +#> -1.2078 -0.0522 3.6285 0.1165 -5.0967 3.7390 -1.1201 -9.5000 +#> -3.5308 2.7058 -3.9890 3.4945 -5.7438 -0.8554 -8.4195 2.9762 +#> 6.6982 3.4786 8.0461 -9.3455 -1.4272 6.8840 -7.2722 1.4466 +#> -4.4001 -0.1649 2.9268 8.8747 -1.2902 5.5666 2.5303 0.6259 +#> 8.3606 -7.1928 13.2341 -0.3941 18.6431 6.5971 -1.3029 4.8651 +#> 1.9562 5.0652 1.6687 8.4165 -2.4939 3.4391 -3.4970 2.5919 +#> -6.0691 -18.4595 -1.0337 6.6505 -1.8230 -0.6532 -1.8843 -6.8945 +#> 8.5778 1.2562 13.4635 5.7836 6.2390 6.4734 13.7121 -1.7506 +#> 16.3378 12.4865 -5.7148 -9.5783 1.0756 6.7436 11.4067 -5.2211 +#> 3.4916 0.0112 0.4477 5.7018 -9.1030 -0.5422 2.6076 -5.5469 +#> -4.8885 -7.4376 2.1646 -15.9880 -2.2590 1.2272 1.8984 -7.8221 +#> -8.0032 14.9478 -12.9263 1.4746 -10.2535 2.0050 -9.9589 1.8762 +#> 6.5848 19.4712 -4.4186 1.4986 -9.8512 7.1145 -0.2163 10.5178 +#> 0.1889 -11.4429 -1.6276 -2.4710 -0.2683 -0.3900 -5.8692 -18.0839 +#> 9.9172 7.0181 7.4458 3.8973 1.2154 -1.8324 -1.7741 17.6313 +#> -3.3293 10.3854 
7.7157 -2.8775 -0.8746 0.2266 -1.4422 -2.2013 +#> 13.3809 -2.5503 0.5669 -8.6428 12.1443 6.1498 5.0442 -0.9521 +#> 0.7790 -5.2123 4.2130 -6.1849 10.4151 0.1508 3.2000 -3.6462 +#> +#> Columns 9 to 16 -4.0860 6.4213 -2.5722 -2.3741 4.1568 -5.8059 1.7523 -1.2796 +#> -3.1006 -3.4590 -16.6207 -8.0393 -1.1310 -3.5510 4.8503 -2.9037 +#> 3.0810 -3.6914 -6.7095 2.1614 -2.9226 -10.6996 15.6299 6.2988 +#> -11.1525 -16.5291 9.1431 5.3459 3.2410 -10.4744 -6.6682 1.0263 +#> 1.0383 1.9176 -2.6346 -2.2827 -3.5935 -3.7414 1.7668 10.5311 +#> -0.7062 -1.7054 0.5477 -2.0877 -7.5597 -2.5377 1.7576 -6.5614 +#> -8.4782 12.9652 11.9687 -3.9546 9.3618 9.1582 -10.6152 -4.3985 +#> 2.9258 5.9505 1.6781 -2.0285 -6.0790 2.2242 6.5264 6.0665 +#> 12.6039 -2.1076 4.9537 -11.3343 2.3227 1.4610 -2.6695 13.5633 +#> 6.5875 -11.4847 6.2436 2.8999 0.8432 -0.0396 0.9859 -1.0960 +#> -2.4116 4.0544 -15.8423 -0.0828 0.6131 -5.3123 -0.8512 3.7520 +#> 2.1860 -3.6086 -0.5867 -3.6484 0.5793 10.9506 -6.7126 -4.9702 +#> -13.6129 5.2558 -3.1411 4.7320 0.3424 7.0105 4.9324 -6.7802 +#> 4.0216 6.1902 5.0901 10.2227 -0.1422 -11.2877 -6.1075 3.0605 +#> 5.2406 -1.4246 3.9030 -2.4046 0.8196 -8.4962 4.4973 6.3254 +#> -7.6489 6.6798 -7.4386 3.0880 -0.5156 -8.3741 3.2873 6.4475 +#> -3.4557 14.1308 -2.1692 -4.5955 7.5381 3.6648 -0.5949 12.6182 +#> -8.7020 -7.9468 -3.3009 -5.2928 -1.0027 7.5290 9.5723 -9.5589 +#> -1.3027 5.1076 8.0577 0.2842 -11.1052 4.8339 6.8218 -0.0609 +#> 2.5851 0.7047 6.0073 9.2696 5.6514 -10.3622 10.6630 1.7014 +#> 0.3271 -7.3651 4.2371 0.7115 -2.7260 4.1920 1.4280 -10.3795 +#> 3.9296 8.8945 -0.0389 -0.4414 -3.4227 -8.3822 15.2850 20.2510 +#> 5.6511 4.8389 2.8384 -2.2936 2.6809 3.3876 -9.1269 -3.5801 +#> 8.9159 -13.7750 14.9805 15.4946 2.3645 4.7642 -14.0470 -10.4856 +#> 8.8515 -4.1713 -1.1083 5.2987 -3.5451 1.7682 8.2087 -5.3819 +#> 4.3616 9.0161 6.3545 2.6342 -3.4428 -7.2009 6.3231 10.2266 +#> -1.8823 7.2042 4.8943 -3.7700 -0.2878 4.5386 -11.0202 4.4571 +#> -10.8001 2.1424 10.2672 8.6709 -4.4365 
3.3502 -5.4543 -4.8962 +#> 6.6005 10.0152 1.3169 2.4047 -1.9600 -12.1608 11.7453 8.2295 +#> 0.8924 8.1780 -7.9939 -4.9859 5.4768 8.9285 5.1992 -9.6600 +#> 9.9078 7.4913 11.5323 -4.9031 8.4436 12.0946 -1.5957 -8.7517 +#> 2.2561 -3.1106 -7.4184 4.5571 -2.6696 2.9912 -1.4644 -10.2755 +#> 6.9493 -20.7305 -2.5021 0.0686 7.6799 0.9227 -8.6592 -16.3700 +#> +#> Columns 17 to 24 -2.0726 -8.5104 7.9019 0.8095 1.3915 -1.9690 6.3407 -4.2864 +#> 4.3306 -6.2841 5.3894 -0.8770 -8.1014 -1.7010 -6.8155 7.6810 +#> -6.7169 -8.4859 -10.5882 3.0183 2.8342 -0.5263 -1.2124 -0.5939 +#> -1.4485 10.2153 -6.5924 0.9652 2.0956 1.7560 1.1663 -10.9764 +#> 1.4974 -3.7323 -9.4498 8.6725 13.9418 -6.7380 -3.5900 -14.4211 +#> -5.1598 -6.6805 1.9089 11.0177 4.9359 -1.2375 -18.6508 -2.8171 +#> -4.3521 4.9877 7.8086 2.4810 -13.5333 -5.0737 5.3768 0.7935 +#> 5.4761 -15.1532 -17.5176 7.3348 3.8836 4.4217 1.4999 3.2297 +#> 4.6920 -13.0070 4.5573 4.2250 -7.9661 6.4834 -1.7304 -0.2558 +#> -6.9633 10.6470 7.2614 2.0687 1.7736 -3.7101 10.6454 -6.3196 +#> 3.5066 -10.6950 -5.6799 -4.6649 -2.4139 -8.7156 0.1989 1.2605 +#> 0.2611 2.4521 4.0678 0.5756 1.4464 -7.7336 2.6385 1.9377 +#> 0.3724 12.6858 -6.6219 1.6642 4.2785 -2.6077 -4.8870 11.6064 +#> 16.4867 -1.3708 10.7355 3.2567 -2.8250 -9.6266 -13.2044 -5.2769 +#> 4.0348 -10.8766 8.0989 11.4609 -13.0163 -0.6370 1.8148 -1.7120 +#> 2.6839 7.6035 2.5099 -13.0648 3.8901 4.8382 -3.4283 2.3372 +#> -11.1093 -4.6944 -5.4822 9.0872 7.4998 -2.4886 -0.8404 1.1495 +#> -4.3792 0.9561 7.4340 3.5363 -3.4969 7.9804 -12.3729 6.5105 +#> 21.3925 2.3420 -2.7837 -7.9849 -2.1592 5.0746 0.0229 15.6099 +#> 5.2014 -7.5834 6.1664 -0.7725 -0.5959 10.4268 -6.7089 9.3841 +#> 3.9032 0.0352 -7.8161 11.2603 -4.6685 1.1181 -5.0832 3.3944 +#> 7.3372 -1.8604 -3.0359 -0.2803 -2.4817 -3.0330 4.7001 0.7510 +#> -8.9387 -7.2561 15.3523 -0.1818 -4.5693 -1.8624 8.3489 3.5271 +#> -1.2746 25.1526 3.3148 -6.3416 7.5979 -3.0577 -1.0300 -1.1902 +#> -1.9080 -0.5866 1.9487 4.7672 -0.3371 1.0384 -0.0173 2.7061 
+#> -0.4439 -6.8372 1.0193 3.8842 6.6423 3.5682 -9.6387 -1.2007 +#> -0.0374 14.5370 -9.4165 8.8781 -1.4394 -11.5466 -5.7869 -0.3195 +#> 5.4837 11.2849 1.7591 1.1738 17.6410 2.1348 -10.2499 1.9873 +#> 16.1238 -0.1729 4.4630 0.1696 -10.7953 1.3255 1.3351 2.5280 +#> -8.5880 -1.6725 7.6080 -1.2273 1.6391 2.3870 7.2627 4.0279 +#> -12.7845 -4.4068 10.2311 7.9620 -6.4136 11.8917 12.7687 -2.2985 +#> -0.3770 -10.9905 0.7378 -15.5345 9.1464 1.6215 -10.1390 9.8305 +#> -4.4512 -1.1555 16.8918 -3.8699 -9.2140 -3.4994 0.3889 -7.3029 +#> +#> Columns 25 to 32 -1.4314 0.2024 -4.2907 -3.4588 -5.4996 -3.6940 -5.0281 1.0755 +#> -11.5288 -0.3438 -5.6180 5.4355 -1.9867 8.7279 -1.3348 -3.2532 +#> -1.7205 -2.4896 -6.8395 2.5132 6.8259 7.1714 -8.4811 1.8884 +#> 12.2358 -2.2164 9.4752 -6.3795 10.7066 19.2873 25.2370 -6.6958 +#> -12.0759 -11.5659 -1.5740 3.2449 -0.4928 2.1976 7.9682 -3.1802 +#> -5.4648 -1.1428 3.6683 1.4968 11.3532 -2.0583 9.4373 -3.6948 +#> -0.8467 0.3965 -1.5380 -3.3360 13.1708 -2.0463 2.0306 9.3244 +#> -6.1372 -11.2179 -2.1520 0.2010 -9.1823 -1.6848 9.5493 2.4082 +#> -0.1230 -4.3619 -2.8306 2.8916 8.8175 7.8968 -11.3143 7.2510 +#> 3.9155 -5.1768 5.4694 3.1729 -1.6373 -1.1168 -6.9770 -2.4431 +#> -5.5661 -2.9941 5.8947 11.3340 -0.4344 10.3948 6.8087 14.5704 +#> -7.4637 5.9919 -1.1063 -2.8024 0.4877 0.9881 0.4987 7.7180 +#> -0.1479 8.8453 -7.3753 -11.8225 1.0302 -5.8837 -0.5575 -0.0465 +#> -1.1157 10.4909 9.3996 4.8179 8.5646 -2.8770 -1.1479 -7.2244 +#> -7.7664 2.7722 0.2519 -5.8544 -0.2825 8.1229 2.5394 -1.9655 +#> 3.7868 6.7755 1.1934 -2.7120 -0.8109 -2.1745 9.7496 -7.8422 +#> -6.0130 -4.9658 3.0462 -4.8569 3.0011 3.1681 -7.8576 4.7335 +#> -0.8590 -2.2963 3.9233 -11.4663 2.8552 5.0722 13.3096 -0.2567 +#> 2.4902 6.0696 -4.5408 6.5379 -5.8495 -6.7323 -6.0372 3.8359 +#> -1.7285 4.3793 8.7735 -2.9332 -6.4038 -0.0454 -4.6984 -8.4368 +#> 4.4295 -4.1835 0.2776 7.2391 8.5665 8.1895 -2.9724 1.7052 +#> 8.5844 0.4590 5.8992 -0.5390 -5.5647 -4.9448 -0.5048 -2.1230 +#> 8.4651 0.7857 
8.0964 4.4390 -7.4403 -7.3722 -2.1246 -7.9744 +#> 9.2435 3.5409 -2.5928 6.3410 -3.9584 -1.3803 -16.1972 3.2548 +#> 6.9294 -0.4257 3.1535 1.8400 5.1561 8.7191 -6.5142 -0.4285 +#> 2.4707 2.4562 -0.8055 -6.3825 -8.4170 5.5621 5.7736 -12.0265 +#> 17.0837 -1.2967 -6.9036 3.5851 8.0978 1.0316 1.1240 0.9540 +#> 0.2170 -2.0506 -1.8686 -8.5422 -8.8884 -4.6117 -5.6362 -1.9346 +#> 4.5573 -1.4111 4.1964 4.8692 -5.5536 4.0321 -9.5204 -3.1594 +#> -6.7277 -0.8789 0.3533 -11.3807 -5.0255 -7.8873 -17.7735 4.3360 +#> 4.5420 0.0373 -0.1702 -0.2021 -1.0652 9.4204 -2.9035 13.7808 +#> -7.4720 3.0406 -4.3222 -3.9177 1.7697 2.4556 -0.5615 -7.5813 +#> -1.4074 -0.0846 5.4931 7.8539 1.9790 6.8378 -1.2712 9.0633 +#> +#> Columns 33 to 40 6.9240 2.2739 7.0122 -1.2701 -2.0115 -11.0764 -0.4458 1.0459 +#> 7.3224 -17.3510 7.5953 -1.9614 0.2458 4.0317 -6.9640 1.3121 +#> -1.6173 -3.9226 -3.5357 0.0570 7.2102 5.1422 3.8731 1.4483 +#> 9.0869 -4.2868 -9.8618 -3.8325 -0.4532 17.1419 -8.5024 -10.2601 +#> -0.2259 -4.7216 -4.4411 1.1250 10.8359 7.9578 2.3752 -4.4357 +#> 5.8127 5.1264 -2.9787 -1.3527 -1.2789 -4.9458 -7.6917 -3.4897 +#> -1.2848 -5.1894 -6.8718 -5.8907 1.3262 3.1774 -5.5343 -0.2647 +#> -2.7601 -6.5327 2.7842 -1.9254 3.0175 7.3933 -0.0057 9.5916 +#> 5.9871 -2.5564 -4.1029 -9.8237 -4.6692 8.0873 -0.3651 7.6660 +#> 0.9929 -7.0597 8.7479 1.4223 0.3354 4.1855 -6.3449 -6.7046 +#> -2.0061 -11.2205 12.1521 -7.9127 -2.7581 -5.9309 0.4322 3.8732 +#> -4.0634 4.8246 -2.8899 -5.7091 1.0374 -2.4193 3.5607 -0.9288 +#> 5.1217 5.9972 -6.4678 -1.1522 -5.8698 -8.0975 0.9765 -2.1171 +#> -7.7123 -9.3345 4.5372 -3.0058 6.6523 -2.0967 -14.2951 10.2028 +#> -0.6436 -8.3312 0.9566 -1.0786 -5.6910 2.0966 -6.9138 12.0494 +#> -4.8390 -4.3222 4.5728 5.7941 -4.1121 -2.2718 -9.8567 -0.8623 +#> -3.5303 0.5670 -4.8218 4.8225 1.5968 4.4576 -4.7246 -3.1750 +#> 2.3410 -1.1211 -2.5534 -0.4316 -8.1122 13.7344 -6.8140 -3.7069 +#> -10.9319 -4.5192 -4.2456 3.3167 -10.8028 -6.1140 -8.8725 11.7455 +#> -4.2912 1.0876 5.6363 -0.4622 
#> ... [ output truncated: tensor values for slices (13,.,.) through (15,.,.), columns 1 to 48, omitted for brevity ] ...
7.2270 7.9758 4.6113 2.5120 -4.4357 -6.3322 +#> 7.7574 -13.7695 3.9471 -4.6606 5.5693 7.0922 -10.3325 4.2199 +#> -10.8634 1.3126 2.9160 2.5213 6.1139 -5.9908 -5.1855 2.8433 +#> +#> Columns 9 to 16 -2.6243 0.4328 11.1646 8.5593 -12.1996 -7.4265 -12.9429 7.7398 +#> 3.5453 -0.1579 -4.2605 -9.5878 -3.6614 5.4460 -3.9349 6.1239 +#> -3.2706 -6.6456 -13.3301 1.4546 6.8946 4.3316 8.8452 -5.5344 +#> 8.2053 -8.7390 13.9397 24.9905 4.2868 -0.4157 15.1577 1.2894 +#> 2.3995 5.4825 -5.9720 1.8767 -4.0682 -6.2251 -14.7602 -7.7949 +#> -2.3086 8.8263 15.2860 5.3245 -0.6444 -6.3166 -6.5798 -2.2219 +#> -0.9974 9.5613 14.1790 10.5027 -4.9540 -13.3136 3.9889 0.6169 +#> 4.0747 18.7485 -9.3609 -11.3180 -9.3307 2.9839 -9.8000 -4.7789 +#> -1.2814 -5.5160 5.1901 6.1370 10.5471 -2.6609 5.4535 -13.1699 +#> 8.7097 4.1678 -5.8543 1.4070 4.3947 -2.2228 -2.4664 8.3158 +#> 4.0793 5.4742 -15.4222 1.8686 12.2919 6.3672 -7.8565 7.6350 +#> 3.6448 6.1630 1.4843 -8.0472 -2.3761 -7.7439 0.7194 10.8694 +#> -0.4451 7.7842 -7.7088 -6.5198 -5.2164 -7.2539 8.5703 9.0153 +#> 10.1100 -9.1459 23.6299 -3.6850 -6.6768 -9.6936 -10.3292 -13.5748 +#> 9.9733 8.9384 -9.7610 8.3322 -7.7173 -10.8625 0.3825 4.6803 +#> 6.4227 -1.0801 12.4142 1.6283 -1.2001 -4.3134 -6.8145 -0.7343 +#> 4.0345 9.0029 6.5919 -3.8529 -7.8487 -6.0298 -1.6113 -8.0233 +#> 0.3470 8.4495 3.4920 -5.8563 -9.9557 -10.6460 7.1741 9.1540 +#> 0.2047 10.9090 -1.1599 -6.6670 -5.3418 0.5331 -15.7860 -3.4584 +#> 2.2713 7.9691 -2.9790 -11.3593 -3.0517 5.9772 -3.5811 3.4882 +#> -1.9983 5.4185 -0.5448 1.1243 3.6822 -5.9691 1.3919 -3.8577 +#> -6.0237 1.4433 -0.1638 0.7065 -6.3657 2.9962 0.7895 -7.5776 +#> -7.1163 6.1097 -2.6729 16.1532 1.7291 5.4875 0.1021 5.3662 +#> 0.2978 3.3122 -3.9261 2.0635 -0.0566 -5.4560 10.9724 9.1025 +#> 1.3748 4.8877 8.7258 1.8500 6.0718 -2.0596 -7.8329 -2.4864 +#> 2.1345 1.4062 8.9350 -0.1680 0.5001 -3.2611 -15.8572 -11.3055 +#> 10.6560 20.8887 12.0689 -4.7632 -11.5247 -18.9330 7.3607 5.1821 +#> 11.8770 0.2722 -3.6730 -15.7264 -0.4359 
-5.6856 -11.3708 0.7707 +#> -3.0294 9.8799 12.4789 0.2152 -15.0768 -6.8379 -9.5639 -8.6209 +#> 19.6157 -3.2185 -6.2283 -15.6918 4.8533 -2.8065 -9.8387 7.6355 +#> 3.2747 15.9104 18.6214 12.1918 -7.1561 -8.6773 5.4100 -6.6609 +#> 0.7128 -12.3952 4.7757 -10.6673 2.7547 13.9214 -3.6430 9.8891 +#> 6.0475 -2.3505 2.2329 1.5311 8.0533 3.1398 -1.4593 11.9712 +#> +#> Columns 17 to 24 -10.7269 -7.0341 -4.5691 -19.3047 11.3635 2.1366 -11.1086 9.7337 +#> -1.3675 -0.4500 -11.2419 -17.4118 1.6026 -4.6512 -3.6940 12.8833 +#> 1.3415 2.8172 3.8470 -3.8784 -0.9783 9.3358 -1.6777 5.7717 +#> -15.0082 -2.9175 4.9480 -1.9858 -1.1864 17.0469 8.2187 7.8884 +#> -2.2238 2.6507 1.9936 -4.5235 -12.0722 8.0846 -5.2894 9.9181 +#> 3.4597 -5.0310 -18.5341 -7.4792 -1.1131 4.3999 10.0050 1.9466 +#> -10.2704 0.7731 1.3454 -2.3996 20.8438 -5.3910 1.9644 8.1886 +#> 6.5566 1.6228 -7.1471 -9.1481 -2.6575 -5.3536 -5.5653 5.4137 +#> -1.3179 4.3835 -3.4015 -1.2802 -4.4014 -6.5830 11.9631 -9.0388 +#> -1.6018 -4.5212 7.1977 7.7312 2.8460 4.3718 -3.1728 -3.9669 +#> 5.3512 -3.0614 -7.0596 -3.8677 -1.1902 2.3101 -14.7512 8.5772 +#> -4.0062 0.7587 -3.6666 -4.0924 1.1767 -12.3443 -1.2912 0.3650 +#> -10.4419 -1.7139 2.5798 -6.1205 3.7598 7.3134 -6.3118 9.1148 +#> 4.5062 6.7279 -0.1350 3.8005 7.8222 8.7671 3.8129 -8.1235 +#> -17.0484 0.8677 5.0648 -11.1856 18.0983 -2.6071 1.6093 18.9739 +#> 4.1095 -4.1844 -1.1886 -8.2871 -4.1897 15.7950 0.3164 7.1537 +#> 6.0536 -3.2139 3.1367 -4.3572 -3.2948 19.5925 -6.7900 7.3236 +#> 9.3605 -2.2390 -7.1153 -11.1644 2.4283 -5.9793 14.6581 4.1310 +#> 11.8628 -5.5734 -1.3126 -0.1468 0.7981 4.7050 -6.4429 -5.4245 +#> 3.4164 -18.6147 -5.6152 -7.5493 -2.3179 -8.2711 -0.4826 -5.3061 +#> 2.7276 0.1413 -2.7000 -2.6496 2.5378 -9.9883 7.9165 2.5381 +#> 6.2961 9.9005 -2.7543 2.1022 2.6281 -0.8926 -2.7783 -13.4023 +#> 0.7441 -3.5912 -10.0708 7.8162 3.0236 -3.2449 1.3602 -10.5133 +#> -7.4043 -8.2482 1.9733 4.5619 -1.3130 12.2857 -3.8850 -7.3048 +#> 1.8633 0.2468 10.0071 13.2876 2.1962 6.7229 
6.5548 -1.1695 +#> 5.3615 1.4604 7.1541 3.6082 6.6734 3.7640 3.4410 -0.6691 +#> -11.0977 -0.3524 -5.1584 -6.6272 7.4835 18.1811 1.3001 18.6657 +#> 9.3068 -2.9136 17.1541 3.0779 -8.9603 21.3401 -16.4021 -6.1953 +#> -2.7839 3.9002 -1.7010 4.1217 13.8331 -5.2049 7.4136 -2.0544 +#> 4.3481 -7.7533 4.3164 -6.1902 -6.0672 8.6225 -16.4504 0.4909 +#> 1.8774 3.1294 -6.3901 2.4610 15.0536 -11.9390 11.1065 -4.4631 +#> 2.7281 -3.2621 -4.2025 0.7014 -20.1567 3.1274 -1.7161 -10.5776 +#> 1.0278 -7.0991 -9.2778 -4.2494 1.0259 -11.1924 -0.1329 -13.5445 +#> +#> Columns 25 to 32 -1.7462 -1.6269 -0.7915 -0.8197 -3.9971 -10.4806 -10.2680 -2.7534 +#> -0.2765 6.7450 -2.0712 10.6486 -13.9731 16.2608 -5.7437 3.4435 +#> 7.3143 4.7691 0.7026 2.9372 -7.7866 -4.2075 6.0117 13.2946 +#> -5.4815 11.9231 6.8967 9.7563 15.3568 -1.2720 16.1653 -1.1744 +#> 3.4551 1.6304 4.1245 0.6843 4.5899 -6.6570 -6.3757 3.0269 +#> 2.4357 -3.9644 -7.9284 -0.0059 2.5904 4.4955 7.8125 -0.6694 +#> -5.8358 -7.0826 -3.6999 -0.5470 -1.9090 1.5121 7.6959 -6.5233 +#> 6.5241 -0.8224 -16.7985 -0.5287 10.2887 -3.1625 -11.5654 0.9984 +#> -0.1712 0.7338 -4.5138 -9.9664 -2.2649 0.5123 -5.3444 -5.0459 +#> -5.5832 -4.8549 5.5569 13.3486 12.9264 -5.5433 1.9425 4.5420 +#> 1.1549 -0.1853 -6.9617 1.7310 1.1553 11.2502 -7.4437 12.6783 +#> 0.9435 -6.3769 12.0910 9.3257 -1.9542 -4.7526 -1.3705 -0.8283 +#> 5.5251 9.6266 -0.4007 -1.7914 -9.0324 1.4803 18.9263 0.3953 +#> -8.7347 -4.0501 8.7921 11.5538 -12.4976 1.0826 -19.7760 -10.8927 +#> -2.3679 4.2283 3.5410 7.1489 -13.3737 2.6299 13.3520 -5.4859 +#> 2.0640 -0.0326 2.0723 10.3406 -2.1520 3.9185 1.0878 -4.0766 +#> 14.8852 -2.2712 -14.5972 -7.5811 -3.3774 -6.0105 14.9241 -8.3811 +#> 13.5728 11.6081 -1.0403 11.6453 -8.1370 3.6349 24.3273 3.0830 +#> 3.2080 5.0707 -9.2790 -8.8123 4.9685 -1.6484 -4.7333 5.8462 +#> 2.9742 -10.7546 -12.3970 -0.1404 0.0318 -1.9171 5.8307 -11.9025 +#> 1.2260 -2.8130 0.4420 1.9131 3.8702 -5.9096 1.6282 2.5812 +#> 1.4588 5.4399 -0.8877 -2.4271 -5.0846 -3.9904 -6.7946 
1.9089 +#> -4.8377 -9.2531 5.5385 -5.5002 3.8937 7.2787 7.1057 2.6130 +#> -8.5997 -3.5724 -3.3184 3.1191 15.3997 9.4976 10.3288 10.3277 +#> 3.6108 -1.1064 -4.6679 -4.5344 -8.6885 -1.0672 8.9834 4.1420 +#> 9.4999 1.1589 -11.8681 -11.2911 -9.8568 -14.2836 -5.4645 -7.7589 +#> -4.2400 -2.4742 -1.2214 4.4757 1.6532 0.8790 14.6292 0.4674 +#> 8.4445 15.9800 4.0901 -7.3633 -3.6679 -3.6021 9.9330 12.6796 +#> -7.9000 3.8494 1.4938 0.4920 -3.1299 4.9682 -6.1952 0.9764 +#> 8.7349 -1.2985 -2.5513 -14.5349 -6.8205 -1.1748 -7.4282 -1.3516 +#> -3.6078 -7.8039 2.0198 1.2103 8.5414 -3.6797 -5.4959 -2.0035 +#> 8.1339 -1.8808 -8.5537 2.8481 5.4491 7.7438 2.7956 -0.6117 +#> -8.0023 -0.9707 6.7169 0.4876 6.8987 8.7232 -2.7651 8.5866 +#> +#> Columns 33 to 40 3.4080 10.7186 5.5730 4.6531 5.7616 3.4705 -11.4040 12.9549 +#> 7.7377 6.0998 -1.5554 4.7076 -8.1016 -9.8182 3.8232 -0.1717 +#> -5.6604 1.0921 -11.5067 -0.9772 2.9314 -7.4592 -9.5098 -3.6219 +#> -2.3808 -4.4176 -0.4573 5.1452 -4.5173 2.6218 4.3377 -9.7166 +#> -3.7069 5.9415 -9.7631 -3.8982 -8.0165 -2.5303 -1.2263 0.7284 +#> -0.3598 -6.7104 -0.3492 11.4228 11.0449 3.6593 7.6630 4.0337 +#> -5.1844 -7.9579 -0.3715 3.9422 3.8204 7.4046 -11.1925 0.5115 +#> -7.0582 -0.8884 6.2280 -2.8426 -14.4377 7.6894 4.5536 2.4210 +#> -1.4600 -10.2162 -8.5577 1.7303 4.5618 -11.8549 1.7316 -11.9648 +#> -4.1447 10.3017 13.7237 -4.0415 -4.9265 2.7495 3.8622 -6.7376 +#> -11.5153 2.8323 -6.5427 12.0739 -0.6204 4.8624 -1.5153 3.3605 +#> 13.2823 5.9974 1.2621 -4.7486 1.5167 5.9744 -0.3291 0.6503 +#> -1.0654 0.5911 -4.0941 8.9973 1.0366 6.0193 0.5471 0.4281 +#> 0.2312 -10.8132 1.8276 -0.5442 -4.3519 8.0915 4.7438 -0.1606 +#> -0.0113 2.5344 -11.3341 -2.8751 14.1932 -4.1816 3.7660 1.8073 +#> 16.3061 -0.7111 -8.0503 2.8067 6.5130 -9.8521 -4.1031 1.1970 +#> -10.5519 0.9086 -8.5823 -0.0814 8.8444 11.8856 1.2298 4.7355 +#> 10.0114 -5.0213 -6.0665 -1.6630 7.1819 -0.7356 4.0499 -2.1666 +#> 0.6757 -0.3162 -3.5780 -4.7094 -2.8122 -1.5863 -1.3931 -10.5494 +#> -2.0870 
0.9603 7.7240 -4.6320 5.4591 0.2743 14.8810 -2.1514 +#> -1.8110 1.1190 -2.2812 2.8365 -0.6039 1.2504 2.5801 2.9844 +#> 2.4130 -9.4174 -8.4426 -12.0919 7.1439 6.6067 -6.2722 2.3164 +#> -4.1834 11.1998 3.0447 2.6957 22.4190 7.6576 -5.2527 18.8177 +#> -20.2723 1.3253 10.8418 9.0811 6.4476 3.7484 16.8607 -14.8302 +#> -2.7513 6.6503 -12.5883 -1.9143 13.6749 -2.2026 -0.0576 7.5686 +#> 0.3314 -11.3378 -0.9537 -6.7157 3.6694 -7.5760 5.5724 6.0488 +#> -8.1499 -5.8161 -9.8587 19.9884 2.0863 1.5576 -4.2066 -0.4436 +#> -7.1017 1.5927 7.4132 -3.1600 -16.2266 17.7469 11.1453 -3.6420 +#> -3.5196 -6.1302 -17.1096 1.9950 12.6488 -7.2172 1.1653 8.3086 +#> 3.9420 1.1401 4.9253 8.3876 -3.8281 9.5859 8.0676 0.9702 +#> 2.5050 -6.1517 4.7386 5.9677 9.2732 2.9076 0.0195 -0.0915 +#> -10.9789 -1.3337 12.6203 8.5702 -4.3448 -3.6564 4.6815 10.8041 +#> 3.2875 -0.0805 18.5864 12.3418 1.3776 0.4639 10.9487 -5.0897 +#> +#> Columns 41 to 48 4.5416 -4.9024 8.8311 10.3463 0.2651 -8.4339 -6.6832 7.2281 +#> -10.1922 -8.5999 0.4567 1.2905 11.9904 8.3649 -0.2820 -0.5000 +#> 2.1755 -3.7282 1.3097 2.5777 7.8753 -2.0035 14.1423 6.6787 +#> 1.1655 0.5492 4.4501 0.3171 -5.4586 2.6379 0.0697 -9.4739 +#> -6.2303 -14.1027 0.0100 -3.9314 5.2509 -8.1181 2.6001 1.2095 +#> -6.8354 1.3839 10.1818 6.8880 -11.1387 -13.3290 2.5360 -2.4812 +#> 8.3362 9.4515 4.1199 -9.3917 -14.5917 1.4332 0.4153 -3.2342 +#> 0.0806 0.2949 -9.1927 -16.4414 5.4621 -5.5275 -6.3721 2.0919 +#> -1.5245 1.7350 -2.6612 9.0850 -1.4021 -0.8051 8.9440 5.5216 +#> 2.2669 1.4136 2.1153 1.3907 -1.9028 3.3937 5.5590 1.6864 +#> -9.5550 9.3133 -0.5917 -6.6588 8.5990 -9.0556 -1.1651 0.3522 +#> -0.2138 1.6576 -7.4984 -7.4907 -8.1598 2.7208 -5.3047 4.1849 +#> -8.7061 2.1911 11.3166 -7.0701 -12.3447 0.3809 -8.7935 -1.9093 +#> 5.8310 -3.7924 -14.1226 -0.9867 1.1463 -12.4631 -7.0898 -12.4898 +#> -4.3149 6.3384 -2.0539 -0.0828 -16.0012 -1.3459 -0.2730 -11.1067 +#> 0.2034 -1.7032 -7.7734 4.6387 -4.2106 -1.3526 -4.4212 -3.4719 +#> -12.2013 -1.7224 3.9268 3.3508 
-8.4421 -6.3359 -12.8927 -2.1470 +#> -21.1804 6.3041 -4.5008 -0.5402 -20.8187 0.5318 2.9913 -13.6855 +#> -3.2797 7.9700 -0.3886 -4.6745 2.1036 -4.6107 -4.7608 7.6923 +#> -15.8307 8.7412 -0.0962 7.8266 -13.6592 -0.8376 -7.8022 -13.3426 +#> 3.4127 -3.5618 -6.9771 2.4669 -2.9717 -8.7935 2.0971 -7.9943 +#> -5.0785 -3.3369 -9.3461 -5.6587 -0.6787 -3.1871 -9.3324 -3.0281 +#> 2.5578 21.7349 15.3899 10.4132 -2.5514 1.7782 12.5657 -6.7548 +#> 8.6913 11.6463 18.1946 0.7303 -4.4140 -0.7748 4.9841 -15.1897 +#> -11.2238 6.7093 4.6444 12.6038 -5.5794 -10.4248 4.1176 -1.7744 +#> 1.6525 -3.1180 -0.3574 17.7211 -0.1144 -12.3549 -4.0452 -5.5704 +#> -4.9651 13.9408 8.6249 -11.5936 -16.6941 -5.3931 -3.6881 6.0155 +#> -5.4065 -2.5217 1.4385 -0.3922 1.6123 -0.9188 -7.3587 -2.3773 +#> -8.6857 -5.0989 -6.1459 -7.0026 -8.0787 -14.5593 -10.4636 -12.8228 +#> 2.6677 -4.7651 1.0568 1.3425 2.7920 2.0309 -8.7960 8.8150 +#> 9.0281 12.5455 -2.5788 -12.3628 -11.9505 -4.1404 -0.8740 10.5614 +#> 3.1092 -4.0124 7.5946 5.5313 -0.2117 -1.3845 8.1836 4.1690 +#> 2.6316 0.1588 6.9737 10.9898 3.5931 3.1511 4.9429 -1.8327 +#> +#> (16,.,.) 
= +#> Columns 1 to 8 2.6537 10.0281 -1.3741 0.9900 -12.0867 -4.0296 -7.6636 0.1825 +#> 2.4319 10.9279 6.3170 -3.5904 -3.7944 -9.5351 3.8456 9.2553 +#> -7.8803 -3.3544 -1.3375 -3.0423 -1.6107 9.5581 -4.5955 2.3276 +#> -4.4451 -0.2926 3.6284 7.7152 -2.6448 14.0143 24.4411 -11.8132 +#> -7.9960 8.9860 -4.5813 7.6282 -6.9701 1.1402 -2.3306 -4.5071 +#> -4.4226 12.0704 11.6664 -6.9571 3.1467 -4.9787 5.0473 -10.9933 +#> 11.2864 -1.5729 1.7127 9.4951 -2.6290 8.0230 0.0378 -8.6601 +#> -9.4625 3.9488 -2.1094 9.5814 -13.2363 -5.8638 -6.0246 -2.4959 +#> -0.1048 -13.5206 6.7469 -4.3198 3.5434 -0.1837 -0.8146 6.3350 +#> -3.9342 7.8975 1.1074 10.9452 2.6244 -1.7462 6.9768 -5.0688 +#> 1.7446 -0.1351 -0.6821 2.8836 -5.9138 7.9013 4.5623 14.5249 +#> 6.4212 0.7026 -1.7461 -4.0589 2.0305 -1.0969 -1.0820 -0.8338 +#> 0.2226 10.2770 -2.7988 -6.5288 -9.7603 3.7378 -2.5321 -3.4819 +#> 3.0488 -11.7935 10.9536 -1.8877 11.9208 2.3212 -2.7537 1.0941 +#> 2.1382 5.3092 7.1833 -6.4094 1.9917 3.8928 -3.1746 0.3565 +#> 8.1730 0.7460 8.6897 3.0966 5.1147 -2.4684 -0.6720 -0.8584 +#> 9.6087 10.6647 -0.1727 1.2635 -4.7348 1.4423 -10.2314 2.2201 +#> 5.6906 -5.4915 6.5053 -7.2581 1.7664 -2.1022 2.1175 -0.0348 +#> 9.1430 1.9296 -7.5327 8.4665 -3.0810 1.2409 -4.6988 4.8817 +#> 1.9465 2.5159 1.7587 -3.4185 -3.5874 -3.4295 -1.3687 2.3190 +#> -6.6130 -9.0796 -1.8347 2.6115 -1.1207 9.5405 -3.5588 -4.4271 +#> -4.1785 2.8673 4.9747 -6.1164 12.9784 2.4537 -4.2924 3.0161 +#> 8.0232 4.7341 1.0049 -3.2322 8.3828 4.6395 -7.5344 8.6833 +#> -4.2088 2.6784 0.1928 0.3867 -0.6369 -3.5143 10.9183 5.2169 +#> 8.9835 -6.3049 -4.8523 -13.6846 18.6929 5.5845 -3.0672 5.3202 +#> -2.9213 -3.7387 5.5182 -3.6154 1.3478 -1.7320 -1.5216 -6.9537 +#> 9.2671 -3.2407 7.5714 14.1982 -3.3357 1.1475 6.5716 -10.1794 +#> -4.1577 8.2047 4.1041 12.3912 -11.5370 -5.1664 2.5799 3.8036 +#> 3.8338 -5.2952 6.7403 4.5124 13.3406 8.5143 -11.3196 7.2957 +#> 3.7297 14.7927 -2.2709 1.0681 -16.8867 -19.4837 -11.9069 5.0476 +#> 11.2957 -5.4608 0.8200 7.4871 
-1.2263 -0.4746 -6.9441 2.1174 +#> -9.0418 0.4256 5.4519 -12.3165 6.7599 -6.1433 -4.6055 1.1761 +#> -1.4819 4.0796 0.9065 -0.9019 5.4088 -7.9440 14.5377 9.0100 +#> +#> Columns 9 to 16 -6.8426 -4.7429 -2.6128 -4.6095 2.6961 -0.8184 -12.1658 2.0165 +#> 3.9022 5.0333 6.4491 -2.1842 11.7931 12.3021 3.1147 6.1278 +#> -6.9729 2.7342 4.6440 12.5455 -3.9127 5.4592 4.1639 1.4389 +#> 8.6323 6.5326 5.2528 -3.1248 -4.5754 1.5873 -5.0231 -3.1225 +#> 2.8939 0.1219 2.8283 3.6899 2.2707 1.9394 0.6548 5.5807 +#> 0.0017 -3.8882 10.1598 7.6993 -0.0444 -7.9066 -4.5745 -12.3957 +#> 0.7785 5.4766 3.5906 -7.7531 1.0911 -5.9815 -12.2547 7.7808 +#> 1.0523 -5.5980 3.0210 -0.2250 1.8311 5.0031 -2.5257 1.5206 +#> -4.9046 -6.8045 2.1573 10.7057 0.4956 7.5452 -4.8635 -4.9029 +#> 6.7992 6.8604 -8.2919 -13.6156 -2.8000 5.0487 -7.0615 -1.5223 +#> -12.9161 2.7160 8.2835 13.1736 3.0676 -4.2209 0.4369 8.7412 +#> -0.3872 1.1803 -6.1056 4.2021 1.5244 -6.7314 -1.8443 9.4874 +#> -3.6800 -10.8111 -3.7154 7.3191 -1.2210 -15.8530 4.8161 -7.8834 +#> 10.7230 0.1590 -2.8018 5.6006 -12.5767 -6.6004 -16.2261 7.2019 +#> 0.4843 -5.2169 -1.0586 4.1696 -10.5364 -8.4845 2.7401 3.2174 +#> 0.4920 -0.2655 -8.6780 -8.1274 -1.6072 2.6221 0.2542 -3.7309 +#> -8.9139 -4.1685 0.6980 -4.3601 -0.0753 -2.4713 -6.2920 -11.3251 +#> -2.9412 -2.9277 -0.9006 -1.0688 -3.6055 7.9323 -7.5708 -12.4720 +#> -9.2443 -2.1692 -6.7854 1.7093 -8.0463 6.3150 1.6444 -13.0162 +#> -6.2650 -1.9294 -7.3652 -5.3038 -1.9747 -0.5525 -2.9674 -6.6390 +#> 8.1219 11.4121 -0.0902 -3.8964 6.4237 -0.1665 -11.2574 -0.7778 +#> -2.7504 -10.8360 -1.8227 4.4445 -2.9894 -0.9454 10.1992 5.0257 +#> -1.9871 -8.9973 -1.5199 -7.0012 -4.1810 -9.0287 1.5581 -2.0986 +#> -6.9044 -1.8868 -0.8582 5.7889 -7.4550 -9.6949 -4.8682 -18.4330 +#> -1.8776 -1.4501 -3.1208 -1.5706 -8.5306 0.0210 -0.4809 -8.2916 +#> -2.3846 0.7277 2.7183 -12.9379 -4.1543 1.4221 3.2912 -10.6667 +#> -0.3494 7.2240 10.9552 -11.7152 4.5154 -6.7795 -11.9837 -3.2178 +#> -13.4474 -15.3883 -3.8726 -5.5541 
-11.9312 -2.3773 1.6819 -5.5822 +#> 0.3630 4.3099 -16.3680 -0.3777 -2.2528 -4.0222 -1.3204 0.7607 +#> -4.4686 -19.8499 -10.0706 -2.9009 12.4972 -4.3342 -9.9224 3.8666 +#> -10.1040 -6.5496 -5.6094 -16.8005 -4.2389 -6.8117 -18.5713 -1.8870 +#> 1.7998 -1.0995 1.8782 14.4492 0.5544 -1.6617 12.2205 -10.4153 +#> -6.9645 -1.6298 8.3926 5.3486 -1.4503 2.8527 -1.8521 6.0394 +#> +#> Columns 17 to 24 -3.1629 -0.0568 2.9113 14.5280 14.7589 3.1293 -12.1515 -1.2125 +#> -2.5092 -9.6162 4.5885 2.9099 8.3468 -4.3173 -0.8217 -1.2561 +#> -6.6908 -9.5136 -6.0419 10.0233 -3.2612 -4.1823 -1.2707 1.4180 +#> -12.0190 7.8139 -13.8348 0.5338 5.1126 4.1994 -21.2723 -7.0667 +#> -7.8274 -4.9705 -2.1304 4.1353 -8.3330 1.0937 0.1881 -4.3878 +#> -12.7670 5.6393 0.2593 22.5342 -10.9020 5.1398 -12.4250 -2.1257 +#> 5.8816 -4.6940 -8.5924 -8.4002 12.8787 -7.0274 -6.9082 -0.0870 +#> 7.1171 3.6475 9.1813 -9.7746 -10.4951 0.8722 1.6396 -7.0983 +#> -1.2520 -11.8828 -8.1181 8.3765 6.4687 -6.8427 9.5647 6.9806 +#> 6.4404 -2.9231 -11.3833 -3.3352 0.4295 2.9849 -0.1354 -1.9745 +#> -1.8990 -8.8391 14.1990 -4.9804 4.0018 -5.7063 6.9528 -12.5414 +#> 10.7867 -2.3682 -2.0609 -10.6862 3.9373 -3.7516 7.8622 -7.7914 +#> -3.6083 9.3223 16.5962 1.2439 -1.8558 3.5668 -11.2451 -6.5508 +#> -3.8310 5.8341 -5.7072 6.0198 7.0461 -4.2640 -0.8619 12.6715 +#> -13.2245 -0.7354 3.9689 2.9254 7.2269 -8.9177 -11.8530 4.4396 +#> -6.9530 -0.6109 2.6992 0.4603 9.3217 -0.9783 0.3191 0.5973 +#> -2.3645 -0.7434 14.0873 0.1423 2.8636 1.0353 -9.0741 0.3920 +#> -5.8839 1.9932 12.2294 0.8054 6.2002 2.5218 -4.2763 -7.6965 +#> 0.8513 7.0868 7.3695 -0.3998 7.6253 7.4863 5.9059 -3.1743 +#> 5.8895 11.1340 16.8882 2.2584 -4.4054 2.7326 -6.0199 1.4574 +#> 5.9178 1.1990 -9.1185 6.5885 -5.9489 2.0860 3.2412 2.0335 +#> 2.9457 -4.2214 2.5596 0.2399 -8.8712 3.9793 8.2923 -2.3617 +#> -1.2055 9.2519 0.7123 9.6024 -3.8783 -0.4447 2.7004 2.2629 +#> 10.3553 16.5592 -0.2477 6.4830 6.7097 1.7149 -4.0558 -19.0939 +#> -2.5081 3.0439 2.1285 8.5253 -10.4950 
0.9928 4.8042 9.3855 +#> -12.9153 4.9772 3.6428 -0.2269 0.5490 2.9480 -7.1561 11.2938 +#> 3.8155 -3.4244 -3.3108 8.2549 9.9998 -4.2891 -21.9953 -8.6520 +#> -10.2267 11.0747 8.1871 0.2964 4.0452 13.9275 -8.5610 1.6311 +#> 3.8581 -6.7254 4.5158 11.5645 3.9171 -6.2779 7.3628 -2.6553 +#> -9.4337 -2.1890 12.6193 0.1468 -3.0054 15.4094 1.0468 9.5217 +#> 2.4823 0.7290 -10.0452 -5.2096 9.9998 3.4954 -4.4458 2.5061 +#> 2.7536 11.1029 3.4073 -1.4913 -11.4818 0.4184 -4.7221 -0.0167 +#> 3.4516 1.3483 3.1828 2.0351 3.8541 7.9194 1.7277 -8.4165 +#> +#> Columns 25 to 32 4.0505 8.9516 8.9033 -7.6845 -8.8602 1.5821 -8.4482 -6.3353 +#> 2.2318 11.5904 1.8198 3.8042 2.1653 2.6246 -1.8867 -2.0695 +#> 2.4190 6.7486 1.8535 -1.3479 5.0385 9.5143 5.8353 -9.7707 +#> 0.3160 0.1693 5.0241 2.3182 -5.7190 9.2947 -3.2825 9.5995 +#> 8.3221 0.3005 0.5396 -2.9437 -11.8083 7.2209 -2.4893 -14.6031 +#> 0.1779 4.2406 -7.3055 3.8547 -6.1461 -8.7149 -10.9975 -25.8239 +#> 5.6674 -5.5394 6.8157 -2.5307 -3.3665 -4.5521 -11.3748 10.3743 +#> 10.0272 -0.1362 -2.2936 -12.8996 -9.6990 -3.0641 5.2434 -3.1504 +#> -4.6833 -3.7969 -2.9460 7.8764 6.6890 1.9937 -0.5641 -7.1264 +#> 2.1160 -10.3251 14.0864 -5.1683 0.7573 -4.8519 -4.4410 3.3829 +#> 4.6316 -0.5813 6.1136 -6.9190 10.1993 -5.3839 3.2224 -3.4258 +#> 5.1394 -6.1845 -3.6422 2.5954 -3.9365 -7.0041 1.5143 0.0143 +#> 7.2168 0.3847 -4.3440 -3.7469 1.3919 -7.6400 -0.1277 -2.9737 +#> -19.3379 3.5289 -15.4244 12.3707 -1.3818 8.6222 -18.1597 -0.9059 +#> 10.2140 6.9681 -2.4340 -15.3828 8.3304 -4.1101 -3.6007 -12.4290 +#> -10.1915 15.1152 9.9963 -4.5331 -3.1466 1.2303 -2.1024 0.4509 +#> 1.8998 5.4893 1.7368 -15.3842 0.2894 0.6480 -8.8291 -15.4097 +#> -7.6948 5.5280 -9.6765 -4.6556 15.5769 -6.7131 -10.0724 2.7616 +#> -1.0178 4.2935 -0.9750 9.4335 -6.8237 -1.9242 4.2828 6.9149 +#> 0.5209 4.0523 -7.8269 -9.1706 12.9358 -13.0383 0.8386 -5.0926 +#> -6.7296 -15.0976 -5.1138 2.4820 -1.8629 2.3847 2.7592 -3.1532 +#> -5.8144 1.5334 9.3806 -8.4229 7.0976 -1.4525 14.7540 -1.5626 
+#> -7.7576 -7.1915 6.7963 -7.5794 13.1287 -11.2520 -5.9283 -4.3028 +#> -10.0635 -2.0883 8.4976 2.8765 9.4888 -7.2230 -2.2691 6.7774 +#> -6.2538 -2.5045 -5.7848 -0.1290 11.1655 0.4228 1.2409 -6.2375 +#> 8.9935 3.3222 -6.6574 -15.2159 -4.5156 3.4840 -1.4515 -11.9367 +#> 7.8913 -3.0932 12.0142 -10.1397 -6.4581 -3.4470 -10.4847 -0.8180 +#> -8.3068 2.8102 -5.8828 8.7412 -5.1551 -0.1998 -16.8988 3.1662 +#> -6.2529 7.3636 0.1179 -10.5478 11.0667 0.3758 2.9674 -6.9911 +#> -2.4338 0.0214 -1.6522 0.4852 -12.8848 -3.1333 -9.7729 -2.2721 +#> 0.7034 -8.9684 -5.5336 -6.7640 -2.4479 -1.6852 -10.6143 3.1703 +#> -0.4357 8.9272 -9.1191 0.3087 0.8216 -0.8100 0.8083 -13.8424 +#> -6.3769 -3.2665 4.3365 15.7927 6.6798 -6.6067 -13.9550 3.5308 +#> +#> Columns 33 to 40 -6.9297 -11.0017 -7.3290 -0.9733 -4.0245 0.7207 -0.8083 -11.6688 +#> 0.9887 6.7787 9.3630 -0.1089 9.4970 1.2405 16.4210 5.2920 +#> -1.9242 1.4191 -1.4146 0.6657 -1.8452 4.3975 6.8781 4.3683 +#> -6.2923 -5.6739 2.7374 -5.7019 4.6604 8.9459 1.3710 5.9450 +#> -11.2656 -1.8411 1.3547 6.1710 0.0621 -7.3169 -2.9158 6.4003 +#> -0.6419 -4.1902 -7.3557 -7.8505 6.1350 9.9960 5.9794 -7.8682 +#> 0.3170 -0.8213 -2.7116 -1.0315 -3.5961 7.2136 -5.3460 -5.7550 +#> -9.0557 1.3421 7.3307 10.7694 0.1778 -19.3088 -4.8544 14.3715 +#> -10.4370 7.7783 -7.5868 -8.5351 2.8893 7.1334 14.0060 -15.1075 +#> 11.9832 -7.7026 -6.9726 1.5130 -6.5501 3.8620 -5.2957 -3.4377 +#> -2.8358 2.7340 -7.3989 -0.4648 -2.3398 5.1737 13.2490 11.8018 +#> 3.8140 6.2382 -2.4453 5.1233 -0.4008 -5.5641 -6.0084 -3.2443 +#> 7.0816 -1.5572 -3.5374 -2.6719 -1.3513 -0.7589 -18.0983 -1.5163 +#> -4.1255 -3.3912 7.7073 -9.9341 4.0253 -12.1809 12.0770 -8.5870 +#> -2.8708 6.5475 1.2193 -6.1879 -14.8457 -1.3914 6.6650 -12.9305 +#> -0.4384 -6.7738 0.2258 -4.2270 -1.8498 5.0849 -4.2075 0.1013 +#> -2.4174 6.0093 0.3795 5.1943 -10.8393 -5.2192 1.1280 -1.1510 +#> -0.2956 -3.8886 5.1311 -9.1582 -2.3592 2.3350 -3.5444 -2.0641 +#> 6.5872 -0.8988 -9.7604 11.0533 3.7212 -6.9216 2.8120 -2.2940 
+#> 6.3180 0.1680 5.9209 -2.4832 -12.1692 -3.6259 -7.8507 -6.5393 +#> -3.1110 12.7384 9.9380 -5.2919 6.5667 -6.4021 5.7608 -1.6890 +#> 1.0232 0.6634 0.9311 4.7265 -3.2909 0.7162 -6.8898 15.5118 +#> 9.0084 7.1687 -3.9944 -12.5334 -15.4884 0.3428 -5.3218 -13.0022 +#> 14.7850 -3.2750 -18.7341 -5.3742 -9.2112 4.6256 -11.5047 -9.0995 +#> 6.2066 9.0095 4.8608 -8.1920 -3.6004 -5.2042 13.9607 -7.7939 +#> -7.7216 -3.3453 5.6436 -1.4099 -4.4756 -4.7878 6.4228 1.2972 +#> -1.9253 -5.2992 8.9140 2.5100 3.1209 3.7009 -0.3293 2.2241 +#> 4.0245 -5.1316 -12.5800 -14.4542 -8.1623 -18.2699 0.8162 -7.9554 +#> -4.4564 6.1181 2.9378 -5.8014 -4.5335 -0.8213 0.8772 1.1595 +#> 3.6789 1.1213 -8.5050 -13.9654 -1.0107 -13.9829 0.1980 -15.7043 +#> -1.7556 1.8067 -6.1722 -10.4958 -7.0932 -15.9986 -7.6157 -13.4282 +#> 10.2085 -5.0717 -4.6299 6.3475 8.3986 0.7032 2.2475 -8.1552 +#> 12.8033 -4.9932 -17.7588 -13.1457 2.3762 9.0886 12.7045 -8.7254 +#> +#> Columns 41 to 48 -7.4805 -1.3145 -13.7431 11.4765 -0.9648 -12.0847 4.7179 -1.4272 +#> -2.2829 2.5950 -10.7235 -2.1609 -13.8357 -9.4849 4.9866 -4.8983 +#> 2.5329 2.3297 -6.3099 -0.2002 9.5547 0.6236 0.4863 10.6733 +#> 6.6941 -8.6938 -2.0557 -5.7574 1.9355 -6.3002 1.6139 3.4015 +#> -2.2127 4.6527 -3.2228 -6.1964 3.3754 -3.8984 5.7287 11.5439 +#> 0.4154 -14.2939 3.1326 -4.1760 -2.6193 -3.7922 3.6896 7.2133 +#> 3.8841 5.6008 -0.8244 7.2883 -3.8380 -6.6507 -3.1420 4.8811 +#> -6.9786 13.0368 -1.1869 6.9741 -2.7768 -2.5288 1.3483 -3.5707 +#> 6.9048 -5.1701 4.1549 5.8017 -8.5916 2.9603 -7.7201 7.7871 +#> -11.9918 3.8684 -2.6689 -6.1250 -9.2973 1.2899 3.8853 -2.2765 +#> -7.7719 11.2667 -19.5185 4.6244 -1.3942 -8.3421 -8.6290 -1.2741 +#> 0.4772 4.3330 5.6445 -7.2651 -7.9712 -2.0373 -5.2405 -1.3881 +#> -2.8972 5.4049 -5.5721 -3.8811 19.0670 3.8932 -2.1869 0.7545 +#> 15.8109 -8.8330 7.4110 1.7833 -0.2962 14.3180 2.0180 -11.7855 +#> 2.8560 6.9780 -17.5480 10.3052 -3.1295 -4.2072 1.4636 -6.0688 +#> 0.8833 -13.2746 -8.3860 -4.9834 3.1834 8.0150 1.5190 0.3426 +#> 
-4.9003 1.4852 0.1459 -1.5525 0.3936 -2.1157 2.1987 7.1499 +#> 0.3587 -2.1121 -0.5518 -1.9261 -9.1747 11.4873 6.3786 5.7065 +#> -6.8265 -2.9418 -4.8807 0.7788 2.3878 3.5582 -4.0625 -3.5805 +#> -10.6284 3.5065 -4.6024 6.8657 -11.1292 15.1727 3.2767 -8.7483 +#> -5.0826 2.9713 8.4317 -5.3287 -8.3769 -4.4589 -5.1314 8.1267 +#> 0.7844 0.2151 -0.4487 8.8005 7.6252 -0.6333 5.5800 -3.0729 +#> -11.8212 -7.3095 -0.3658 0.8800 1.5696 -7.7159 -9.3507 -8.9309 +#> 5.5200 -1.2981 3.8761 -1.1395 14.0902 13.1584 0.2560 -4.1000 +#> -2.5606 -2.6205 2.2323 -1.8492 -9.6116 -7.1594 -3.9362 -3.7241 +#> -5.2702 -10.7750 -0.7620 6.2300 0.7716 -0.5113 -4.7611 -2.4473 +#> 0.7879 -8.2632 5.1632 -4.9078 3.8705 -0.7683 -0.0107 7.0924 +#> -1.8081 0.7194 -2.0832 -16.3230 15.7687 9.7112 -12.0240 2.1642 +#> -7.5212 1.5560 -3.9287 11.9568 -9.6883 0.9000 13.2467 -7.7876 +#> -9.0240 15.0953 -5.1962 -7.3965 5.9510 4.0735 -10.1522 0.6561 +#> -4.0246 -1.4343 12.8208 5.0154 -10.6280 -4.3388 -6.2407 0.0779 +#> 0.7169 -1.2498 4.6823 -0.6321 6.2968 3.7195 3.3005 -6.5767 +#> -1.1701 -0.4908 -7.7588 -6.1339 -17.1114 -5.4716 -5.5306 -2.6368 +#> +#> (17,.,.) 
= +#> Columns 1 to 8 -2.7704 10.5603 0.7267 -7.7291 -3.7433 -1.6578 -3.3570 -1.1029 +#> 2.5729 2.4487 -2.5715 -7.8141 4.2293 -11.1039 1.5680 8.5610 +#> -13.6032 -0.1561 2.3610 -0.0955 2.1723 4.9194 -0.1423 -4.7875 +#> -18.6177 -16.6249 7.0370 0.4203 5.7234 8.8069 -4.4259 -3.4729 +#> -3.0531 -0.4844 -2.4866 -2.8341 -2.3717 0.2669 -10.3285 1.0651 +#> 6.0293 -2.8649 -5.4014 -7.0805 -7.5987 0.7204 -1.6050 -10.0098 +#> -3.7357 7.8525 3.0803 -3.4952 0.9365 -1.8273 -1.7159 -4.0662 +#> 7.0489 -2.4002 0.2068 -1.4921 1.5017 -11.9677 -13.1278 9.2036 +#> -2.5369 -4.1299 -4.7285 6.1634 -4.5615 -4.8606 1.4767 5.1493 +#> -10.4036 -10.7121 4.2236 -9.0293 6.1111 7.8108 -7.0556 11.0722 +#> 0.9993 4.4852 -8.2668 -3.4930 0.3539 -4.8302 -1.4111 9.8303 +#> 0.9946 3.3462 -5.6483 1.6089 -5.8641 1.4127 2.5571 -4.2720 +#> -0.2602 3.9641 -1.0204 -2.6129 -3.2721 11.8840 -2.5988 -5.9541 +#> 2.0741 1.7521 1.4423 7.5366 9.0240 -22.3413 1.5828 -1.8651 +#> -0.4998 3.5266 0.7295 -2.1794 1.4527 -2.7818 -8.7797 1.6922 +#> -0.0240 -3.9744 2.0120 -8.4228 -0.8473 2.9716 -0.0982 -9.6429 +#> -6.8080 13.1794 0.6546 1.0582 -5.0821 -9.3842 -4.4272 3.9259 +#> -2.1990 -5.3386 -2.8094 -4.7910 8.2978 -9.5896 7.6713 -2.5908 +#> -3.1451 1.2174 -2.7183 0.9241 -3.2326 7.2417 -0.7122 -7.7775 +#> 20.0796 -6.6055 2.9293 -1.0359 2.0235 -13.0951 9.0062 1.1787 +#> -6.9559 2.6484 -3.8156 -2.2437 1.5578 1.2313 1.7346 -4.0805 +#> 1.6198 -3.9982 5.2072 5.4066 1.4270 -3.5072 -7.0815 -2.8528 +#> -3.6409 12.0129 5.4795 0.9914 1.0420 -4.8794 12.5829 1.7580 +#> -5.3814 -1.0697 2.6472 -0.6693 9.2690 18.7628 -1.3712 -0.7372 +#> 0.2149 9.5615 -6.8986 -1.1008 -0.6883 5.1239 1.9762 2.9490 +#> 6.4925 -3.5990 9.6852 -8.9323 -1.3355 -3.7827 -0.8172 -10.7852 +#> -5.4988 2.4479 -4.5648 -4.2705 7.0228 5.1410 -20.2350 6.6242 +#> -14.4974 7.3483 5.1486 -2.0932 -1.0024 -1.4598 11.5936 1.4477 +#> 4.3321 7.3866 -2.9594 6.7808 2.5951 -7.6477 -11.4815 -1.3703 +#> -4.0412 11.8890 1.3561 -10.6676 -13.7145 -2.2311 8.0976 10.0307 +#> -15.1334 5.8670 
#> ... (output truncated: printed values of a large torch tensor; slices (18,.,.) to (20,.,.), columns 1 to 48) ...
-2.3497 5.1630 6.9212 -2.5057 -1.7708 1.4655 9.3307 +#> -1.1346 -1.0179 0.6928 -3.7315 -5.7301 5.4592 -11.9378 9.1582 +#> 1.1608 -1.4629 -0.4300 3.2595 -3.4088 12.4622 -8.1716 2.4827 +#> +#> Columns 9 to 16 12.3189 -7.4661 -4.2392 11.7499 1.0018 -4.1385 4.2201 -5.1919 +#> -2.3632 3.5253 -4.6502 -1.3243 -3.5071 -4.1476 -13.1182 -9.8020 +#> -4.7953 1.5255 -0.6499 4.0517 2.5277 -1.7646 -5.2528 5.2599 +#> -2.0369 -3.6353 -6.1835 8.7470 9.1838 -5.3791 8.8067 20.0277 +#> -1.2960 5.6209 -3.3121 -4.8769 -3.9266 -5.8039 -12.2958 4.2274 +#> 10.5936 -6.9445 -0.8961 -12.4010 -6.4007 -5.3509 12.7792 -2.2874 +#> 2.5691 -13.2197 0.6038 8.3391 -5.8939 14.4678 -0.5205 -4.0887 +#> 4.1848 11.3388 -1.9578 -9.1297 -9.4681 -5.6152 -6.0008 0.7851 +#> 4.4057 -1.0972 9.0309 1.3224 -3.7618 0.6563 -8.6499 4.4606 +#> -5.0686 -4.4629 7.5355 -1.2612 -10.0848 2.7247 7.0528 7.2881 +#> -4.9709 15.5319 -4.2474 -8.5831 6.5794 12.3218 -7.9114 11.4092 +#> -2.8564 -7.3116 -5.2396 -1.0795 -0.0653 9.1868 -0.2262 -5.8491 +#> -0.8342 -1.9837 3.4885 -13.1955 5.6293 5.9005 -0.3317 3.5845 +#> -6.5592 9.3104 -11.2125 8.8877 8.0341 -5.8824 -1.1890 6.4209 +#> -0.6094 -4.6323 1.4846 -8.3095 3.2218 11.1752 -8.5526 -3.3383 +#> -9.9859 0.8720 -2.8285 -1.3160 9.5755 1.7600 -6.3442 -4.7246 +#> -1.9063 13.7199 1.2022 -13.4989 -8.4142 -0.5966 2.7934 -10.4546 +#> -11.8290 3.8885 2.5128 -10.7260 -14.9850 10.9759 0.1504 1.9472 +#> 6.3647 -0.5885 8.9794 4.3158 -16.9057 -9.6695 -2.0466 6.2499 +#> 12.7378 9.0779 4.4035 -12.7644 -8.6863 11.0101 2.7448 2.6279 +#> 8.2543 -6.9941 4.6757 13.2053 -15.9934 -2.1725 -3.7339 2.6762 +#> -15.5257 3.3541 -1.4097 -7.8562 7.5251 -3.9706 10.9223 -5.9378 +#> 4.2843 4.0547 3.5963 -3.8833 3.8719 17.3283 17.0613 1.3608 +#> 1.8062 2.1106 3.0785 -10.7370 -4.8876 5.7470 17.1128 20.2215 +#> -1.0501 1.8343 10.6267 5.2201 -3.1459 -7.6928 2.0141 -13.0849 +#> 9.1640 -0.2273 2.5444 12.4343 -5.9568 -9.7919 0.5788 -16.2262 +#> -6.0636 -8.2957 0.5474 -7.2691 -3.9940 -7.2372 3.5615 -0.1772 +#> -0.0116 
18.5618 -0.0501 1.4772 -11.5969 -6.3888 10.5140 6.0256 +#> -6.4140 -6.3266 8.7698 4.9552 -8.6961 2.5264 -6.4764 1.3719 +#> -2.1428 13.2581 0.0243 -6.2169 2.2053 5.1598 -8.1174 -8.1981 +#> 6.9769 -3.2637 -1.7238 15.3795 -5.4236 9.5380 11.7204 3.4857 +#> 4.2541 -5.3250 8.2638 -8.9295 5.8716 -3.8351 14.4074 -9.4258 +#> 6.9027 -1.9696 -0.0950 6.5029 3.8962 6.0836 7.7747 8.1169 +#> +#> Columns 17 to 24 3.1896 7.1136 16.5850 21.3689 14.1253 10.9252 10.6069 -12.9360 +#> 3.1916 -2.3407 10.7862 1.1791 -2.3508 -0.8567 -2.7486 -0.3680 +#> -8.0882 0.5215 -6.1814 -15.5077 -8.7990 6.2791 3.8403 9.6876 +#> -17.2918 -3.3978 4.2483 -4.9023 2.4044 0.9558 9.9897 0.8995 +#> -4.9701 6.9746 3.1969 -1.7714 -6.7167 -6.3982 10.8513 7.5809 +#> 12.1795 22.3433 16.0185 7.4013 8.3001 8.4499 -1.4369 -6.1516 +#> -8.4451 1.9695 -4.3517 12.2279 -4.1713 1.0889 -4.5477 -12.6213 +#> -7.2546 1.2788 0.9460 6.0031 3.7693 -3.9883 -3.7087 -3.4668 +#> -6.9514 10.6885 7.4600 0.1471 4.4466 -0.3061 -0.6925 8.2213 +#> 11.2392 3.6268 8.8963 -4.9476 -0.7547 -13.0547 -8.4332 2.4718 +#> 6.9078 0.5906 1.6236 0.2955 -10.1980 3.3020 3.8726 -2.5545 +#> 5.3922 0.0548 -2.6088 1.4309 -0.8519 -11.7812 -5.1712 -7.9835 +#> 16.3538 -0.2866 -3.9257 4.2461 6.4649 6.9675 0.3583 -0.1692 +#> -8.3542 3.7870 -5.1359 0.5153 2.7189 7.9143 9.8094 2.3193 +#> 14.3007 -4.8221 10.5206 -3.7007 1.1044 12.1687 2.2237 -6.5298 +#> 6.7927 -0.3696 10.7242 6.0659 2.8485 18.9118 10.8026 -3.7626 +#> 10.4871 3.5258 1.9896 10.0899 2.9323 1.0943 11.1587 -5.3213 +#> 4.1095 11.5633 -3.8133 0.0403 -2.0528 20.9037 1.9324 -0.6095 +#> 0.3955 10.9574 14.2628 6.2530 -1.9365 -3.1222 1.6854 -4.4150 +#> 13.8606 4.9778 6.4612 10.9626 3.8364 14.6639 -11.1712 -4.2862 +#> -17.4411 15.5994 -10.8096 -7.8068 -2.7533 -5.6480 -10.9167 3.3260 +#> 10.4801 0.7587 1.2777 -0.4488 3.1472 9.4578 1.6560 -1.7689 +#> 9.2851 3.1067 -2.2496 5.5730 -5.7733 7.4467 1.4873 -16.7869 +#> 20.4946 1.6005 10.7375 -8.5792 11.5899 -0.3161 -6.1408 5.1686 +#> 2.6878 7.3988 4.1201 -9.0180 2.5914 
1.3089 -4.1484 0.6474 +#> -1.9797 -2.2724 6.8919 5.8167 0.9384 17.1842 -1.2179 -0.5146 +#> 1.6663 7.3121 14.5475 2.8160 2.4474 -5.1266 8.7430 -11.3458 +#> 5.1560 5.9187 -0.3446 8.4645 -2.7297 -9.8186 14.9239 3.5063 +#> -1.9313 12.5361 7.1592 1.6826 1.1412 16.7364 -5.3221 0.5639 +#> 15.9264 12.0776 -3.7066 18.7699 9.2437 -3.5412 -1.8549 1.2673 +#> -12.8781 7.2273 0.9743 14.6515 2.3926 0.9963 -7.2469 -6.9089 +#> 10.2803 -10.6271 -10.5320 -2.2252 6.0630 -1.4732 -3.5993 7.3931 +#> 14.1778 5.7967 7.9909 6.9608 8.6100 -7.6136 -8.2632 -0.3995 +#> +#> Columns 25 to 32 -4.9114 -6.5843 7.0307 -4.9146 -3.4639 2.0096 8.6154 -10.3412 +#> -4.9881 -16.5373 0.8815 -14.8672 -3.2208 0.1541 8.4281 -1.7680 +#> 1.6603 -3.2193 0.9856 0.7298 3.6267 -19.4214 7.9226 -6.4101 +#> 0.8370 -3.7783 -5.5143 7.5889 -0.3336 -3.7922 0.4118 7.4096 +#> 9.0980 0.0702 -6.6039 0.2466 3.8681 -4.9208 5.6830 3.2396 +#> -9.4309 0.0826 -21.4884 -13.7381 -2.1915 -3.1371 6.9177 1.3061 +#> 3.6304 3.2765 4.7168 4.3914 3.3643 -3.9210 -5.0385 9.1815 +#> 4.5755 -4.3560 -1.8081 -0.4241 7.7188 -4.3440 3.7476 -0.3437 +#> 2.5515 4.0793 0.5688 -14.8879 1.5542 -3.5263 -1.1997 -0.4273 +#> 1.3687 -4.1839 -7.1615 -0.2557 6.2164 2.9262 11.1157 1.1417 +#> 3.3176 -6.9254 -1.2322 -7.1140 4.5201 -15.2834 -0.9299 -4.2925 +#> -4.0570 3.7860 -1.8286 -5.3160 5.1573 9.9160 -7.1879 1.8805 +#> -10.5045 -1.4338 -2.9479 6.3257 11.2079 -2.9802 1.2451 -5.4419 +#> 8.1824 5.0542 4.5832 2.3177 -7.6769 18.4706 3.9747 1.6845 +#> 4.4435 -1.0758 11.1831 -0.3116 4.0288 -13.3231 11.6478 -3.9394 +#> 0.9642 -0.9493 -6.7294 4.8991 -14.8633 -0.9488 9.4906 4.4281 +#> -6.7962 0.2925 1.0403 8.6866 0.5212 -5.6361 0.6018 9.0195 +#> 6.7747 -2.7812 -1.8297 2.4707 -7.2375 5.6537 1.1593 5.4003 +#> -3.9716 3.7339 -5.5811 0.7016 14.3519 -5.9226 2.1348 1.1818 +#> -3.7538 -4.7981 2.3779 2.0818 4.0368 1.3383 10.4708 -10.9149 +#> -2.3645 2.2764 7.5331 3.0271 4.3130 -6.0624 3.7986 -1.1034 +#> 6.1696 -3.0635 -5.1319 4.8253 0.4628 2.3354 -4.9309 -5.5517 +#> -3.8062 
0.3722 3.1567 12.7407 -13.9317 -2.3135 -2.0125 -4.3503 +#> -11.3840 5.4055 -3.2454 -4.5566 6.4311 1.2773 12.4571 -1.6851 +#> -3.2685 7.4117 20.0026 1.3566 -8.3665 -6.4926 -3.3746 0.9071 +#> 4.8057 2.7966 7.6597 14.1099 -1.2854 -0.8436 0.8242 -3.6776 +#> -11.2720 1.0225 -14.1237 4.8509 1.6023 -5.1638 4.0037 15.5045 +#> 3.3805 14.0331 -2.4699 1.1550 9.8972 6.2443 -3.5412 3.7678 +#> 11.9026 2.1099 17.6332 -3.5072 -6.8630 -0.3439 9.0685 -6.9890 +#> -4.7753 -5.2764 4.8345 -8.9518 8.0896 8.2099 -5.8078 -4.4894 +#> 4.6541 9.9653 13.8764 5.6798 0.0742 6.0066 -5.7651 4.0624 +#> -12.9363 -4.7503 -6.8934 -10.8562 -9.2596 6.5418 2.2233 -3.3519 +#> -1.8764 -4.3724 -5.7903 -28.4369 -0.7876 9.9454 -1.7579 -5.5863 +#> +#> Columns 33 to 40 -7.9782 -0.1174 9.2081 -0.5138 6.2838 3.0893 -4.8058 -18.1583 +#> -4.4992 0.1592 -4.9528 -6.1422 -2.1497 11.9840 -6.4663 -16.9782 +#> 14.0196 -7.9512 4.7963 -6.6332 -5.8331 3.4767 2.0578 12.3614 +#> -3.7671 -4.8483 3.5894 9.3979 -6.9846 -5.1328 16.0085 -1.0437 +#> 8.4310 -3.9356 -4.7848 -3.9968 -1.5736 -8.0616 3.9807 -3.8722 +#> 7.6377 6.1733 -11.0966 -2.6771 -0.5499 -6.9654 1.3978 -3.7241 +#> 1.2850 6.9083 3.2346 -6.3490 -5.6001 3.6724 -7.4045 -5.5348 +#> -2.1955 -0.1386 -3.9806 -3.2931 -5.8467 -5.0261 -4.8924 -4.6912 +#> 9.8305 12.6788 -7.4050 -5.2257 -9.9318 -1.4535 0.6971 3.5317 +#> 5.0632 -4.2288 2.2580 -1.8263 11.1705 8.9084 -5.8374 -6.0012 +#> 8.1201 -5.6505 0.6060 -6.7071 -0.3699 -6.9170 -7.7779 6.6524 +#> -4.3955 8.7702 -3.0082 6.5812 -2.8098 0.0620 -10.2726 0.1328 +#> -0.2795 -6.3769 -0.1139 14.9688 -2.7549 -1.6537 -5.3231 3.2797 +#> -5.0211 -0.9620 3.7316 -4.8731 -1.6097 -6.1001 9.7407 0.7674 +#> -2.1829 4.2108 -0.9183 4.6779 -8.4763 -4.7964 -2.1435 1.3281 +#> -5.8912 -4.4834 -3.0533 2.5049 -3.2299 -9.6439 -1.0027 -6.7984 +#> -9.0744 -9.8956 9.6744 -9.2164 -2.7103 1.2476 7.4178 -0.1035 +#> -5.2211 -5.5057 -3.2928 11.1094 -12.1035 4.6906 3.1275 4.5405 +#> 3.7232 -2.3739 -8.5902 0.9798 -7.2881 -9.8486 -16.0251 5.2795 +#> -9.9477 8.4432 
-1.7972 5.5190 -6.1098 -1.9647 -7.1623 -4.2358 +#> 8.4954 12.9459 -10.8776 -2.0965 -1.3022 6.1228 -4.4629 14.5403 +#> -0.2208 -9.9050 4.5437 -6.8559 -9.4238 -9.3158 -0.3664 -1.7320 +#> 7.6426 -1.2814 7.7386 11.7444 1.8633 2.9149 6.0874 0.1475 +#> 8.4882 -16.9385 8.4649 11.7110 9.7894 4.5368 -6.7920 7.8080 +#> 2.8875 1.8071 2.7394 -8.7541 5.8108 0.4024 7.5638 15.7222 +#> -6.2697 8.1250 4.2751 1.8647 -5.8885 1.8181 5.5904 -1.6848 +#> -4.3160 3.0767 -5.3289 -8.2283 -2.4016 -2.6025 4.7989 -1.3792 +#> 0.9026 -8.5282 7.1421 8.1328 1.1274 0.1584 5.5324 -8.1130 +#> 2.6120 -2.0762 -6.0001 -9.1572 -8.5995 -4.7389 -6.9053 10.6624 +#> -0.1218 1.0712 0.8658 -5.4335 12.2737 6.7308 -10.1385 -27.2439 +#> 2.2173 4.4802 9.8104 -1.4112 -12.2861 10.1242 -5.8051 -2.2389 +#> 2.0607 -7.1254 3.2760 10.5545 4.6498 -2.4794 7.7428 0.6788 +#> 0.9462 1.8375 0.8980 3.4936 8.3570 7.2085 -0.4131 -14.5003 +#> +#> Columns 41 to 48 2.9347 -6.5413 -2.7144 7.3507 10.2499 19.5012 11.9704 4.0226 +#> 7.6969 -0.3759 -0.9515 14.8465 -0.4770 11.8782 5.7558 -0.1092 +#> 0.5673 10.7646 -7.6328 -8.9557 -3.4129 4.7266 -8.4608 -7.6704 +#> 12.3682 7.7268 -4.8314 -16.3501 -8.0761 13.1647 14.0554 -3.6347 +#> 2.5149 8.1191 7.5053 -0.6208 -6.2784 5.1073 10.8315 -0.1904 +#> -7.1793 8.9993 -1.1687 5.0640 10.9705 17.1949 10.9811 15.4482 +#> 11.3650 -10.3334 -1.9698 -1.5696 8.0683 0.7480 -4.9720 -0.0693 +#> 12.6532 5.2129 1.0802 4.8707 -3.9001 -10.7755 -1.1094 5.5256 +#> -14.9249 4.7129 12.2226 -3.0380 -5.4822 3.5384 4.7636 -6.5954 +#> -2.6631 -4.3724 -6.3074 2.9076 8.8519 -2.7298 14.7076 0.3063 +#> 8.0633 -0.6514 7.6628 2.0957 8.4992 -3.0437 -8.3949 3.6666 +#> -4.4896 3.1980 7.0277 4.3710 -2.4215 -6.1094 -0.3414 0.4719 +#> 0.2284 4.9868 -11.0636 3.0958 9.8737 9.6359 -6.2769 -3.0681 +#> 6.8440 -10.4315 -2.9086 -9.5762 -5.9412 16.2668 4.3936 -12.5563 +#> -2.7370 14.5742 -10.9801 4.9551 7.1912 6.3545 7.9185 -2.8689 +#> -0.6319 5.5352 -1.6038 -6.4819 16.2483 5.5396 7.8797 5.5145 +#> 0.2896 1.5684 4.5287 -11.5754 8.5899 
3.2760 10.2598 -2.6193 +#> 3.8999 3.8732 -5.7280 -6.9490 2.7778 4.4373 2.9833 5.8405 +#> -3.0089 1.5390 0.4579 12.6652 14.0262 4.2430 -16.2028 6.4233 +#> -9.0877 3.0460 0.8589 5.1432 6.2836 -4.5454 6.0026 -2.2429 +#> 3.9197 -0.0397 0.7961 -1.7077 -6.5249 -14.8881 -7.8815 0.5807 +#> -0.6347 14.0922 3.1064 -5.3681 0.8794 -1.9129 -5.5581 0.0186 +#> -17.1356 -7.0935 -0.7364 4.1804 11.8833 -3.0501 -9.5860 -0.3069 +#> -15.5952 -0.5495 -18.1164 5.3012 11.3696 -3.1717 2.4292 13.4601 +#> -9.2839 -1.0520 5.2714 -3.2606 -0.7096 -7.9491 -6.9085 2.7991 +#> 6.4440 2.0277 -9.0858 -7.5276 14.1867 -2.4964 5.7862 -4.2194 +#> 6.3172 0.8966 -1.3876 -12.1631 14.6530 12.8441 6.9321 6.2326 +#> 0.7124 -12.4130 -8.9603 -6.4906 19.0114 13.5405 -3.5782 -0.8632 +#> -1.8861 1.0826 -0.0976 1.6771 -4.5812 5.2838 6.5466 4.5431 +#> 7.3507 -2.6981 5.3992 2.5461 17.2578 5.4251 -1.5992 -7.3955 +#> 7.2382 -4.5121 -0.9355 -7.8539 0.3607 1.1957 -6.9880 0.4112 +#> -10.8835 -1.8415 -7.2738 5.6169 -6.4167 2.0098 4.2866 -6.7110 +#> -9.0879 -6.7933 -0.8936 10.1913 3.3567 10.5202 13.4314 6.9530 +#> [ CPUFloatType{20,33,48} ]
    Site built with pkgdown 1.6.1.

    + + + + + + + + diff --git a/static/docs/reference/torch_conv2d.html b/static/docs/reference/torch_conv2d.html new file mode 100644 index 0000000000000000000000000000000000000000..c393fa07247b1a77ac6560301a27bdbb2912837d --- /dev/null +++ b/static/docs/reference/torch_conv2d.html @@ -0,0 +1,342 @@ + + + + + + + + +Conv2d — torch_conv2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv2d

    +
    + +
torch_conv2d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  dilation = 1L,
  groups = 1L
)
    + +

    Arguments

    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

    weight

    filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kH , kW)\)

    bias

    optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    + +

    conv2d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

    + + + + +

Applies a 2D convolution over an input image composed of several input planes.

    +

    See nn_conv2d() for details and output shape.

    + +

    Examples

    +
if (torch_is_installed()) {

# With square kernels and equal stride
filters = torch_randn(c(8,4,3,3))
inputs = torch_randn(c(1,4,5,5))
nnf_conv2d(inputs, filters, padding=1)
}
    #> torch_tensor +#> (1,1,.,.) = +#> -3.5486 -7.1217 -3.2540 -5.9799 1.1629 +#> -4.3378 2.2990 -16.4566 9.0228 -5.5777 +#> -2.6722 -9.2867 4.0511 -4.4372 10.4134 +#> -4.7778 -0.7326 -1.6019 12.0401 -3.1893 +#> 0.2123 -0.5914 1.1211 3.0260 6.8859 +#> +#> (1,2,.,.) = +#> -3.5645 -0.6611 1.0267 0.0105 -2.2313 +#> -4.8099 10.2621 0.1669 0.9807 -2.2823 +#> 0.7857 0.4305 0.4022 7.5398 0.2195 +#> -1.5292 7.5032 10.2237 -4.5421 -3.0606 +#> 4.4675 1.8375 9.2117 -1.7632 -6.9053 +#> +#> (1,3,.,.) = +#> -4.0560 -3.3721 -3.9417 1.2809 1.6676 +#> 3.7261 1.6556 -7.7465 -0.6448 3.6757 +#> -1.8130 -5.0859 -3.2322 4.3222 5.7618 +#> -2.3173 -7.4043 2.5139 15.0216 0.5374 +#> -1.4209 -1.0366 0.8718 4.0616 3.1472 +#> +#> (1,4,.,.) = +#> -9.5668 3.2701 6.5622 9.6684 2.5335 +#> -4.4349 3.4779 -1.2737 2.9751 1.7985 +#> 2.6595 7.2363 4.4882 0.3747 1.3252 +#> 5.3966 -4.0692 -2.2576 9.7051 -0.5021 +#> -0.2109 4.6618 0.3269 -6.7229 -2.3171 +#> +#> (1,5,.,.) = +#> -6.8665 -0.7471 3.0223 5.9422 0.0773 +#> 0.1951 6.6746 4.5183 0.2309 -3.8129 +#> 4.9832 2.5449 -5.4966 3.8797 3.9093 +#> -1.2040 -1.1130 7.1904 3.6236 3.7172 +#> -1.3379 -5.7016 6.7872 7.4608 4.0352 +#> +#> (1,6,.,.) = +#> -3.9338 0.3756 -3.0094 -0.7198 3.5040 +#> 0.3583 -1.8715 -4.4996 11.1578 11.3811 +#> -7.6832 -2.1231 13.6712 7.9567 4.1675 +#> 7.7207 2.5612 9.3426 5.4903 -9.0039 +#> -1.2806 1.9558 6.0637 -3.7703 2.7139 +#> +#> (1,7,.,.) = +#> -3.5015 -2.8558 -3.0872 2.0101 8.1843 +#> -2.7114 -4.0357 0.2954 1.3045 2.9680 +#> -0.0320 0.3198 -2.8986 -0.1796 -4.9480 +#> 6.8524 -9.7840 11.4722 0.6094 -8.8633 +#> -2.3049 -2.9029 -4.5625 -5.2812 -1.8420 +#> +#> (1,8,.,.) = +#> 4.1203 -6.8209 -7.9792 0.5355 -6.1896 +#> 8.1814 -14.2806 2.7375 -18.4326 10.9079 +#> -0.8891 -8.0756 0.7869 10.5435 0.2842 +#> -11.1754 5.4648 9.2422 8.0793 -3.8445 +#> 3.1076 1.0749 -4.5827 4.5922 -1.6976 +#> [ CPUFloatType{1,8,5,5} ]
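The 5x5 spatial size of the result above follows the standard convolution output-size formula documented in nn_conv2d(). As a quick sketch (the helper function below is illustrative, not a torch API), it can be checked in plain R:

```r
# Standard conv output size: floor((i + 2*padding - dilation*(k - 1) - 1) / stride) + 1
# (helper name is ours, for illustration only)
conv_out_size <- function(i, k, stride = 1, padding = 0, dilation = 1) {
  floor((i + 2 * padding - dilation * (k - 1) - 1) / stride) + 1
}
conv_out_size(5, 3, padding = 1)  # 5: padding = 1 preserves the 5x5 input size, as above
```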
    + + + + + + + + diff --git a/static/docs/reference/torch_conv3d.html b/static/docs/reference/torch_conv3d.html new file mode 100644 index 0000000000000000000000000000000000000000..88e365a37ddaa5eafc57369c6027d965af3f6b0b --- /dev/null +++ b/static/docs/reference/torch_conv3d.html @@ -0,0 +1,285 @@ + + + + + + + + +Conv3d — torch_conv3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv3d

    +
    + +
torch_conv3d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  dilation = 1L,
  groups = 1L
)
    + +

    Arguments

    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

    weight

    filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kT , kH , kW)\)

    bias

    optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

    padding

    implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    + +

    conv3d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

    + + + + +

Applies a 3D convolution over an input image composed of several input planes.

    +

    See nn_conv3d() for details and output shape.

    + +

    Examples

    +
if (torch_is_installed()) {

# filters = torch_randn(c(33, 16, 3, 3, 3))
# inputs = torch_randn(c(20, 16, 50, 10, 20))
# nnf_conv3d(inputs, filters)
}
    #> NULL
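The shipped example is commented out, presumably because its output would be very large. A smaller variant (the tensor sizes below are our own, the call is the same) runs quickly and shows the shape arithmetic:

```r
filters <- torch_randn(c(3, 2, 2, 2, 2))  # 3 out channels, 2 in channels, 2x2x2 kernel
inputs  <- torch_randn(c(1, 2, 4, 4, 4))  # minibatch of 1
out <- nnf_conv3d(inputs, filters)
out$shape  # each spatial dim shrinks from 4 to 4 - 2 + 1 = 3, giving (1, 3, 3, 3, 3)
```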
    + + + + + + + + diff --git a/static/docs/reference/torch_conv_tbc.html b/static/docs/reference/torch_conv_tbc.html new file mode 100644 index 0000000000000000000000000000000000000000..30016df1d48ec3d90a0913ae9667711ceb3a9e99 --- /dev/null +++ b/static/docs/reference/torch_conv_tbc.html @@ -0,0 +1,256 @@ + + + + + + + + +Conv_tbc — torch_conv_tbc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_tbc

    +
    + +
    torch_conv_tbc(self, weight, bias, pad = 0L)
    + +

    Arguments

    self

input tensor of shape \((\mbox{sequence length} \times \mbox{batch} \times \mbox{in\_channels})\)

    weight

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} \times \mbox{out\_channels}\))

    bias

bias of shape (\(\mbox{out\_channels}\))

    pad

number of timesteps to pad. Default: 0

    + +

conv_tbc(self, weight, bias, pad=0) -> Tensor

    + + + + +

Applies a 1-dimensional sequence convolution over an input sequence. Input and output dimensions are (Time, Batch, Channels) - hence TBC.
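This page ships without an example. A minimal sketch (the tensor sizes below are our own) illustrating the TBC layout:

```r
x <- torch_randn(c(10, 2, 4))  # (time, batch, in_channels)
w <- torch_randn(c(3, 4, 6))   # (kernel_width, in_channels, out_channels)
b <- torch_zeros(6)
out <- torch_conv_tbc(x, w, b)
out$shape  # with pad = 0, time shrinks to 10 - 3 + 1 = 8, giving (8, 2, 6)
```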

    + + + + + + + + diff --git a/static/docs/reference/torch_conv_transpose1d.html b/static/docs/reference/torch_conv_transpose1d.html new file mode 100644 index 0000000000000000000000000000000000000000..5c7b195bc99384f0291add3b04ce07b7b14a8980 --- /dev/null +++ b/static/docs/reference/torch_conv_transpose1d.html @@ -0,0 +1,5342 @@ + + + + + + + + +Conv_transpose1d — torch_conv_transpose1d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_transpose1d

    +
    + +
torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)
    + +

    Arguments

    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

    weight

    filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

    padding

    dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

    output_padding

    additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0

    groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

    + +

    conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

    + + + + +

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

    +

    See nn_conv_transpose1d() for details and output shape.
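For reference, the output length of a transposed 1D convolution is (L_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1. A quick sketch (the helper name is ours, not a torch API) for the sizes used in the Examples section, where a length-50 input meets a width-5 kernel:

```r
# Transposed-conv output length (helper name is illustrative)
conv_transpose_out_size <- function(i, k, stride = 1, padding = 0,
                                    output_padding = 0, dilation = 1) {
  (i - 1) * stride - 2 * padding + dilation * (k - 1) + output_padding + 1
}
conv_transpose_out_size(50, 5)  # 54: the output in the example has 54 columns
```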

    + +

    Examples

    +
if (torch_is_installed()) {

inputs = torch_randn(c(20, 16, 50))
weights = torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
    #> torch_tensor +#> (1,.,.) = +#> Columns 1 to 8 -0.6004 4.0901 4.1162 0.6192 6.3756 12.6002 -2.0543 -8.9684 +#> 2.5562 5.4577 -6.9559 1.3808 0.9724 -2.1635 0.0468 -9.2402 +#> 2.1054 -3.9812 4.3883 -11.3571 -7.2336 -7.9630 -7.3477 -3.1796 +#> 3.4706 -3.2964 -6.8210 -0.9201 -4.3900 -9.7930 -4.8746 6.7569 +#> -4.4986 -1.9795 0.7281 3.7279 2.8359 -2.9289 4.9091 0.0196 +#> -0.1566 -1.7666 6.7628 21.8539 13.1175 3.0355 11.8317 -10.9293 +#> 1.9233 -4.0127 5.4726 -2.8186 -15.2970 -14.9055 9.0864 9.7691 +#> -6.2536 -1.2273 1.0028 2.6480 -3.2984 0.7050 0.2384 -0.4342 +#> -0.7137 -1.2991 -1.9438 -0.9533 -0.0313 6.3989 -8.8965 1.5028 +#> 2.8515 1.2079 7.5720 11.2757 4.0208 4.0302 -18.4834 2.6042 +#> -1.2608 -1.9830 0.4157 5.8398 -8.0424 2.8853 1.9586 -11.1629 +#> 0.9166 3.7020 6.3919 5.6656 8.5167 17.4386 9.7697 11.3048 +#> 3.7391 5.0856 3.8721 1.7100 -1.3962 -9.1984 3.3265 -3.9582 +#> -1.6088 0.2460 0.5347 -2.3824 -9.6884 -0.5348 -6.0122 -0.1030 +#> -0.6148 -1.2322 1.0224 6.9799 -1.8193 11.8577 -19.9366 2.6108 +#> 1.4129 6.4468 -1.6165 -3.0123 -14.8057 -1.4289 -9.7638 -1.8674 +#> -2.9941 -9.5087 2.1096 5.6394 -6.3901 -4.2324 -3.9090 -13.8909 +#> -0.4531 2.0755 -1.2331 -5.6545 8.4489 -3.9393 -8.1968 3.5906 +#> -0.5256 3.3574 -1.5862 7.9468 9.7259 0.9548 15.2170 12.6929 +#> -1.0922 2.8931 -0.3569 1.4262 -5.5169 -1.0022 13.1522 6.1245 +#> 4.8329 0.7218 -0.3335 -5.6755 14.5178 -8.6349 4.7855 4.8308 +#> 1.7461 -3.7079 -1.3584 4.6693 -4.8814 9.9969 0.9081 5.7656 +#> -0.2636 -0.1443 -7.4224 -3.2104 -8.8714 -8.0212 -6.6852 0.9089 +#> 5.7567 -0.5389 7.3149 -8.6823 -13.8152 18.6386 15.1764 7.8335 +#> 0.3944 4.6797 -2.1249 -4.5136 -3.7679 4.9865 -0.3764 0.3817 +#> -1.5968 2.2421 -3.5420 -13.0538 7.0071 4.8528 1.0440 4.9850 +#> 1.6772 0.7801 6.6048 -10.8922 -14.4722 11.5427 -2.8412 7.4468 +#> -5.5516 -3.5058 2.0037 -2.6942 0.7408 0.6017 -1.2814 -1.5558 +#> 3.5480 -1.8899 -5.4336 -3.3593 -4.9892 -6.0104 -5.0009 -13.4594 +#> -8.6330 -0.3369 9.3590 11.0104 -5.5114 8.3425 -6.3278 
-5.8439 +#> 2.4533 -2.4106 -7.8900 11.0399 -10.0707 3.2837 13.9738 1.2788 +#> -2.3656 5.0306 2.3163 -9.2069 8.0224 -4.3115 8.8182 8.4418 +#> 7.1396 -1.4040 2.8544 16.0946 -2.0559 -9.8389 10.0793 -3.5420 +#> +#> Columns 9 to 16 19.6296 -1.0571 2.2479 6.0396 17.4604 5.4292 -5.2999 -3.4805 +#> 4.4689 6.2760 -6.7538 10.4388 -5.3242 -4.4771 19.5997 -9.2312 +#> 7.3714 2.9589 3.8303 -3.8989 -1.2139 -7.2902 -12.3022 5.5554 +#> 1.1485 -3.1741 -14.2423 -4.7028 -7.3247 3.6348 0.5571 11.8129 +#> -3.9914 -9.8326 -10.1469 11.5117 13.0456 -7.4854 8.4747 -4.2242 +#> -8.1024 11.8125 9.0316 1.8900 19.6748 6.5998 -6.3706 1.9563 +#> -4.3288 -4.2767 6.9220 -6.2718 -14.5755 6.9661 11.9982 -11.3933 +#> 3.6337 -0.4349 -2.0598 -2.9402 -2.3109 -3.6216 -9.9334 -6.8303 +#> 3.0969 8.0662 -1.9271 0.9911 11.9432 -13.8506 -0.3506 -8.2148 +#> -0.3625 -10.1219 -3.2581 12.9066 -2.7811 -1.5516 1.4874 0.0285 +#> 4.2390 11.4472 14.2799 -4.3681 -1.2805 -7.7423 -7.4996 -9.5044 +#> 5.9248 17.1279 8.5384 12.4752 7.6657 4.4720 3.8107 18.1527 +#> -6.5947 -2.0549 6.2950 1.4028 -10.9088 -7.4368 15.4439 -4.2578 +#> 8.6277 2.5786 -2.0515 -2.8812 -0.1932 2.6490 5.3511 2.3057 +#> -5.8411 -6.6885 1.8910 -0.9538 5.1784 10.1539 -13.0269 13.5722 +#> -11.6944 4.1298 -3.8437 -2.5096 -6.4828 -6.6850 7.6715 2.7347 +#> -8.0584 8.5316 2.6845 -5.3957 2.3331 -4.4324 3.8076 -0.7175 +#> -18.2143 11.8918 10.6286 11.9950 -1.4405 -5.5318 6.0004 2.2942 +#> 8.1489 -2.0809 8.1918 16.5408 13.5879 -19.5593 -3.7664 -10.7886 +#> 11.0072 -6.6765 -4.8078 11.7145 -4.5950 12.0940 4.1818 0.9954 +#> 3.1661 -1.3012 6.0041 -4.4507 7.4049 -0.6932 13.2061 3.2743 +#> 9.5199 7.2679 -3.3639 -11.7749 8.3600 -3.3641 -5.7926 3.3618 +#> -6.4810 -12.4436 -1.3938 3.0750 -16.1594 1.9717 -5.5333 4.2702 +#> 11.1419 0.0955 19.3511 -4.1583 -0.4624 3.2259 1.0754 -2.9664 +#> 8.9533 -0.9478 -5.3318 8.1153 6.2505 19.1452 1.0230 8.2152 +#> -2.8706 -8.6784 -12.3010 -7.2931 -2.0150 5.7070 -7.1829 -2.9493 +#> 5.2020 -1.9136 1.4974 1.2598 -0.3027 -1.9155 -8.9421 2.2337 
+#> -7.7501 -9.0444 -9.3237 -10.6197 3.8131 -5.0688 -24.0951 2.3849 +#> -7.7414 4.0537 -19.9329 0.5265 10.7112 -3.8184 -6.8621 -5.1801 +#> -4.2732 10.6180 -12.4161 6.6465 -0.6175 0.5263 -16.3148 11.5405 +#> -6.2823 -6.9256 3.7962 -9.9520 -9.6631 15.2088 6.3811 -9.0371 +#> -19.0446 0.8602 14.5508 -8.9178 -2.6391 -13.0266 -3.0735 13.5590 +#> -4.9628 7.5579 22.1511 -1.2725 -2.9494 -4.4844 2.3112 17.1739 +#> +#> Columns 17 to 24 -4.1825 21.5207 11.4520 -4.5708 -1.5438 15.3972 3.2125 17.3042 +#> -0.4366 5.4250 5.0380 4.0624 -3.6303 -3.8972 -14.9252 -4.3104 +#> -0.7165 -2.3864 3.1508 8.7325 -1.1502 -21.1438 -1.3657 -13.6428 +#> -10.3651 5.3801 -12.4743 -2.2434 -5.3823 -7.6754 -6.2727 5.5183 +#> -0.4864 11.9181 -14.7448 -1.3623 -10.4562 2.1550 1.3195 12.3553 +#> -1.9183 -2.3853 14.9228 18.9550 -17.5423 7.9292 17.5551 -2.3659 +#> 8.5572 -10.7424 1.5513 -10.6038 8.8510 -9.7853 -6.9093 -1.2731 +#> -3.3836 3.1122 15.9079 -7.9366 4.0856 -15.1774 15.0615 -10.2502 +#> 2.9098 7.0142 3.7753 -14.3875 -5.1446 -10.9741 -6.6907 -0.5269 +#> 11.1435 -2.2287 -18.3215 5.5178 0.0646 1.9037 8.9206 5.6702 +#> 15.9882 5.7715 -3.0261 12.0738 -12.1655 9.7216 10.7748 -6.2806 +#> 8.5648 10.9936 13.3946 6.0727 4.1547 15.0799 19.8217 5.5906 +#> 9.4260 -1.6321 -10.8740 1.2734 6.4871 -3.9937 -5.1651 -6.5331 +#> -21.2690 2.0586 1.4541 17.1033 -13.2072 0.4539 -0.3309 -7.4712 +#> 10.4926 4.8057 -0.4733 -2.7449 3.5052 -10.6738 22.6786 -1.7802 +#> 6.1693 -14.3340 -7.6485 -10.2864 -0.2380 6.3123 2.6203 3.5415 +#> -0.6640 -8.3534 12.0272 10.8964 2.9217 6.1839 2.3538 9.3204 +#> -5.1073 13.8817 7.7285 11.8317 4.1813 -16.0389 19.8230 5.0212 +#> 0.3832 -2.1313 -3.2197 6.2428 -7.2094 -1.7911 0.1555 0.9442 +#> -1.6238 9.3784 -3.2428 1.1754 8.0022 -9.8045 0.1608 -5.7132 +#> -2.2772 4.6684 3.3027 1.0269 7.1341 8.5374 3.5203 12.7100 +#> 10.5552 -6.7427 9.7302 -4.9299 -4.2069 -10.8554 -1.9141 0.6340 +#> 3.6097 -13.6879 -19.2162 11.0106 -3.8462 1.1233 -4.3375 2.9411 +#> -2.6338 -16.1336 1.1961 -16.3511 -2.5227 
+#> ... [ lengthy printed tensor values elided; output truncated for readability ] ...
-0.7584 2.2297 -5.7634 -4.3136 2.0147 20.0442 -3.9719 +#> 5.5081 -8.8566 2.8731 -13.9006 20.8200 -5.5648 7.5882 5.7305 +#> -3.1711 -12.6775 -3.9172 -8.7452 14.4456 -12.8824 -1.8546 3.4528 +#> 2.4446 -3.4813 6.2757 0.4589 -4.7091 2.1109 -9.6654 -2.8753 +#> -6.4858 3.2467 10.2775 -8.8833 -7.7731 -9.0477 -1.6386 4.3995 +#> 1.5574 -6.3637 4.2262 -9.4756 -3.0853 -1.0484 17.7309 3.0610 +#> -11.3941 4.9077 1.1466 -2.9488 7.8308 -3.5794 -14.2437 10.8045 +#> 4.6599 5.4957 -0.6567 -3.0111 -2.0842 -7.0308 -1.1045 -7.3985 +#> 2.8322 -12.0449 5.3779 -0.5008 5.0249 24.7133 4.6630 -1.7614 +#> -1.8948 -7.6056 -7.6986 10.9410 12.1283 -5.6003 17.2878 16.8752 +#> -5.6440 -6.6151 8.7042 8.5497 -7.9938 -2.3105 1.3292 0.2753 +#> 13.1700 -2.1738 -1.6815 -2.9282 -2.1487 -7.5467 2.2566 -3.1159 +#> -6.7321 21.2448 -4.8457 3.3978 -0.0036 4.4460 1.1738 -13.2747 +#> -9.0073 17.5459 -4.0688 -5.1694 -2.4779 -4.3687 -1.9631 4.2734 +#> 0.6338 17.4965 0.2533 -18.1077 3.7392 13.4510 -15.2351 -10.5991 +#> 13.9620 18.9477 -1.3132 -15.6907 0.9643 0.4753 2.6462 1.9783 +#> 3.1352 -6.3590 -10.6552 -13.6167 -6.0599 1.4963 -10.2603 11.7957 +#> 15.1292 -5.5503 1.0119 -0.6963 3.3856 -11.4307 13.1196 11.5283 +#> -12.3759 -6.4238 12.8371 2.5045 -1.7021 -2.0936 4.3229 2.1624 +#> -12.5627 13.9249 -3.5476 -14.4480 -1.9823 13.2438 -2.1036 3.9360 +#> 8.0561 -6.5157 18.4071 -12.5351 6.8261 -7.4138 9.1498 13.4108 +#> -12.7752 10.7888 -21.2931 -4.3270 -9.9881 14.7584 12.8067 6.9470 +#> -6.1149 7.5197 1.2564 14.0175 1.7411 3.7951 5.6573 1.6170 +#> 10.2582 -2.0958 -2.3722 -8.6935 -3.2190 -4.3119 4.8846 -6.9076 +#> -15.0583 0.4817 -4.4535 4.1261 -9.5967 12.2711 -1.1233 -17.4156 +#> -2.5126 -6.0924 -11.5254 11.3586 6.2273 2.9722 -9.3137 1.5586 +#> -2.3737 11.0494 -4.2991 -7.8241 3.8737 -0.8572 -3.2632 -3.1751 +#> 19.0771 1.6400 2.4942 -4.3823 -1.9991 -1.5681 -1.9602 -1.9225 +#> -9.8879 12.0578 -4.9132 4.8810 -0.5849 8.4618 6.0484 -0.8199 +#> 2.6374 -3.6763 16.9745 1.4207 4.6623 -1.5556 8.6995 -6.1866 +#> -17.2081 3.8919 
-6.3286 12.8504 3.4418 0.1833 1.7541 1.8093 +#> +#> Columns 49 to 54 4.2277 15.2095 -5.3106 -3.5254 -2.0239 0.5410 +#> 1.0530 -1.5268 -8.3209 -7.5667 -4.3293 -0.3304 +#> 14.5093 15.5059 3.6394 -7.8419 -0.5077 -2.7458 +#> -4.2606 -1.3002 -6.4896 4.1244 -7.6892 4.5164 +#> -3.3746 -5.3938 2.3163 -0.2463 -1.7688 -0.0258 +#> 1.9585 -18.0378 7.8877 0.2530 4.8485 0.6589 +#> 3.0419 10.3141 12.3120 -6.6679 -10.8065 -2.5067 +#> -4.6297 6.5639 -12.5321 -0.1158 -5.2502 -2.2560 +#> 10.4186 -1.9900 9.9362 8.4835 5.7325 0.9782 +#> 1.6694 9.0592 -15.8919 2.0805 1.6827 -2.1085 +#> 5.7053 11.3829 -7.9181 5.3085 7.7903 -0.2385 +#> 22.8388 -0.1096 1.3239 -8.0848 -2.0209 0.9309 +#> -16.6073 2.2912 -6.8168 -4.4002 6.6341 -2.6954 +#> -8.1871 8.7314 3.0309 6.2616 -1.7341 -0.1670 +#> -7.3059 -1.1614 -12.2037 7.7507 -8.1071 1.1435 +#> 2.9345 -2.6649 2.3461 -2.0491 -1.6806 -2.3369 +#> 3.1239 4.0808 13.8076 -2.9526 -1.5092 -7.7307 +#> 3.0788 18.8719 5.5230 1.3509 3.6166 -5.6440 +#> 4.1107 -5.4855 5.2623 6.4164 1.0479 1.9502 +#> 1.7431 5.8753 -4.8010 -18.8203 -9.6599 -5.9157 +#> -0.0724 -10.8421 -0.9135 -2.0209 -0.0128 1.9955 +#> 11.8888 -1.6736 8.6527 -6.8223 -2.6404 -4.3233 +#> 7.3335 0.9019 -22.4590 1.2885 -7.3577 -2.0053 +#> -1.2451 2.6736 7.4570 -4.3535 -1.6255 1.7764 +#> 11.3746 -9.8116 5.5320 2.7358 -6.3190 4.3476 +#> 6.3236 -14.9279 2.5646 -0.5103 3.6606 0.7772 +#> 6.1821 16.4744 -10.2067 -5.7784 -0.8729 1.1509 +#> -0.3717 -5.3564 -3.1124 -13.4197 -5.1752 2.2853 +#> -3.0716 -4.4458 12.2943 -3.3567 1.3988 1.0051 +#> -22.5080 -8.9841 5.5813 7.1742 1.6875 -3.7041 +#> -9.3930 12.2926 -11.7914 -1.4915 -3.2901 -4.0539 +#> 0.4054 -1.7884 -2.7561 -2.5213 -9.4311 1.7463 +#> 2.3147 26.5946 3.4709 -9.9254 -3.4216 1.8680 +#> +#> (4,.,.) 
= +#> Columns 1 to 8 0.2219 0.5287 -7.2462 3.4678 4.6731 -0.1877 -16.9202 9.5219 +#> -1.9105 3.5858 5.7166 -1.6073 -1.7945 -8.8200 -5.3217 12.1512 +#> 4.7816 1.2403 -0.7889 -2.0563 6.4095 -0.4141 7.0459 -21.4778 +#> 1.6454 -5.1022 -1.6111 -1.3982 2.7441 -22.8674 8.2726 6.3412 +#> 7.9354 -11.1167 3.1809 -0.3604 3.7264 -11.9987 -1.9954 -0.3730 +#> -3.4294 -3.1767 -4.2183 -4.3765 -1.3808 4.5756 -0.6131 -1.5962 +#> 0.1901 2.4103 8.1420 -5.3695 -5.3668 -10.9919 6.8256 -16.6919 +#> -1.6335 0.6493 -0.5593 0.5749 -9.4058 25.1887 -13.2565 -1.3850 +#> 6.2220 2.0828 0.7704 24.1788 8.7479 -5.7883 -0.6454 9.4337 +#> 1.0294 -0.5649 -3.3157 -5.5922 8.3732 -3.2402 -8.2360 -0.5550 +#> -2.9694 -7.1426 3.2555 5.3949 2.0131 15.1476 3.9756 13.2564 +#> -1.8526 -10.5077 1.2471 -1.1577 -13.0580 -2.4768 -1.2320 1.6977 +#> -1.5976 2.1847 4.7230 -6.0342 -0.5115 0.7416 -6.4935 -0.2440 +#> -0.8131 -3.3292 -1.9453 -2.7024 -12.0865 0.0555 -1.3913 1.2639 +#> 0.9961 -5.2428 -11.8401 11.0223 -13.4765 -10.1120 12.0808 13.4634 +#> 2.8489 6.0960 -1.7192 5.1269 1.7591 -13.9983 -7.1799 11.6296 +#> 1.4397 -10.4571 -4.6499 5.7463 1.5327 -22.5954 12.3886 0.7576 +#> 5.2328 -12.1251 -12.2410 13.5180 -7.4857 -8.4028 -13.6096 -9.3512 +#> 5.8540 -5.6182 -1.6361 10.4952 11.7672 4.1303 0.0729 2.5591 +#> 1.0030 4.6119 1.1199 -16.2927 -7.2099 1.8887 -0.6048 -17.4918 +#> 1.1425 -2.4799 9.2321 -15.8380 2.6032 -11.9649 5.2088 -0.5586 +#> 2.0217 3.5377 -2.2017 10.9276 3.7871 10.5646 21.3250 -5.9608 +#> 4.1862 2.7262 -0.0812 6.6039 8.8524 -19.9248 -1.2731 14.4253 +#> 1.9577 6.5865 -5.6471 11.7944 17.1732 -0.8650 5.9225 0.6377 +#> 0.2850 2.7053 4.6109 9.1281 2.9065 3.1857 14.4020 16.7285 +#> 0.3713 8.4739 -6.7031 4.5455 0.3447 0.4787 7.7604 -12.5854 +#> -1.7503 7.2190 -8.6510 -3.4702 5.6751 13.9228 -5.3114 -15.2229 +#> 3.4015 -3.7903 -3.1152 14.0049 5.0853 7.4088 7.7160 -17.8688 +#> 0.8898 5.4846 -2.9675 -2.9970 0.9324 -12.9664 15.2617 1.2304 +#> -1.9995 -0.5858 -4.9704 12.2816 -4.0337 16.3481 5.3325 14.1150 +#> -3.1097 
3.8707 -0.5704 6.0035 2.6616 -4.3465 -4.7927 -6.2101 +#> 1.9746 -2.8249 1.1796 4.5028 -2.0795 10.9674 2.1140 -6.5755 +#> -3.4156 3.5885 1.1321 -6.4363 5.4823 14.2754 -5.5120 -9.4005 +#> +#> Columns 9 to 16 13.2124 2.8994 -5.3735 2.2360 25.7976 3.0137 -11.4937 1.8268 +#> -2.8935 4.7092 -6.5326 -8.3361 3.1229 -11.7171 2.4599 -8.0837 +#> 3.1654 -6.8736 -3.0040 17.3870 -6.3694 -0.7925 16.5050 -2.2160 +#> -9.2479 15.3839 -4.0923 1.8941 -11.9576 -3.3405 -3.0182 11.0526 +#> -8.7475 11.4002 -3.1246 9.1074 -8.3205 5.0534 -1.0137 23.3715 +#> 12.7605 -12.0524 4.9179 -14.8454 11.8215 -6.9595 -3.8740 -3.3183 +#> -22.4466 30.3703 16.0598 -25.0331 -1.7203 8.4954 18.3378 -19.4747 +#> -5.5124 -7.1481 0.1281 2.4150 11.9876 -12.5131 5.0769 -19.8594 +#> -9.5846 1.3561 -8.8571 -5.7119 7.8162 -3.4918 1.9622 20.0733 +#> 7.9827 -11.1137 3.3657 29.9334 -8.8698 3.8945 -0.4167 0.9169 +#> 15.6365 -15.8412 -11.0218 3.1657 -6.8065 -12.0324 -1.1061 5.1023 +#> 12.2327 13.6443 0.3516 12.6832 -1.4542 6.8167 -0.6641 -9.1212 +#> 10.2712 -2.0197 -4.6885 14.2755 -10.2557 -6.5737 3.8165 6.1315 +#> -4.2317 0.6908 12.1939 6.9378 -3.0615 -2.2660 12.1141 -0.1512 +#> 0.8666 10.2340 -6.8133 9.7195 8.6313 -6.9433 -9.8662 2.2463 +#> -3.7955 -2.5939 2.8869 -10.2202 -0.2583 -2.9208 -11.6945 -8.5631 +#> -14.7348 3.9317 20.9995 2.2973 2.0594 2.2833 -6.1081 1.9177 +#> 2.8468 8.1607 3.1404 11.0361 3.6421 12.4354 4.0767 -15.4768 +#> -14.6342 8.3660 5.1886 -8.6252 -4.2892 -5.4175 16.5140 15.3920 +#> 2.0241 12.6425 1.9053 15.0312 1.9797 3.7726 0.5551 20.0474 +#> -5.4475 -1.1319 5.2913 3.7935 -8.1448 7.8411 4.2528 -14.5803 +#> -1.9503 -1.1509 -2.4265 4.8543 2.2002 4.1706 -1.2490 8.4582 +#> -9.7253 15.4198 13.5754 -10.7198 -9.2916 12.8682 6.8419 0.6993 +#> 1.9818 19.2110 -14.5814 -20.0495 -1.6095 -3.7466 19.6918 9.6997 +#> 3.3216 13.9218 -4.2295 -6.2760 4.0693 -21.0303 3.8601 5.3268 +#> 8.4889 -5.6500 7.9181 3.5116 11.6170 14.6077 0.0909 12.0469 +#> 3.3008 9.4569 -8.8514 -13.8592 -17.5208 3.9864 1.5469 -23.5100 +#> 1.5347 
7.3003 -11.3367 -1.6975 8.1932 -9.5387 -1.2249 1.8121 +#> -8.1982 0.8516 -10.5033 -8.6144 16.8025 0.9546 -9.9261 5.5325 +#> 8.1604 14.4861 -6.1633 -7.9642 6.1587 2.8146 -9.4546 5.0518 +#> -2.6576 6.2556 -0.7938 -7.1543 -17.7354 18.7747 9.7348 -2.2436 +#> 4.5735 3.3980 0.8378 -2.6990 14.0849 14.1037 4.1190 2.0277 +#> -6.6686 3.3696 8.5207 8.9040 0.9597 -2.7365 3.4979 -2.5273 +#> +#> Columns 17 to 24 4.1596 -25.7686 -15.1404 -8.9288 -1.3671 5.0678 -1.9301 16.9985 +#> -12.4490 6.7502 -3.9174 -4.8482 19.0792 -15.0144 4.8911 -1.4765 +#> -8.8776 2.3609 -4.1307 -8.2305 5.7766 19.5054 6.7913 8.6220 +#> 3.4641 3.1098 -7.0934 0.3988 4.3806 -22.5267 0.0561 -17.0301 +#> -12.8308 2.0799 13.8156 1.8082 -13.9609 -2.5745 7.5044 -5.9666 +#> 0.2498 12.5673 7.5503 -16.1104 -3.4187 -7.2543 -4.3433 -7.1847 +#> -3.1907 2.8129 9.4351 -13.6088 4.9873 2.4302 5.7697 3.3274 +#> 0.9187 1.9439 -1.7335 -4.2249 -4.8246 -5.5245 -5.0744 -8.4851 +#> 4.3851 2.8204 -13.0176 11.6872 -17.4204 -15.5696 2.8239 5.9843 +#> 11.3976 8.9254 -14.8285 2.2775 -20.1134 -1.8695 16.1060 13.2642 +#> 0.3391 -2.1935 -8.9088 7.7289 -12.7010 -16.2638 -16.0576 0.9651 +#> -6.3201 18.0630 -12.1047 -4.1500 -3.2474 7.1294 -10.6871 -10.4556 +#> -7.5286 1.0544 6.0757 6.5414 -12.5292 13.6479 10.5169 12.7154 +#> -9.8171 6.7652 1.5683 -0.0256 13.5938 -4.6878 1.4223 -3.7993 +#> 12.3174 8.9059 4.7810 11.0903 -9.1580 3.7943 -17.4670 -18.2115 +#> 1.8995 18.5803 -6.1339 -18.1816 -7.4241 4.7667 20.6264 -8.6507 +#> -13.2026 4.1181 -2.7518 15.8183 4.5620 9.4490 16.4680 -21.5272 +#> 14.1773 11.3816 -31.4019 -0.7659 -6.8353 8.2688 10.8511 -1.9778 +#> -4.7224 15.2913 18.8708 -15.4398 -13.3489 -17.6808 1.6728 -0.1029 +#> -20.5853 -5.6671 22.0880 3.5830 2.7769 8.0856 14.4633 -13.8408 +#> 1.3969 -6.0445 1.2313 4.9702 18.8178 4.0873 4.1045 5.1362 +#> 10.2653 1.1919 -5.6667 18.5522 -7.2126 -5.5894 -17.3211 -6.0522 +#> -8.3299 8.8381 4.4417 -3.4878 8.0462 15.5097 18.6968 -12.0810 +#> 5.5836 -1.8382 -13.5796 -16.5298 -22.5308 -2.2137 -13.6045 
3.5380 +#> 0.4382 3.1033 -5.0076 -6.2842 -7.6827 -15.5193 -3.6441 3.2390 +#> 9.8403 -0.5997 2.9366 -9.6024 0.6152 1.8874 -15.6021 -0.0891 +#> -7.9489 -19.4334 -19.7915 12.7997 5.3265 13.1515 -16.4574 18.4732 +#> -7.7003 -0.6809 5.8432 7.4024 1.7996 -12.3416 -5.2899 -5.3632 +#> -8.5651 -3.3111 -9.8433 8.6207 22.2462 -7.7504 0.8410 2.2015 +#> 2.5760 2.7540 -8.8211 8.0310 -4.4091 -4.6158 -13.2911 -11.8524 +#> -4.8951 -2.8482 2.9793 0.8178 -14.6286 2.1986 5.1434 -7.7913 +#> 2.7216 4.1392 20.0401 -19.5082 15.5115 -0.1485 -4.7362 14.6629 +#> 16.3551 -8.6646 5.3361 7.1782 -4.6457 -13.8959 -5.2264 20.3985 +#> +#> Columns 25 to 32 19.2265 2.5813 4.2602 -8.6212 8.0270 14.7208 -2.4313 -6.6042 +#> -8.4114 20.3017 2.0620 -4.8383 3.4691 -1.5760 4.6170 2.7803 +#> 0.6465 -5.7817 -16.7433 -17.2697 -2.9205 9.8512 -2.9238 3.5182 +#> 8.6801 -3.3657 -11.3030 3.9110 -2.6238 -10.8564 -13.4369 11.0938 +#> 9.6046 1.2772 3.9488 -5.2463 12.9554 -0.5244 -21.8934 4.8245 +#> 6.2853 -1.1873 -4.1437 6.8021 -12.8606 -7.8836 7.3618 22.9472 +#> 8.4518 11.2520 -10.6945 10.1761 11.1940 -17.1313 13.7517 -15.8747 +#> -0.8503 0.0473 -15.0965 -7.3103 -9.4960 12.9389 -3.5424 4.7263 +#> 0.6302 5.3077 -1.5759 12.8717 -0.6424 3.9268 8.6920 10.4053 +#> 19.6230 -6.9193 -3.5887 1.7106 -1.2620 3.1657 -0.0184 8.3143 +#> 2.9247 -12.4803 -2.2822 4.9216 5.1835 -3.6888 10.2193 19.3383 +#> 0.8281 -12.9886 -14.3478 8.3200 3.7656 -11.4106 -8.8675 9.3077 +#> -14.8788 -0.5899 7.3588 4.3446 6.0987 -8.5687 2.9230 -4.8289 +#> 10.2902 10.2134 4.1634 1.2769 -5.2478 -5.8166 -0.0322 1.0880 +#> 18.3145 -10.1861 2.7633 6.5026 7.1369 9.8278 -10.0902 0.1051 +#> -20.0572 3.9915 -14.7786 -1.9932 2.8509 2.7118 -5.6367 -7.3608 +#> 0.6559 -3.5271 -4.6592 7.9450 1.1289 -7.8782 0.4414 -4.6317 +#> 9.6705 3.3123 -22.6803 17.0145 21.1617 -5.5084 -2.1212 -4.3812 +#> -13.4505 9.5107 -3.5649 -16.0908 2.5337 4.6970 -9.4074 0.9188 +#> -11.0729 -12.7696 13.6597 -2.0072 -5.0806 18.4795 -5.9133 -9.5014 +#> 4.7037 3.6743 5.2145 -4.1880 -2.6656 -5.2377 
6.0863 10.4891 +#> 11.7670 -10.4386 -0.2109 -0.2333 -12.2052 6.4594 5.4473 3.6028 +#> 0.8061 -4.8452 -6.2903 -3.1591 -0.0587 -14.1828 -18.9741 -0.2332 +#> -0.3267 25.3989 19.3063 2.1349 4.2234 -3.1067 2.6432 -11.7914 +#> -2.3602 2.0220 12.4222 2.7937 -4.9585 0.8077 -6.2972 6.2894 +#> 6.5054 10.4263 11.3934 6.9248 -3.4319 9.5436 9.2800 2.6421 +#> 11.7031 -9.1718 4.6275 7.3577 3.4125 18.0781 7.2450 -10.2295 +#> 3.3806 -6.7652 -12.1229 -5.9293 -3.1413 14.5744 2.6997 2.5371 +#> 3.0301 8.9022 -7.5954 -6.3892 -10.5618 2.1032 10.6880 5.5672 +#> 12.4864 13.4421 6.6533 3.7224 -13.4527 14.6014 -17.2829 22.0472 +#> -4.2936 -0.0431 9.9653 3.0915 14.5693 -10.5417 2.3160 -8.4353 +#> -3.0880 -6.1304 -2.0531 -7.1627 -12.8319 -2.8523 4.6373 10.1443 +#> -0.0269 -6.6783 -4.6217 -4.0809 12.2070 -0.7288 11.2078 -6.6478 +#> +#> Columns 33 to 40 6.4185 4.8604 3.7222 4.1845 -6.9717 6.2573 -8.9588 -2.0797 +#> -10.8805 10.3958 6.5959 0.2492 15.6693 -2.3024 -5.1993 -5.0015 +#> -11.0611 -18.4653 -17.0961 -2.1793 -15.8091 -4.8495 -4.0930 -7.7277 +#> -12.6086 -3.9492 8.4564 3.7170 4.2184 0.4347 1.7343 5.0374 +#> -3.5199 10.0360 18.4011 6.6154 7.6686 10.3938 8.6690 -3.2892 +#> 11.9896 14.9657 17.3100 13.6816 5.7155 -6.2593 2.5469 -7.6239 +#> -2.0192 -12.7663 -1.8799 -9.2099 24.5372 -6.6509 -7.4240 -7.4372 +#> 3.2858 -6.3573 0.9446 10.3753 2.7172 -6.0686 2.2428 -4.7485 +#> 2.8410 12.0668 -3.0514 1.8473 -11.1713 1.2479 2.0316 -0.6065 +#> 4.6437 15.0374 0.2794 18.3464 4.3572 0.1622 18.0440 10.0460 +#> 7.2547 14.2274 -9.1562 9.7975 -19.9968 -19.8045 -1.2774 7.8726 +#> -4.9674 16.0535 1.7580 11.0541 -4.9818 -0.5423 -10.2666 5.8336 +#> 5.3223 13.3867 -5.7377 -0.4243 -2.8942 0.4821 -2.8297 4.5070 +#> -32.6253 0.2349 4.3426 12.1137 1.6440 7.2439 5.9732 7.4394 +#> 11.8648 0.1440 -2.2802 -1.4521 4.3316 2.3332 12.4300 -0.3200 +#> -6.3452 9.5664 0.8558 -22.2897 -3.8459 7.0089 5.8969 -4.4485 +#> -8.6390 -1.5985 0.7078 -6.7466 3.5989 12.8649 -1.7226 -2.5022 +#> -12.6954 5.1820 -15.4668 -0.1136 9.0383 -1.8060 
-2.1011 7.6085 +#> 9.8640 12.0282 15.3925 1.3519 11.9642 12.9207 -1.3220 -7.5618 +#> -13.5007 1.6434 8.9013 9.6739 -0.7869 16.0522 6.1873 -7.6691 +#> -13.1869 -10.5223 0.5893 -10.4336 -11.4600 2.9302 -11.7560 -9.4367 +#> 1.0218 -12.6724 -3.8524 0.6912 -3.0898 -4.9917 -17.9133 2.3959 +#> -10.5114 -3.0623 -21.4408 -11.9590 -4.1554 7.4104 2.1276 -18.3623 +#> 6.8585 23.4705 12.6679 11.8100 -3.3717 -1.6442 -19.9979 11.4009 +#> 7.6246 2.4456 -1.0381 5.7176 5.9947 -3.6588 -2.6176 -2.4339 +#> -2.1380 11.9104 1.4937 -0.9187 11.4557 -1.7577 1.6467 7.3846 +#> 3.8289 8.6479 8.8680 8.2986 2.5710 9.3239 -2.7676 5.4409 +#> 2.0634 5.6217 8.0566 7.1804 -6.9430 -5.0080 -0.1164 -3.7963 +#> -22.5194 -0.9365 0.0082 -14.0502 2.1485 -1.5372 -13.6636 -8.0557 +#> 4.1657 6.2588 0.7398 27.1365 6.5433 14.9252 -9.3923 10.3682 +#> 4.6615 2.4031 -6.2987 -6.8812 4.4682 8.5512 -1.3710 -3.4787 +#> 3.2988 -2.1579 8.7593 -13.0047 -9.2239 -4.7590 8.9181 5.5893 +#> 3.6827 0.2190 2.4700 -10.8235 -2.6666 -6.8960 -14.1972 14.2358 +#> +#> Columns 41 to 48 6.4534 -9.1895 -8.6435 -14.7488 11.7945 7.9069 6.2200 2.6855 +#> 3.7099 6.7005 -6.4447 14.9308 13.9055 3.2064 3.0763 -3.9926 +#> 1.3041 -1.2253 -5.3743 -0.0238 -11.3995 13.4731 0.8305 -0.6777 +#> 2.3425 4.7941 5.5960 4.5050 9.3140 1.8727 0.0980 -16.1363 +#> 15.9776 6.7659 0.1963 -6.5183 9.9605 -3.8728 2.0315 -4.3087 +#> -0.4646 7.3267 9.4552 -7.7112 -11.6981 5.7818 2.1668 1.2031 +#> 4.4953 -2.8655 0.4263 -9.0706 6.0559 4.9343 3.4586 -1.6319 +#> 9.0060 6.4304 -1.7217 -12.2141 7.1489 3.2940 1.0976 20.3411 +#> -6.8470 11.7559 -8.2976 8.8400 -4.5367 -5.4740 -9.5925 10.1248 +#> 3.1792 11.0095 -5.4095 8.5358 -0.0891 7.4820 7.9858 -9.3940 +#> -4.8595 -5.1274 -6.9625 -6.2827 -11.5960 15.2161 9.9113 4.1383 +#> -2.7703 -4.5346 1.3240 -4.7023 2.2619 1.5294 -0.7932 5.1330 +#> -13.8488 3.7725 -7.4467 10.0005 2.3924 7.0655 -4.2412 -10.6243 +#> -0.5703 2.1365 6.5394 15.4758 6.1899 5.1621 -9.3508 -9.5881 +#> -1.6777 11.2626 2.7623 -8.8503 4.7320 -9.9948 -2.1906 -1.3836 
+#> 2.0351 4.1408 -7.0070 4.6746 15.7931 -0.7728 -12.3294 -0.6707 +#> 4.6545 8.8259 -4.0885 -11.8345 1.9865 11.8464 3.1237 7.3947 +#> 1.1175 -3.9012 -12.5408 13.5431 -3.7331 10.7896 -0.2898 4.3303 +#> 11.6799 16.1524 6.9541 -2.2194 0.3828 6.8236 9.9355 12.6040 +#> -1.0700 -0.2820 13.9910 11.0524 -3.5197 -5.9382 -7.1684 0.8504 +#> -6.0114 -3.4103 -2.3126 -3.8858 -10.1844 -6.1619 11.4913 -2.3328 +#> -6.6570 2.1689 0.3162 -20.9597 -7.9286 15.7913 4.5882 6.8197 +#> 17.7398 9.6953 4.2466 -5.7422 -8.0725 12.1194 6.9694 -12.5887 +#> 4.1084 9.9369 -2.8953 4.8401 22.3587 7.2082 -10.6898 -19.8510 +#> 0.9589 -12.2802 -4.2077 4.4325 0.7391 -14.0510 -5.7468 2.9880 +#> 5.0825 1.6433 2.1427 0.0308 -9.4286 -7.0943 -9.0435 8.6887 +#> -8.7570 -4.5735 -2.5489 15.2376 4.0394 0.9269 -10.9205 -5.9028 +#> 1.9380 -4.2203 -3.1124 4.4167 -10.2325 -5.5920 -8.7279 -4.0147 +#> -2.2084 2.4891 -9.7241 4.3068 0.6057 -15.8200 -11.3434 -10.7842 +#> 5.4077 1.6802 -15.9240 -6.0652 -2.5681 20.3799 -8.0014 9.7064 +#> -5.6949 8.5338 5.7999 -3.7980 5.1719 8.5332 2.3118 -9.0947 +#> 3.2450 2.4662 7.0022 -5.3419 5.7970 -2.7587 -6.6008 -4.9079 +#> -8.3299 -12.9051 4.1120 -0.4969 15.4392 4.8217 14.1742 -7.5018 +#> +#> Columns 49 to 54 -4.7991 0.0500 -8.7191 0.4213 3.1326 -0.7528 +#> -18.3523 3.6540 0.0161 -1.2858 2.6736 -0.7228 +#> -2.1636 -4.9768 4.3219 -4.3713 4.8987 -2.0903 +#> -2.0971 -9.9748 3.3338 7.4499 -2.8762 5.8465 +#> 6.8928 -4.1073 5.6118 12.3544 3.8265 7.2727 +#> 2.9705 1.9568 2.5834 0.8506 1.9960 -2.9966 +#> -2.9388 -2.1280 1.5803 12.5152 -2.4432 -0.8005 +#> -12.0056 14.8970 -7.3867 -2.2539 -4.7291 0.8013 +#> 2.5186 1.8716 2.1189 1.9451 -2.0923 -5.3365 +#> -10.2855 -1.1241 3.2962 4.4069 3.3473 2.6102 +#> -7.0721 -2.5420 -7.3420 -13.0171 -2.2379 -6.7108 +#> -3.4175 -3.6209 7.7254 -8.8157 -5.4033 0.7490 +#> -8.5337 9.6271 -0.0745 4.8559 10.0779 -5.1408 +#> 0.0217 1.3013 4.7639 -2.0329 -0.8737 -1.2641 +#> 5.1340 4.6470 -8.7506 -1.6405 -3.4937 10.2746 +#> 11.9159 22.8571 5.2302 -9.1702 -1.6322 
-2.3057 +#> 20.3475 1.4480 3.6028 -0.7666 -0.9609 -8.4161 +#> -10.5663 6.8000 1.0616 -5.9306 5.6407 -3.3033 +#> -2.6967 -0.7815 -8.6606 1.8991 2.3074 -1.4894 +#> 0.0664 -9.5043 -8.0940 1.5034 -11.0601 2.7597 +#> 5.8905 10.2688 2.1554 -3.0246 4.5676 1.4414 +#> 6.9913 -4.2888 -9.0173 -2.8255 -3.0490 -5.1996 +#> -4.3810 -6.9911 -6.0053 4.0219 3.4453 1.5885 +#> -15.1787 -9.3592 0.6733 5.3358 0.3754 -2.1221 +#> -3.7403 -7.2926 5.3745 1.8880 -3.5720 0.1754 +#> 5.1834 -0.0021 -0.0035 -0.1351 1.0208 4.0843 +#> -5.8821 0.7086 10.6882 4.8212 3.1791 6.6074 +#> -9.5721 -9.9010 6.5255 2.3268 -2.0515 2.9862 +#> 15.4546 -1.7387 9.6608 1.5877 6.8544 0.1361 +#> 1.1707 -2.3197 5.2251 4.3118 1.0230 0.6757 +#> 6.7378 -2.8661 -13.6288 2.6468 3.8596 -1.8382 +#> -3.3849 -7.5928 5.2608 6.9661 -7.8696 2.8806 +#> -1.0976 -8.2777 -1.6198 6.4236 0.9753 -4.2247 +#> +#> (5,.,.) = +#> Columns 1 to 8 0.1270 3.9969 -8.8403 -17.5017 18.6949 -11.8889 1.7688 23.0865 +#> -2.0743 -3.4313 -0.5353 -2.2056 -7.4029 -3.6609 -11.6210 7.4690 +#> -4.2211 3.8374 1.6643 8.0054 -14.7087 3.0723 3.5819 15.7280 +#> 0.3868 7.4607 2.6972 6.7585 1.5712 0.7332 9.5469 1.5579 +#> 1.3800 9.2863 -11.0584 -3.1627 4.3456 -5.3272 3.4245 1.0607 +#> 0.1366 0.8656 5.4564 4.3627 -13.1777 8.9084 -16.2988 -6.1773 +#> -5.2455 -6.2299 3.7452 2.0602 -12.6547 -3.8861 -0.2208 0.7433 +#> -0.9262 -0.8542 4.7811 0.1665 -13.8330 -4.0361 10.8870 10.7425 +#> 1.9948 6.3012 -24.9735 14.5074 13.6684 -16.8739 -3.7360 8.0338 +#> -2.1510 4.7077 2.5011 -3.1044 -5.4948 8.0186 17.5787 2.0567 +#> 0.3403 -5.0138 -2.3260 -5.7637 -0.8710 22.6176 -5.8266 18.9274 +#> 0.1255 -1.3090 5.7327 4.1940 -10.9863 10.3695 8.2213 -12.3300 +#> -3.2420 -5.2769 -12.7078 16.8987 8.1143 -4.5887 -6.8306 9.4650 +#> 2.1145 -4.0721 15.0845 -18.5331 -5.0826 12.1581 -11.6930 -4.2580 +#> 1.7775 6.6388 3.0794 2.0254 -6.6043 -8.7520 -7.1985 -0.0161 +#> -1.2429 -10.8730 0.3081 -3.5270 -1.2679 6.2354 -9.5832 -6.1032 +#> -3.7292 5.2118 -1.6073 -2.0752 -0.5546 -11.9674 -14.3058 -0.1385 
+#> 1.5663 -0.2134 1.1244 8.9575 -21.3111 9.7652 14.7048 -16.1164 +#> 5.0537 -5.4365 -4.2810 -3.5680 -0.4654 -0.6910 -1.4555 16.7210 +#> 2.3733 5.7109 -5.0402 14.3269 -3.7080 -0.5469 1.3461 0.0370 +#> -10.6768 1.6675 -2.6818 -6.6235 -23.3109 4.4723 -2.9306 -15.1552 +#> 1.3749 5.6761 6.1372 0.0540 -15.2654 -10.2012 9.1777 21.9168 +#> 0.0939 1.4988 -5.8306 2.3862 5.4255 -7.5310 8.3593 22.5439 +#> 0.6239 0.0600 -10.5110 -10.8921 20.7994 -3.7823 9.1831 12.2325 +#> 4.4807 1.2006 11.4099 4.1220 4.0172 12.5550 -5.6524 -5.8280 +#> 2.3734 5.6991 -7.4481 6.7318 2.9518 -3.5768 8.0330 -2.3949 +#> 1.4166 3.1967 4.9052 -8.5509 -0.2633 4.0145 -2.5000 -6.6647 +#> 8.5022 1.2671 4.9056 -3.2028 -5.1207 17.0980 12.4183 -21.3307 +#> 3.3560 0.0335 0.1486 10.0135 4.2337 -11.4092 -7.9652 -3.1330 +#> 3.8520 6.2577 5.6189 -4.1615 -5.3337 5.7469 0.9777 1.5486 +#> 4.1924 -3.3068 -6.0355 8.9054 1.5996 -12.0054 -4.6602 14.9232 +#> -4.0917 6.6781 3.7390 -4.2841 6.7789 11.4633 13.1591 -20.8491 +#> -1.2403 -5.8101 11.5358 -20.3429 -9.8043 10.3301 5.6508 -9.9822 +#> +#> Columns 9 to 16 -5.0519 -15.6536 -5.5278 10.4839 4.8238 -4.2297 -4.9211 10.4956 +#> -3.2430 -9.5777 -0.2008 11.8599 -3.4722 2.9282 -10.5189 -11.0866 +#> -8.1572 -0.2395 8.4587 -11.2127 -0.8930 0.3864 -2.3457 -6.3919 +#> -12.6023 -9.3652 -15.9834 9.0576 -6.4162 8.6291 -16.4968 -5.3234 +#> -6.2092 -16.6057 -8.0656 4.7572 0.6566 16.6773 -9.9437 12.1628 +#> 3.4973 -15.5520 8.4753 0.7503 13.3989 7.6743 1.1990 -8.3222 +#> 3.2941 0.5389 -4.4373 -8.0926 10.3525 8.2620 -0.3922 -10.5982 +#> 2.0992 2.8307 -11.3777 5.5632 -18.6747 -11.0641 12.3123 -5.2210 +#> 11.6860 -2.3646 6.6022 9.7806 6.7111 -13.4087 -1.0562 4.7586 +#> -16.2824 -14.1014 15.6424 -2.8804 -1.8653 16.2567 -1.7422 9.6859 +#> 7.2852 -5.1156 -4.8869 -1.8487 0.9758 -6.1300 -9.8608 4.3139 +#> 8.4879 -16.2979 -22.0021 20.8787 3.8001 -2.2933 2.3055 -6.7258 +#> -11.6384 -10.7877 23.9544 -0.3630 -4.4196 -5.4371 0.1727 0.2736 +#> 8.2331 -0.9939 -10.9051 6.3860 -10.9876 -2.9687 -0.3035 
-6.2201 +#> 1.6205 -7.3983 3.2433 9.5837 -15.1221 1.2189 -11.8776 -10.1880 +#> -8.5982 -2.8860 -1.5523 12.6660 -6.2845 -4.3429 8.6657 4.9786 +#> 10.7378 -2.4435 6.9686 -13.8020 0.4491 12.7697 -4.6896 -0.3179 +#> 12.3907 -9.3405 -8.8491 -3.5816 5.9158 -2.1643 14.6102 -11.7558 +#> 11.3377 7.6532 -9.1046 0.7160 -15.4803 7.0581 8.9869 -3.3178 +#> -1.8110 0.7145 0.5814 -6.3573 -9.7225 -9.5813 -7.1352 6.4063 +#> -4.0320 7.8878 5.9525 -5.4456 -2.6309 3.0996 5.5376 -7.9531 +#> 1.6578 5.8366 5.3673 -11.4888 -13.9038 -17.5789 -1.9719 -0.9318 +#> -7.2453 1.9663 4.3650 -0.3790 -4.2484 14.1084 -14.0829 -5.9840 +#> -3.2372 2.3716 7.5951 -1.4341 1.9291 -2.3323 -13.2522 0.9026 +#> 17.4940 -8.7963 1.2609 8.2443 -0.4105 2.7368 -0.9383 5.9460 +#> 0.5119 19.2834 -2.7669 -1.6191 -0.8475 -12.9259 8.1604 5.5678 +#> -15.2675 -2.3728 4.2026 7.0293 6.2566 -2.7296 -3.3734 -2.0884 +#> 10.2617 5.2372 -4.6707 6.7844 4.9344 5.8431 -10.6913 -6.4078 +#> -0.4118 -5.9068 8.8041 9.4516 3.0169 -2.9376 -12.5616 7.4301 +#> 5.8621 4.0873 -13.9898 4.6101 8.8701 -0.1944 -10.0589 -2.1204 +#> -8.7893 -4.7496 3.4217 -19.9301 1.4026 5.6201 -9.7724 3.1006 +#> 3.7177 25.8277 -7.9515 10.2657 1.2519 -10.6447 -4.5983 -19.6696 +#> -3.3835 -6.4632 15.6626 -6.6609 2.5483 8.2629 -11.0463 -8.7981 +#> +#> Columns 17 to 24 -15.1061 1.9276 -13.8666 -3.8967 -5.5002 -7.8204 -5.1597 -13.6479 +#> -1.0532 -4.3306 5.4546 -4.7892 15.1088 8.0288 12.6441 1.9789 +#> 8.0984 1.0747 -1.2822 -10.0374 10.3140 14.7900 -11.8292 -3.7487 +#> 3.2155 -0.4762 6.4662 4.6925 -4.8926 -7.3207 5.0228 13.0175 +#> 11.5186 -11.5963 9.4002 -5.7629 -9.2373 -13.5491 -1.5002 -1.1277 +#> 8.0051 -9.2534 -0.8459 -10.4178 -5.8362 4.5994 0.2911 5.1845 +#> 18.8482 -9.6356 0.5151 -1.5948 9.9338 15.7383 -7.3023 1.8789 +#> 13.5459 5.4399 -7.1362 2.1980 21.2082 1.5343 -2.2570 -28.6290 +#> 3.2007 -6.6374 17.2961 6.0226 -6.0808 1.0173 10.7136 -12.6136 +#> -13.5292 2.4464 -7.7585 -0.6386 5.8336 -11.2572 1.8914 10.2651 +#> 9.2183 -7.1085 -16.7239 -14.4238 -13.1062 
12.2904 -4.7747 -12.3438 +#> -5.3513 -1.4276 -10.7133 -1.0104 -14.7948 -11.3771 -10.5380 -14.1806 +#> 2.2582 2.5082 3.1395 -10.3802 -3.4846 9.7888 6.3816 8.8894 +#> -3.3255 8.7914 3.5600 -2.4824 6.4942 3.9103 0.1821 -1.1500 +#> 4.7220 1.1739 -5.6123 -1.4683 0.5952 -0.7315 -0.1751 -5.2895 +#> -10.7894 0.0108 3.8297 3.5149 5.1771 -7.0988 13.1429 9.9300 +#> 4.9622 -7.1838 -20.6221 -4.5445 -6.3929 14.4627 5.7682 -5.9828 +#> -1.6371 -2.6708 -7.7051 -12.2669 3.6014 1.2710 1.0021 -16.3431 +#> -10.6782 5.9850 32.8903 12.4231 12.3848 -13.1794 11.3659 16.1553 +#> -8.6677 -10.8305 2.6711 5.8227 -3.2384 12.1780 -8.5230 -3.7069 +#> 9.8047 1.7091 -6.1964 9.3361 2.7999 -3.9781 -13.4320 12.7182 +#> 5.2453 -3.3518 -10.8381 -3.0425 19.1940 6.6235 -9.0769 -0.6954 +#> -17.2991 -1.8443 13.4762 2.0591 -8.9836 2.1469 12.4268 3.2675 +#> 0.7351 22.4695 6.1724 5.6455 -0.3949 1.2288 10.5278 -0.1145 +#> -2.4389 -3.5736 0.5515 -5.5292 -1.3270 -3.0887 -18.4237 4.8149 +#> -4.3389 5.3588 7.6871 10.6304 4.6827 -5.4639 -4.1754 6.1267 +#> 11.6110 -3.9579 -5.9381 -4.1842 -5.2819 14.5803 3.7449 -9.4213 +#> -1.9401 6.1597 5.4595 6.9357 -1.2076 10.2617 7.7148 -11.8499 +#> 0.7595 -0.6014 15.9818 -10.5274 -5.3796 3.8521 20.9919 0.7645 +#> 1.8653 10.0281 6.2075 -15.7851 -14.1920 9.4755 13.5518 -15.5913 +#> -1.4591 5.5454 -5.1016 -0.7982 4.8222 -3.2032 7.6409 -8.4062 +#> 6.5436 3.7843 -8.6999 -4.8098 5.4861 4.2819 8.1165 7.3718 +#> 4.6000 -5.8893 -23.4787 -4.9795 3.0594 12.3016 3.2690 3.5217 +#> +#> Columns 25 to 32 1.7778 -0.1741 10.0144 0.1100 -14.6647 -5.5856 -0.2637 10.2384 +#> -1.3931 8.4956 1.8278 -9.6910 9.9097 2.5671 -10.6541 4.6897 +#> -13.2821 3.1024 -6.2520 -8.1819 -10.8312 15.8243 -6.2202 -5.2134 +#> -10.4369 10.2615 -5.4063 7.5572 11.9416 7.1635 -1.3847 -0.4782 +#> -3.4464 9.0079 6.4960 4.4430 5.3117 4.7168 5.1332 11.8318 +#> -1.6587 -3.9442 1.0908 10.2382 -5.6771 12.8907 9.4950 4.1755 +#> -5.3847 27.2970 -5.2261 -12.2230 22.8968 -14.2896 -15.2572 -4.1222 +#> 13.7583 -10.0067 0.2167 -21.0369 
+#> ... [ large torch_tensor output truncated: numeric values for slices (6,.,.) through (8,.,.), columns 1 to 54, omitted for brevity ] ...
-8.7248 2.0482 -11.4909 8.4767 -4.1454 -6.0090 +#> -2.6472 7.5251 8.0112 -1.4287 -3.5853 0.8019 6.6144 -6.8130 +#> -16.1993 -7.3859 14.0124 8.9923 5.5679 -0.9503 8.3413 3.4190 +#> -0.9727 -14.9218 -5.1464 0.0045 0.5696 -5.9744 2.7713 -0.7118 +#> 4.2479 6.4606 -0.9434 -4.8116 -8.2512 6.2986 3.6989 -2.9903 +#> 12.5828 15.4578 -2.0097 7.4678 18.1141 2.4997 -9.8066 7.8747 +#> +#> Columns 17 to 24 -9.6185 -5.2363 -6.4971 2.9259 5.3478 -1.3745 -11.1241 5.3987 +#> 5.0377 -10.5867 -14.1546 1.7548 -25.1766 -3.7241 14.0057 1.3400 +#> 7.3620 -7.2763 -4.2834 -3.2599 -12.6782 -1.8999 0.6878 0.6934 +#> 3.7993 -3.0477 -6.4918 -2.9106 12.9166 -10.5429 19.3072 -3.5060 +#> 1.1725 -6.3711 2.1429 5.8187 0.5333 1.2548 12.6958 -7.0119 +#> 7.1364 1.7381 -5.0821 2.3576 -1.2286 11.1923 -10.4377 -13.2910 +#> -12.4025 1.6296 7.5249 -17.2821 -11.1699 6.4293 -10.1163 -0.0761 +#> -1.7264 8.4885 4.0593 4.8025 11.4839 -7.3611 1.4249 -6.4636 +#> 3.5115 6.1677 -3.1524 3.9877 -9.3612 0.2320 -6.9287 0.9210 +#> 1.7669 -1.4957 -1.1088 -5.6035 12.9229 -4.6847 -8.2598 -2.7438 +#> 4.8460 -4.1944 5.4778 14.8436 -2.7482 1.0736 -4.6544 6.0900 +#> 16.3952 -1.7126 -2.9432 -0.8184 6.0616 2.6004 3.9056 -10.1400 +#> -4.8629 -2.0349 -4.2152 11.5647 -15.5042 -6.9312 5.8201 7.9488 +#> 2.3042 -10.7469 -1.8377 -7.0127 -10.5825 -5.2462 5.5303 1.1535 +#> -14.1828 -6.2652 -12.6412 7.8766 3.5697 -0.5576 -9.0542 6.4479 +#> -1.8594 -12.0191 1.1954 -3.4492 2.6362 -1.5502 1.0217 11.0366 +#> -2.3127 -12.8279 1.9349 5.3822 -0.3430 -8.6638 -9.1417 -1.8167 +#> 0.4679 -2.3985 -4.0016 7.9150 3.4129 -0.2558 -6.5079 -0.5395 +#> 13.7021 14.7185 -0.9215 -2.1865 -5.6755 3.5487 -1.3128 0.8928 +#> -5.6018 0.6378 5.0673 -22.4938 -12.6288 -2.9217 6.7909 -6.8253 +#> -5.8383 -3.5588 -1.0980 -5.1938 -6.0518 2.1598 3.9049 -2.2670 +#> 8.3973 -2.8062 3.6439 -7.4556 7.8676 0.9450 -9.9529 19.5965 +#> -6.7550 -6.3885 -4.9750 -1.8393 1.2241 6.1629 10.4999 -8.8812 +#> 1.9692 -6.6736 -4.2791 -6.9886 -0.1644 -3.6476 -1.0289 19.9012 +#> 14.5276 6.4475 
3.3525 12.8286 -7.6074 20.2843 -3.5258 6.6068 +#> 1.7833 13.3814 0.1390 -5.2190 8.0605 -5.4690 0.5839 -14.1667 +#> -10.1046 3.2568 4.2568 -0.8551 9.1603 -7.0637 8.3551 19.1710 +#> 5.4195 3.1397 0.0737 2.5032 8.6055 6.7515 -1.5177 -0.5559 +#> -7.0993 0.6425 1.4658 -0.1229 -0.3991 -2.2585 6.4073 8.8977 +#> 5.6671 3.0705 3.2044 3.6582 31.1265 -14.4046 2.2149 -6.1952 +#> -5.4701 -14.7604 8.3713 9.8865 -24.9880 11.3028 1.5293 4.6248 +#> 0.1451 5.1354 -3.6193 -6.1743 28.6728 -9.9950 10.5469 -1.0057 +#> -0.6977 -5.1401 1.7695 1.4553 -2.8253 -2.7118 2.4154 24.9040 +#> +#> Columns 25 to 32 -7.1700 11.6366 7.2286 8.8610 -3.0368 -3.5862 7.6909 15.8158 +#> 7.5102 10.8181 2.6158 19.5021 -4.5559 -1.4966 -5.9985 -7.0599 +#> 3.9965 -4.6997 -5.6602 -11.8554 18.1068 -2.0255 -15.3478 -3.8725 +#> 15.7267 -8.1247 1.8175 -6.2360 -1.4675 -4.8595 5.2477 -14.5281 +#> -8.9276 5.8084 10.6656 -21.9621 7.2696 -9.8578 10.2391 -8.6043 +#> -5.5958 4.3886 3.6833 -3.8290 1.8972 0.2591 -2.6312 9.3042 +#> 16.8863 6.3746 -14.4350 5.9461 -7.5033 5.9999 -13.7993 -2.1955 +#> 4.5430 -1.1801 -5.7656 3.3432 0.2830 31.4724 -10.8935 5.0580 +#> -2.5888 -4.2127 28.1604 6.3434 -21.9281 -1.4052 10.1537 5.8521 +#> 11.1758 -4.5972 -3.2592 -2.2157 11.2609 -4.3342 8.0417 4.7088 +#> -10.4686 -9.2667 -5.1960 7.8169 14.3069 -7.7104 19.9064 16.9445 +#> -1.9276 -11.7381 -1.7933 -17.9961 5.6561 -0.1927 21.0128 0.4839 +#> -0.4672 8.2286 -2.6962 -1.3400 -7.7344 -9.1882 10.3880 -0.9670 +#> 8.2809 9.7850 1.1968 8.3133 16.6780 -8.3523 -2.4638 0.1139 +#> 11.1578 3.0020 6.4177 -8.3652 -7.9631 17.1988 8.0300 8.5071 +#> -0.0551 5.9983 5.7762 2.2087 9.6836 -9.9556 6.5206 -2.1632 +#> -3.6828 6.6844 19.4924 15.1757 4.4342 -9.5140 17.7458 17.2699 +#> 0.4415 -11.3196 3.8629 2.5370 15.5115 -5.7184 6.3163 13.6840 +#> -1.1127 1.6238 -3.6081 -1.9011 3.7526 -3.0333 -2.1946 -10.1399 +#> 10.7830 -5.8261 -6.1259 -10.7859 -0.4588 11.1663 -6.9900 -4.2215 +#> -1.4840 4.4005 -0.6102 -6.3680 17.9664 8.3212 -18.3523 4.9031 +#> 3.2974 -10.7969 -0.3268 
-9.9017 7.4522 6.7771 -3.5755 3.8225 +#> 7.3126 -3.5130 -7.6412 -2.8489 -1.1771 -8.6129 22.5971 -16.1773 +#> -0.3023 -13.9086 -10.5268 5.9208 -12.7641 -0.8612 -13.7925 4.4451 +#> 13.9948 -7.1070 4.2837 -14.2782 -14.0873 22.4218 -6.5989 -5.5982 +#> -3.4005 -8.0452 -0.1647 3.6283 8.3792 -6.1744 3.5500 0.9821 +#> 5.4092 -11.6146 -20.8671 3.7039 5.2048 -0.4829 -6.9013 13.6288 +#> -6.7347 -13.6019 2.6732 -11.2069 -1.1424 5.3381 -22.5442 -1.3596 +#> 4.3021 -3.2848 6.6782 -12.2661 1.3848 -8.1107 -0.6805 -17.9858 +#> 17.8665 6.8305 15.7989 -0.8882 0.4430 14.3679 9.7999 -4.2770 +#> -13.5004 3.0550 9.7904 12.1239 -6.2687 0.7774 8.3725 3.0753 +#> 10.9506 -5.6546 -27.5954 5.8827 9.0555 -2.9388 -2.4872 10.3958 +#> -1.1340 -7.0988 -14.1755 1.7169 14.1141 0.9794 -3.7148 12.2845 +#> +#> Columns 33 to 40 1.4384 -4.6102 7.9683 2.7660 7.9938 -1.0803 -8.9664 -1.3981 +#> 19.5743 0.8177 3.9418 5.5161 9.0790 10.3763 1.0911 -3.6978 +#> -5.6234 7.9174 -1.2428 13.7931 -5.4066 -0.4459 15.9044 -4.9218 +#> 7.1198 -5.6939 -1.3782 -2.8288 -7.2263 -1.5326 -8.7972 -7.8809 +#> 2.0581 5.2294 10.4372 1.7771 17.5576 -2.2725 -11.5410 -10.8612 +#> -14.4360 16.1779 -8.2027 8.4059 -3.5090 -2.6810 -7.2530 5.8010 +#> 11.7753 1.6925 0.0918 15.1680 -12.1380 0.0158 -10.3996 -7.2048 +#> -7.0974 10.7667 -10.2725 7.3891 -17.3839 -2.9920 -2.4067 -1.1251 +#> -18.5467 4.7870 9.5273 13.4591 19.5929 1.9708 -6.4323 3.8563 +#> -13.7966 10.4357 2.8547 0.2278 -6.0680 -6.4969 7.0030 -6.4925 +#> -15.3292 -8.6361 -4.0396 5.9759 -8.9914 -28.1967 -2.3788 21.7079 +#> 4.3733 -4.7229 -4.6953 8.4878 0.0928 -9.9990 -1.3342 -1.0123 +#> 1.7047 -0.8910 16.3733 -11.1787 -1.2181 -17.9266 -9.3722 12.1319 +#> 3.6155 -6.4393 -10.4984 14.2492 0.6283 19.7343 17.0469 -1.6854 +#> -3.7397 -4.4730 10.8904 1.0943 -2.3979 -6.4401 -2.0084 0.4883 +#> 2.3665 9.4115 3.8417 16.7768 1.0740 -2.6458 1.6131 -7.3026 +#> -10.7354 -7.7811 5.5553 -1.1074 -8.8395 -2.0963 6.2890 -2.4415 +#> 3.8500 1.9622 -7.5663 12.1152 -9.1200 -2.7134 2.1331 -7.6049 +#> 6.3665 
7.5608 -5.2248 0.8428 15.7291 16.6072 10.2988 -1.4232 +#> -7.8167 19.1128 -2.7300 -3.5795 -5.0596 5.3191 7.9266 3.7644 +#> 10.8489 8.5284 -1.8161 10.5279 -3.5506 3.3824 3.4306 -8.5454 +#> -13.2690 -0.5293 11.1187 -4.5714 -14.9835 4.9575 21.8038 17.2836 +#> -4.4791 7.3667 -4.0882 9.4668 4.9189 -6.9242 -1.7868 -3.7555 +#> 12.5676 -4.2350 14.2625 -5.9949 -1.6661 15.3602 5.4294 10.8306 +#> 0.6979 -12.6459 2.3800 8.3869 0.8838 5.4597 12.7520 6.8345 +#> -4.8699 13.0221 -1.8689 -20.3537 -4.7671 1.1076 -5.6562 -1.5576 +#> -5.1534 -12.7627 7.0405 -8.1219 -15.5294 -5.6776 3.9444 0.8268 +#> -2.5128 1.5007 -11.5941 -0.8027 11.0840 2.9976 -9.7512 -9.2726 +#> 21.1139 -10.3481 10.7656 5.1777 -2.5764 2.7313 -1.9686 -6.8200 +#> -7.9482 -3.3359 -0.4480 9.9027 -4.0077 -1.9207 -1.9934 9.1539 +#> 0.7318 -13.0766 10.2638 0.4785 11.2678 8.6835 3.3261 4.6726 +#> -2.1868 6.1275 -9.7815 2.8962 -16.2180 -6.2915 -4.5034 3.0573 +#> 15.5121 -6.8184 10.7959 12.5375 -13.7759 4.4653 -0.9482 -0.9945 +#> +#> Columns 41 to 48 15.1234 -10.3035 18.9469 -9.3543 -7.7956 -18.9362 -0.6373 -16.9786 +#> -19.6301 -9.1963 0.4785 15.1186 -9.2118 6.7499 7.7051 -9.3235 +#> -11.8282 -12.6039 -13.7831 3.7439 -23.2749 4.1191 -1.8422 22.8505 +#> -6.4394 3.4854 -12.3427 10.9700 -5.3929 0.1003 -1.4660 15.0701 +#> -0.5888 -1.2878 12.4392 16.2797 4.4355 10.1608 -3.6143 9.7532 +#> -11.4246 18.0569 21.2602 4.3748 2.1658 -14.8113 -24.5796 2.4979 +#> -14.2909 4.8997 -17.6585 0.7266 16.5683 3.5429 1.4633 -10.7058 +#> -13.0987 -7.3477 -3.6767 -15.0189 -17.3225 -2.4886 -1.2141 6.8781 +#> -4.2019 -7.1041 6.8834 3.1884 -12.3768 -3.9347 -16.8851 -4.9041 +#> 1.1614 15.5545 6.9877 -7.2775 -7.2149 -2.0622 -5.2004 15.0375 +#> 17.3684 5.4120 -3.9649 -26.9709 -22.3064 -3.7620 -0.4840 -7.7502 +#> 8.2134 16.4462 9.2899 4.5362 -3.4135 -17.8783 -23.2531 -8.3197 +#> 4.2102 -8.7115 5.7801 -1.2040 -16.3119 16.9851 -4.2366 3.0893 +#> -7.9770 -1.9856 -1.2772 4.4906 -0.2718 3.9672 12.5172 -9.9648 +#> 0.6586 9.1921 12.7662 8.2750 -10.7659 -4.8818 
4.7952 3.0726 +#> -13.0078 -7.4784 -15.6910 2.9226 3.6097 -7.4876 23.6680 2.0568 +#> -7.8754 5.0710 4.8404 2.5667 3.2970 19.4876 2.7635 2.3988 +#> -7.3073 -5.4642 2.1707 0.9878 -13.8835 -4.3143 -0.7010 0.1638 +#> -6.1368 8.6022 -2.3794 -4.9842 -2.6207 -16.6344 -0.5803 -23.0859 +#> -1.4737 -6.8409 9.4013 3.6841 -2.5767 10.2542 -13.6277 14.4757 +#> -12.7047 -0.9505 -7.3066 -5.3821 -0.2884 -0.8365 15.3252 2.6350 +#> 8.7984 19.0534 3.6122 -15.4152 -12.9271 -0.1066 -6.9598 8.6080 +#> 2.0097 2.7893 -0.8227 4.1339 -0.6433 9.9833 1.5379 22.7720 +#> 22.9658 2.7287 0.0924 -7.3743 -8.6120 -10.5792 -8.3092 -34.4353 +#> 8.3216 0.8046 -2.7417 -2.8410 12.2403 -14.6068 2.8725 -18.7368 +#> 5.1215 -0.3085 6.0280 -10.1927 -2.0305 -1.7860 -14.4816 7.4972 +#> 3.4380 4.4268 -8.7352 4.2027 -5.2833 9.8693 -19.4086 -7.5258 +#> 2.5557 0.1169 -2.1515 7.2013 3.8568 -4.1783 -1.7116 2.7032 +#> -16.7468 -16.8727 -1.2258 18.1821 -4.3218 -5.5793 4.0499 -5.7701 +#> 15.1284 14.2322 -1.3899 17.1513 -5.2165 -4.9669 -25.0727 20.5660 +#> -0.5518 -23.9666 -10.6307 -12.3171 -6.0767 7.3313 4.0299 -18.5027 +#> 6.5236 7.7597 2.4812 -4.6051 0.3597 -4.3789 6.9711 9.4499 +#> 13.7426 8.1034 -3.4679 -12.3309 2.7366 -7.2719 14.5037 -21.5581 +#> +#> Columns 49 to 54 28.2266 -24.0744 -3.0613 -9.8396 -4.7171 8.8575 +#> 11.6383 2.1390 -7.0953 1.6003 -7.2498 0.8229 +#> 20.9584 2.1395 5.4265 12.1389 -4.7847 8.8648 +#> -17.1530 16.8176 -12.7773 3.5272 1.3383 -2.3195 +#> 15.4277 2.0832 5.0761 4.7348 -12.5190 2.1667 +#> 9.4932 3.9052 -16.7069 0.1884 -13.9748 6.2520 +#> -20.4911 21.7655 -0.5098 -0.9561 -2.9878 -0.4902 +#> 5.1038 6.4373 -13.7096 -10.1529 -1.5253 -2.9657 +#> -0.1040 -13.4354 5.6028 -10.4792 -0.9341 -0.8847 +#> 6.3219 -4.8316 -6.8600 -8.3210 3.8692 3.7136 +#> 13.8827 -26.3527 -8.5919 -7.6520 3.4166 -2.0100 +#> -6.1717 11.1735 -1.6515 -11.3219 -0.4028 -5.4764 +#> 4.0893 -19.1908 6.6321 6.2797 5.8809 -0.7183 +#> -4.3332 4.8074 8.4485 13.3534 1.9774 -8.3769 +#> 13.6668 7.4613 -10.1284 6.3912 -4.4749 -7.1188 +#> 
0.8212 4.6318 -10.7527 -10.8872 2.0984 -10.7213 +#> -4.6114 8.9248 7.3142 2.4720 3.0747 -8.1015 +#> -15.3217 19.8732 -13.4930 -7.6570 6.2652 -3.6485 +#> -0.9852 -9.7691 -1.8700 -3.5571 -12.5527 0.9310 +#> -12.0288 20.0358 15.8906 4.8544 -3.7699 -5.6404 +#> 1.3941 -0.1866 11.3678 -6.3311 10.9359 -7.0841 +#> 12.0489 3.4691 5.6454 -3.1639 3.1077 -4.2794 +#> 11.4797 13.1703 8.9269 9.0371 -7.2471 10.1417 +#> 6.3791 -32.7896 6.8775 -5.3002 2.7534 2.1391 +#> -7.3106 7.0380 -16.5145 8.6710 -12.6582 3.1825 +#> -17.3828 5.1435 -6.8716 3.5323 -5.5878 -1.0485 +#> -6.2563 -2.6207 10.9603 -5.9135 3.8629 1.1014 +#> 10.1823 7.6165 0.5025 -4.5793 -2.8579 0.7155 +#> 18.8798 10.5151 -3.2269 10.4497 -10.7137 3.3351 +#> -12.9150 14.6824 -9.3680 -12.7423 3.8019 -5.0127 +#> 18.3241 -10.5744 21.1866 7.0562 -11.9339 -3.2777 +#> -18.0871 16.1602 5.9427 1.3406 6.7964 3.2303 +#> 9.2389 4.0168 -4.3945 -2.0490 0.0984 1.5683 +#> +#> (9,.,.) = +#> Columns 1 to 8 4.0875 -13.1660 6.1413 1.6866 -7.9687 11.6420 0.5888 21.6606 +#> -2.9234 -7.8627 3.5386 6.7541 8.4469 -0.6668 10.9694 5.8437 +#> -2.8451 9.0504 -2.0656 -3.7975 15.4361 -10.9013 9.2741 13.7776 +#> -1.3023 -4.6213 1.9568 -8.2512 13.9255 -7.4534 11.3668 7.4748 +#> 1.5525 -2.3667 -5.8136 -7.4960 5.6232 -1.6982 0.8556 11.8351 +#> -5.5165 1.7084 -7.7292 5.4986 3.2420 7.6540 -0.2872 9.7683 +#> -2.9122 13.6950 -12.6256 7.8668 13.7894 1.3318 -6.7552 -0.6951 +#> 2.8501 -1.6932 3.2410 0.0887 -4.4175 3.7085 -3.0652 11.5549 +#> -1.7622 -6.1368 2.4826 7.4410 9.7392 3.6960 15.6597 6.1769 +#> -7.6209 1.6972 -3.0448 1.9954 -8.1176 2.7204 13.5791 -2.6416 +#> 5.6467 2.8106 4.7776 -9.6193 2.1398 6.4787 13.7958 19.0501 +#> -0.1869 -5.5733 4.4560 -7.0290 11.0339 -1.5122 17.4960 10.0469 +#> -0.5857 -10.2198 14.7713 16.3169 -1.5068 1.1987 4.6137 -15.1676 +#> -1.8894 -15.2043 7.0605 -11.0119 -11.8196 1.7829 -0.6664 -5.7275 +#> -7.3169 6.6425 3.8902 3.2093 6.3221 -1.0500 -2.6227 -9.2949 +#> -0.2854 -6.3065 10.3913 8.1978 -4.6598 -6.8949 7.0462 -0.3631 +#> -2.6381 
13.2912 -1.1430 -12.5023 8.5150 -10.3241 5.7787 -1.6207 +#> -3.0306 0.1186 -11.0713 11.8305 -11.7191 -8.0379 20.6490 -3.5747 +#> -3.7223 -4.9261 -7.8515 -3.3555 -0.5600 4.2985 -4.3782 6.1615 +#> -0.7275 -6.7427 6.0927 -0.9234 -3.6294 -4.5123 -4.4903 -12.0694 +#> 1.7222 6.4943 8.4215 -5.9419 12.6630 7.3062 5.2832 -1.2977 +#> 0.4108 15.7560 -8.1166 1.0658 1.7338 -23.5575 3.8808 -4.2467 +#> 2.3169 -0.3698 6.6242 -16.5452 5.7667 2.0703 -6.2268 8.1523 +#> -0.0676 -4.9680 -3.6224 0.9647 3.0948 -3.4271 1.3068 5.9702 +#> -3.0543 12.9846 3.5692 -9.7936 6.0392 1.1089 -6.0544 -0.5150 +#> 0.9283 4.9139 -8.5166 8.3903 -1.7509 -5.2080 7.8338 -6.4984 +#> -3.3726 5.7619 -6.9525 -0.1470 19.7022 -3.8281 0.6373 10.6404 +#> -0.5416 1.2048 -10.0297 -3.0471 4.5664 10.2175 0.1318 -5.1289 +#> -2.1869 0.0989 -1.0279 7.7973 11.0315 -9.5611 -5.9728 8.5446 +#> 3.6401 -5.9616 -2.8115 -12.7283 13.7908 -3.5134 -8.3821 -3.8347 +#> -1.2762 -0.9875 -2.2588 5.1875 -8.8331 6.0745 3.8728 -0.2223 +#> -1.7755 2.7929 -3.2278 3.5319 2.0656 9.9338 -0.1135 -2.6921 +#> -0.7828 2.1875 -2.6695 1.7830 -1.5953 19.4810 6.1483 15.0575 +#> +#> Columns 9 to 16 4.2088 -23.9762 -6.8043 8.8212 8.7422 -2.3903 3.2339 -5.2097 +#> -7.2594 4.7735 -12.3461 -4.8847 -10.4494 7.4665 8.4385 -12.5377 +#> -11.8828 4.3953 11.8320 21.5989 11.6513 6.9019 -6.6922 -2.8721 +#> 6.7395 10.5906 -5.7747 7.1411 -16.0808 -6.9254 -5.4521 -10.9185 +#> 8.2466 1.1895 -0.1548 -2.3253 -28.0649 1.2423 -9.5628 -0.1640 +#> 3.5897 -17.9191 -17.7579 -27.1033 14.4983 -2.0833 1.3654 15.6925 +#> 10.1009 8.5532 -0.2808 -0.2258 -16.1121 9.4849 3.8841 -3.2685 +#> 1.0302 2.9111 -1.6666 23.5359 -4.0120 0.3524 -16.0500 0.5214 +#> -1.5642 -4.4428 -2.7665 1.2616 7.2632 7.8903 -8.8398 -5.6303 +#> -6.0943 -19.7397 -0.9850 -0.2281 -13.3369 1.8731 2.2916 9.9167 +#> -6.0780 -1.6448 7.8172 2.0459 7.7387 2.5029 -23.2533 -12.3586 +#> 13.2757 11.1168 -9.2176 -5.5516 0.8257 -8.8642 -13.2948 -7.4019 +#> -9.3961 -10.9207 8.7869 1.3486 -11.6727 6.0785 -0.1468 1.0374 +#> 4.1470 
8.7967 9.1593 8.6602 15.7938 -2.6806 10.9040 -1.8152 +#> 6.2388 -2.0511 0.3299 -10.8950 -11.9330 2.8885 0.4726 9.5734 +#> -4.9775 19.8541 15.4489 -0.7692 -9.5588 -11.9333 12.7164 -12.2642 +#> -5.1962 9.7119 14.1155 -5.6186 -3.1858 15.5836 12.0592 2.3122 +#> 14.0698 19.7998 3.3043 3.4178 0.8533 -0.0373 5.3411 2.4784 +#> 5.4941 5.4324 4.7054 10.5663 -9.1261 -4.5124 7.2460 -6.0025 +#> 13.6831 8.8033 -6.7323 11.9500 -30.1266 4.7476 -18.0578 10.4861 +#> -6.9986 -4.8844 -8.5996 7.4400 -0.4392 3.8846 5.0058 3.8753 +#> -3.3200 -5.5672 1.5313 17.5748 3.4141 -7.4009 -13.9479 3.4515 +#> -10.8194 6.7758 -1.8326 -5.6283 1.0390 20.6476 1.7635 -15.6967 +#> 9.8613 -2.6945 4.2391 -7.6302 -0.9557 -6.1139 -7.9384 -6.0483 +#> 11.0115 4.5217 -7.0642 -9.2148 7.0387 -8.5065 0.6995 -6.8530 +#> -5.7435 4.7510 -8.7562 -3.7314 -0.9758 -3.0986 -6.8076 8.7105 +#> -8.1038 -2.4940 -4.7496 -17.7449 6.4212 -9.0613 -17.9790 7.3966 +#> -6.8502 -18.9183 -7.4141 -2.1069 7.5214 -5.9213 -2.8063 7.5368 +#> 1.5855 -9.5122 -0.6550 -0.7084 24.4028 1.6516 0.6475 -11.2705 +#> -9.6477 -0.4903 -17.6532 5.7543 18.2415 -4.9971 -3.8286 -3.2842 +#> 14.1147 9.0895 6.6478 -18.3653 -13.5341 10.5265 -6.7646 -13.0001 +#> -16.9131 -3.4685 -19.9777 -4.9386 -1.7271 4.9551 12.4286 7.8517 +#> 4.1537 -11.3263 3.8526 -6.3522 3.6299 -13.6234 12.4613 -4.8751 +#> +#> Columns 17 to 24 -4.2706 9.6318 -14.4805 6.3895 -3.7929 8.0710 2.0545 1.8292 +#> 7.8162 6.6203 -5.8975 -1.1224 1.9272 -0.8045 6.8998 -3.2840 +#> -6.8472 -17.6737 7.2496 -3.0438 8.6990 3.2873 -2.7906 2.0976 +#> -2.9556 -0.9176 17.6425 -7.5155 -2.5665 -10.1610 3.8583 -6.7622 +#> 1.1791 13.9903 6.0277 3.7011 9.8324 -2.8115 -1.3943 7.6187 +#> 0.9433 16.6836 17.6236 -18.7736 -6.3167 -4.2612 8.3085 -3.4196 +#> -3.4771 -1.7288 19.2656 10.8092 3.1097 2.7039 6.5544 13.4471 +#> -2.5811 -7.8217 -10.2289 7.0625 -8.1599 -0.8722 -11.0821 -5.3660 +#> -0.3557 0.1654 -27.0798 7.3281 9.1698 4.4319 7.1142 11.8328 +#> 2.6711 12.0621 26.9639 3.9798 -1.4104 7.9621 2.4684 -6.6909 +#> 
4.1163 -10.5928 -14.7403 -12.4781 -14.6933 10.3249 -18.3088 4.6522 +#> -0.0078 -4.2932 4.7690 0.9550 -3.7099 -4.5541 -7.9861 -6.3120 +#> 4.3695 13.4931 2.4103 11.5745 7.9793 8.5670 -1.6531 1.4744 +#> -5.4559 5.6987 -10.2102 -7.2425 0.9817 -2.2712 6.4125 3.5513 +#> -1.8864 2.3376 6.6529 -13.3615 -10.9942 -2.3597 0.7256 -9.1127 +#> 0.1496 1.5402 -2.3539 3.0123 9.9166 5.6713 -8.0756 3.9683 +#> -0.4101 9.1029 2.3711 -10.7000 -15.2477 -17.3745 -7.0343 -9.0581 +#> 1.8222 -8.6733 6.3450 13.1310 1.8973 -3.9703 9.2409 0.7033 +#> -6.0650 -3.3313 -14.9866 13.3368 21.1127 25.5643 12.0376 15.6202 +#> -6.1982 0.8577 7.0105 -3.4303 5.7572 -0.5078 -4.1691 -14.3487 +#> -4.9384 -2.2306 12.4091 9.5809 -7.9701 15.5971 -6.1461 3.4497 +#> -14.6010 -19.7534 -3.0943 -15.2183 -10.4810 3.3483 -6.5617 -10.0276 +#> -1.8306 1.1335 2.9998 -4.5367 0.8741 -3.0733 -2.7563 4.2682 +#> -5.0458 1.9771 -33.9085 -3.2428 1.2707 16.8337 14.7150 10.6823 +#> 2.4681 -7.3226 -22.9366 -3.9452 -18.8797 -18.5527 -2.8076 -8.6607 +#> 9.9208 3.2776 10.6686 -4.7430 14.8613 -3.8122 4.8039 -4.9235 +#> 9.9328 -2.9447 -0.5335 -0.8264 0.1861 9.5124 5.4681 -14.0606 +#> -3.1846 8.4049 -1.4289 -6.1339 -9.3869 -2.7514 0.4952 -11.9531 +#> 18.0310 2.2193 -14.0216 -13.0492 22.2719 -4.2757 4.5596 -10.6983 +#> 7.9559 1.0702 -5.5950 -17.2573 -13.7766 -13.7111 0.9190 -4.5991 +#> -0.3905 -3.3458 1.7576 6.3945 5.7497 -2.7466 2.4251 13.0997 +#> -4.5891 0.2517 7.7112 -8.0639 -1.7789 -8.1066 0.9122 -13.0374 +#> -5.3413 -4.9465 5.5711 6.2203 -0.3533 2.2089 -8.5250 1.5109 +#> +#> Columns 25 to 32 -2.5417 0.8453 4.7376 -2.9612 1.8473 1.4810 4.5415 7.2610 +#> -4.2711 2.9126 -7.8663 -0.3670 -9.6876 -15.3951 3.4643 -1.2463 +#> -15.1669 -2.1372 9.1019 -1.4552 4.6120 4.7544 -4.4678 2.5946 +#> -8.8096 -2.5299 0.3544 3.5773 1.0737 -6.6828 -2.0851 1.3127 +#> -7.1648 -0.3307 -9.3747 4.5719 3.2879 7.4797 11.3753 8.1376 +#> 4.2475 2.2863 4.1592 2.2073 18.8083 1.7558 5.6452 -11.2770 +#> 1.8582 -17.5141 -9.8492 6.6420 0.0380 -7.6875 -7.2550 6.8817 +#> 
-12.4302 -8.5575 13.1428 6.6443 5.4586 4.6629 -0.2422 5.3181 +#> 11.7403 1.0674 -9.5835 1.3690 -7.6413 -6.8457 7.1888 -11.0833 +#> -8.1675 -11.4043 -6.3836 -7.3862 -8.4281 -5.4854 7.3219 -0.0165 +#> -0.8766 21.4948 12.8014 14.8860 26.1611 7.7665 -12.3399 -11.0969 +#> -6.5066 -8.8206 3.9323 7.7160 7.8918 5.5574 10.4558 9.2808 +#> -6.7053 -1.6588 1.5652 -2.6664 -4.8871 1.6638 4.5137 -3.4380 +#> 0.3392 14.3931 -0.6118 0.2134 -6.2186 -5.7362 -4.2839 4.3567 +#> -13.5350 -6.2299 0.3904 -16.4546 -7.2557 -0.6942 9.1569 -9.9394 +#> -4.5590 -2.8492 0.1492 -0.4079 -4.7893 -0.5491 17.3962 3.4775 +#> 2.0336 5.1708 -12.4232 0.2248 8.3460 1.0982 -0.4847 -4.7426 +#> -5.0570 -7.8702 3.1474 -8.8002 -2.3591 1.0060 1.5078 -5.2314 +#> 16.9447 12.7382 -13.2697 -0.0683 5.2825 -3.0754 -7.3953 -5.5983 +#> -7.7573 -5.0490 -4.0720 1.8675 -6.4763 -2.1763 3.5788 14.7622 +#> -4.0019 6.1244 5.4763 0.6726 1.2970 4.2389 2.8034 4.1622 +#> -0.9995 6.5293 2.1186 10.6913 2.2119 4.5758 -7.5272 -8.9917 +#> -8.7199 -8.4839 -18.1779 -9.1905 -4.0585 1.9482 3.9999 -7.3518 +#> 12.7110 9.5307 15.4975 13.9518 -9.2565 2.2380 -5.5922 -5.2402 +#> 4.5704 -11.6175 -7.3932 6.6459 4.7025 -2.2518 8.3495 -2.0091 +#> 10.7717 2.8472 6.3731 -5.9258 0.8502 -2.0102 1.9226 -6.5894 +#> -14.1405 -2.1212 6.4797 -14.1395 -16.3247 7.2465 -6.4626 -4.0388 +#> 4.1977 -2.3055 -2.6956 0.9332 2.7282 2.3741 -8.7264 8.0648 +#> -12.1615 -6.7460 -9.2023 -3.4595 -13.0378 -0.4108 11.2732 -8.4836 +#> -9.0775 -24.4088 0.1750 5.2929 -2.1573 3.3092 -13.1942 -8.2572 +#> 3.6687 12.6269 0.1908 -6.8676 -1.9316 6.3263 -9.2124 -11.4998 +#> -17.3930 -2.7537 -1.4206 -6.4436 -3.9444 -11.5823 -4.6937 7.5235 +#> -11.4818 -5.7549 5.9569 -1.7157 -7.7764 6.1034 -10.4827 3.0300 +#> +#> Columns 33 to 40 -4.0546 5.7033 -1.9717 4.9453 -4.0232 21.8872 -3.3472 2.4886 +#> -10.6006 -6.1495 1.9815 4.9314 1.9719 6.6286 6.7427 2.7484 +#> 5.8264 -10.3392 18.2553 -9.3866 -14.3963 -13.8339 0.8560 -11.0095 +#> -7.8975 11.0141 -15.8547 3.1282 -5.9673 8.7539 -0.4567 -4.7988 
+#> 7.7529 -1.0745 -16.7453 -11.1365 -7.0175 0.1077 -11.2151 -0.8434 +#> -0.7133 -7.7374 -0.7937 -10.8897 16.1370 -13.8290 -0.1160 3.4583 +#> -1.5516 8.6448 -4.6002 1.4309 -3.2235 2.6007 12.0963 -13.2540 +#> -7.9362 -4.9958 26.8412 -13.2264 -6.6898 -11.1547 -2.7267 -0.6085 +#> 4.1175 -7.4804 -3.3964 -10.2496 22.5975 -22.6124 -4.8042 -10.3592 +#> -8.3552 -2.6763 -0.5044 -19.3676 10.0768 3.1593 15.9853 -3.9306 +#> -4.9406 -2.3733 22.2437 -21.2701 4.0356 -19.9860 -16.7914 8.4032 +#> 9.8218 -10.7498 10.6639 1.5719 0.2460 -3.3337 -1.8820 14.1830 +#> -11.1259 -1.1850 0.7274 -3.5176 4.5723 -1.3078 3.5844 4.4614 +#> -9.0342 1.3072 0.6087 5.9559 14.0140 2.3856 11.1323 -6.3673 +#> -0.9597 6.0675 3.4484 -16.5896 14.6504 2.3404 13.2214 3.1287 +#> -15.1154 -12.5263 6.2127 -7.7274 -0.1448 3.1026 -2.2499 -11.8948 +#> -10.4774 -1.3816 -4.5550 -23.3595 11.2419 -2.9943 2.6173 5.4653 +#> -1.0691 5.1118 5.7607 -16.5816 14.0878 -3.2260 17.9101 -9.2509 +#> 4.7743 -1.2142 -1.9710 12.9453 -5.1697 -6.0383 -12.7658 -3.3180 +#> -4.7639 -9.1992 5.1599 2.0450 11.3600 -2.9904 -4.9618 1.1367 +#> -0.2035 7.4995 6.7237 9.6282 -15.1450 0.0953 -1.5054 -3.1165 +#> 9.3922 6.3088 10.0187 -6.8202 0.7742 -5.7718 9.5027 -9.4457 +#> -7.3586 9.0690 -9.2550 -11.7036 6.6068 -4.6989 -0.0246 -0.4834 +#> 8.0758 11.4354 -0.4742 16.5706 -1.5196 -2.9838 -5.4655 -11.9395 +#> 16.8395 -8.0241 -7.3224 20.5876 14.2268 -10.0591 16.2582 -3.5242 +#> 6.7963 10.5324 -16.6560 5.6012 -9.8153 2.5755 -13.8786 3.6098 +#> -4.7407 3.8492 14.7185 -4.0867 17.5908 16.4011 11.1086 -21.1926 +#> 13.4565 -10.2485 -0.9491 5.4584 7.1630 -14.4297 -6.8817 1.7544 +#> 4.9086 3.5352 -14.2149 16.3630 4.2086 3.2492 -4.4649 -5.6819 +#> -2.4664 -7.4547 7.8175 -12.1694 21.7147 10.8425 13.5847 6.1566 +#> 14.5458 1.8772 -4.1889 -15.3383 -8.2980 6.9765 3.1014 -15.1226 +#> -1.3925 3.0163 -11.4281 19.1352 -12.2901 -1.0417 -13.7466 13.6779 +#> -0.7463 1.0728 -0.6218 9.9406 -9.6542 3.4510 20.0130 -7.8743 +#> +#> Columns 41 to 48 7.7196 0.3849 -8.3918 
-7.7087 9.4628 6.3112 -4.8614 0.7898 +#> -10.5400 6.7290 -3.3070 11.4925 17.5507 2.6617 10.3062 -3.3887 +#> 2.3790 1.5194 5.8885 -2.5441 5.1343 -5.0835 -19.2448 -6.2814 +#> -13.8772 -7.0727 5.0106 -2.6973 10.7691 4.7806 1.1937 1.7533 +#> -0.1879 11.4678 7.5956 16.2828 2.8012 5.0949 5.4573 7.6978 +#> 18.0733 16.1860 10.2843 -3.6556 7.2638 8.1729 11.3880 -15.7216 +#> -9.6233 -3.4684 -1.5559 2.1601 -13.1356 7.2361 16.7344 -10.8216 +#> -16.5723 -7.2646 -11.1144 -3.9776 -0.2065 13.8502 -8.2284 -9.6323 +#> 1.5238 12.3028 8.9714 17.9027 -4.7627 -8.3126 16.8880 7.9462 +#> 10.9103 -5.3810 -5.5318 5.7035 21.4334 2.3475 -2.6262 -12.9918 +#> 8.0442 8.9567 -11.8374 -5.4994 7.8793 -1.9821 3.3344 9.2903 +#> -12.1435 6.2812 10.7557 -8.9740 -0.2044 10.1391 0.1696 -13.2730 +#> 1.4029 9.6779 -5.9944 21.5273 12.4168 -2.9909 -7.2323 -0.2837 +#> -15.4937 -15.8609 -4.5449 -11.5226 10.0661 -19.1849 -13.9259 4.5161 +#> 11.5241 4.0590 -15.7215 -2.1419 -1.3034 5.4464 3.6719 15.3133 +#> -4.4282 -7.1290 -0.5408 10.7173 9.4064 -10.2900 -1.1956 -0.3817 +#> 13.4626 14.8415 -15.4962 -9.9346 -15.0116 -11.9595 17.3005 7.6668 +#> -0.6313 -3.6673 -7.9908 -18.4216 -0.2426 -4.6290 -3.6879 -7.4714 +#> -10.8215 -7.7815 6.3899 14.1215 -1.5517 -4.1354 17.5691 -0.2966 +#> 11.1281 -7.7009 6.5719 10.8587 -6.1595 -8.7532 -4.3086 15.5671 +#> -9.7915 8.7561 1.0023 -2.0281 -1.7689 -4.6823 -12.4498 -6.9113 +#> 7.7028 4.9294 -7.8488 -6.5750 -9.0153 11.6397 4.5690 -0.1016 +#> 7.1106 -1.6929 -1.8737 2.1624 -13.8151 -5.3098 15.0948 0.1795 +#> -1.3814 9.2974 5.5842 13.2400 12.6897 12.6315 3.4940 -2.6392 +#> -11.3209 -8.2444 16.8397 17.8638 5.5211 16.4997 15.2528 0.0082 +#> 3.0446 -4.4829 4.3880 -10.7411 -0.3064 -13.2129 2.5039 9.2481 +#> -6.7606 10.0870 5.7216 -5.6511 -6.1712 -2.0900 -17.3603 8.3145 +#> 6.8814 -0.1777 5.1503 0.9947 5.6574 2.5916 0.6318 -2.3832 +#> -0.4918 0.3861 4.7932 -4.0897 4.0422 -1.6061 2.8181 -8.4493 +#> 5.8768 -11.2762 5.9376 -21.8132 -10.8796 26.4854 3.2273 -6.3154 +#> 2.7235 -7.0789 -3.0443 
2.1171 -7.4423 2.0577 9.3470 26.1017 +#> 12.9806 -5.9052 5.8160 -3.9925 4.2319 -11.9046 -10.8885 -0.4372 +#> 7.3300 -9.5418 7.7864 2.6138 10.5347 17.3636 -21.5352 0.4739 +#> +#> Columns 49 to 54 -14.8691 -9.1324 10.1655 0.7449 2.1667 5.3363 +#> -4.4574 -0.1098 -3.1628 -9.3421 1.4818 0.8749 +#> 4.6459 9.2058 -0.2434 0.4508 -7.6855 3.1409 +#> 4.7861 28.0300 -0.7577 3.9693 -3.3897 -2.4329 +#> -21.8895 4.7810 -9.7748 -11.4574 1.2092 -4.2950 +#> -4.4833 5.0633 4.8649 8.0878 0.2127 2.9271 +#> -5.7122 0.5988 -9.7677 0.9075 10.6970 0.2351 +#> -1.1009 15.3324 9.8893 -11.4473 4.1047 0.4827 +#> -11.1070 -3.0292 2.0740 -3.7679 4.7414 -1.2884 +#> -8.6583 15.6834 -9.2855 8.0586 0.4443 -0.7787 +#> 0.2194 4.4984 11.6936 7.5706 -2.2380 -5.1833 +#> 4.4206 17.4562 11.1093 14.2875 3.0627 -1.2111 +#> 0.6527 2.9872 14.8202 -1.0611 -1.8122 1.5132 +#> 14.1045 -4.4124 -9.2904 -4.5478 -2.2182 -0.7207 +#> -1.4828 3.6784 3.2015 2.2996 -12.7189 -7.8033 +#> 5.9095 8.7084 -5.1051 0.3851 -1.0407 -5.2680 +#> -6.3020 -4.3540 -4.5427 -1.1850 -3.6061 2.5747 +#> -1.0570 4.7170 2.8429 7.9308 2.2215 -0.9558 +#> -12.0819 4.4918 7.6590 2.5101 9.8169 4.9874 +#> 0.1378 8.4419 -1.8811 0.1350 -1.8110 1.3977 +#> 7.6449 -6.2310 -13.1248 -9.3925 5.5972 0.2361 +#> 9.6474 5.3826 7.5009 -9.5686 -8.0677 1.1340 +#> -32.7488 -2.4500 -8.0628 12.2802 -0.2749 -3.8348 +#> 8.0790 -6.9282 3.2445 -1.6245 -2.1593 2.9972 +#> -0.7125 0.4441 -7.0608 -6.4584 -8.6841 0.2525 +#> 3.5664 -0.2363 -2.8050 7.3713 -2.8613 -0.4915 +#> 17.1820 -11.7188 -3.8804 -3.7392 -4.7100 -1.1502 +#> 8.2372 -7.1018 -3.2752 3.9507 -2.3532 1.3612 +#> 0.1603 4.1866 2.4981 3.8943 -6.4633 -2.6680 +#> 1.7882 -4.5337 6.1890 -0.3154 -10.4658 -5.8276 +#> -7.1726 -8.9011 0.1020 -3.4402 -8.8854 -3.4632 +#> -10.3952 -2.3793 -0.3542 3.8725 4.9056 2.3551 +#> 7.1893 7.1117 1.6026 -7.5178 -0.4649 2.7714 +#> +#> (10,.,.) 
= +#> Columns 1 to 8 2.2363 -2.1157 -3.9890 -1.4472 -12.1587 1.9112 -0.6772 2.6816 +#> -0.5097 0.8234 2.5852 -2.9076 -6.4124 -4.2266 -6.0710 1.3366 +#> -5.7555 2.2560 4.2838 -1.0270 8.4408 16.8556 -2.1342 -1.6465 +#> 0.5127 -2.0896 1.1144 -0.6333 -3.0143 -9.4323 -1.9678 1.8573 +#> 1.4213 -2.9222 6.7214 -9.3739 3.3599 -8.5956 -6.5851 -9.5308 +#> -4.4239 7.0383 1.5490 -1.8766 -19.2846 -8.9789 6.5690 -12.5547 +#> -4.9949 2.4635 -9.9984 3.4919 3.7359 -2.8006 -7.1007 19.4063 +#> 0.9228 1.9885 -0.7494 10.7508 -14.2172 14.6552 7.3359 -7.5264 +#> 8.4246 -6.7451 0.3721 4.4351 3.0836 -5.5432 9.8858 13.3133 +#> -4.4414 2.3311 8.9908 -10.3337 -15.8835 -5.9958 -0.2566 -16.4942 +#> -6.2991 -1.0376 8.1352 2.4895 7.1721 5.1369 5.8482 -9.7590 +#> -0.2901 -10.0629 -0.6162 4.4312 3.4176 -8.6422 5.5362 -2.1296 +#> -2.1814 -7.6086 5.7908 -6.4812 -8.6060 5.2281 4.8243 -3.4728 +#> 3.2813 -1.3609 -0.0201 -12.3297 -2.0040 -1.9826 -3.4319 15.8522 +#> 0.5173 -4.4348 2.7485 4.3449 -20.0676 -13.4316 6.2142 -15.6838 +#> 1.4934 1.6267 -5.8460 -4.2022 7.2496 -0.9272 0.6064 4.2118 +#> -7.5285 1.3303 3.2797 6.9902 -10.4241 -16.2242 9.1904 4.0080 +#> 5.1461 -5.2510 0.6457 1.7364 -2.2899 -17.4661 4.1976 10.2075 +#> 5.8689 -0.0052 -11.2868 -1.8224 19.4149 8.4131 -1.6432 4.1972 +#> -1.6723 5.1238 -1.1278 -8.6816 -5.3589 16.0121 -6.3449 -11.0641 +#> -3.6626 -7.9673 3.6362 -7.0441 6.2639 15.9587 -14.8696 14.3943 +#> -1.4947 -3.7085 3.2552 10.2461 2.8955 0.1107 13.2408 -2.0492 +#> -0.9754 5.7900 -2.2947 0.9517 18.2419 -2.0235 -7.2679 -3.8560 +#> 3.6322 -3.7929 -5.4700 2.1984 -3.2789 8.3245 5.7207 18.7523 +#> 5.2059 -0.2214 -2.8451 1.5933 -0.0468 -8.6477 0.1390 1.3842 +#> -1.2361 -0.2643 -4.9882 -0.0226 8.1592 -3.9915 -1.6402 -2.3398 +#> -0.0395 -5.7144 -8.8134 -10.7175 -6.4796 0.8209 4.4671 -2.2380 +#> 4.9342 4.5964 1.2061 0.4309 2.9231 0.1605 -1.3336 -8.6909 +#> 6.0918 -0.9102 -4.8624 -10.4539 10.7629 -5.7115 8.8977 1.2112 +#> 4.2715 -0.6758 -2.4629 7.5929 -9.4385 -23.0817 6.5401 0.1794 +#> 1.2896 1.9627 
#> [ ... output truncated: printed values of a large torch tensor (slices (11,.,.) and (12,.,.), columns 1 to 54 per slice) omitted for brevity ... ]
-6.9606e-01 -4.0281e+00 -1.1368e+00 -6.0511e+00 -1.3137e+01 +#> -9.2864e+00 -2.6392e+00 -1.1496e+00 -1.6844e-01 1.0979e+01 7.7849e+00 +#> -9.9460e-01 1.0206e+00 4.4142e-01 1.1463e+01 -7.1979e+00 -1.1835e+01 +#> -8.3785e+00 2.4509e-01 -4.0796e+00 3.0417e+00 -3.3664e+00 -1.6919e+01 +#> -2.7345e+00 6.1984e+00 3.6193e+00 1.6176e+00 1.8195e+01 -8.8061e+00 +#> -6.9595e-01 1.5538e+01 -5.7967e+00 4.6592e-01 -7.5308e+00 -2.6751e+00 +#> -8.8344e+00 1.1063e+00 1.5160e+01 -2.0454e+01 3.6625e+00 -7.1725e+00 +#> -5.0083e+00 1.1612e+01 -1.2413e+01 -1.8029e+00 -1.0512e+01 -4.8151e+00 +#> 7.4919e+00 7.1667e+00 1.0053e+01 -3.3491e+00 -1.1306e+01 6.5037e-01 +#> 2.3728e+01 -8.9091e+00 -1.0776e+01 -8.8852e+00 4.8242e+00 5.6458e-01 +#> 8.7586e+00 -9.7872e+00 1.0937e+01 2.2949e+01 -7.2551e+00 -6.7513e+00 +#> 9.4950e+00 1.3832e+00 -1.6443e+01 2.3965e+00 4.3156e+00 4.0602e+00 +#> -3.6678e+00 3.3257e+00 4.8965e+00 1.3294e+01 6.1973e+00 -1.0398e+01 +#> -2.4726e+00 2.3567e+01 -2.0121e+00 4.4666e+00 -2.9066e+00 -2.0574e+00 +#> 6.9333e+00 1.1008e+01 9.4940e+00 2.6360e+00 -2.3520e+00 -9.7747e+00 +#> -6.8040e+00 -6.3517e-01 -1.3560e+01 3.8996e+00 4.3230e+00 -1.5269e+01 +#> 5.5237e+00 -5.9878e+00 -4.8637e+00 -5.7554e+00 3.2201e+00 -9.9667e-01 +#> -1.5524e+01 4.4229e+00 1.0867e+01 -6.8404e+00 -8.2170e-03 -1.1930e+01 +#> -8.9235e+00 2.8391e+00 7.8580e+00 5.2690e+00 -1.1646e+01 -1.6284e+01 +#> 1.1135e+00 2.0755e-01 -5.3085e-01 4.4674e+00 -2.2772e+01 1.2490e+01 +#> 3.5108e+00 -1.7097e+01 -1.0687e+00 -6.3410e+00 -1.7460e+01 3.0247e+00 +#> -1.5075e+01 4.4179e+00 2.9446e+00 4.5553e+00 3.9893e+00 1.0847e+01 +#> -3.8475e-01 -3.0224e+00 1.3595e+01 -6.9657e-01 -1.1146e+00 7.6620e+00 +#> -2.6613e+00 -1.5902e+01 -9.7376e+00 5.7318e+00 5.6692e+00 1.2321e+01 +#> 5.4724e+00 -1.1878e+01 -4.7547e-01 9.2209e+00 -3.4451e+00 1.6410e+01 +#> -2.8532e+00 3.8302e+00 1.2172e-01 -6.0661e+00 -4.2698e+00 -1.0130e+01 +#> 1.6465e+00 -5.3229e+00 7.0761e+00 -1.5705e+00 -7.0562e+00 -2.7597e+00 +#> -6.1864e-01 -3.3426e+00 
-9.9110e+00 -9.6252e-01 7.0601e+00 6.6784e+00 +#> 1.9348e+01 3.2684e+00 -5.1465e-01 3.5049e-01 -1.0837e+01 9.5411e+00 +#> +#> Columns 19 to 24 -3.8948e+00 -2.6723e+00 -6.6318e+00 1.6838e+00 -3.7660e+00 7.2476e+00 +#> 1.1216e+01 8.1438e-01 -3.6275e+00 -2.4376e-01 -9.8312e+00 -7.3129e+00 +#> 1.5957e+00 -1.2974e+01 1.8953e+01 9.7969e+00 -1.1822e+00 4.6118e-01 +#> 1.1569e+01 2.7565e+01 6.6624e+00 -1.0608e+01 -4.3190e+00 -8.5527e+00 +#> 1.7083e+01 4.3328e+00 -4.9356e-01 3.5066e+00 -7.9629e+00 1.8458e+00 +#> 6.6433e+00 2.2971e+00 -1.3667e+01 -4.0295e+00 -1.0085e+01 -4.5433e+00 +#> 1.5841e+00 -2.4033e+00 5.9013e+00 -2.2349e+01 1.7666e+00 -1.9011e+01 +#> 8.8770e+00 1.1447e+01 4.4222e+00 2.0156e+00 3.1334e-01 6.5320e+00 +#> 4.4325e-01 -9.0440e+00 -5.9457e+00 3.4535e+00 -1.6740e+01 5.7113e+00 +#> 5.1010e+00 6.8260e+00 -8.1804e+00 -2.4639e+00 -1.3020e+01 4.3330e+00 +#> 1.1201e+01 2.3794e+00 2.9129e+00 1.5070e+01 -1.1008e+01 -5.5706e+00 +#> 4.8113e-01 8.7237e+00 1.0651e+01 -1.0053e+01 -1.4475e+01 9.6086e-01 +#> 4.8966e+00 -4.8608e+00 -4.7656e+00 4.5974e+00 -5.3435e+00 -3.1645e+00 +#> -2.8056e+00 6.0239e+00 4.9811e+00 9.3587e+00 -1.2518e-01 1.4811e+01 +#> 1.5243e+00 5.9463e+00 -1.9694e+01 2.4770e+00 -7.4116e+00 4.2682e+00 +#> 1.7486e+01 5.6142e+00 -2.6774e+00 -5.4607e+00 3.1753e+00 -7.7612e+00 +#> -4.7767e+00 -4.1876e+00 8.2306e+00 1.8335e+01 1.6702e+01 -6.1712e+00 +#> -1.0461e+01 7.7642e+00 9.8448e+00 -3.9824e+00 -1.4456e+01 1.0921e+01 +#> -3.0338e+00 -3.8039e+00 3.7088e+00 1.1555e+01 -6.6480e+00 -7.0763e+00 +#> 1.2589e+00 -9.1476e+00 -1.0267e+00 -9.8251e+00 1.3203e+01 -1.1121e+01 +#> -7.1104e+00 -2.1209e+01 -4.1617e-01 -1.6712e+01 7.2966e+00 8.6806e-01 +#> -1.3472e+00 2.4518e+00 2.5488e+00 5.1634e+00 8.3433e+00 -6.9640e-01 +#> 4.9083e+00 1.1894e+00 1.6084e+01 1.8053e+01 -8.3202e+00 5.9620e+00 +#> 1.4155e+00 -1.2155e+01 -8.0382e+00 1.2269e+00 -2.4019e+00 -4.7523e+00 +#> 5.7603e+00 2.8282e+00 -6.3438e+00 9.6194e-01 4.2417e+00 -8.6298e-01 +#> -6.2398e+00 -1.3997e+01 -1.0611e+01 
-1.2185e+01 5.8461e+00 -1.5036e+01 +#> -9.8415e+00 7.8770e-01 -7.4182e+00 -8.3712e+00 -5.5750e+00 5.2374e+00 +#> -6.6236e-01 8.8197e+00 2.5809e+00 9.2737e-01 -2.7482e-01 -3.0782e+00 +#> -4.8489e+00 1.2334e+00 -8.1420e+00 8.0714e+00 1.8021e+00 1.0030e+01 +#> 2.2816e+00 2.9167e+01 5.3001e+00 1.2647e+01 -1.0302e+01 1.7506e+01 +#> 4.0581e-01 4.3354e-01 4.6884e+00 2.4039e+00 -7.0114e+00 -2.3394e+00 +#> -1.7465e+01 -2.9286e+00 -3.3655e+00 -5.9884e+00 1.8096e+01 4.9700e+00 +#> -8.3931e+00 9.1925e+00 -6.2284e-01 -3.7626e+00 7.2228e-02 -1.4662e+00 +#> +#> Columns 25 to 30 2.0364e+00 3.4924e+00 8.8576e+00 1.9560e+01 -5.3159e+00 -6.7273e-02 +#> 8.5199e+00 1.2460e+01 1.0849e+01 -1.2488e+01 1.1534e+01 8.3744e+00 +#> -5.2998e+00 -1.3045e+01 7.0058e-02 -1.6556e+01 -5.1436e+00 1.2320e+00 +#> -1.9245e+00 2.6752e+00 2.5943e+00 3.8957e+00 1.9414e+00 -4.5554e+00 +#> -1.7785e+01 3.6812e+00 1.3851e+01 7.9270e+00 1.4260e+01 -5.3344e+00 +#> 1.7070e+01 7.5528e+00 2.5537e+01 2.0648e+00 9.1443e+00 4.2659e-01 +#> -5.8135e+00 -1.1264e+00 5.3336e+00 -7.6384e+00 7.7942e-01 -1.4103e+00 +#> -6.6164e-02 -1.1689e+01 -5.0414e+00 2.4737e+00 -8.3804e+00 2.4184e+01 +#> -1.2746e+01 -1.4828e+00 -5.7543e+00 -5.3110e+00 5.5415e+00 -1.6850e+01 +#> 3.1513e+00 1.5225e+01 6.6620e+00 1.6091e+01 1.0120e+00 7.5204e+00 +#> -1.2891e+01 7.4804e+00 8.0265e-01 -1.2574e+01 9.4596e+00 7.2928e+00 +#> -4.2098e+00 4.1588e+00 6.9672e+00 2.2453e+01 9.1519e+00 1.8179e+01 +#> -4.1001e+00 1.8540e+01 4.0887e+00 -1.0073e+01 1.8330e+00 -1.4200e+00 +#> 5.6480e+00 -1.3350e+00 -1.2036e+01 -9.5990e+00 7.4656e-01 3.2980e-01 +#> 3.4966e+00 -1.6570e-01 1.8722e+01 2.3947e+00 -2.9675e+00 2.2816e+01 +#> 2.8179e+00 -3.1789e+00 -1.1038e+01 5.5216e+00 5.2880e+00 4.8121e+00 +#> -1.4641e+00 7.2584e+00 7.5047e+00 -1.4729e+01 1.1529e+01 4.0212e+00 +#> -2.1840e+01 1.6311e+01 -2.2091e+00 8.8685e+00 -2.6726e-01 1.0857e+01 +#> -6.0878e+00 -1.9527e+01 -2.7790e+00 -2.3459e+00 9.2915e+00 3.6225e+00 +#> 3.0910e+00 -8.8166e+00 -1.2194e+00 2.1275e+00 
1.4776e+01 -6.3646e+00 +#> 3.4392e+00 1.2337e+00 -2.4477e+00 -4.2940e+00 1.2196e+01 -8.3100e+00 +#> 2.1095e+00 -1.6299e+01 -1.8262e+00 -8.5428e+00 5.4610e+00 -1.8177e-01 +#> -8.0396e+00 -3.5281e-01 -4.7305e+00 -2.0184e+00 -1.3290e+00 -1.2946e+01 +#> 6.7542e+00 2.7995e+00 -2.6588e+00 -5.7228e+00 -1.7821e+01 -1.2070e+01 +#> 8.5798e+00 1.3482e+01 5.4124e+00 4.5472e+00 4.2665e-01 9.1178e+00 +#> 7.9867e+00 -1.7613e+01 -3.2339e+00 4.7845e+00 -1.2815e+01 5.4119e-01 +#> 6.9358e+00 -7.6092e+00 8.9106e+00 1.4474e+01 -5.1735e+00 6.0925e-01 +#> -3.7007e+00 -1.1889e+01 1.2766e+01 -4.3393e+00 -8.2446e+00 -5.4101e+00 +#> 1.4022e+00 -2.8613e+00 -1.5519e+01 5.5836e-01 -5.3816e+00 -1.6839e+01 +#> 3.9563e+00 -1.7790e+00 1.2531e+01 2.5413e+01 -6.9938e+00 -5.0654e+00 +#> -1.3314e+01 1.3742e+01 -5.1880e+00 -2.6514e+00 7.9159e+00 -3.9425e-01 +#> 5.2115e+00 5.2123e+00 8.2073e+00 4.7590e-01 -1.2351e+01 6.1773e+00 +#> -1.2535e+01 1.6295e+01 5.8316e+00 1.8009e+01 7.4288e+00 4.5441e+00 +#> +#> Columns 31 to 36 1.2023e+01 -6.7178e+00 -5.8166e+00 1.1401e+01 1.5205e+01 1.9883e+01 +#> -1.2687e+01 -8.2098e+00 5.3369e+00 -8.2628e+00 1.0745e+00 -1.1268e+01 +#> 9.6230e-01 -1.3118e+01 -6.6789e+00 1.8343e+01 4.4205e+00 -7.8813e+00 +#> 4.5552e+00 -2.2099e+01 -1.1815e+01 -8.8782e+00 -3.4950e+00 4.7670e-01 +#> 1.8077e+01 -6.6238e+00 -1.3619e+01 -7.2434e+00 -8.6265e+00 -7.9528e-01 +#> -1.0967e+01 9.8506e+00 -2.7321e+01 -1.0223e+01 6.2490e-01 -5.6411e-01 +#> 1.4522e+01 8.1797e-01 -8.0892e+00 4.3848e+00 1.1477e+00 -3.4365e+00 +#> 4.7297e+00 6.4393e+00 -8.8589e+00 6.6986e+00 1.0247e+00 3.6755e+00 +#> -1.7666e+00 7.6427e+00 -2.5547e+00 -4.3410e+00 -1.5351e-01 -1.1309e+01 +#> -3.9239e+00 -2.5789e+01 -8.5965e+00 1.2510e+01 -7.7351e+00 -1.1317e+01 +#> -7.5335e+00 -1.1775e+01 -6.7104e+00 1.6478e+00 1.5399e+01 3.0482e+00 +#> -1.0282e+00 -3.0531e+00 -1.1335e+01 -1.5999e+01 -1.5555e+01 1.6338e+01 +#> 7.3122e+00 -8.6837e+00 6.5041e+00 -1.4024e+01 4.4772e+00 -5.9621e+00 +#> -2.8027e+00 -5.2603e+00 1.3327e+01 5.9999e+00 
-5.2299e+00 -3.3551e+00 +#> -2.3727e+00 -1.3826e+01 -1.2738e+01 -1.9202e+01 -7.7574e-01 -1.4906e+00 +#> -8.8552e+00 -2.1012e+00 1.3027e+01 8.1915e+00 -6.9022e+00 -4.1185e+00 +#> 1.3801e+01 1.0830e+01 -5.0325e+00 4.9789e-01 -2.2028e+00 -1.6712e+01 +#> 1.0335e-01 -3.7909e+00 -6.2674e+00 1.4032e+01 1.0125e-01 -4.1395e+00 +#> 9.5018e+00 1.9901e+00 1.2336e+01 -1.1491e+00 -1.4146e+01 -9.2964e+00 +#> 1.6288e+01 -1.8714e+00 -1.2278e+01 -6.2869e-01 -2.4788e+01 -1.2325e+01 +#> 1.3845e-01 -4.5797e-01 6.1840e+00 8.4224e+00 9.5258e+00 -1.0124e+00 +#> 1.2613e+01 -1.3998e+01 -1.5340e+01 -4.1053e+00 6.9608e+00 4.7141e+00 +#> -5.0590e+00 1.6985e+00 9.5863e+00 8.1763e-01 -6.1154e+00 5.2919e+00 +#> 4.0963e+00 -1.0605e+01 -5.2540e-02 6.2528e+00 1.6638e+01 1.9486e+01 +#> -1.9335e+01 -3.2307e+00 -2.5775e+01 2.1714e+00 9.3489e+00 5.1945e+00 +#> -1.1523e+00 6.8666e+00 6.6111e+00 1.3814e+00 -8.8345e+00 8.1187e+00 +#> 2.5454e+00 -1.1054e+01 -3.9274e+00 1.4365e+00 7.5614e+00 2.1234e+00 +#> -1.0611e+01 3.2993e+00 -4.9833e+00 5.5129e+00 -1.5963e+00 -3.0104e+00 +#> 3.0948e+00 7.7989e+00 2.8320e+00 -6.8328e+00 5.3871e+00 -7.3793e+00 +#> 1.0356e+01 1.0498e+01 -2.1031e+01 -2.4123e+01 2.2733e+00 1.2569e+01 +#> -5.8569e+00 -2.2212e+00 8.6687e+00 -1.8951e+01 1.2854e+01 9.3725e+00 +#> -1.1501e+00 -1.2831e+01 -6.0599e+00 6.2460e+00 -4.7948e+00 -3.0693e+00 +#> -2.7834e+00 -1.5981e+01 -2.3720e+01 -5.7574e+00 4.2840e+00 2.6247e+00 +#> +#> Columns 37 to 42 -9.1028e+00 1.2591e+01 1.0310e+00 -5.8421e+00 -1.1338e+01 4.1153e+00 +#> 3.1706e+00 2.9428e+00 1.4845e-01 8.9454e+00 1.3134e+01 6.5534e-01 +#> 4.5841e+00 -8.7716e+00 8.0809e-01 -1.6475e+01 4.4807e+00 -1.0819e+01 +#> 2.3524e+00 -5.6207e+00 -2.0095e+00 -4.4384e+00 1.3316e+01 -6.8045e+00 +#> -2.4933e+00 -3.9729e+00 -7.7103e+00 1.1894e+00 -1.1831e+01 -5.8958e+00 +#> 4.1788e+00 -4.9825e-01 1.6073e+01 -3.8185e+00 1.1832e+01 -8.7843e+00 +#> 8.6549e+00 -4.8510e+00 2.8659e+00 1.5565e+00 5.0214e+00 3.4118e+00 +#> -7.3109e-01 9.8369e+00 -4.7401e+00 9.5182e+00 
5.5939e-01 -4.8440e+00 +#> 5.2137e+00 1.8856e+01 -3.0772e+00 1.2977e+01 -1.7254e-01 3.3656e-01 +#> 1.0825e+01 -4.8495e+00 -1.2025e+01 1.1839e+00 -1.6483e+00 1.5632e-01 +#> 7.4011e+00 1.1780e+00 -8.3667e+00 3.9612e+00 -1.0989e+00 -1.1149e+00 +#> 7.2148e+00 9.1705e-01 1.3965e-01 1.2729e+01 -8.3906e+00 6.4803e+00 +#> -8.2265e+00 3.5863e+00 -6.5682e+00 -4.1911e+00 -4.0294e+00 1.6310e+01 +#> 6.1826e+00 -8.1974e+00 -1.2904e+00 1.0439e+01 -7.9703e+00 -7.8956e+00 +#> 5.9539e+00 1.3534e+01 -3.9589e+00 6.4846e+00 -3.7262e+00 -9.1065e+00 +#> 7.2161e+00 -2.5541e+00 -9.6471e+00 2.1303e+00 -5.7376e+00 3.6745e+00 +#> -5.3257e+00 4.5439e+00 -1.3490e+01 -5.4855e+00 9.1892e+00 -9.2638e+00 +#> 1.5159e+01 -8.0027e+00 -1.8171e+01 3.2348e+00 -1.1226e+01 -1.2237e+00 +#> 4.0426e+00 4.7529e+00 -3.9383e+00 4.2050e-01 7.3034e-01 1.5667e+01 +#> -1.0191e+00 1.3406e+01 8.5509e+00 -4.3237e+00 1.7792e+01 -1.0651e+01 +#> 9.1773e+00 -5.7015e+00 5.4302e+00 -3.8808e+00 -7.5434e-01 -8.2778e+00 +#> 1.1293e+00 1.3398e+01 -6.2309e-01 1.1985e+00 1.2024e+01 -1.1281e+01 +#> 4.1953e-01 -1.8520e+01 -1.6476e+01 -2.2505e+01 -8.8054e+00 -3.2068e+00 +#> 1.7972e+00 1.9958e+01 2.3866e+00 1.9869e+01 5.1840e+00 2.0026e+01 +#> 7.1083e+00 6.2996e+00 2.2023e+00 5.7702e+00 -3.2933e+00 1.0233e+01 +#> 3.9714e+00 2.4181e-02 1.1250e+01 -1.2165e+00 1.1519e+01 -5.1492e+00 +#> -4.7458e+00 2.2553e+00 9.3586e+00 7.0260e+00 -8.9473e-01 3.7660e+00 +#> -9.3217e-01 2.3749e+00 2.4645e+00 7.2412e+00 -1.8842e+00 -3.2332e+00 +#> -3.2802e+00 1.5417e+00 6.4068e+00 1.9257e+00 2.9942e+00 -1.0548e+01 +#> -8.2871e+00 -2.6355e+00 4.1110e+00 4.1654e+00 9.5214e+00 -1.0637e+01 +#> -8.8577e+00 -1.2026e+00 3.2324e+00 -1.0275e+01 8.1027e-01 5.3804e+00 +#> 1.2246e+00 -4.9161e+00 6.2280e+00 -1.3336e+01 3.3344e-01 -1.7871e+01 +#> -2.5499e+00 3.3478e+00 6.1585e+00 -9.2499e-01 -4.8937e+00 -7.7639e+00 +#> +#> Columns 43 to 48 -1.8949e+00 3.1792e+00 -2.1589e+00 6.3949e-01 3.2063e+00 -1.3578e+00 +#> -1.0469e+01 -5.6708e+00 7.5980e+00 9.6664e+00 6.7643e+00 
-6.5708e+00 +#> -6.1512e+00 -2.0264e+01 3.4358e+00 6.0738e+00 -1.3076e+01 2.5472e+00 +#> 7.7560e+00 1.4065e+00 1.2090e+00 -8.3934e+00 1.8032e+00 8.1045e+00 +#> -1.9726e+00 -3.6610e+00 -2.0917e+01 -5.1021e+00 -8.5341e+00 9.6339e+00 +#> 3.1744e+00 -1.5621e+01 3.4864e+00 -1.6545e+00 4.0293e+00 -5.3260e+00 +#> -8.8298e+00 6.7961e+00 1.1437e+01 -2.5266e+00 3.7172e-01 9.4313e+00 +#> 3.0545e-01 1.8151e+00 9.5635e+00 -1.7820e+01 -6.6283e+00 -8.8928e+00 +#> 1.2522e-01 4.0336e+00 -7.4177e+00 9.1586e-01 -4.6258e+00 3.6882e+00 +#> -1.2605e+00 -9.0548e-01 8.1084e+00 -6.0548e+00 -1.6634e+01 2.2741e+00 +#> 1.8113e+00 1.5702e+00 5.0019e-01 2.8852e+00 -4.3100e+00 2.9644e+00 +#> 7.3638e+00 3.9861e+00 -3.0309e+00 -2.8950e+00 -3.3484e+00 -6.0369e-01 +#> 7.5221e-03 3.2168e+00 -1.2645e+01 1.7435e+01 -3.2664e+00 3.6868e+00 +#> 3.1162e+00 -5.5322e+00 1.7958e+00 -8.8683e-01 2.2310e+00 -1.3305e+01 +#> -1.9391e+00 3.5524e+00 6.6669e+00 -3.1837e+00 -1.0453e+00 -9.0635e+00 +#> 2.0948e+00 1.3118e+00 -6.5282e+00 -1.7253e+01 2.3108e+00 8.9559e+00 +#> -1.7999e+01 -3.3407e+00 -5.3289e+00 4.9456e+00 -1.0145e+01 8.4892e+00 +#> 2.7427e-01 8.0205e+00 1.4450e+01 -1.3079e+01 -1.2436e+01 4.7463e+00 +#> 4.4269e+00 5.9370e+00 9.3929e-01 -1.3290e+01 -1.3577e+01 -9.1268e+00 +#> 1.6066e+01 -8.7729e+00 -2.0591e+00 4.5703e+00 4.9376e+00 -7.4957e+00 +#> -1.2894e+01 -1.7018e+00 -6.3262e+00 2.8053e+00 3.8833e-01 1.0766e+01 +#> 2.5407e+00 1.3492e+00 5.4623e+00 4.9909e-01 -1.1941e+01 -3.1936e+00 +#> -4.5454e+00 -4.9026e+00 7.8692e+00 1.5897e-01 -8.2075e+00 7.7568e+00 +#> 1.6057e+00 1.8280e+01 -6.0971e+00 1.3906e+01 -2.5730e+00 -3.1180e+00 +#> -2.2184e+00 2.2779e-02 6.5701e-01 2.4582e+00 9.5279e+00 -3.7017e+00 +#> 2.0165e+01 -1.5902e+01 9.6445e+00 -7.0694e+00 7.1113e+00 -1.9141e+01 +#> 6.5520e+00 9.2279e+00 8.5650e+00 1.9217e+01 4.2980e-01 -2.1048e+00 +#> -1.4847e+00 7.6447e+00 -1.0691e+00 6.1191e+00 4.5094e+00 -4.1965e+00 +#> 4.9171e+00 -3.5977e+00 -5.1303e+00 9.4302e+00 1.0969e+01 -2.8175e+00 +#> 1.7654e+01 
1.2303e+00 -1.8604e+00 -6.7042e+00 8.0041e+00 2.6355e+00 +#> 3.4652e+00 -8.8835e+00 -2.4696e+00 -6.1194e+00 -2.3170e+00 1.8372e+00 +#> -3.2396e+00 -1.2917e+01 1.1964e+01 -2.1103e-01 1.5640e+01 -9.4318e+00 +#> -7.8141e+00 4.2299e+00 2.8658e+00 7.7334e+00 -2.7448e+00 -8.3782e-01 +#> +#> Columns 49 to 54 1.2333e+01 -1.3767e-01 -3.5458e+00 -2.4091e+00 -4.3892e+00 -1.8596e+00 +#> 3.0590e+00 -8.6098e+00 1.2912e+01 1.1340e+01 -2.3048e+00 1.8209e-01 +#> -1.4247e+00 -1.1627e+01 2.7382e+00 2.6747e+00 1.1848e+01 8.3825e+00 +#> -1.0018e+01 6.1788e+00 -8.4593e+00 7.7510e+00 2.9908e-02 -1.5930e+00 +#> -6.7280e+00 1.1571e+01 9.2223e+00 4.1258e+00 -8.2129e+00 7.0230e-01 +#> 1.0891e+01 9.3680e-01 -7.9255e+00 4.5878e-01 -7.3924e+00 -4.2198e+00 +#> -9.4668e+00 -5.4795e+00 1.5067e+01 -3.1557e+00 -4.1782e+00 -1.7167e-01 +#> 3.4064e+00 -4.4980e-01 1.1592e+01 7.2165e+00 -2.5246e+00 -2.8478e+00 +#> 1.0886e+01 -1.5934e+01 1.2154e+00 -6.4077e+00 -4.7183e-01 -4.6793e-01 +#> 1.0714e+01 4.9262e+00 -1.0091e+01 -4.0105e+00 -2.2166e+00 -6.6209e+00 +#> 2.1358e+01 4.1649e+00 -2.9043e+00 -6.3058e+00 2.6100e+00 -5.6352e-01 +#> -4.0338e-01 1.6002e+00 -8.1261e+00 -5.7080e+00 -2.5969e+00 -1.8784e+00 +#> 7.2134e+00 -4.0635e+00 -1.1044e+01 -7.3213e+00 4.8594e+00 2.2916e+00 +#> -8.6441e+00 2.0574e+00 -2.0262e+00 8.5201e+00 9.9343e+00 -2.0685e+00 +#> 8.1453e-01 -3.3945e+00 1.0309e+01 6.3051e+00 -2.7420e+00 8.4013e-01 +#> 5.6428e+00 1.0729e+01 6.6803e+00 -2.7419e+00 1.7320e+00 -1.8016e+00 +#> 1.6561e+00 1.4647e+00 -7.4047e+00 9.6098e+00 7.3993e+00 -9.3957e-01 +#> 6.3324e+00 6.9388e+00 -5.6508e+00 -1.0262e+01 6.1348e+00 -3.7880e+00 +#> 2.7018e+00 -4.5579e+00 -5.6484e+00 -1.1570e+01 -1.1231e+01 -6.9494e+00 +#> -1.4790e+01 -5.7864e-01 5.0066e+00 8.4038e-01 9.4816e+00 -3.1154e+00 +#> -1.1103e+01 -2.8144e+00 8.4787e+00 7.7905e+00 6.3268e+00 8.5337e-01 +#> -5.4284e+00 -9.9734e+00 8.3666e+00 4.5437e+00 4.6585e+00 2.5157e+00 +#> 1.3540e+00 -1.0940e+01 -5.3762e+00 1.2806e+01 1.4043e+00 7.3552e-04 +#> 6.4825e+00 
-8.2156e+00 -4.1786e+00 -1.4547e+01 -8.1273e+00 6.3691e+00 +#> -1.3291e+00 -2.9907e+00 4.6872e+00 -4.9224e+00 -8.3817e+00 5.2724e+00 +#> -3.5581e+00 2.5445e+00 -7.8640e-01 -2.6496e+00 -6.5729e+00 -4.9490e+00 +#> 5.3952e+00 -1.0344e+01 1.2800e+00 -1.8468e+00 3.4351e+00 2.9873e+00 +#> -5.5920e+00 3.8213e-01 9.8507e+00 2.6040e+00 2.8464e+00 -1.8464e+00 +#> -1.4649e+01 -1.0690e+00 5.6015e+00 1.5896e+01 -5.8986e+00 8.7044e+00 +#> -1.1121e+01 -4.9316e+00 2.6215e+00 2.8442e+00 4.1814e+00 -4.4011e-01 +#> 3.7989e+00 -9.5559e+00 1.2084e+00 6.3876e+00 7.4033e+00 5.4886e+00 +#> -1.5470e+01 5.8021e-01 -1.7981e+01 -6.3462e+00 5.2296e+00 -3.3604e-01 +#> -2.3717e+00 8.1532e+00 -7.8498e+00 -2.2843e+00 -3.7729e-01 8.6050e+00 +#> +#> (13,.,.) = +#> Columns 1 to 8 0.6225 -1.5633 -1.0825 5.1088 5.7446 -3.3708 3.6607 11.8080 +#> -5.9414 4.0864 -1.2822 13.1553 -15.7463 6.3882 1.5634 -11.4881 +#> 3.8365 -8.4567 4.2103 13.8929 -3.8816 -6.1134 -0.6663 -8.0852 +#> -3.0222 6.8643 -2.3826 -0.7312 -5.0444 10.4230 3.6401 -5.1320 +#> 2.9885 0.6388 -1.7581 -0.5176 8.0534 9.4546 17.4677 2.8854 +#> -0.0614 2.0208 -14.5500 0.9292 16.2407 1.4846 5.7644 0.9823 +#> -5.0461 4.9234 2.7604 1.9248 -0.2402 -1.4744 6.1263 -18.5446 +#> 1.5126 -6.5236 7.4569 -6.4902 0.9539 -34.3821 4.7376 -6.4473 +#> -1.8858 -1.7096 -1.9603 0.5967 -2.9680 -0.6489 -4.0181 -0.8944 +#> -0.4119 -1.9274 1.7104 6.9449 2.2967 14.1728 4.3677 5.9133 +#> 3.1806 0.2433 -6.9227 -6.6097 -3.4068 -1.5102 1.1768 -7.3246 +#> 5.1976 3.0144 2.7979 -7.2828 7.7405 4.1381 7.9900 10.3155 +#> -1.6468 0.6064 -5.1478 9.8157 2.6589 6.8723 -2.0592 11.5436 +#> -0.2124 -2.1939 -5.1560 13.0934 -6.1557 5.6263 -11.1988 0.6105 +#> -0.7770 5.0216 -9.4610 3.9703 5.4409 11.2737 12.1204 -2.3762 +#> -4.3581 6.0825 2.3620 -0.5420 -0.5396 -2.1976 -2.9201 -17.2563 +#> 1.2618 5.1261 -7.6242 3.7173 9.8331 17.3887 2.6580 -6.2977 +#> -1.8163 -2.6399 9.1186 -5.1573 2.7076 10.3487 -5.5950 6.4227 +#> -2.2549 2.9415 -1.3807 -0.5117 -13.7537 -9.9439 -7.9778 -11.1709 +#> -0.0283 
-3.4669 0.2005 4.9303 1.9195 -14.3671 -4.5576 6.9302 +#> 1.2863 6.2312 5.7416 -1.8370 -6.7823 11.3672 5.8564 1.9807 +#> 1.8187 2.6548 4.8856 -0.5090 3.4795 -2.6129 -6.3327 3.1360 +#> 0.8995 7.3851 -0.9977 -1.0382 -0.0063 7.6577 -0.6447 -18.8161 +#> 0.1038 -3.5217 -9.7995 3.8576 -10.7382 -3.8209 -21.4523 -5.3858 +#> 0.1988 -0.2407 2.9182 9.4107 -7.8575 8.6792 11.2470 1.5597 +#> 1.0691 -7.2791 10.4935 -11.5572 2.8200 -11.1891 0.6554 4.3099 +#> 3.9875 -1.9605 1.6879 8.9195 6.2382 12.0618 -4.7136 -1.3624 +#> 1.0449 -2.2370 -5.4446 3.2154 -7.8951 -3.5772 -16.4724 -0.0223 +#> -4.2487 -0.0201 -5.0634 15.8200 3.4258 2.2750 -6.9639 -1.5070 +#> 7.8249 -2.3258 -11.5096 6.5237 25.4575 12.2537 -2.4187 9.4461 +#> -4.5060 3.7490 -4.8310 -6.0221 -7.5922 4.1831 -8.2478 1.6914 +#> 4.5197 -6.3969 6.6634 -7.9376 11.1415 -1.7244 3.6967 -5.8130 +#> 1.6796 10.2047 2.9963 0.4611 15.4367 8.6640 12.3652 9.3268 +#> +#> Columns 9 to 16 -1.2855 0.0185 -9.0798 23.0098 1.5778 7.3526 15.9490 -2.5839 +#> 12.9727 -1.6580 6.0191 -1.8907 3.2326 -4.4916 -12.2917 9.3610 +#> -9.4650 -3.3754 9.2407 -20.8298 -4.0272 -4.0452 -8.5894 -13.6104 +#> 4.6112 -15.6477 8.9300 6.9903 -3.0968 0.7586 -4.6015 -13.4000 +#> 8.4996 -4.4138 -5.4577 -7.4059 -19.6451 -5.3132 -17.6285 -8.1284 +#> 17.0900 -12.0884 7.6606 -5.2643 -3.9708 8.9101 -12.8181 1.2724 +#> 11.7912 3.3413 1.8484 -10.1811 -3.7404 5.9754 12.8199 3.5822 +#> 9.0085 0.0326 -8.7861 4.5516 -3.4405 -11.4276 -1.4082 -14.7301 +#> 14.3183 -13.1309 -5.4163 -2.6748 -15.5476 -1.9758 -9.0136 7.4097 +#> -2.9459 1.3235 14.2310 -4.8895 1.0256 18.3548 3.3083 8.2479 +#> -10.8172 -6.8999 -12.8204 -11.6486 -5.6255 2.9987 7.2506 -16.9556 +#> 8.6517 2.0668 -5.2789 2.6444 16.9798 5.7512 -6.2904 9.3246 +#> 8.5083 -4.5114 2.7383 -3.9136 2.9081 11.4299 -2.0102 4.7976 +#> -14.5124 2.3619 11.0900 13.8907 5.7813 -8.9663 6.9377 15.0322 +#> 9.4700 -5.7718 21.7840 1.7330 -13.4566 9.4243 11.2065 3.2765 +#> 7.1058 -3.1070 -12.9466 -0.9314 6.2793 -5.7981 -1.6923 4.2464 +#> 9.8869 2.1072 
0.9882 -8.5688 -6.7740 5.2935 -1.1322 -6.0721 +#> 13.6170 10.1805 -0.6006 5.0214 24.5619 17.5630 12.9840 8.9247 +#> 0.7140 1.3764 5.9720 0.2036 2.1168 20.9245 2.4649 -10.0140 +#> -5.4863 -4.5560 6.1671 2.2992 -9.4693 3.2962 8.1224 -7.1518 +#> 4.0313 -3.1694 -3.7682 8.7248 6.9283 -6.7974 1.0717 3.5391 +#> -12.6850 -11.1544 -3.5832 -17.0292 -10.2922 10.5102 -8.5805 -16.6701 +#> 1.6295 -9.6518 -9.2382 -21.4327 -4.1098 14.8623 6.2514 -18.4324 +#> -4.7173 -13.7172 -15.1778 -13.1720 -12.6725 4.9590 -3.7975 -8.5846 +#> 4.4336 -0.8218 7.3717 -5.4434 -4.8480 -20.0243 -2.8340 7.2669 +#> -3.9407 -5.8612 -9.2493 2.7658 -6.3236 7.3682 0.8614 -2.2647 +#> -5.1057 15.3606 17.8084 0.8658 -4.1440 11.4043 27.9573 -0.7385 +#> -14.7314 4.1165 11.2977 -13.5234 -15.3198 -13.3219 -12.4830 -9.5864 +#> 12.5962 -2.0848 20.9790 6.7316 -7.9422 -7.3325 -11.7881 2.1356 +#> -2.2800 5.7363 11.4628 7.3722 10.9769 -12.4952 -6.5846 4.3698 +#> -5.5631 6.5627 -2.8050 -8.0543 13.7225 5.1217 -13.3229 -10.1688 +#> 1.1927 8.0427 -9.9975 1.4130 -20.1412 10.6100 13.6041 -6.3356 +#> -3.2324 13.7741 -1.9013 -4.1233 -2.3054 2.5272 6.2873 4.8383 +#> +#> Columns 17 to 24 18.1636 2.6890 5.8660 -0.9663 -7.8354 13.3825 -5.7938 -3.0844 +#> 14.9101 -8.1264 9.3529 -5.6501 -13.8379 -5.1529 12.0903 6.6171 +#> -12.9787 -4.5172 -3.5492 -0.1676 -0.3024 -20.2456 -17.2140 -3.7271 +#> 15.0217 -17.9985 -2.7245 0.7354 -10.6781 -4.0954 16.4370 -3.9958 +#> 19.8714 -15.6207 3.7955 -11.6631 0.5926 3.2357 -10.5831 -6.8354 +#> 7.3107 10.1958 -12.5282 -15.6849 9.5947 -0.6776 -3.5694 -6.8568 +#> 4.1721 6.1826 -3.5171 -1.6282 -4.5980 -12.7351 2.4134 1.4011 +#> -3.8929 0.8567 -2.3181 10.7230 -0.4437 -1.1194 -2.6472 -1.0988 +#> 0.5700 -4.2833 0.8950 9.9760 0.1999 -7.7351 8.4031 -1.9698 +#> 8.6987 1.8773 5.9658 -6.3321 10.9693 8.2531 -7.0048 -16.8536 +#> -25.2377 -12.1537 14.2061 -8.3470 7.8966 -2.2503 -19.2228 6.2746 +#> -4.2712 -3.8228 -5.6966 -14.1824 -10.1671 -10.6292 -2.9392 -2.9507 +#> -1.0011 -7.9093 8.0571 -11.0527 12.1593 -3.7356 
-10.7960 -6.1334 +#> -0.5150 -10.1990 13.7810 -3.9238 -9.3145 12.6934 7.1857 6.6778 +#> 6.2490 -4.6729 -2.4496 -4.4163 1.3302 24.1066 -2.6552 -12.0586 +#> 3.7691 3.7795 -1.8822 9.3887 -7.6862 -0.9493 0.4402 -6.8497 +#> 7.8804 20.2961 4.4132 -0.6763 1.9556 -9.0676 -0.4367 -9.1325 +#> -8.1008 11.3290 19.9934 18.0099 -12.1368 -15.7701 15.0487 -3.7897 +#> 3.9090 -12.1362 11.0174 8.8886 1.6636 4.9241 -14.1295 5.4046 +#> 2.5688 -3.1138 -14.4614 0.4418 -14.4975 8.2753 -17.5903 -9.1665 +#> 9.6189 -2.4555 4.4813 -10.1254 -0.2373 -9.0948 8.7192 4.3740 +#> -16.1791 5.0023 -5.0693 12.8547 -3.6058 11.3588 -5.5056 -5.0991 +#> 5.2873 -8.9815 3.3710 -13.0669 14.1989 -8.1130 -9.1110 9.0331 +#> -23.5598 -30.5964 -14.4885 4.7473 2.7410 -11.7885 -4.4209 -2.4463 +#> -0.0130 -15.6218 -14.0751 1.8073 -3.6559 1.4849 7.6913 4.5255 +#> 4.9514 -0.2254 -3.8016 15.9771 1.8918 12.4013 -6.2868 -1.3038 +#> -19.2015 2.9579 -6.1137 19.9605 3.5282 -10.5849 6.1530 0.7995 +#> -0.4617 -3.9831 -4.4857 -0.1848 1.6324 9.0417 -0.4186 -10.2384 +#> -1.0973 -10.0601 -0.4049 9.9800 4.7209 -10.4516 18.6407 1.3664 +#> 2.4310 13.5651 -10.4308 18.9115 8.8863 17.2386 15.7466 5.9440 +#> -19.0119 17.7831 11.2511 -6.0253 -4.8593 -1.7501 -1.0230 5.3960 +#> 3.3324 -6.8259 -16.2877 11.6091 16.1198 6.0207 -9.4439 6.7558 +#> -8.2590 8.0797 -6.6256 10.1228 -0.1918 -10.1314 1.6924 -16.9398 +#> +#> Columns 25 to 32 9.7602 -7.7804 8.1099 -10.5406 -0.8016 4.2707 1.6632 6.3764 +#> -6.5471 -11.5912 -6.5146 0.6060 -2.6289 6.8809 -8.7830 14.0514 +#> -8.0664 -0.9994 24.0894 -6.0419 11.4410 7.0581 0.7835 15.4865 +#> -12.7290 8.7368 -21.3734 12.8849 -0.0553 3.9926 3.6383 1.3704 +#> -13.4071 3.8879 -24.2288 11.2635 1.5915 7.5209 6.7797 -0.2068 +#> -17.5532 -16.1468 -2.0649 10.6989 -0.0317 -5.5492 -0.6623 -1.1360 +#> -5.5422 15.5088 -10.3694 -7.2312 -0.0041 -6.3378 14.6600 -10.6514 +#> -7.5865 1.1804 -1.6392 -22.0979 -11.5349 -7.3807 9.2519 7.6128 +#> -10.6179 -12.5250 1.7685 8.5996 14.2387 -8.7196 -6.3942 6.2967 +#> -4.5517 3.1659 
1.0517 5.2230 1.6644 7.6189 7.8894 -9.2089 +#> -0.7227 -8.7862 21.3328 -13.5036 9.0519 -3.5375 -10.7146 7.8418 +#> -7.3776 -9.2339 -9.0334 4.7565 9.8160 3.7270 15.0688 7.1635 +#> 11.3902 -10.1863 1.0542 -6.8610 2.8108 3.0329 -15.4997 3.2724 +#> -1.3498 1.5421 4.7036 2.1384 1.6352 -6.2726 8.5540 15.2884 +#> 4.6787 -2.6138 12.6692 5.2700 -5.3515 0.3652 10.9815 -3.3373 +#> -2.6148 11.6006 1.1225 -5.9275 5.0149 -0.7353 2.5467 -10.4715 +#> 3.8180 12.0771 8.0940 0.3466 -15.9737 -17.4497 7.3648 3.8229 +#> -1.6993 19.6717 -6.1632 -9.6537 8.7153 -3.2533 13.1750 5.6769 +#> -8.5416 -11.7091 -9.0161 -15.5721 -10.8837 2.5474 8.6095 1.1659 +#> 8.6536 2.1465 4.0729 -7.8629 -11.2442 10.7150 0.3810 13.0305 +#> -23.0618 1.2090 1.6410 8.9639 2.0674 0.9350 -4.3420 6.6411 +#> -5.8685 3.1908 9.4136 -13.6394 -15.1120 -19.4785 5.9027 11.2434 +#> 9.5898 2.5199 9.4495 4.7774 8.1902 15.0195 -1.8043 7.9227 +#> 11.5086 -11.9990 0.6903 -15.1623 -4.1851 0.9766 6.7075 16.3212 +#> 21.3647 -18.4335 -2.4270 7.1990 4.3983 2.1114 0.8842 -0.2500 +#> 3.3664 -6.8320 -6.3752 8.8395 -9.9285 1.5100 -6.8595 -5.1740 +#> 9.0478 -0.2612 10.8768 -13.0471 -8.4355 -2.3569 -8.9594 -6.3812 +#> 6.6571 -4.9372 4.6812 -3.2220 -4.3514 8.6508 -2.3734 -10.8352 +#> -9.3337 13.9356 -11.9485 16.9562 -3.8194 1.4336 -10.1326 4.5468 +#> 11.5920 -1.3552 0.0446 -6.0608 -11.8932 -7.2355 -17.2701 6.8176 +#> 13.2213 -3.6443 -12.2204 -14.1070 7.7651 0.5954 2.4866 11.5992 +#> -23.3829 12.7079 -3.0813 7.9698 -1.7549 -0.0543 3.8429 -3.8380 +#> -11.4183 11.7327 -9.6460 -18.0117 17.6143 -1.6256 9.8787 -2.0024 +#> +#> Columns 33 to 40 -3.9341 -2.0305 0.6524 -5.1456 0.6766 5.5498 1.0476 -0.1857 +#> -6.8117 2.9403 7.8930 -7.9923 3.1811 -1.0809 1.7309 -3.3199 +#> 2.2975 -11.8925 3.8148 10.5640 -6.8398 25.4739 1.6435 14.8361 +#> -10.5531 -3.5991 -4.7588 -8.2663 7.6058 2.9165 0.2993 -17.7117 +#> -8.8943 -11.8832 10.5742 -11.1564 0.5177 -2.3430 -7.9058 -8.9088 +#> 4.5182 4.0905 -3.4065 -13.8010 -11.5456 -3.7047 -10.3269 11.0212 +#> -11.1254 
#> [ ... output truncated: printed tensor values for slices (14,.,.) through (16,.,.), columns 1 to 54 ... ]
4.1737 -6.5068 -0.9279 4.9228 5.0661 7.6071 -5.4581 6.6090 +#> -15.5276 15.1106 2.8998 -10.0532 15.5078 -12.6153 10.8250 -12.8329 +#> -14.1448 -0.5968 25.9646 -1.1691 6.4707 4.0210 -4.2684 -3.9871 +#> 4.7639 3.5075 -4.7696 -0.6260 -0.1757 -1.9947 14.4427 11.0419 +#> 2.0681 9.3647 10.1557 -3.0895 -13.3740 0.8708 2.6143 5.3274 +#> 8.0578 -6.6474 -0.7361 4.9535 -6.1931 2.7753 -4.5427 11.5385 +#> -11.6095 3.9669 5.9931 11.6033 -5.9513 -1.0784 -0.9202 2.6523 +#> -0.9995 9.6205 4.9987 -3.6116 6.1728 -1.6928 6.5078 6.2526 +#> -5.6990 10.3888 -4.3727 -2.2719 -4.1629 15.1052 5.0346 4.0389 +#> +#> Columns 25 to 32 -3.3179 8.9114 10.0572 -1.2587 2.0931 -2.0530 6.5594 10.9616 +#> -4.7203 -7.3808 -6.6912 -12.0078 -11.0555 5.6575 7.7934 18.6705 +#> -19.2785 -15.9866 1.3843 -0.8469 15.9078 -0.2410 7.6636 8.9358 +#> -11.1760 14.2224 -3.0896 16.3962 -5.9672 8.0302 -13.1924 -1.9202 +#> -11.5296 10.4649 -3.0178 2.9463 8.7748 -14.6963 4.4653 2.7594 +#> -7.3800 -5.3960 -17.0513 0.9882 0.4144 -4.9462 18.2287 -5.4529 +#> -4.0838 -15.5619 11.6511 8.8858 -12.9532 -13.0284 -0.6605 8.6867 +#> -9.4081 -3.5767 -5.3829 2.1063 6.0295 -9.8064 17.7653 -5.9768 +#> -12.7177 4.9938 -7.4952 -13.0969 8.9979 -0.0998 22.3145 1.7401 +#> -5.8592 4.4302 -6.8574 3.3468 -11.9368 -0.0754 11.5460 0.0574 +#> -4.4656 5.4609 3.4491 -10.8961 7.8371 3.5985 7.8025 2.8597 +#> -3.9979 -0.6659 -5.5074 15.7577 5.8280 6.6209 15.0434 -1.5492 +#> 10.2756 9.7925 1.1668 -2.0181 -2.6645 5.3276 -1.1349 -4.2664 +#> 1.6781 -0.7389 -0.0576 0.9104 0.7526 -4.0417 -13.5008 0.9477 +#> 9.2704 6.1267 2.2287 14.4398 -15.2803 8.1811 -15.1956 -4.7375 +#> -0.8798 2.0092 -5.3028 -3.4708 -1.6346 7.8313 4.4540 -8.5716 +#> 6.6299 13.0193 -4.3130 6.2526 1.2605 -7.0203 4.4134 -8.3241 +#> -4.6752 5.8241 -1.0570 6.9613 -6.6649 -9.4870 15.5721 -8.3716 +#> 2.1473 -12.8502 -18.3775 -20.7149 5.0487 -0.8780 -9.4605 4.9170 +#> 7.3915 0.3985 4.3123 -3.5218 -0.3456 -9.1704 -0.8021 1.1542 +#> 2.5691 -6.5108 3.8852 -2.2915 9.4971 -0.9335 8.3368 -17.9244 +#> 
8.7376 4.6082 6.3856 0.9325 0.8904 -2.0705 -6.0855 1.2072 +#> -4.9679 9.4473 -8.5908 14.6577 9.2367 2.4137 -5.8935 -3.3195 +#> -8.3799 -11.7817 8.2112 -1.3504 -3.6656 14.2825 -1.3165 6.0722 +#> 1.7320 -12.4078 11.2184 0.9507 -8.4417 12.5083 9.1634 12.5494 +#> 13.9281 3.4773 13.7172 13.0891 -9.7119 -8.2638 -8.7622 -8.8073 +#> 6.4454 5.8153 5.1188 1.5204 -21.2378 -1.3159 -18.2051 5.7055 +#> -4.9967 -9.7521 -2.4479 -0.7929 5.7172 -4.6572 -1.0606 10.8816 +#> -2.7617 4.1111 -7.0416 0.6337 0.8852 2.1034 -5.1787 8.5571 +#> 8.7010 12.4941 -5.7907 16.1933 0.8232 -8.7499 -21.4545 14.7660 +#> 3.5851 8.8012 -3.2526 0.5185 -1.6775 -6.9595 -13.1621 6.2687 +#> -7.7303 -11.4324 19.5842 10.0752 1.3555 1.9197 8.3356 4.5247 +#> -8.8562 -2.9946 6.5604 -1.4950 -7.3620 13.2178 17.4251 11.9080 +#> +#> Columns 33 to 40 11.3978 -11.5616 0.6751 5.4931 -21.9684 2.6750 8.7489 24.5304 +#> -6.9687 -10.7086 -13.1778 -13.8945 8.1555 6.2153 16.2339 -10.6723 +#> -8.1479 -2.2703 4.3881 -1.6490 -1.7727 1.6611 19.1424 -9.4395 +#> 1.6611 6.1769 -3.1931 -0.8321 6.1536 5.8515 -5.7873 4.9654 +#> 12.7029 -4.0359 3.3536 7.6909 -10.7435 21.4226 5.7366 34.2568 +#> 9.1381 -17.1925 -5.0425 14.0283 11.5891 -0.8054 6.9017 -3.6643 +#> 1.6606 3.6179 -16.8114 -10.9507 3.6880 -12.8685 -8.8030 -13.1678 +#> 0.9630 3.8960 1.9926 5.5745 6.0234 -7.7944 2.0300 -8.8346 +#> 11.5074 6.9423 6.5628 -7.9312 -0.3811 2.8890 3.3496 -13.8868 +#> -9.8659 -9.3607 -5.8141 -3.7807 -1.1790 7.4838 8.1174 15.1448 +#> -4.9651 9.1740 14.4638 3.6277 9.6858 8.8757 4.3408 -11.4244 +#> 8.6441 -1.5692 8.1847 0.1239 8.9716 -2.4765 -10.7623 7.3977 +#> -11.0943 -16.6178 2.2971 -21.6193 -2.5390 -10.3741 9.8438 -5.6642 +#> -9.2277 14.6166 -0.9204 14.9913 10.7844 13.5524 2.8325 -8.9513 +#> -2.6473 -5.8741 4.6406 -5.3104 4.9784 -7.5659 7.1071 -9.0259 +#> -1.1382 0.5198 -2.9474 5.7955 6.4414 -5.7807 3.7976 -12.1651 +#> 6.7978 -6.5910 1.2208 6.1146 7.8964 16.9533 8.0562 -5.8046 +#> 4.1896 14.8452 -4.5841 -7.3017 -0.0823 4.4017 4.8627 -7.9858 +#> 7.3247 
18.5870 7.4510 -3.6479 -25.6386 -0.6795 -4.1227 -13.5621 +#> -10.5895 -10.6946 8.3915 -3.0894 -2.1166 -8.1934 3.9229 11.6619 +#> 3.3745 3.2772 -12.0464 0.5004 -2.8965 -5.3103 -15.6017 10.8435 +#> 10.7255 3.9382 5.0526 6.5678 -8.1970 -12.1692 -4.3047 2.8854 +#> -5.4066 0.8166 -0.8165 -12.6045 -2.5846 7.5421 19.8140 -4.3701 +#> 7.6552 7.8124 14.6502 -2.3297 -11.4268 -10.0371 1.9409 12.4794 +#> 1.4078 9.8272 7.0404 -1.8239 10.8952 11.4932 1.9102 -10.4790 +#> 9.4703 -6.6227 12.2637 -6.4715 -15.9110 -14.3271 -7.6908 14.2522 +#> -0.8424 -4.3798 9.7508 -1.0919 -3.3694 -15.9257 12.9108 2.7339 +#> 1.0147 -4.9658 -6.1954 5.5696 -5.3329 -1.1068 4.5829 -5.8869 +#> 9.7037 -5.5950 -16.5860 7.1283 -6.5315 -2.6857 2.3856 8.0047 +#> 14.0389 2.8653 0.3209 4.5985 11.4294 -2.1647 10.7592 -13.9907 +#> 14.3837 19.9179 -1.5585 -13.3698 -8.6395 -1.7041 3.9278 2.5171 +#> -13.9244 -0.2977 -3.2740 4.1151 -4.9771 11.6901 -6.4768 6.6685 +#> 5.1153 -0.3700 -17.0048 8.4165 -6.7035 -6.2654 -1.5954 9.6956 +#> +#> Columns 41 to 48 -22.1942 -7.2350 -2.4917 -11.1559 -3.0166 -15.0017 -1.1812 -3.4713 +#> 0.2388 8.1962 -9.7739 9.0305 8.3463 2.1590 -5.6769 5.0629 +#> 16.7023 -3.2802 0.8704 2.1747 14.1589 -8.2684 -3.2724 7.1870 +#> 4.6840 -7.5805 9.2810 2.9401 -2.8086 6.2953 -9.3358 -1.4327 +#> -0.5781 10.5564 19.4567 -8.2992 1.6499 1.0847 2.9731 10.7564 +#> 9.4773 -0.5292 16.9066 -6.3917 -11.6539 15.5057 4.0474 7.7430 +#> 15.5874 1.3924 -16.0389 4.0388 5.7586 4.8510 -7.7798 -2.4974 +#> 21.3204 -14.2573 3.3693 5.9481 -10.7813 -0.9151 -5.0470 -10.1517 +#> 5.6014 -5.1494 19.4992 6.3122 -8.5826 -4.0738 2.6674 7.9877 +#> 0.5577 6.1221 16.2732 -1.8582 -3.3096 -1.8466 8.7034 20.5187 +#> 3.1797 10.5174 -15.2399 -2.6417 6.3028 1.2196 18.4807 -4.0511 +#> 3.0401 -8.5193 -1.1578 0.3379 -1.0156 -7.5664 -18.8893 -6.9766 +#> -8.8674 11.4856 -8.3224 -7.0834 7.2036 6.2827 18.5061 9.4553 +#> -0.7081 3.2447 -13.5620 -3.5952 5.2871 -7.9405 -12.0735 1.1528 +#> 15.1621 -0.6657 17.3880 -4.1401 -5.4776 1.6688 -5.8632 9.7832 +#> 
-4.7534 14.7511 -7.6977 -1.0605 6.7663 8.0379 -1.9791 2.8543 +#> 11.4958 0.2522 -19.5168 -14.8940 -3.2251 9.1275 9.5365 5.7808 +#> 16.8602 -8.5958 1.7279 0.3534 -15.7748 1.2065 -5.9608 -1.8861 +#> -11.9294 23.8705 10.0658 -9.9190 11.2422 13.1915 2.9278 -8.3528 +#> 20.0953 -0.7472 17.5934 -3.3260 18.8082 -8.5177 0.2490 -6.8630 +#> -12.7300 -2.4564 -1.6036 -1.8352 -7.7443 -7.1272 -13.3813 -3.6956 +#> 8.1264 -13.6377 10.9397 -3.0560 6.5067 2.8503 -4.2379 -10.5363 +#> 0.5117 11.3553 -9.0868 -2.7140 1.1550 5.6160 11.9133 14.7809 +#> -3.9240 5.4078 -13.7294 -4.3688 6.5080 -3.5070 -1.7219 -4.4902 +#> 12.5034 -12.8467 2.1763 5.1926 3.1938 -0.7747 -20.0967 -9.0441 +#> -4.2370 4.1992 7.6872 -5.2593 -1.7053 -11.8188 -0.1682 -4.0788 +#> -5.0200 -27.1574 12.7201 20.7038 0.9286 -12.9077 0.3077 10.2574 +#> -6.0969 0.4821 11.8856 5.4467 0.9379 3.4836 -5.4542 6.9657 +#> -18.5719 -0.9974 -5.4086 15.3446 -2.6354 -2.4166 -15.3805 5.1253 +#> 6.2883 -29.3754 -1.2569 -1.7976 -6.8801 -0.9421 3.6779 3.4648 +#> 14.4022 7.4876 -4.4341 -0.7004 5.4160 -1.7521 -5.0956 -13.7839 +#> 7.8083 2.2676 -7.3100 7.3198 -1.0796 -7.9814 2.6069 -0.7932 +#> -2.4366 -11.5112 3.4781 12.4499 -3.4112 -10.5143 -14.6968 5.0345 +#> +#> Columns 49 to 54 -8.9649 15.0480 -3.5127 -1.3708 -6.2487 0.3050 +#> 14.9347 5.8413 1.5031 -1.6481 -4.4734 -2.3108 +#> -6.8708 -4.6939 13.0038 -4.7202 2.7074 -7.4918 +#> -1.4283 11.7403 9.1098 -4.2097 5.4451 -4.1161 +#> 7.9117 21.6279 -2.7317 -0.8931 -9.2567 -3.0612 +#> 17.0366 -33.7473 4.8260 0.1720 -1.2427 -4.2173 +#> 9.5700 -15.3509 12.6820 -0.3305 0.5376 -6.5525 +#> -13.2608 -18.3225 18.4981 14.1568 -3.1979 -10.7866 +#> 8.7527 -11.6705 -10.1354 -10.2437 4.1236 5.4367 +#> 7.5527 -4.3760 9.8673 -0.5702 4.7787 -5.6126 +#> 1.2647 -21.6524 -21.0066 7.6477 2.8037 3.0143 +#> -8.7994 -14.5644 -19.7484 6.1574 -0.9049 -1.3891 +#> -4.8802 0.4612 -3.7767 2.4743 1.4301 0.2956 +#> 7.3854 10.7632 8.3053 1.8451 6.9022 -4.2529 +#> -3.2762 -6.7379 8.6647 5.6959 -1.1013 -0.0037 +#> 0.3988 -6.0855 
12.6957 1.2420 -4.7745 3.7277 +#> 12.0424 -22.9109 0.8896 5.6065 4.0492 -0.3889 +#> -12.7498 -0.4841 11.0614 -5.0944 14.2974 -8.2520 +#> 3.4900 2.0673 -2.8354 1.1162 -1.4959 -4.3088 +#> -7.9860 7.1940 -1.3905 5.9913 -1.7375 -5.5433 +#> 1.8201 5.1878 9.1053 -6.9162 -5.2749 -1.3472 +#> -13.8737 -16.1515 4.4713 11.9625 -2.8959 -2.3175 +#> 7.7018 -2.5956 0.8244 3.0606 5.3660 -4.8304 +#> -20.7461 -7.6867 12.4397 -6.9317 -7.0717 6.5692 +#> -3.1832 -9.8930 -9.0189 2.7933 -0.3160 -0.1822 +#> -2.9797 16.1053 -1.1040 4.7778 -8.6608 3.9642 +#> -7.8118 1.3081 0.3643 -13.3631 -4.6473 1.9124 +#> 16.6642 3.7141 -11.1340 -17.0547 4.8123 2.1086 +#> 1.4257 8.5336 7.0028 -12.1989 -3.7266 4.4054 +#> -13.9857 -10.3642 0.7977 5.1235 10.0793 -1.8005 +#> -11.1381 18.8368 9.4772 8.2808 1.2748 -3.9059 +#> 5.8320 -3.8253 -8.4454 -7.3230 2.7275 3.6790 +#> -0.2858 -1.3838 -1.1742 -20.6285 5.9709 7.5458 +#> +#> (17,.,.) = +#> Columns 1 to 8 -0.0036 -3.3190 -7.9587 3.2906 -13.0091 4.0434 11.5652 -12.1427 +#> 0.1164 0.8379 4.9234 -1.3137 0.0675 -5.0679 -4.4452 -4.7952 +#> 2.5500 -3.7011 0.9437 1.4541 1.8920 -14.1025 0.3019 14.6240 +#> -1.2880 4.0459 1.9433 3.9558 2.5960 2.6825 -9.3180 14.3707 +#> 1.4861 -3.1195 -1.5166 -4.0141 -7.8198 -5.3168 16.8417 13.6179 +#> -2.4961 -1.8708 -3.0573 -7.6208 14.6806 10.5994 3.2349 -19.1036 +#> 1.1798 0.3461 9.7883 -3.9783 3.7118 -5.1532 -10.9065 17.8243 +#> 3.5643 -8.1262 0.6550 16.2425 7.3746 0.6471 -4.3637 16.9330 +#> 8.2370 0.4671 8.6061 -3.5247 -2.3181 0.9235 -2.5817 -4.7410 +#> 1.2592 0.1341 -8.5860 -10.3599 1.3681 3.0343 -3.2627 -4.4536 +#> 2.4246 -4.6030 -0.9959 1.6058 -1.8115 10.3210 11.3543 -14.6947 +#> 1.3383 -1.8282 -1.9980 -7.7775 0.9999 10.5046 7.7484 -1.0406 +#> 6.5106 -5.1481 -8.9645 2.7458 13.4778 -1.9987 -8.3724 -6.8411 +#> -6.7584 -5.6272 -3.1131 18.0075 -6.0840 -12.9692 4.0645 0.7947 +#> -4.2316 5.9853 7.8798 -3.3417 -11.0788 -4.6102 7.5990 -4.8012 +#> 2.0482 2.4839 2.4645 -0.5146 7.5531 -6.1914 -5.2242 5.2656 +#> 2.3681 -6.2423 -1.4921 
-1.4588 13.6384 11.2711 6.0663 0.3827 +#> 1.7283 0.8490 -4.8233 -8.1522 1.8019 -3.3521 1.7968 13.4382 +#> 1.9979 13.1230 7.3525 -13.3938 -4.5078 -1.8459 -14.6914 -14.1974 +#> -1.2166 -5.0405 1.4720 3.4205 -11.9439 -16.7012 5.8988 9.3834 +#> 2.9653 -4.7849 11.0924 -3.6218 6.1173 -2.4609 -12.5736 11.4327 +#> -3.2606 12.5382 9.5843 -9.3089 4.5541 2.0092 -20.0066 -9.3231 +#> 0.5372 6.2045 -6.6611 -15.5596 -8.4444 7.8671 -0.7634 -4.9848 +#> 1.0054 0.0214 -4.3680 3.8120 -2.7707 -7.4050 -8.1512 -9.1003 +#> -7.9268 -6.0573 9.4077 1.9014 0.7067 1.0413 14.1705 -11.5789 +#> -3.5318 8.0966 9.8679 -7.0404 -7.6394 3.9163 -2.4689 0.5888 +#> -11.6240 2.5042 14.4123 2.6386 -19.1348 4.2073 11.3132 4.3702 +#> -4.9533 1.3656 2.4042 -2.3242 -2.4680 -8.1332 13.3980 -0.8918 +#> -7.9514 6.3341 5.5485 2.2275 1.4123 -1.7811 3.2591 3.3837 +#> -6.7042 -2.2874 -10.2507 20.4313 9.5276 8.1279 -1.0739 12.8549 +#> 2.1646 6.3275 -8.4244 7.2597 -2.9179 -12.6174 12.2693 -3.4996 +#> -2.9582 1.8840 -1.1082 4.9597 -8.9087 5.5940 -13.2627 15.1192 +#> 3.8620 15.6053 -14.6896 8.3630 5.0803 -12.6752 -7.6925 2.4458 +#> +#> Columns 9 to 16 18.2032 -19.6725 5.0508 -16.0836 9.1070 -3.1893 -5.8347 -3.0840 +#> -5.6912 -0.4025 4.0813 -1.5855 -9.3055 -12.0094 -11.9593 -5.9454 +#> 6.7649 18.8722 -8.4775 10.3950 -4.3235 -0.8341 13.1874 3.2388 +#> -8.8735 16.5639 -1.0106 5.3047 10.7215 -8.9410 15.9507 2.6645 +#> -0.4064 15.6966 1.1777 7.8093 18.6075 0.4738 1.0298 -12.1063 +#> -2.6083 22.5461 -18.7206 15.4192 -32.0458 7.3132 -8.8752 -11.7511 +#> -13.7117 -11.5101 -0.8774 1.9782 -4.6365 11.1330 -1.2761 7.9712 +#> 0.2302 13.4904 1.5463 -1.8252 -9.0678 9.8361 -8.8496 1.8389 +#> 5.2399 -2.1986 -10.1448 -6.6668 -12.4374 9.5085 -8.5832 3.4156 +#> 1.6309 11.2047 5.7111 5.6381 -10.8640 -0.8878 3.1294 -7.3463 +#> 0.7705 9.0933 0.9354 -3.5888 -14.0555 -11.2154 15.2315 -3.7600 +#> -1.7876 8.2053 1.9111 -0.1420 7.0392 -7.9765 4.7696 2.9816 +#> -0.0192 -12.5377 -9.9169 2.0928 12.4499 2.4241 -1.6729 -4.4325 +#> 8.1069 11.8041 0.5192 
7.5075 -3.9413 -1.4742 8.3041 6.3053 +#> -13.3357 2.6357 6.9307 9.0956 -4.7934 -2.9089 8.3363 -13.9510 +#> 7.2659 -13.8078 -1.1089 -15.7769 9.4236 -5.1112 -6.0908 2.2405 +#> 7.9741 -7.2104 -7.6181 20.0668 -10.3272 -1.1638 11.3504 -12.6043 +#> -9.1118 9.9215 -0.0853 15.8825 -8.3135 -1.1377 3.6858 0.1344 +#> 8.6729 1.6374 1.3439 -9.6331 1.6315 -8.2283 -9.1082 4.5495 +#> -5.4776 6.7521 -3.4657 10.1412 11.8968 14.7285 -1.7380 -11.2804 +#> -2.5355 -4.1345 2.2051 -18.1531 5.0060 -6.3627 4.9419 4.7983 +#> 9.2168 -8.3501 1.9289 11.9154 -9.8096 11.2362 9.2350 -8.4550 +#> 4.9740 12.1595 -3.6848 10.5441 7.1024 -6.7502 10.8825 -5.2812 +#> 21.5238 -15.0916 12.0990 -2.5791 12.4567 1.5288 -7.3818 16.1056 +#> -0.0058 2.3107 13.5580 7.0735 4.1138 10.0373 -8.5884 9.2899 +#> -1.8384 -0.6716 -4.3395 4.3135 13.5270 2.1079 2.9250 -11.1521 +#> -0.6385 -10.9262 -4.9343 5.1412 -14.9500 -13.1119 13.7721 2.5448 +#> 2.3601 5.2188 0.3091 0.2517 -6.2330 -9.7139 7.5875 -0.9963 +#> 1.1930 7.7077 -11.2370 3.9249 7.5846 -2.9404 1.1477 -1.9224 +#> -2.4618 3.0866 -5.5732 13.2863 -11.5555 8.1338 -1.0781 -8.0933 +#> 4.2506 -6.1249 -2.8451 6.1572 13.4412 4.0567 -1.8001 -1.1474 +#> -16.7791 13.2047 4.3998 5.3425 -3.4872 4.9933 11.7567 -7.1929 +#> -4.3003 -4.2831 10.9765 5.8159 -5.9722 -1.2655 -5.2305 18.9444 +#> +#> Columns 17 to 24 -12.1367 1.4567 -1.6269 8.3249 -1.6047 3.3990 -11.6582 -5.8878 +#> -2.3760 0.4142 9.0234 -13.6288 -8.3573 -6.6818 7.0287 -2.7305 +#> -9.1737 0.7451 20.8649 17.2731 -19.0187 9.8993 7.9144 1.6249 +#> 9.1767 -5.3467 -4.6508 -9.8663 1.7865 -8.4075 5.7143 -0.6771 +#> 10.6411 -19.1595 0.1464 -0.1775 -1.1914 -10.3426 -1.2584 -7.9563 +#> -16.1971 -13.8179 -9.4703 20.5316 -9.9628 -20.8515 9.6431 -10.2868 +#> -9.6250 10.1539 14.2932 -11.5516 -4.8499 -2.0416 12.4956 8.9299 +#> -12.3655 0.1540 9.0695 -9.6254 -7.8712 -19.9817 20.3502 4.4879 +#> 1.8061 -8.6229 -6.4227 17.4155 4.0062 4.4123 -12.7788 -12.0072 +#> -4.9458 -5.6366 -4.4784 0.7285 -4.5531 -13.3600 -4.7004 -15.0519 +#> -0.2497 
-0.2428 -8.2563 3.6055 -14.8665 -2.6090 -15.8119 5.3822 +#> 4.3693 -6.9149 -12.2015 4.5604 -0.3401 5.8007 -17.0388 0.4493 +#> 15.0418 -0.5247 5.2963 3.6107 -8.5075 -11.3623 -12.6868 1.0413 +#> -4.0378 8.6373 2.0527 -0.9846 -0.3666 13.2075 6.5006 9.9071 +#> -0.7308 7.5805 -3.0739 -0.0865 -14.7430 -19.0587 12.9800 -11.4091 +#> -6.2874 16.6758 7.1346 -7.2085 -4.7202 7.4124 3.7254 5.7929 +#> -11.5627 6.5587 -4.1144 19.9498 -6.9683 -10.0468 -7.7720 15.8065 +#> -12.7454 8.7868 12.2793 -10.7621 -4.4990 2.5450 -0.2208 -17.3568 +#> 1.5817 1.2042 8.9885 -2.1994 4.8149 15.1747 2.5702 -2.2010 +#> 3.3618 3.3082 -4.8614 -7.8732 -11.8926 -7.8178 -5.5999 -2.6310 +#> -7.0714 -4.9754 6.3195 -3.3233 5.6727 17.1982 2.4234 7.0407 +#> -0.0079 4.2773 -2.5152 -5.1012 -6.6415 7.7634 -1.7209 15.0072 +#> 0.7598 -0.0868 11.9345 15.8469 4.9130 -2.6162 11.1826 -5.4076 +#> 3.9734 2.6924 -6.5429 4.9290 -8.8703 13.0501 1.0853 -1.9093 +#> 0.0759 -10.3995 -20.7333 -6.7558 -3.4664 -5.0556 -3.4500 -2.6451 +#> 11.9588 -0.9773 -5.5220 -9.4147 9.2083 2.0006 10.9240 -26.0863 +#> -8.0222 -7.1364 -6.2283 -4.9439 -18.3233 -12.8074 0.2920 3.7361 +#> -14.7121 1.6619 -5.7169 9.3542 3.5858 1.4770 -0.7576 -9.8484 +#> 1.3114 -0.9283 18.1555 -3.2016 4.1839 -9.9645 2.9604 -3.2908 +#> -5.9759 -8.9707 -0.6552 9.0356 -7.6296 -30.9177 -8.4567 9.6805 +#> -7.4921 -3.0522 -4.5990 -1.9557 0.6606 3.0873 -0.2393 -6.4692 +#> -1.3348 -6.8173 -5.2836 -9.4862 13.9301 -0.0991 6.0113 -13.2417 +#> 4.5449 5.1664 -15.3847 1.6871 7.4429 -1.8639 -16.2640 4.9284 +#> +#> Columns 25 to 32 -14.1407 14.2955 3.8695 -6.0790 -5.2760 6.6175 3.1503 -5.2982 +#> 2.2688 -11.4986 -2.2534 2.4166 13.8815 -4.4936 -12.1703 -4.5053 +#> -1.4144 -5.8853 19.2525 14.6247 19.1070 11.7330 5.7779 -4.3484 +#> -3.9388 -1.7344 -10.0980 9.1879 -8.3052 11.6017 -14.8255 3.6860 +#> -10.8255 7.7143 13.6104 -3.6169 -5.8611 -6.1461 10.0281 -2.9510 +#> 3.6030 -19.8067 4.5132 -1.7495 -11.6521 -12.9786 19.7649 3.0201 +#> 5.1040 -18.6112 10.5612 5.6103 -2.6788 0.9387 1.4030 
1.6843 +#> -1.2084 -16.5771 15.1098 9.0082 0.5725 0.1737 -1.0725 -11.1100 +#> 8.9186 -5.9483 -3.8864 -6.2679 17.5105 7.6420 1.4482 -30.5893 +#> -5.3877 -5.3120 1.5567 -4.8999 -12.9138 5.5266 -4.2428 12.0746 +#> 4.3041 -1.7397 -6.7371 0.3417 -0.7745 -5.9680 -4.7485 -9.4796 +#> -1.8727 -14.7426 -10.1967 8.3783 -5.9343 -5.3843 10.4344 -4.4727 +#> 1.0782 4.0079 0.5302 -19.9129 13.8336 -5.9290 2.6223 -10.6001 +#> 3.2687 -4.3255 -7.6249 -5.0333 3.5369 2.8440 1.9130 13.2635 +#> 0.9298 -3.1962 4.1605 -0.2121 -19.9007 -2.3541 -9.4520 1.5965 +#> -9.8351 13.3726 -3.0548 1.2052 -6.0823 8.8489 -14.4292 4.4013 +#> 13.7231 23.7698 -9.7120 5.3621 -10.5620 -14.4767 -2.1569 3.0922 +#> 17.7819 -11.9477 8.1932 -11.3599 5.3066 3.3444 5.0477 1.9190 +#> -11.2310 -17.4655 -21.1093 -16.8335 11.1529 14.9433 -1.9176 0.8984 +#> -8.4986 1.0614 5.4820 -7.7777 -7.8456 -2.3261 6.5574 0.4245 +#> -5.7616 -5.6309 18.0811 10.8869 4.5236 -8.1555 -11.5226 -2.2178 +#> -6.5431 0.4128 -2.1879 8.4302 -3.1771 -6.8854 -6.7822 -16.3649 +#> 3.8306 9.3733 2.6196 13.6446 6.9131 2.6113 -12.4660 7.1945 +#> -20.6794 -3.3141 -15.1161 -15.1085 6.8808 24.0385 13.2143 -3.2077 +#> 9.8282 -19.6173 -10.8193 -0.2638 8.3347 14.2335 -5.1957 0.3102 +#> -3.0323 9.2271 1.8307 0.7336 -7.8705 -2.3976 -3.2060 3.4770 +#> 3.7052 3.8880 4.7929 -8.8234 -3.5462 5.8435 11.7265 5.0594 +#> 11.1393 -4.2118 3.1103 -1.0103 5.0755 5.7154 5.7432 5.4446 +#> -2.5627 11.9907 5.3111 3.7991 6.3624 26.5475 -4.0819 5.1281 +#> 4.9443 1.2031 -10.6381 1.1163 -11.6207 -7.0394 7.4165 15.7235 +#> -14.2614 3.5230 -7.0514 4.8836 11.7337 -6.1559 -7.5084 -3.4756 +#> 0.3203 9.2310 4.0731 2.0453 -3.5192 7.8414 -11.4932 -3.4787 +#> -1.9585 11.0028 -7.8681 -5.2722 -4.7085 -0.3444 -8.8240 2.5753 +#> +#> Columns 33 to 40 -16.2086 3.6009 4.1534 -4.0031 0.7665 6.8490 7.3541 8.2326 +#> -11.6117 0.8177 2.1269 3.1290 4.0927 12.8487 -2.8535 9.8357 +#> -20.3408 2.3152 4.2274 8.3990 -4.7943 -18.4804 21.1509 -7.6883 +#> 0.1038 12.0059 -14.0866 7.1420 -2.2766 2.9557 -9.2178 
-4.7261 +#> 4.3252 7.8233 -10.1021 0.0185 6.5683 -8.0528 -3.2560 0.2315 +#> -8.5887 -11.0928 -14.6416 3.7763 4.4483 -10.3446 1.0966 1.6011 +#> 11.9414 -6.7054 -2.7639 -4.9038 -6.6649 3.8405 5.4326 0.9827 +#> -19.0930 -8.3548 13.6712 -4.4705 -7.1463 8.9296 -1.0585 20.3146 +#> 16.2528 -3.6523 0.9896 -21.1536 2.3888 4.4423 -6.8638 6.4089 +#> -1.6519 6.3646 3.1551 9.5345 -4.8651 5.2906 18.3544 -2.8488 +#> 2.6591 4.9864 -21.2580 -10.1801 6.1868 3.7901 9.3343 -12.3642 +#> -1.5676 -9.3029 -11.8483 -2.9516 -1.4575 -6.7623 6.5269 -3.6121 +#> -2.2296 -7.6398 6.7909 -5.8588 6.1594 12.8406 -4.1241 9.5473 +#> 11.0382 10.7851 -4.9545 11.5578 -4.3485 -11.3629 -4.7165 -5.0466 +#> 2.5669 -4.2435 8.9381 0.1753 8.1956 -0.5499 2.4431 21.6045 +#> 4.7957 -14.3638 -1.0203 5.1330 -5.1298 -4.5777 4.2896 -5.9634 +#> 24.2212 -21.3843 -6.0477 10.0261 4.5586 2.9284 4.3612 -9.6834 +#> 3.6108 2.4465 4.1788 -12.8152 -10.0255 19.3243 9.6198 7.5142 +#> -8.3931 -8.3866 -14.1537 -3.7318 -8.6589 -18.7726 8.6055 17.5604 +#> 0.3024 -15.7799 -0.8323 5.2878 1.8991 -9.5312 3.1859 2.5497 +#> -1.8618 12.1423 -3.1689 -2.1018 6.6899 -7.0607 -4.2293 0.3692 +#> 6.0903 -19.3583 0.6555 -4.0463 1.7142 1.0252 5.1535 -2.7416 +#> 2.6232 16.2810 -4.7329 12.3208 -6.1677 5.3871 1.9002 -0.7168 +#> -10.6743 -21.4390 1.2288 1.0133 1.3441 -7.4375 -8.6747 -3.8495 +#> -2.5638 5.9309 4.8542 -4.5298 23.0909 -7.1586 -0.3635 1.0118 +#> -7.4935 9.6163 -7.2168 7.3669 -16.4921 3.3259 -2.4198 10.2347 +#> 4.2367 -0.5784 16.5164 4.3086 -7.5247 1.8854 3.8563 -7.8103 +#> -9.3610 -8.2013 -2.5560 12.9635 -3.8509 -10.7876 1.4986 -1.3885 +#> -14.0475 -2.9250 -2.8980 14.5107 -5.2680 -4.4482 -5.0714 14.8558 +#> -6.7825 -16.8933 9.7776 1.8856 4.4857 17.3968 -0.1486 13.6344 +#> -4.0811 1.1356 -6.8292 -15.4074 11.5754 -3.7018 8.3617 -1.3083 +#> -11.2235 4.8559 3.5214 23.2757 -6.6173 3.2491 -0.4854 9.0660 +#> -13.1656 -2.8782 9.3092 -10.9921 10.8435 1.7970 18.7206 -7.0084 +#> +#> Columns 41 to 48 -0.4794 -4.7200 2.1935 -2.2066 4.2906 -7.6101 
10.5657 6.5193 +#> -10.3014 -12.9307 1.8835 2.5713 2.2012 2.0539 -13.8011 7.5363 +#> -17.2547 -12.5349 -20.9318 -4.4617 -0.9685 -0.3390 5.2662 -6.5757 +#> -3.1898 5.6737 -7.1999 14.6965 -5.5819 -2.1300 0.7077 -3.1309 +#> -18.0966 1.1532 -11.4858 7.5103 -4.0690 8.0907 -7.2170 -7.7530 +#> 7.6633 1.8808 5.7849 7.3424 -3.0240 -9.6336 -4.3158 -4.6648 +#> -8.0560 1.5696 3.1842 -11.2185 7.8985 8.3951 -3.0128 -21.7698 +#> -10.0482 14.3059 5.0131 -2.6909 9.0579 1.8615 20.8154 9.7676 +#> -1.6295 4.7432 -4.8663 -3.7719 1.1519 -3.5064 15.8951 2.9470 +#> -5.0773 -6.4014 0.0620 10.7953 -1.3257 3.5800 -8.8108 3.2778 +#> -7.8998 4.0090 13.3705 -15.3735 -3.4188 3.6468 -1.2002 8.7656 +#> -2.2991 7.6728 -7.0008 19.0875 -10.4308 -5.2981 -6.5476 -10.0805 +#> -7.6098 -5.1771 5.7833 4.8127 -7.2318 -4.9819 -16.0469 1.4642 +#> -1.1119 2.1865 5.0103 0.7753 -13.7961 -3.5765 11.4363 11.7848 +#> 5.3285 0.6544 -4.3014 -3.7732 -6.6757 -4.6680 9.2389 -3.0111 +#> 14.9243 -18.9852 1.5592 5.8168 3.2338 7.5046 -9.2199 -1.9998 +#> 5.8316 -16.0814 24.0642 1.9585 -3.5264 -6.1538 -9.1210 -2.4992 +#> -13.5651 -5.3863 -7.1816 17.6731 -13.0657 7.3824 1.5581 -5.1607 +#> 13.1493 4.8888 -5.5545 0.1581 -0.9223 11.4728 1.0332 -0.4469 +#> 1.6263 3.3358 0.6984 -6.3469 14.6469 -9.9027 -5.6818 1.4462 +#> 7.5497 7.6437 -17.7139 3.6806 -12.6042 4.5378 7.5226 -16.3628 +#> 15.6895 -10.2874 11.3661 -2.3743 -5.0022 2.3989 6.9662 1.3720 +#> -14.7766 -5.7658 -9.8210 -3.8484 9.2184 -5.0199 -10.5909 4.3750 +#> -3.3873 3.1920 10.2297 8.2557 -5.2131 -17.5189 8.5251 12.7148 +#> -4.4102 13.5598 -20.2984 -13.7089 -0.8403 3.0563 10.2671 -2.3559 +#> 15.3842 5.0043 14.6081 9.0073 4.0625 -3.2320 -2.1856 -2.8255 +#> -6.7683 7.2567 4.1482 -2.9104 -3.1228 -1.1391 10.5654 6.2337 +#> 4.1170 8.3056 0.6752 6.1380 -1.4899 6.1811 7.6111 -5.8355 +#> 10.8976 -17.5127 -13.1384 4.9627 -13.8710 14.4422 -2.8103 3.5432 +#> 7.4061 7.1341 -0.5488 2.8806 0.9175 -2.6211 14.1834 31.8526 +#> 12.3116 0.2204 1.3383 -13.2747 -2.8086 10.5885 -3.7188 6.7941 +#> 
0.7298 13.8047 -3.9216 -1.4572 7.6106 1.1530 8.8140 -11.3650 +#> -5.5953 -14.6633 2.4963 2.2599 -9.8951 4.3039 -9.2876 -5.9474 +#> +#> Columns 49 to 54 -7.3718 0.1724 -6.6997 9.3962 -6.5077 -3.1607 +#> 23.4490 -5.6180 1.4565 -7.3384 -1.8619 4.1763 +#> -0.4912 2.9085 -4.5126 -6.7124 5.1127 -3.9138 +#> -6.0853 3.5843 3.1338 -1.4118 0.0586 -5.0726 +#> -0.1282 6.5454 -0.7282 3.4190 2.0427 5.7622 +#> 4.9669 -6.5563 -12.0750 -7.4261 1.5339 1.5126 +#> 8.9537 5.2251 -4.8157 -0.5032 3.1482 -3.5577 +#> 0.7154 0.0666 -7.2519 2.2708 5.7407 -4.2322 +#> 0.7558 -4.7962 7.9415 11.3416 -3.6383 4.6629 +#> -3.2890 -0.2584 6.8779 -1.2153 3.1597 -6.3844 +#> -8.5711 10.4096 9.5683 -2.4923 -11.0658 -11.7025 +#> 2.7361 6.0141 1.2901 -1.3441 6.5208 -3.1641 +#> -6.9538 -8.9816 1.3693 2.0008 -4.1646 2.1112 +#> 0.4658 10.2723 5.8848 5.8259 13.0186 -2.1762 +#> -11.7348 10.4812 -4.3369 7.0428 11.4794 -2.2028 +#> 7.4112 -19.0299 7.3612 1.7996 -5.1023 3.9962 +#> 4.9686 9.4521 8.7250 -0.0540 1.7540 11.1828 +#> -8.2419 -10.3812 8.1248 3.6823 12.9724 -3.7185 +#> 2.8024 -12.8766 3.1866 10.2768 -4.1207 5.6507 +#> -1.8880 14.1398 4.2122 -11.5026 11.6045 -3.2251 +#> 17.1062 -8.0658 -5.3075 0.5834 -0.4006 -2.4423 +#> 8.6359 3.7877 3.2643 3.3035 -3.9406 0.0275 +#> -6.1192 17.6782 7.8204 -6.9942 0.3734 -7.6414 +#> 11.0851 6.6772 3.3842 12.7506 5.1887 7.7000 +#> 14.8332 12.8735 2.1239 3.1732 5.5896 2.0718 +#> 7.2465 -5.4545 3.8073 -0.9183 7.1958 -0.9900 +#> -5.3976 8.9492 1.4537 1.1187 2.3930 -5.6091 +#> -4.3078 4.0652 -8.3012 -19.9772 -1.4070 -4.8790 +#> 1.6937 -16.0303 0.0988 -2.5211 -4.9906 9.1394 +#> 0.4467 17.8078 17.1774 -6.6270 12.9747 0.3961 +#> -11.5709 12.6492 13.1732 3.6568 -3.5198 0.5396 +#> 12.2177 7.3196 -6.3694 -1.8782 3.0797 -0.6957 +#> -2.2324 14.6047 -0.8001 13.8972 -4.3510 5.3483 +#> +#> (18,.,.) 
= +#> Columns 1 to 8 -4.3911 5.4973 1.8484 1.9810 -5.6260 -0.5801 -15.8868 6.4607 +#> -5.0212 -5.5476 -7.6658 -1.4139 -0.4396 2.2836 -5.9576 -3.4828 +#> -5.4325 -0.4088 2.7157 -8.4452 1.3914 -0.2093 11.4034 -4.1957 +#> -0.7190 -5.5509 -2.5918 -12.2629 18.8802 5.8711 1.6982 2.9475 +#> 0.4850 -0.5469 -7.6596 -6.5053 8.6592 13.8677 -21.4914 5.5691 +#> -1.4620 -6.3371 -3.4570 -11.5751 7.5142 -8.2088 -6.2354 -10.5874 +#> 3.2021 -4.3906 -7.1871 6.7673 10.5294 -7.6768 -3.2767 -2.1408 +#> 5.6072 1.9384 3.5318 2.1610 -5.8641 2.2806 9.2333 -28.8971 +#> -13.9020 7.9718 -2.5898 -4.9725 -1.6510 -4.6305 -11.0958 1.3548 +#> -2.8662 -1.9706 -2.5360 -8.9197 -11.1579 2.4577 11.8483 -11.4991 +#> -4.1481 -2.0496 9.5735 -1.7490 -13.7465 -1.5257 5.2256 -3.2159 +#> 4.4538 2.4108 -11.2044 -3.1763 -8.6365 4.4801 8.9262 -7.8778 +#> -5.8780 0.8098 -5.3211 2.7571 -13.6454 -0.8512 -0.6691 23.8480 +#> 2.6878 0.7378 0.9408 -2.5093 5.4975 -2.4101 10.2283 -5.4472 +#> -3.9325 1.6233 -8.9639 -7.7578 5.8783 2.2209 -9.6604 -4.5376 +#> -0.4879 1.8802 -7.2441 -4.4474 -0.9615 6.2233 10.8690 -14.4598 +#> -0.7505 5.6795 7.6239 -14.6193 -3.4399 14.3743 -3.5018 -0.3198 +#> 0.3104 9.1646 -9.0805 -7.8947 -5.8951 6.6030 17.4737 -4.6253 +#> -0.9207 7.8309 -8.4844 0.3523 -14.6567 -8.1897 -1.8762 0.6771 +#> -0.2594 -6.4661 2.0367 -5.6595 13.3032 -10.9836 -9.3047 7.9829 +#> 0.5242 -3.2823 0.8238 8.1593 -0.9138 -7.1425 -15.7148 2.3371 +#> -5.6616 8.2627 -0.1257 6.7354 5.7825 -9.0814 10.6219 -0.4017 +#> -0.0744 -2.6918 8.6814 -16.0938 13.5337 4.3540 -6.6646 9.3643 +#> -16.0383 11.2291 1.4746 11.9791 -1.7396 -6.4750 -7.7020 10.6767 +#> -3.9449 -7.1955 2.8869 4.3371 7.5388 -6.7228 3.2023 5.3479 +#> -0.9995 1.3863 4.3533 4.3314 12.2726 -15.1262 -0.1044 -1.0410 +#> -5.1356 3.6454 7.3923 1.7739 0.4303 6.9459 -2.3443 -5.2192 +#> 2.5240 0.3547 -3.5044 4.3585 7.6539 3.9435 -0.9854 1.7310 +#> -5.0543 3.4230 0.4311 -1.0042 11.0358 -8.9949 3.6246 18.5413 +#> 6.2018 1.1566 0.8363 -9.6308 8.2723 10.8877 11.3666 -13.1488 +#> 
+#> [ printed tensor values omitted: Columns 1 to 54, slices (19,.,.) and (20,.,.) ]
+#> 1.7102e+00 -4.0448e-01 -8.3279e+00 -4.1029e-03 -1.1221e+00 -1.2746e+01 +#> 1.0214e+00 9.5807e-01 1.8192e+00 -1.2437e+00 -1.7461e+01 -5.0998e+00 +#> +#> Columns 31 to 36 1.6438e+01 -1.3438e+01 -2.0719e+00 -1.0913e+01 -7.5714e+00 1.8081e-01 +#> -8.8542e-01 2.0675e+00 -8.0859e+00 6.9595e+00 -8.2521e+00 -1.8931e+00 +#> 1.0092e+01 1.7633e+01 5.2209e+00 -1.6351e+01 -1.9979e+01 9.8331e+00 +#> -6.9253e+00 1.1116e+01 -1.1913e+01 -4.0747e+00 4.2845e+00 2.2116e+00 +#> -1.9262e+00 1.0957e+00 -4.9534e+00 -5.6227e+00 -1.0123e+00 1.1315e+01 +#> 4.2937e+00 -2.6627e+00 -1.2887e+01 4.3937e+00 -5.1109e-01 1.0921e+00 +#> -7.9952e+00 1.4324e+01 -3.9196e+00 -2.7443e+00 6.3004e-01 -4.6276e+00 +#> 9.4714e+00 -4.8978e+00 -1.0007e-01 -2.0158e+01 1.6750e-02 -1.0488e+00 +#> 3.1494e-01 -6.2173e+00 6.5896e+00 2.9818e+00 -1.0756e+01 -4.3337e+00 +#> 1.3194e+01 -1.4014e+00 2.6727e-04 -1.6177e+00 2.1996e-01 1.5266e+00 +#> 1.9788e+00 -8.5207e+00 -3.9199e-01 -1.0973e+01 -1.4129e+01 1.7517e+00 +#> 9.3051e+00 -2.2612e+00 -7.2882e-01 -1.8655e+01 -1.1064e+01 -5.8109e+00 +#> -1.1440e+01 -3.3624e+00 -3.3130e+00 9.9564e+00 1.2247e+01 7.4611e+00 +#> 1.4041e+01 8.0288e+00 4.4719e+00 6.5836e+00 5.6681e+00 4.5749e+00 +#> -1.0175e+01 -2.4834e+00 1.7094e+00 6.6462e+00 9.4967e+00 5.3757e+00 +#> 4.1961e+00 -2.3064e+01 -1.0312e+01 5.6536e+00 -2.5044e+00 -4.9450e+00 +#> -2.5085e+00 -4.3374e+00 -8.5296e-01 2.7185e+00 -5.6022e+00 6.4684e+00 +#> 1.0241e+01 9.5646e+00 6.8356e+00 -1.2182e+01 4.2921e+00 5.7431e+00 +#> 1.3466e+01 1.3720e-01 1.0346e+01 2.1120e+01 -5.1694e-01 -5.4295e+00 +#> 3.2274e+00 2.0233e+01 -1.4584e+00 -4.0477e-01 6.6957e+00 -4.2500e+00 +#> 4.9743e+00 -2.1034e-01 -1.1772e+01 -5.7535e+00 8.7506e+00 -7.1696e+00 +#> 9.0055e+00 8.2756e+00 5.8166e+00 4.2791e+00 -2.4567e+00 -6.4340e+00 +#> 3.1091e+00 2.3341e+00 -1.7254e+00 2.9272e-01 -1.0347e+01 6.9994e+00 +#> 7.0724e+00 -1.0171e+01 1.3105e+01 1.9351e+00 -1.1987e+01 -6.6013e+00 +#> -1.6740e+01 1.5330e+00 1.0160e+00 4.8406e+00 -1.2461e+01 -3.5074e+00 +#> 
-9.7528e+00 1.1982e-01 1.5609e+00 1.0209e+01 5.4214e+00 -5.4774e+00 +#> -3.3251e+00 -4.8822e+00 4.5186e+00 -1.5024e+01 -1.3310e+01 -6.2242e+00 +#> -2.7370e+00 -4.1817e+00 -3.3404e+00 -7.9251e+00 -7.3348e+00 1.0770e+01 +#> -5.2601e+00 -7.2051e+00 5.9309e-01 1.5419e+01 -1.4580e+01 -6.6892e+00 +#> -1.4980e+01 -7.9538e+00 1.0906e+01 -3.5722e+00 1.7001e+00 1.7724e+01 +#> -7.8612e-01 -7.4339e+00 -1.1043e+00 2.8421e+00 -7.2769e-01 2.5934e+00 +#> 1.2884e+00 1.1799e+01 2.6829e+00 -1.6889e+00 1.7691e+01 -9.7053e+00 +#> 1.0168e+01 4.0578e+00 1.3561e+01 -4.0530e+00 -4.0119e+00 -9.7104e+00 +#> +#> Columns 37 to 42 -8.2735e+00 3.1403e+00 -1.1959e+01 1.1864e+00 -8.7998e+00 -2.9534e+00 +#> 9.1179e+00 -6.3446e+00 2.2182e+00 -1.5197e+00 -1.2004e+01 -1.0454e+01 +#> -8.7442e+00 -3.2043e-02 1.6638e+00 5.4491e+00 1.7899e+00 -1.4846e+00 +#> -1.8708e-01 -7.6960e+00 -5.4669e+00 -7.6985e+00 -7.7320e+00 1.1031e+01 +#> -2.1632e+00 -7.0626e+00 1.4560e+00 -6.1482e+00 4.3042e-01 1.4981e+01 +#> 5.2970e+00 -1.5019e+01 2.8836e+00 -6.4169e+00 1.1324e+01 -1.0338e+01 +#> 9.4684e+00 -1.3241e+01 4.9726e-02 -2.9331e-01 1.3627e+00 2.9866e-01 +#> -1.2529e+00 -3.2748e+00 1.9241e+00 9.2193e+00 -1.0986e+01 -2.7363e+01 +#> -4.8738e+00 -7.7993e+00 -4.7592e+00 -2.0297e+01 -8.3224e+00 -2.2505e+01 +#> -1.0972e+01 -2.9503e+00 1.1717e+01 -1.6971e+00 -5.3677e+00 -2.8579e+00 +#> -8.9193e+00 -6.8548e+00 -1.2901e+01 -1.0337e+01 6.7866e+00 3.4333e-01 +#> -1.5797e+01 -5.8071e+00 -1.9483e+01 -8.7037e+00 5.0029e-01 -1.0541e+01 +#> 3.2845e+00 4.1856e+00 -3.9064e+00 -3.7897e+00 2.1681e+00 7.0478e+00 +#> 1.4918e+00 6.1159e+00 4.3892e+00 -7.7357e+00 -8.0483e+00 -6.6913e+00 +#> -2.9716e+00 -1.3224e+01 3.6103e+00 5.5090e+00 7.1380e-01 7.3106e+00 +#> -3.1293e-01 6.9401e+00 7.3920e+00 -2.7220e+00 9.2464e-01 5.9456e+00 +#> 1.5518e+01 1.1428e+01 7.0763e+00 -2.1403e+00 9.3160e+00 -3.3511e+00 +#> -1.1750e+01 -5.4353e+00 7.5489e-01 4.7460e+00 1.0737e+01 -6.3573e+00 +#> 9.4763e-01 2.6830e-01 -3.0907e+00 -1.5246e+01 -1.6409e+01 
-2.3621e+01 +#> -3.2955e+00 2.3546e+00 1.1154e+00 -2.8251e+00 -8.3575e+00 5.5872e+00 +#> -2.2356e+00 1.0417e+01 6.6118e+00 1.0563e+01 -4.5935e+00 9.5886e+00 +#> 2.5937e+00 -9.0757e+00 -1.9578e+00 -6.8779e-02 -1.0149e+01 -1.8089e+01 +#> 5.6172e+00 2.3983e+01 5.6968e+00 -3.7683e+00 1.6258e+01 1.9859e+01 +#> -6.4705e+00 -2.5923e+01 -2.7608e+01 -1.9793e+01 -2.2804e+01 -1.3724e+01 +#> -3.4172e+00 -1.4107e+01 -9.9054e+00 4.4123e+00 -1.1000e+01 5.2239e+00 +#> 2.5845e+00 2.8300e-01 -3.3345e+00 -1.4225e+00 -9.2023e+00 1.4570e+01 +#> -5.9657e+00 -2.1567e+01 -2.3819e+01 1.1180e+00 -1.8332e+00 -1.8851e+00 +#> 5.3897e+00 3.4841e+00 3.8876e+00 3.3017e+00 5.1111e+00 3.0648e+00 +#> 1.4952e+01 -5.3567e+00 -2.6438e+00 9.7077e+00 8.9029e+00 6.6048e+00 +#> 1.3372e+01 -5.9119e+00 -1.4201e+01 -9.9048e+00 4.3797e+00 -1.7422e+01 +#> -3.7871e-01 2.2780e+00 -5.7702e+00 -2.5621e+00 1.3025e+01 1.0764e+01 +#> 3.2211e+00 4.6307e-01 1.1810e+01 5.7527e+00 -5.7293e+00 6.3445e+00 +#> -2.5170e-01 -1.1096e+01 -1.0406e+01 5.4917e+00 -1.1964e+01 -5.6287e+00 +#> +#> Columns 43 to 48 1.0823e+01 1.0227e+01 5.2452e+00 -7.6024e+00 1.0346e+00 -7.9165e+00 +#> -6.8778e+00 -8.0179e+00 -3.0957e+00 -5.3885e+00 -7.4486e+00 -5.7790e+00 +#> 7.6706e+00 2.3105e+01 7.8097e+00 -1.3647e+00 -1.2281e+01 -1.5098e+01 +#> 2.6246e+00 6.9635e+00 -1.8214e+01 5.7573e+00 -1.0707e+01 7.4098e+00 +#> 2.3787e+00 4.8133e+00 -4.4525e+00 5.3442e+00 -1.4970e+01 -3.1860e+00 +#> -1.7354e+01 1.6283e+01 1.5564e+00 1.3106e+01 -6.1499e+00 -4.8105e+00 +#> 5.7187e+00 -1.2279e+01 7.5568e+00 1.6225e+00 6.1519e+00 2.5067e+00 +#> 5.5771e+00 7.8960e+00 3.8538e-01 1.5720e+01 -6.6727e+00 -1.1263e+01 +#> -1.6229e+00 -3.2439e+00 3.1027e+00 -2.5326e-01 -8.8451e+00 -8.2672e-01 +#> 9.0183e+00 4.2075e+00 -1.4198e+01 6.5227e+00 -1.2879e+01 -8.1087e+00 +#> 6.6603e-01 3.8175e+00 -2.9692e+00 -1.5503e+00 1.2798e+00 -7.5683e+00 +#> 1.8157e+01 1.5004e+01 -2.7700e+00 1.1028e+01 -4.5021e+00 4.6181e+00 +#> 7.3547e+00 -5.4684e+00 3.7659e-01 -1.4494e+01 3.1610e-01 
1.0328e-01 +#> -5.1568e+00 1.1580e+01 -1.5136e+01 -1.8530e-01 -7.0306e+00 -5.5930e+00 +#> 7.5433e+00 -1.9105e+01 -6.2412e+00 -5.3192e+00 -3.9712e+00 1.0104e+01 +#> 1.3997e+01 -7.2630e+00 -1.5452e+01 -1.4429e+01 -2.8080e+00 2.6409e+00 +#> 4.5980e+00 -2.1559e+01 -1.2334e+01 -4.2134e-01 7.6498e-01 3.6497e+00 +#> 1.3544e+01 6.7151e+00 -1.9943e+00 8.4802e+00 -1.3441e+01 3.5801e+00 +#> -2.1211e+01 7.9966e+00 5.1467e+00 1.9884e+00 1.4655e+00 -1.7772e+01 +#> -6.7888e+00 9.6563e+00 2.9522e+00 -3.2235e+00 4.3726e+00 -6.4796e+00 +#> 1.1923e+01 1.0458e+01 1.1862e+01 6.3710e-01 4.6522e+00 3.3642e+00 +#> -2.0412e-01 2.3011e+00 7.5140e+00 -6.2739e+00 1.2391e+01 6.0583e-01 +#> 8.2744e-01 4.1622e+00 -1.2777e+01 -6.6249e-01 -2.0946e+01 -5.7807e+00 +#> -4.2178e+00 -9.3935e+00 1.4077e+01 -1.0468e+01 2.5169e+00 -1.2825e+01 +#> -7.4545e+00 -1.7324e+01 1.0080e+01 4.3010e+00 6.2536e+00 5.2832e+00 +#> -1.7112e+01 7.4615e+00 -5.2570e+00 -4.3795e+00 4.2586e+00 1.0613e+01 +#> -4.2153e-01 -1.6174e+01 -3.5876e-02 -2.6960e+00 5.8841e+00 1.0271e+01 +#> -1.0999e+01 9.6090e+00 4.4693e+00 3.4537e+00 -4.9400e+00 -5.7933e+00 +#> -1.1098e+01 4.3504e+00 4.1122e+00 6.6750e+00 -1.3501e+01 5.7151e+00 +#> -1.8164e+01 -7.8379e+00 -1.6745e+01 -2.6554e-01 -1.3939e+01 -4.3230e+00 +#> 4.1970e+00 -1.3924e+01 8.9203e+00 -1.8577e+00 3.2587e+00 3.2568e+00 +#> -9.2823e+00 2.3534e+01 -8.9547e+00 -4.0144e+00 1.0853e+00 -1.2542e+01 +#> 6.3050e+00 -1.6179e+00 3.5026e+00 3.5845e+00 -2.6009e+00 -7.2738e+00 +#> +#> Columns 49 to 54 -1.2046e+01 -7.2920e+00 6.6618e+00 3.8642e+00 1.1839e-01 -1.1737e-02 +#> -1.9295e+00 -1.2324e+01 4.3648e+00 -4.7373e-01 -2.2393e+00 -2.6499e+00 +#> -4.9235e+00 9.2022e+00 8.5990e+00 5.1345e+00 2.7203e+00 1.3626e+00 +#> 1.1801e+01 -3.4244e+00 9.3624e+00 -2.5441e+00 4.0397e+00 -1.0988e-02 +#> -7.1344e+00 -1.4162e+01 -1.0796e+01 -1.5963e+01 4.3136e-01 5.5208e-01 +#> -2.7459e+01 -7.8029e+00 -4.7287e+00 7.1100e+00 8.4058e+00 -8.3266e+00 +#> 1.7347e+01 -2.7476e+00 -5.8147e+00 1.0619e+00 -2.6772e+00 
3.0231e+00 +#> 7.3309e+00 -3.1074e+00 1.9109e+01 6.2488e+00 -6.4734e+00 -4.7125e-01 +#> -4.8922e+00 -7.5932e+00 -2.2855e+00 -1.2168e+00 -1.4721e+00 -4.0499e+00 +#> -1.8002e+01 -4.7983e+00 6.1407e+00 -6.8202e+00 3.7508e+00 -6.9611e-01 +#> 1.5803e+00 2.1600e+00 9.1819e+00 2.6416e-01 1.6545e+00 -6.8457e+00 +#> -2.5037e+00 -3.5136e+00 -1.2750e+01 9.4648e+00 2.0263e+00 -7.7318e-01 +#> -5.8539e+00 7.2161e-01 3.3986e+00 2.6831e-01 8.4629e+00 1.0556e+00 +#> -8.1779e-01 -9.5382e+00 8.8504e+00 -7.6203e+00 1.3994e+00 -2.9230e+00 +#> -3.5740e+00 -1.7280e+00 1.4569e+01 -1.1737e+01 -8.8975e+00 3.7210e+00 +#> 1.1795e+01 3.3146e+00 1.2242e+01 2.9391e+00 -1.8201e+01 -1.2430e+00 +#> -1.8295e+01 -4.6633e+00 5.9964e+00 -4.3451e-01 -6.5197e+00 3.5632e+00 +#> 3.7127e-01 7.4454e+00 1.1682e+01 2.6667e-01 -1.9586e+00 -3.3837e+00 +#> -1.0365e+00 -8.2232e+00 -8.4112e+00 -9.1053e+00 -3.6521e+00 -2.2473e+00 +#> -2.5694e+00 4.3786e-01 -1.0536e+01 -8.8205e+00 -3.6897e+00 1.0798e+01 +#> 8.6545e+00 -5.0410e+00 7.0885e+00 1.8632e+00 6.6410e+00 -1.3708e+00 +#> -5.7480e+00 8.0330e-01 -3.3123e+00 5.2791e+00 -5.4046e+00 4.0533e+00 +#> -3.6363e+00 -8.6738e+00 3.7204e+00 1.1624e-02 1.8064e+01 -1.6268e-01 +#> -2.2105e+01 -1.7262e+01 -1.0033e+01 -9.1466e+00 -5.0783e+00 6.1272e+00 +#> -1.2115e+01 -2.3348e+00 -1.0168e+00 5.4170e+00 -1.1375e+00 -1.1276e+00 +#> -1.1370e+01 9.9577e+00 -1.5178e+01 -3.5892e+00 2.8437e+00 9.4898e-01 +#> 1.3018e+01 1.0340e+01 2.0948e+00 -2.8574e+00 8.2862e-01 3.8906e+00 +#> -1.7084e+00 2.7070e+00 3.6188e+00 -2.3062e+00 8.2864e+00 5.3085e+00 +#> 1.5275e+00 6.4327e+00 1.8934e+01 7.2108e+00 1.6752e+00 1.9821e+00 +#> -1.6915e+00 8.5454e-01 7.1428e+00 1.0604e+01 -7.7169e-01 9.2157e-01 +#> -1.8514e+00 -1.1039e+01 -4.7102e+00 -7.0633e-02 5.1042e+00 5.8931e-01 +#> -7.9077e+00 -5.2208e+00 -4.4968e+00 7.9002e+00 1.5707e+01 6.7454e+00 +#> -4.4708e-01 -2.1211e+00 -2.3933e+00 -2.8860e+00 2.2992e+00 4.1969e+00 +#> [ CPUFloatType{20,33,54} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_conv_transpose2d.html b/static/docs/reference/torch_conv_transpose2d.html new file mode 100644 index 0000000000000000000000000000000000000000..f95f6887c17f1133d306c67e51d05b1f59a61267 --- /dev/null +++ b/static/docs/reference/torch_conv_transpose2d.html @@ -0,0 +1,347 @@ + + + + + + + + +Conv_transpose2d — torch_conv_transpose2d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_transpose2d

    +
    + +
    torch_conv_transpose2d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  output_padding = 0L,
    +  groups = 1L,
    +  dilation = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

    weight

    filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kH , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

    padding

    dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0

    output_padding

    additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0

    groups

    split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

    + +

    conv_transpose2d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

    + + + + +

    Applies a 2D transposed convolution operator over an input image +composed of several input planes, sometimes also called "deconvolution".

    +

    See nn_conv_transpose2d() for details and output shape.
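As a sketch of the output-shape rule (assuming torch is installed), each spatial output size of a transposed convolution is `(in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1`:

```r
if (torch_is_installed()) {
  # 5x5 input, 3x3 kernel, stride 2, padding 1, output_padding 1:
  # (5 - 1) * 2 - 2 * 1 + 1 * (3 - 1) + 1 + 1 = 10
  x <- torch_randn(c(1, 4, 5, 5))
  w <- torch_randn(c(4, 8, 3, 3))
  y <- torch_conv_transpose2d(x, w, stride = 2, padding = 1, output_padding = 1)
  y$shape  # 1 8 10 10
}
```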

    + +

    Examples

    +
    if (torch_is_installed()) { + +# With square kernels and equal stride +inputs = torch_randn(c(1, 4, 5, 5)) +weights = torch_randn(c(4, 8, 3, 3)) +nnf_conv_transpose2d(inputs, weights, padding=1) +} +
    #> torch_tensor +#> (1,1,.,.) = +#> -3.6222 -7.9970 2.8105 3.1026 1.7725 +#> -4.7404 12.8775 1.0415 -0.8881 4.1451 +#> -2.2161 -0.0100 -7.8568 2.5352 7.7713 +#> 1.3625 -3.1941 -6.9461 6.7935 2.9976 +#> -3.9660 0.9707 12.1132 -7.1898 -5.3734 +#> +#> (1,2,.,.) = +#> 5.5750 4.9565 -1.2831 -9.3766 2.7933 +#> 0.3677 -1.5841 -1.1948 2.0401 1.6757 +#> -1.1554 -7.9480 2.0611 3.8100 -2.3715 +#> 3.6882 5.7345 7.8422 -10.5027 4.2258 +#> 6.4450 -2.5137 -14.2397 6.3799 2.2110 +#> +#> (1,3,.,.) = +#> 2.3055 1.7193 1.5106 5.7209 -0.4044 +#> 5.4206 -5.3050 -1.7869 8.7362 4.7972 +#> -1.1797 -8.2598 0.6797 -5.7008 2.3267 +#> 8.3444 -1.9313 -3.6077 1.1224 0.3162 +#> -0.1740 -4.8076 -2.2376 -3.3701 -7.0435 +#> +#> (1,4,.,.) = +#> -2.1321 -0.6399 -0.5702 6.2185 -6.6191 +#> -6.7085 -1.7849 0.0567 0.2547 4.4343 +#> -0.1791 -13.0960 -1.1711 -11.7881 7.9123 +#> 1.6398 -8.7353 5.6190 2.7717 2.2932 +#> 6.8436 5.9709 0.8596 -5.4111 -4.1101 +#> +#> (1,5,.,.) = +#> 6.3795 12.9273 3.7923 -2.6331 -6.7760 +#> 12.3485 -7.3105 3.6371 -0.5407 -1.6971 +#> -6.8453 8.1406 8.8767 -0.0699 -10.0042 +#> 0.8129 -1.6930 3.5113 -0.9711 2.3083 +#> -1.1577 -2.1860 -1.9327 3.4738 -2.3536 +#> +#> (1,6,.,.) = +#> 2.8265 -4.2878 -10.9216 -1.4770 -3.4568 +#> -8.9188 -7.0011 -0.9053 -2.3985 -0.2689 +#> -0.8018 12.4512 0.5999 -5.5394 1.6234 +#> 4.3200 -10.4630 -4.2572 7.0144 -0.1696 +#> -4.6365 -0.5886 7.6997 -3.6797 2.4974 +#> +#> (1,7,.,.) = +#> -2.3134 1.9207 6.9010 -2.2058 2.9683 +#> -2.6449 5.9937 -0.7752 -2.3005 -1.7128 +#> 3.1190 -11.4434 2.3856 3.4017 -0.6764 +#> -2.5913 6.5472 -0.4979 2.9762 -7.4267 +#> 1.9456 3.5234 -6.3143 -3.6686 -0.6943 +#> +#> (1,8,.,.) = +#> -0.3478 4.7669 7.3665 6.3556 -2.5469 +#> 14.4336 2.1104 2.8604 0.8508 7.2790 +#> -9.4439 -5.6837 -8.8314 10.9166 0.5517 +#> -8.0535 6.8200 4.4783 13.7326 3.1114 +#> -0.0064 8.4882 -0.5751 1.8667 -4.2371 +#> [ CPUFloatType{1,8,5,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_conv_transpose3d.html b/static/docs/reference/torch_conv_transpose3d.html new file mode 100644 index 0000000000000000000000000000000000000000..dce76ea97da12c6d76ed9fd08e6a14bb2f4adac5 --- /dev/null +++ b/static/docs/reference/torch_conv_transpose3d.html @@ -0,0 +1,291 @@ + + + + + + + + +Conv_transpose3d — torch_conv_transpose3d • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Conv_transpose3d

    +
    + +
    torch_conv_transpose3d(
    +  input,
    +  weight,
    +  bias = list(),
    +  stride = 1L,
    +  padding = 0L,
    +  output_padding = 0L,
    +  groups = 1L,
    +  dilation = 1L
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

    weight

    filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kT , kH , kW)\)

    bias

    optional bias of shape \((\mbox{out\_channels})\). Default: NULL

    stride

    the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

    padding

    dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

    output_padding

    additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0

    groups

    split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

    dilation

    the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

    + +

    conv_transpose3d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

    + + + + +

Applies a 3D transposed convolution operator over an input image +composed of several input planes, sometimes also called "deconvolution".

    +

    See nn_conv_transpose3d() for details and output shape.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +inputs = torch_randn(c(20, 16, 50, 10, 20)) +weights = torch_randn(c(16, 33, 3, 3, 3)) +nnf_conv_transpose3d(inputs, weights) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cos.html b/static/docs/reference/torch_cos.html new file mode 100644 index 0000000000000000000000000000000000000000..6f83e3776277b6b58e879c4041413c37956934ac --- /dev/null +++ b/static/docs/reference/torch_cos.html @@ -0,0 +1,259 @@ + + + + + + + + +Cos — torch_cos • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cos

    +
    + +
    torch_cos(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    cos(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the cosine of the elements of input.

    +

    $$ + \mbox{out}_{i} = \cos(\mbox{input}_{i}) +$$
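Since the operation is elementwise, it can be cross-checked against base R's cos() (a sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(4))
  # float32 result converted back to double, so allow a small tolerance
  all.equal(as_array(torch_cos(a)), cos(as_array(a)), tolerance = 1e-6)
}
```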

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_cos(a) +} +
    #> torch_tensor +#> 0.9950 +#> -0.6881 +#> 0.9608 +#> 0.4292 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cosh.html b/static/docs/reference/torch_cosh.html new file mode 100644 index 0000000000000000000000000000000000000000..cc5ed43e0100aeb4cb2801382d4b7fd637f90bb3 --- /dev/null +++ b/static/docs/reference/torch_cosh.html @@ -0,0 +1,260 @@ + + + + + + + + +Cosh — torch_cosh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cosh

    +
    + +
    torch_cosh(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    cosh(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the hyperbolic cosine of the elements of +input.

    +

    $$ + \mbox{out}_{i} = \cosh(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_cosh(a) +} +
    #> torch_tensor +#> 1.6794 +#> 1.0414 +#> 1.1420 +#> 1.4987 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cosine_similarity.html b/static/docs/reference/torch_cosine_similarity.html new file mode 100644 index 0000000000000000000000000000000000000000..467bdc8c16269f99bdd646c3159a09ee8cb39ee0 --- /dev/null +++ b/static/docs/reference/torch_cosine_similarity.html @@ -0,0 +1,368 @@ + + + + + + + + +Cosine_similarity — torch_cosine_similarity • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cosine_similarity

    +
    + +
    torch_cosine_similarity(x1, x2, dim = 2L, eps = 0)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    x1

    (Tensor) First input.

    x2

    (Tensor) Second input (of size matching x1).

    dim

(int, optional) Dimension of vectors. Default: 2

    eps

    (float, optional) Small value to avoid division by zero. Default: 1e-8

    + +

    cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor

    + + + + +

    Returns cosine similarity between x1 and x2, computed along dim.

    +

    $$ + \mbox{similarity} = \frac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} +$$
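The formula can be checked directly for a single pair of rows (a sketch, assuming torch is installed; torch_norm() computes the 2-norm of each full tensor here):

```r
if (torch_is_installed()) {
  x1 <- torch_randn(c(1, 128))
  x2 <- torch_randn(c(1, 128))
  manual <- torch_sum(x1 * x2) / (torch_norm(x1) * torch_norm(x2))
  # should agree (up to the eps guard) with:
  torch_cosine_similarity(x1, x2, dim = 2)
}
```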

    + +

    Examples

    +
    if (torch_is_installed()) { + +input1 = torch_randn(c(100, 128)) +input2 = torch_randn(c(100, 128)) +output = torch_cosine_similarity(input1, input2) +output +} +
    #> torch_tensor +#> -0.0839 +#> -0.0532 +#> -0.0722 +#> 0.0385 +#> -0.0419 +#> 0.0142 +#> -0.1544 +#> 0.0472 +#> 0.0008 +#> -0.0617 +#> 0.0141 +#> -0.1420 +#> 0.0335 +#> 0.0430 +#> 0.0615 +#> 0.0353 +#> 0.1243 +#> 0.0716 +#> 0.0977 +#> -0.0064 +#> 0.0749 +#> -0.0833 +#> 0.1121 +#> 0.0322 +#> 0.0634 +#> -0.0648 +#> 0.0626 +#> -0.0684 +#> 0.0730 +#> -0.0172 +#> 0.0471 +#> 0.1222 +#> 0.0414 +#> 0.0982 +#> -0.0227 +#> -0.0359 +#> 0.1331 +#> 0.0087 +#> -0.0901 +#> -0.0007 +#> 0.0342 +#> -0.0128 +#> -0.0778 +#> 0.0089 +#> 0.0861 +#> 0.0460 +#> 0.2530 +#> -0.0914 +#> -0.0440 +#> -0.0222 +#> -0.0150 +#> 0.0758 +#> 0.0366 +#> 0.0954 +#> -0.1439 +#> -0.0192 +#> -0.0154 +#> -0.2044 +#> -0.0923 +#> 0.0788 +#> 0.0914 +#> 0.1129 +#> -0.1281 +#> -0.0538 +#> 0.0407 +#> -0.0087 +#> -0.0040 +#> 0.0872 +#> 0.0249 +#> -0.0875 +#> -0.0190 +#> -0.0206 +#> 0.0033 +#> -0.2125 +#> -0.2117 +#> 0.0331 +#> 0.1047 +#> -0.0187 +#> -0.0631 +#> -0.0723 +#> 0.0119 +#> -0.0522 +#> -0.0242 +#> 0.1630 +#> 0.2203 +#> -0.0939 +#> -0.0853 +#> -0.0385 +#> 0.0749 +#> -0.0212 +#> -0.1387 +#> -0.1505 +#> -0.1320 +#> 0.0642 +#> 0.0524 +#> -0.0435 +#> -0.0708 +#> 0.0286 +#> 0.0018 +#> 0.0362 +#> [ CPUFloatType{100} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cross.html b/static/docs/reference/torch_cross.html new file mode 100644 index 0000000000000000000000000000000000000000..1fff399dd3c2d92b236d1ecdd8ba9c2640910698 --- /dev/null +++ b/static/docs/reference/torch_cross.html @@ -0,0 +1,272 @@ + + + + + + + + +Cross — torch_cross • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cross

    +
    + +
    torch_cross(self, other, dim = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the second input tensor

    dim

    (int, optional) the dimension to take the cross-product in.

    + +

    cross(input, other, dim=-1, out=NULL) -> Tensor

    + + + + +

    Returns the cross product of vectors in dimension dim of input +and other.

    +

    input and other must have the same size, and the size of their +dim dimension should be 3.

    +

If dim is not given, it defaults to the first dimension found with size 3.
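A sketch of the default-dim behaviour (assuming torch is installed): for a 4x3 input, the first dimension of size 3 is dimension 2, so the two calls below agree:

```r
if (torch_is_installed()) {
  a <- torch_randn(c(4, 3))
  b <- torch_randn(c(4, 3))
  torch_equal(torch_cross(a, b), torch_cross(a, b, dim = 2))  # TRUE
}
```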

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4, 3)) +a +b = torch_randn(c(4, 3)) +b +torch_cross(a, b, dim=2) +torch_cross(a, b) +} +
    #> torch_tensor +#> 1.0051 1.1323 -1.5101 +#> 0.1088 -0.3737 1.1852 +#> -0.4150 -0.6857 -0.1986 +#> -1.1004 -2.0375 0.8405 +#> [ CPUFloatType{4,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cummax.html b/static/docs/reference/torch_cummax.html new file mode 100644 index 0000000000000000000000000000000000000000..6be41dae2f1fc029f27c4be6631dbe77593a7b48 --- /dev/null +++ b/static/docs/reference/torch_cummax.html @@ -0,0 +1,287 @@ + + + + + + + + +Cummax — torch_cummax • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cummax

    +
    + +
    torch_cummax(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    + +

    cummax(input, dim) -> (Tensor, LongTensor)

    + + + + +

Returns a namedtuple (values, indices) where values is the cumulative maximum of +elements of input in the dimension dim, and indices is the index +location of each maximum value found in the dimension dim.

    +

$$ + y_i = \max(x_1, x_2, x_3, \dots, x_i) +$$
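The values component can be cross-checked against base R's cummax() (a sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(10))
  res <- torch_cummax(a, dim = 1)
  # the running maximum selects existing elements, so the match is exact
  all.equal(as_array(res[[1]]), cummax(as_array(a)))  # TRUE
}
```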

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cummax(a, dim=1) +} +
    #> [[1]] +#> torch_tensor +#> -1.3457 +#> -1.3457 +#> -1.2500 +#> -1.2500 +#> 0.6911 +#> 0.6911 +#> 0.6911 +#> 0.7823 +#> 0.7823 +#> 0.7892 +#> [ CPUFloatType{10} ] +#> +#> [[2]] +#> torch_tensor +#> 0 +#> 0 +#> 2 +#> 2 +#> 4 +#> 4 +#> 4 +#> 7 +#> 7 +#> 9 +#> [ CPULongType{10} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cummin.html b/static/docs/reference/torch_cummin.html new file mode 100644 index 0000000000000000000000000000000000000000..e1892ecb05d6381e1a0250ea92b3d25bc60c9ea6 --- /dev/null +++ b/static/docs/reference/torch_cummin.html @@ -0,0 +1,287 @@ + + + + + + + + +Cummin — torch_cummin • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cummin

    +
    + +
    torch_cummin(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    + +

    cummin(input, dim) -> (Tensor, LongTensor)

    + + + + +

Returns a namedtuple (values, indices) where values is the cumulative minimum of +elements of input in the dimension dim, and indices is the index +location of each minimum value found in the dimension dim.

    +

$$ + y_i = \min(x_1, x_2, x_3, \dots, x_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cummin(a, dim=1) +} +
    #> [[1]] +#> torch_tensor +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> -1.4730 +#> [ CPUFloatType{10} ] +#> +#> [[2]] +#> torch_tensor +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> 0 +#> [ CPULongType{10} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cumprod.html b/static/docs/reference/torch_cumprod.html new file mode 100644 index 0000000000000000000000000000000000000000..4ec85b4c99cf6594801cd17d8ca742b830a893b2 --- /dev/null +++ b/static/docs/reference/torch_cumprod.html @@ -0,0 +1,276 @@ + + + + + + + + +Cumprod — torch_cumprod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cumprod

    +
    + +
    torch_cumprod(self, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    cumprod(input, dim, out=NULL, dtype=NULL) -> Tensor

    + + + + +

    Returns the cumulative product of elements of input in the dimension +dim.

    +

For example, if input is a vector of size N, the result will also be +a vector of size N, with elements given by

    +

    $$ + y_i = x_1 \times x_2\times x_3\times \dots \times x_i +$$
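For 1-d input this matches base R's cumprod() up to float32 rounding (a sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(10))
  all.equal(as_array(torch_cumprod(a, dim = 1)), cumprod(as_array(a)),
            tolerance = 1e-5)
}
```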

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cumprod(a, dim=1) +} +
    #> torch_tensor +#> -0.2882 +#> 0.0701 +#> 0.1595 +#> 0.0911 +#> 0.0191 +#> 0.0297 +#> -0.0133 +#> 0.0059 +#> -0.0030 +#> -0.0012 +#> [ CPUFloatType{10} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_cumsum.html b/static/docs/reference/torch_cumsum.html new file mode 100644 index 0000000000000000000000000000000000000000..401d32aed2b389996ae6cb9edcff3ac5901c0bfe --- /dev/null +++ b/static/docs/reference/torch_cumsum.html @@ -0,0 +1,276 @@ + + + + + + + + +Cumsum — torch_cumsum • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Cumsum

    +
    + +
    torch_cumsum(self, dim, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to do the operation over

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    cumsum(input, dim, out=NULL, dtype=NULL) -> Tensor

    + + + + +

    Returns the cumulative sum of elements of input in the dimension +dim.

    +

For example, if input is a vector of size N, the result will also be +a vector of size N, with elements given by

    +

    $$ + y_i = x_1 + x_2 + x_3 + \dots + x_i +$$
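Likewise, a 1-d cumulative sum can be verified against base R's cumsum() (a sketch, assuming torch is installed):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(10))
  all.equal(as_array(torch_cumsum(a, dim = 1)), cumsum(as_array(a)),
            tolerance = 1e-5)
}
```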

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(10)) +a +torch_cumsum(a, dim=1) +} +
    #> torch_tensor +#> 0.5764 +#> 0.8705 +#> 0.0415 +#> 0.2468 +#> -0.7231 +#> -2.5011 +#> -3.5835 +#> -2.3688 +#> -3.0899 +#> -2.3232 +#> [ CPUFloatType{10} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_det.html b/static/docs/reference/torch_det.html new file mode 100644 index 0000000000000000000000000000000000000000..d62a4bc0a439c95a794e35d990de54049bd10445 --- /dev/null +++ b/static/docs/reference/torch_det.html @@ -0,0 +1,266 @@ + + + + + + + + +Det — torch_det • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Det

    +
    + +
    torch_det(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

    + +

    Note

    + + +
    Backward through `det` internally uses SVD results when `input` is
    +not invertible. In this case, double backward through `det` will be
+unstable when `input` doesn't have distinct singular values. See
+`torch_svd` for details.
    +
    + +

    det(input) -> Tensor

    + + + + +

    Calculates determinant of a square matrix or batches of square matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(3, 3)) +torch_det(A) +A = torch_randn(c(3, 2, 2)) +A +A$det() +} +
    #> torch_tensor +#> 0.2158 +#> 1.0192 +#> -0.1403 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_device.html b/static/docs/reference/torch_device.html new file mode 100644 index 0000000000000000000000000000000000000000..403fe5ec25516b5e6d7f141445b04a58f982d080 --- /dev/null +++ b/static/docs/reference/torch_device.html @@ -0,0 +1,262 @@ + + + + + + + + +Create a Device object — torch_device • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    A torch_device is an object representing the device on which a torch_tensor +is or will be allocated.

    +
    + +
    torch_device(type, index = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    type

    (character) a device type "cuda" or "cpu"

    index

    (integer) optional device ordinal for the device type. If the device ordinal +is not present, this object will always represent the current device for the device +type, even after torch_cuda_set_device() is called; e.g., a torch_tensor constructed +with device 'cuda' is equivalent to 'cuda:X' where X is the result of +torch_cuda_current_device().

    +

    A torch_device can be constructed via a string or via a string and device ordinal

    + + +

    Examples

    +
    if (torch_is_installed()) { + +# Via string +torch_device("cuda:1") +torch_device("cpu") +torch_device("cuda") # current cuda device + +# Via string and device ordinal +torch_device("cuda", 0) +torch_device("cpu", 0) + +} +
    #> torch_device(type='cpu', index=0)
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_diag.html b/static/docs/reference/torch_diag.html new file mode 100644 index 0000000000000000000000000000000000000000..8bf20a093cc85077a0eb7334aa48070f10b73012 --- /dev/null +++ b/static/docs/reference/torch_diag.html @@ -0,0 +1,258 @@ + + + + + + + + +Diag — torch_diag • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diag

    +
    + +
    torch_diag(self, diagonal = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    diagonal

    (int, optional) the diagonal to consider

    + +

    diag(input, diagonal=0, out=NULL) -> Tensor

    + + + +
      +
• If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal.

• If input is a matrix (2-D tensor), then returns a 1-D tensor with the diagonal elements of input.
    + +

    The argument diagonal controls which diagonal to consider:

      +
• If diagonal = 0, it is the main diagonal.

• If diagonal > 0, it is above the main diagonal.

• If diagonal < 0, it is below the main diagonal.
    + + +
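This page lists the two behaviors of torch_diag but has no runnable example; a minimal sketch of both directions (vector to matrix and matrix to vector), assuming a working torch installation:

```r
if (torch_is_installed()) {

# a vector becomes a square matrix with its elements on the diagonal
v = torch_tensor(c(1, 2, 3))
torch_diag(v)                # 3x3 matrix, 1 2 3 on the main diagonal

# a matrix collapses to the 1-D tensor of its diagonal elements
m = torch_randn(c(3, 3))
torch_diag(m)                # length-3 vector
torch_diag(m, diagonal = 1)  # the diagonal just above the main one
}
```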
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_diag_embed.html b/static/docs/reference/torch_diag_embed.html new file mode 100644 index 0000000000000000000000000000000000000000..902ceb33721227e445d77b978bb841d4f88eb75c --- /dev/null +++ b/static/docs/reference/torch_diag_embed.html @@ -0,0 +1,297 @@ + + + + + + + + +Diag_embed — torch_diag_embed • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diag_embed

    +
    + +
    torch_diag_embed(self, offset = 0L, dim1 = -2L, dim2 = -1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor. Must be at least 1-dimensional.

    offset

    (int, optional) which diagonal to consider. Default: 0 (main diagonal).

    dim1

    (int, optional) first dimension with respect to which to take diagonal. Default: -2.

    dim2

    (int, optional) second dimension with respect to which to take diagonal. Default: -1.

    + +

    diag_embed(input, offset=0, dim1=-2, dim2=-1) -> Tensor

    + + + + +

    Creates a tensor whose diagonals of certain 2D planes (specified by +dim1 and dim2) are filled by input. +To facilitate creating batched diagonal matrices, the 2D planes formed by +the last two dimensions of the returned tensor are chosen by default.

    +

    The argument offset controls which diagonal to consider:

      +
• If offset = 0, it is the main diagonal.

• If offset > 0, it is above the main diagonal.

• If offset < 0, it is below the main diagonal.
    + +

    The size of the new matrix will be calculated to make the specified diagonal +of the size of the last input dimension. +Note that for offset other than \(0\), the order of dim1 +and dim2 matters. Exchanging them is equivalent to changing the +sign of offset.

    +

    Applying torch_diagonal to the output of this function with +the same arguments yields a matrix identical to input. However, +torch_diagonal has different default dimensions, so those +need to be explicitly specified.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(2, 3)) +torch_diag_embed(a) +torch_diag_embed(a, offset=1, dim1=1, dim2=3) +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.0000 0.3358 0.0000 0.0000 +#> 0.0000 -3.1130 0.0000 0.0000 +#> +#> (2,.,.) = +#> 0.0000 0.0000 0.4778 0.0000 +#> 0.0000 0.0000 0.6605 0.0000 +#> +#> (3,.,.) = +#> 0.0000 0.0000 0.0000 -0.0378 +#> 0.0000 0.0000 0.0000 -0.3726 +#> +#> (4,.,.) = +#> 0 0 0 0 +#> 0 0 0 0 +#> [ CPUFloatType{4,2,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_diagflat.html b/static/docs/reference/torch_diagflat.html new file mode 100644 index 0000000000000000000000000000000000000000..308a4c3471277d7bd22a87aea466b586911aece3 --- /dev/null +++ b/static/docs/reference/torch_diagflat.html @@ -0,0 +1,275 @@ + + + + + + + + +Diagflat — torch_diagflat • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diagflat

    +
    + +
    torch_diagflat(self, offset = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    offset

    (int, optional) the diagonal to consider. Default: 0 (main diagonal).

    + +

    diagflat(input, offset=0) -> Tensor

    + + + +
      +
• If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal.

• If input is a tensor with more than one dimension, then returns a 2-D tensor with diagonal elements equal to a flattened input.
    + +

    The argument offset controls which diagonal to consider:

      +
• If offset = 0, it is the main diagonal.

• If offset > 0, it is above the main diagonal.

• If offset < 0, it is below the main diagonal.
    + + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3)) +a +torch_diagflat(a) +torch_diagflat(a, 1) +a = torch_randn(c(2, 2)) +a +torch_diagflat(a) +} +
    #> torch_tensor +#> -0.7924 0.0000 0.0000 0.0000 +#> 0.0000 -0.6579 0.0000 0.0000 +#> 0.0000 0.0000 -1.1050 0.0000 +#> 0.0000 0.0000 0.0000 0.5352 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_diagonal.html b/static/docs/reference/torch_diagonal.html new file mode 100644 index 0000000000000000000000000000000000000000..28f70e727094b43e3e612278929142ef656cb120 --- /dev/null +++ b/static/docs/reference/torch_diagonal.html @@ -0,0 +1,298 @@ + + + + + + + + +Diagonal — torch_diagonal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Diagonal

    +
    + +
    torch_diagonal(self, outdim, dim1 = 1L, dim2 = 2L, offset = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor. Must be at least 2-dimensional.

    outdim

    dimension name if self is a named tensor.

    dim1

(int, optional) first dimension with respect to which to take diagonal. Default: 1.

    dim2

(int, optional) second dimension with respect to which to take diagonal. Default: 2.

    offset

    (int, optional) which diagonal to consider. Default: 0 (main diagonal).

    + +

    diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor

    + + + + +

Returns a partial view of input with its diagonal elements +with respect to dim1 and dim2 appended as a dimension +at the end of the shape.

    +

    The argument offset controls which diagonal to consider:

      +
• If offset = 0, it is the main diagonal.

• If offset > 0, it is above the main diagonal.

• If offset < 0, it is below the main diagonal.
    + +

    Applying torch_diag_embed to the output of this function with +the same arguments yields a diagonal matrix with the diagonal entries +of the input. However, torch_diag_embed has different default +dimensions, so those need to be explicitly specified.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a +torch_diagonal(a, offset = 0) +torch_diagonal(a, offset = 1) +x = torch_randn(c(2, 5, 4, 2)) +torch_diagonal(x, offset=-1, dim1=1, dim2=2) +} +
    #> torch_tensor +#> (1,.,.) = +#> 0.6147 +#> 0.7596 +#> +#> (2,.,.) = +#> -0.9285 +#> -0.0531 +#> +#> (3,.,.) = +#> 0.8622 +#> -0.1970 +#> +#> (4,.,.) = +#> -1.2797 +#> 1.6829 +#> [ CPUFloatType{4,2,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_digamma.html b/static/docs/reference/torch_digamma.html new file mode 100644 index 0000000000000000000000000000000000000000..8ddf0b44cb5c4a3795fb90e7068344a2cd0f0582 --- /dev/null +++ b/static/docs/reference/torch_digamma.html @@ -0,0 +1,256 @@ + + + + + + + + +Digamma — torch_digamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Digamma

    +
    + +
    torch_digamma(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the tensor to compute the digamma function on

    + +

    digamma(input, out=NULL) -> Tensor

    + + + + +

    Computes the logarithmic derivative of the gamma function on input.

    +

    $$ + \psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(c(1, 0.5)) +torch_digamma(a) +} +
    #> torch_tensor +#> -0.5772 +#> -1.9635 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_dist.html b/static/docs/reference/torch_dist.html new file mode 100644 index 0000000000000000000000000000000000000000..cf47972c0645444d69bd847e2fd7317aa0e4091a --- /dev/null +++ b/static/docs/reference/torch_dist.html @@ -0,0 +1,268 @@ + + + + + + + + +Dist — torch_dist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Dist

    +
    + +
    torch_dist(self, other, p = 2L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

(Tensor) the right-hand-side input tensor

    p

    (float, optional) the norm to be computed

    + +

    dist(input, other, p=2) -> Tensor

    + + + + +

    Returns the p-norm of (input - other)

    +

    The shapes of input and other must be +broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(4)) +x +y = torch_randn(c(4)) +y +torch_dist(x, y, 3.5) +torch_dist(x, y, 3) +torch_dist(x, y, 0) +torch_dist(x, y, 1) +} +
    #> torch_tensor +#> 2.48136 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_div.html b/static/docs/reference/torch_div.html new file mode 100644 index 0000000000000000000000000000000000000000..8708ced880ca9b91d5f3b342fa444784ea12266b --- /dev/null +++ b/static/docs/reference/torch_div.html @@ -0,0 +1,299 @@ + + + + + + + + +Div — torch_div • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Div

    +
    + +
    torch_div(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

(Number) the number by which to divide each element of input

    + +

    div(input, other, out=NULL) -> Tensor

    + + + + +

    Divides each element of the input input with the scalar other and +returns a new resulting tensor.

    + + +

    Each element of the tensor input is divided by each element of the tensor +other. The resulting tensor is returned.

    +

    $$ + \mbox{out}_i = \frac{\mbox{input}_i}{\mbox{other}_i} +$$ +The shapes of input and other must be broadcastable +. If the torch_dtype of input and +other differ, the torch_dtype of the result tensor is determined +following rules described in the type promotion documentation +. If out is specified, the result must be +castable to the torch_dtype of the +specified output tensor. Integral division by zero leads to undefined behavior.

    +

    Warning

    + + + +

Integer division using div is deprecated, and in a future release div will +perform true division like torch_true_divide(). +Use torch_floor_divide() to perform integer division +instead.

    +

    $$ + \mbox{out}_i = \frac{\mbox{input}_i}{\mbox{other}} +$$ +If the torch_dtype of input and other differ, the +torch_dtype of the result tensor is determined following rules +described in the type promotion documentation . If +out is specified, the result must be castable +to the torch_dtype of the specified output tensor. Integral division +by zero leads to undefined behavior.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_div(a, 0.5) + + +a = torch_randn(c(4, 4)) +a +b = torch_randn(c(4)) +b +torch_div(a, b) +} +
    #> torch_tensor +#> 0.6647 0.7626 -0.4095 0.1570 +#> 0.7859 -0.3618 0.0104 -0.7726 +#> 0.3775 -0.4202 0.9625 0.0194 +#> 0.4176 -0.7826 -0.3309 -0.8345 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_dot.html b/static/docs/reference/torch_dot.html new file mode 100644 index 0000000000000000000000000000000000000000..7d41ed249173c7474f9dfdd11a087976f90f375a --- /dev/null +++ b/static/docs/reference/torch_dot.html @@ -0,0 +1,258 @@ + + + + + + + + +Dot — torch_dot • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Dot

    +
    + +
    torch_dot(self, tensor)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    tensor

    the other input tensor

    + +

    Note

    + +

    This function does not broadcast .

    +

    dot(input, tensor) -> Tensor

    + + + + +

    Computes the dot product (inner product) of two tensors.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_dot(torch_tensor(c(2, 3)), torch_tensor(c(2, 1))) +} +
    #> torch_tensor +#> 7 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_dtype.html b/static/docs/reference/torch_dtype.html new file mode 100644 index 0000000000000000000000000000000000000000..ca6599a883533c95cb950245371399416a2da797 --- /dev/null +++ b/static/docs/reference/torch_dtype.html @@ -0,0 +1,263 @@ + + + + + + + + +Torch data types — torch_dtype • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Returns the corresponding data type.

    +
    + +
    torch_float32()
    +
    +torch_float()
    +
    +torch_float64()
    +
    +torch_double()
    +
    +torch_float16()
    +
    +torch_half()
    +
    +torch_uint8()
    +
    +torch_int8()
    +
    +torch_int16()
    +
    +torch_short()
    +
    +torch_int32()
    +
    +torch_int()
    +
    +torch_int64()
    +
    +torch_long()
    +
    +torch_bool()
    +
    +torch_quint8()
    +
    +torch_qint8()
    +
    +torch_qint32()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_eig.html b/static/docs/reference/torch_eig.html new file mode 100644 index 0000000000000000000000000000000000000000..66ca51941b79471e1260b77d283ecfc36e70c058 --- /dev/null +++ b/static/docs/reference/torch_eig.html @@ -0,0 +1,254 @@ + + + + + + + + +Eig — torch_eig • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Eig

    +
    + +
    torch_eig(self, eigenvectors = FALSE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the square matrix of shape \((n \times n)\) for which the eigenvalues and eigenvectors will be computed

    eigenvectors

    (bool) TRUE to compute both eigenvalues and eigenvectors; otherwise, only eigenvalues will be computed

    + +

    Note

    + + +
    Since eigenvalues and eigenvectors might be complex, backward pass is supported only
    +for [`torch_symeig`]
    +
    + +

    eig(input, eigenvectors=False, out=NULL) -> (Tensor, Tensor)

    + + + + +

    Computes the eigenvalues and eigenvectors of a real square matrix.

    + +
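This page has no example; a minimal sketch, assuming a working torch installation — the return value holds the eigenvalues tensor and, when eigenvectors = TRUE, the eigenvectors tensor as well:

```r
if (torch_is_installed()) {

m = torch_randn(c(4, 4))

# eigenvalues only
torch_eig(m)

# eigenvalues and eigenvectors
torch_eig(m, eigenvectors = TRUE)
}
```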
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_einsum.html b/static/docs/reference/torch_einsum.html new file mode 100644 index 0000000000000000000000000000000000000000..7e8f13755d6a056d0796f856b56db40d4872462f --- /dev/null +++ b/static/docs/reference/torch_einsum.html @@ -0,0 +1,273 @@ + + + + + + + + +Einsum — torch_einsum • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Einsum

    +
    + +
    torch_einsum(equation, tensors)
    + +

    Arguments

    + + + + + + + + + + +
    equation

(string) The equation is given in terms of lower case letters (indices) to be associated with each dimension of the operands and result. The left hand side lists the operands' dimensions, separated by commas. There should be one index letter per tensor dimension. The right hand side follows after -> and gives the indices for the output. If the -> and right hand side are omitted, it is implicitly defined as the alphabetically sorted list of all indices appearing exactly once in the left hand side. The indices not appearing in the output are summed over after multiplying the operands' entries. If an index appears several times for the same operand, a diagonal is taken. Ellipses ... represent a fixed number of dimensions. If the right hand side is inferred, the ellipsis dimensions are at the beginning of the output.

    tensors

    (Tensor) The operands to compute the Einstein sum of.

    + +

    einsum(equation, *operands) -> Tensor

    + + + + +

    This function provides a way of computing multilinear expressions (i.e. sums of products) using the +Einstein summation convention.

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { + +x = torch_randn(c(5)) +y = torch_randn(c(4)) +torch_einsum('i,j->ij', list(x, y)) # outer product +A = torch_randn(c(3,5,4)) +l = torch_randn(c(2,5)) +r = torch_randn(c(2,4)) +torch_einsum('bn,anm,bm->ba', list(l, A, r)) # compare torch_nn$functional$bilinear +As = torch_randn(c(3,2,5)) +Bs = torch_randn(c(3,5,4)) +torch_einsum('bij,bjk->bik', list(As, Bs)) # batch matrix multiplication +A = torch_randn(c(3, 3)) +torch_einsum('ii->i', list(A)) # diagonal +A = torch_randn(c(4, 3, 3)) +torch_einsum('...ii->...i', list(A)) # batch diagonal +A = torch_randn(c(2, 3, 4, 5)) +torch_einsum('...ij->...ji', list(A))$shape # batch permute + +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_empty.html b/static/docs/reference/torch_empty.html new file mode 100644 index 0000000000000000000000000000000000000000..ae1f939a8aee8424a5353562d148681a02381555 --- /dev/null +++ b/static/docs/reference/torch_empty.html @@ -0,0 +1,280 @@ + + + + + + + + +Empty — torch_empty • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Empty

    +
    + +
    torch_empty(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

    a sequence of integers defining the shape of the output tensor.

    names

    optional character vector naming each dimension.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    empty(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

    + + + + +

    Returns a tensor filled with uninitialized data. The shape of the tensor is +defined by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_empty(c(2, 3)) +} +
    #> torch_tensor +#> 1.9205e+31 1.8891e+31 6.3375e-10 +#> 1.8169e+31 4.4726e+21 8.4843e+26 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_empty_like.html b/static/docs/reference/torch_empty_like.html new file mode 100644 index 0000000000000000000000000000000000000000..fbbf51f2c648608e6c17567be738f65f323d4713 --- /dev/null +++ b/static/docs/reference/torch_empty_like.html @@ -0,0 +1,281 @@ + + + + + + + + +Empty_like — torch_empty_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Empty_like

    +
    + +
    torch_empty_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    empty_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

    Returns an uninitialized tensor with the same size as input. +torch_empty_like(input) is equivalent to +torch_empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_empty(list(2,3), dtype = torch_int64()) +} +
    #> torch_tensor +#> 1.2885e+10 0.0000e+00 0.0000e+00 +#> 0.0000e+00 1.7180e+10 1.3700e+02 +#> [ CPULongType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_empty_strided.html b/static/docs/reference/torch_empty_strided.html new file mode 100644 index 0000000000000000000000000000000000000000..af47a07e0f65dfe5e5db598cc6eb3a570abf5488 --- /dev/null +++ b/static/docs/reference/torch_empty_strided.html @@ -0,0 +1,295 @@ + + + + + + + + +Empty_strided — torch_empty_strided • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Empty_strided

    +
    + +
    torch_empty_strided(
    +  size,
    +  stride,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  pin_memory = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    size

    (tuple of ints) the shape of the output tensor

    stride

    (tuple of ints) the strides of the output tensor

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    pin_memory

    (bool, optional) If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: FALSE.

    + +

    empty_strided(size, stride, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

    + + + + +

Returns a tensor filled with uninitialized data. The shape and strides of the tensor are +defined by the variable arguments size and stride respectively. +torch_empty_strided(size, stride) is equivalent to +torch_empty(size).as_strided(size, stride).

    +

    Warning

    + + + +

    More than one element of the created tensor may refer to a single memory +location. As a result, in-place operations (especially ones that are +vectorized) may result in incorrect behavior. If you need to write to +the tensors, please clone them first.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_empty_strided(list(2, 3), list(1, 2)) +a +a$stride(1) +a$size(1) +} +
    #> [1] 2
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_eq.html b/static/docs/reference/torch_eq.html new file mode 100644 index 0000000000000000000000000000000000000000..561c53e09f233594d33b8f19e3fc08b9cb87cde4 --- /dev/null +++ b/static/docs/reference/torch_eq.html @@ -0,0 +1,261 @@ + + + + + + + + +Eq — torch_eq • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Eq

    +
    + +
    torch_eq(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

(Tensor or float) the tensor or value to compare

    + +

    eq(input, other, out=NULL) -> Tensor

    + + + + +

    Computes element-wise equality

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_eq(torch_tensor(c(1,2,3,4)), torch_tensor(c(1, 3, 2, 4))) +} +
    #> torch_tensor +#> 1 +#> 0 +#> 0 +#> 1 +#> [ CPUBoolType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_equal.html b/static/docs/reference/torch_equal.html new file mode 100644 index 0000000000000000000000000000000000000000..f094458d9b263ba0d0022d3059a4b804c0149a0d --- /dev/null +++ b/static/docs/reference/torch_equal.html @@ -0,0 +1,253 @@ + + + + + + + + +Equal — torch_equal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Equal

    +
    + +
    torch_equal(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    the input tensor

    other

    the other input tensor

    + +

    equal(input, other) -> bool

    + + + + +

    TRUE if two tensors have the same size and elements, FALSE otherwise.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_equal(torch_tensor(c(1, 2)), torch_tensor(c(1, 2))) +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_erf.html b/static/docs/reference/torch_erf.html new file mode 100644 index 0000000000000000000000000000000000000000..9f4e29490784fb0530313a665b1c8fe11a1109df --- /dev/null +++ b/static/docs/reference/torch_erf.html @@ -0,0 +1,256 @@ + + + + + + + + +Erf — torch_erf • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Erf

    +
    + +
    torch_erf(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    erf(input, out=NULL) -> Tensor

    + + + + +

    Computes the error function of each element. The error function is defined as follows:

    +

    $$ + \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_erf(torch_tensor(c(0, -1., 10.))) +} +
    #> torch_tensor +#> 0.0000 +#> -0.8427 +#> 1.0000 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_erfc.html b/static/docs/reference/torch_erfc.html new file mode 100644 index 0000000000000000000000000000000000000000..8a4a96ccdd59f6ac64c572f786f0c07cf319bb4c --- /dev/null +++ b/static/docs/reference/torch_erfc.html @@ -0,0 +1,257 @@ + + + + + + + + +Erfc — torch_erfc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Erfc

    +
    + +
    torch_erfc(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    erfc(input, out=NULL) -> Tensor

    + + + + +

    Computes the complementary error function of each element of input. +The complementary error function is defined as follows:

    +

    $$ + \mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_erfc(torch_tensor(c(0, -1., 10.))) +} +
    #> torch_tensor +#> 1.0000e+00 +#> 1.8427e+00 +#> 1.4013e-45 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_erfinv.html b/static/docs/reference/torch_erfinv.html new file mode 100644 index 0000000000000000000000000000000000000000..03295dcb40caa4bc7783e616e9a675c035ec2a98 --- /dev/null +++ b/static/docs/reference/torch_erfinv.html @@ -0,0 +1,257 @@ + + + + + + + + +Erfinv — torch_erfinv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Erfinv

    +
    + +
    torch_erfinv(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    erfinv(input, out=NULL) -> Tensor

    + + + + +

    Computes the inverse error function of each element of input. +The inverse error function is defined in the range \((-1, 1)\) as:

    +

    $$ + \mathrm{erfinv}(\mathrm{erf}(x)) = x +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_erfinv(torch_tensor(c(0, 0.5, -1.))) +} +
    #> torch_tensor +#> 0.0000 +#> 0.4769 +#> -inf +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_exp.html b/static/docs/reference/torch_exp.html new file mode 100644 index 0000000000000000000000000000000000000000..e045862d18133ca66450a18572f8b0dae9b3365a --- /dev/null +++ b/static/docs/reference/torch_exp.html @@ -0,0 +1,256 @@ + + + + + + + + +Exp — torch_exp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Exp

    +
    + +
    torch_exp(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    exp(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the exponential of the elements +of the input tensor input.

    +

    $$ + y_{i} = e^{x_{i}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_exp(torch_tensor(c(0, log(2)))) +} +
    #> torch_tensor +#> 1 +#> 2 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_expm1.html b/static/docs/reference/torch_expm1.html new file mode 100644 index 0000000000000000000000000000000000000000..7c09819e360b131ece10103daa522cdac3d6997a --- /dev/null +++ b/static/docs/reference/torch_expm1.html @@ -0,0 +1,256 @@ + + + + + + + + +Expm1 — torch_expm1 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Expm1

    +
    + +
    torch_expm1(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    expm1(input, out=NULL) -> Tensor

    + + + + +

Returns a new tensor with the exponential of the elements +of input, minus 1.

    +

    $$ + y_{i} = e^{x_{i}} - 1 +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_expm1(torch_tensor(c(0, log(2)))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_eye.html b/static/docs/reference/torch_eye.html new file mode 100644 index 0000000000000000000000000000000000000000..9bb24caefa330575667237e1cb8e91ba7fb2f440 --- /dev/null +++ b/static/docs/reference/torch_eye.html @@ -0,0 +1,280 @@ + + + + + + + + +Eye — torch_eye • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Eye

    +
    + +
    torch_eye(
    +  n,
    +  m = n,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    n

    (int) the number of rows

    m

    (int, optional) the number of columns with default being n

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    eye(n, m=NULL, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_eye(3) +} +
    #> torch_tensor +#> 1 0 0 +#> 0 1 0 +#> 0 0 1 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_fft.html b/static/docs/reference/torch_fft.html new file mode 100644 index 0000000000000000000000000000000000000000..78ddfe283211e863a5cbb025ef6efcdbbdda08fc --- /dev/null +++ b/static/docs/reference/torch_fft.html @@ -0,0 +1,614 @@ + + + + + + + + +Fft — torch_fft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fft

    +
    + +
    torch_fft(self, signal_ndim, normalized = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim + 1 dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    + +

    Note

    + + +
    For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
    +repeatedly running FFT methods on tensors of same geometry with same
    +configuration. See cufft-plan-cache for more details on how to
    +monitor and control the cache.
    +
    + +

    fft(input, signal_ndim, normalized=False) -> Tensor

    + + + + +

    Complex-to-complex Discrete Fourier Transform

    +

    This method computes the complex-to-complex discrete Fourier transform. +Ignoring the batch dimensions, it computes the following expression:

    +

    $$ + X[\omega_1, \dots, \omega_d] = + \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] + e^{-j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}}, +$$ +where \(d\) = signal_ndim is number of dimensions for the +signal, and \(N_i\) is the size of signal dimension \(i\).

    +

This method supports 1D, 2D and 3D complex-to-complex transforms, indicated +by signal_ndim. input must be a tensor whose last dimension +is of size 2, representing the real and imaginary components of complex +numbers, and should have at least signal_ndim + 1 dimensions, optionally preceded by an +arbitrary number of leading batch dimensions. If normalized is set to +TRUE, the result is normalized by dividing it by +\(\sqrt{\prod_{i=1}^d N_i}\) so that the operator is unitary.

    +

    Returns the real and the imaginary parts together as one tensor of the same +shape of input.

    +

    The inverse of this function is torch_ifft.

    +

    Warning

    + + + +

For CPU tensors, this method is currently only available with MKL. Use +backends_mkl_is_available to check whether MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +# unbatched 2D FFT +x = torch_randn(c(4, 3, 2)) +torch_fft(x, 2) +# batched 1D FFT +torch_fft(x, 1) +# arbitrary number of batch dimensions, 2D FFT +x = torch_randn(c(3, 3, 5, 5, 2)) +torch_fft(x, 2) + +} +
    #> torch_tensor +#> (1,1,1,.,.) = +#> 3.8970 3.1487 +#> 3.5089 -8.0428 +#> -4.2355 4.7304 +#> 5.1422 8.4612 +#> -0.2829 6.5526 +#> +#> (2,1,1,.,.) = +#> -6.4914 0.7626 +#> -9.6798 -6.4868 +#> 4.7413 -6.3207 +#> 3.1433 6.0861 +#> 1.4183 8.1368 +#> +#> (3,1,1,.,.) = +#> 4.1796 -0.8876 +#> -2.7845 -0.2090 +#> 6.4877 -2.0553 +#> 1.6739 -1.4434 +#> 6.7934 3.7603 +#> +#> (1,2,1,.,.) = +#> -8.0135 -3.4585 +#> 3.2928 1.1881 +#> 1.0519 7.3132 +#> -2.0995 0.7831 +#> 11.4499 0.4932 +#> +#> (2,2,1,.,.) = +#> 2.4901 2.9753 +#> -8.2115 4.1514 +#> 6.0281 1.2984 +#> -7.5086 1.3346 +#> 3.6055 -0.8337 +#> +#> (3,2,1,.,.) = +#> 4.5424 -0.5391 +#> 10.2617 -3.2754 +#> -1.8136 5.1917 +#> -4.6990 -0.9120 +#> -0.9396 -2.4928 +#> +#> (1,3,1,.,.) = +#> -4.4424 -3.1498 +#> -0.4156 0.1798 +#> -13.2501 -4.7913 +#> -0.4209 2.1745 +#> -0.4077 -2.1715 +#> +#> (2,3,1,.,.) = +#> 1.8846 7.9342 +#> 3.5554 -4.6065 +#> 2.1314 -1.0630 +#> -1.8522 5.3271 +#> -4.2557 -6.4940 +#> +#> (3,3,1,.,.) = +#> -5.2358 -6.1865 +#> -13.5679 8.0294 +#> 5.8208 -5.6350 +#> 6.6610 -3.3221 +#> -2.0237 -1.4322 +#> +#> (1,1,2,.,.) = +#> 3.3358 4.7716 +#> 3.1996 3.2956 +#> 2.6306 7.2663 +#> -2.2276 -8.0122 +#> 2.0598 -2.9997 +#> +#> (2,1,2,.,.) = +#> 0.5116 0.5725 +#> 2.0175 -4.3590 +#> -1.4288 0.5337 +#> 2.3421 0.0760 +#> -7.5872 -0.3140 +#> +#> (3,1,2,.,.) = +#> 5.6348 1.3832 +#> 6.6151 0.4225 +#> 6.0133 -5.4770 +#> -7.6286 -6.4727 +#> 3.6738 4.4512 +#> +#> (1,2,2,.,.) = +#> 2.8345 -7.9496 +#> -1.2934 6.3027 +#> -2.5656 3.4925 +#> -4.2269 8.1202 +#> 7.0657 8.9951 +#> +#> (2,2,2,.,.) = +#> -2.5830 2.6838 +#> 1.3960 -2.1490 +#> 3.6903 -4.3413 +#> -4.8245 1.9661 +#> 2.4275 -0.8489 +#> +#> (3,2,2,.,.) = +#> -2.3136 -3.0029 +#> 6.4553 2.9853 +#> 1.0790 6.2816 +#> -2.3066 3.1048 +#> 2.4425 -2.4980 +#> +#> (1,3,2,.,.) = +#> 5.0888 -1.9831 +#> -3.3846 -3.1375 +#> 0.4764 0.9040 +#> -0.2155 -3.9354 +#> -0.2403 9.0833 +#> +#> (2,3,2,.,.) 
= +#> 3.2478 0.6904 +#> 4.8131 -4.7587 +#> 4.4327 -2.0460 +#> -0.1583 -1.1435 +#> -0.8185 -3.7916 +#> +#> (3,3,2,.,.) = +#> -4.8215 -5.7823 +#> -4.3197 2.5006 +#> -7.6764 10.5907 +#> 5.7097 9.1375 +#> -6.5421 5.3465 +#> +#> (1,1,3,.,.) = +#> 6.8857 1.6267 +#> 0.6538 -2.9130 +#> 4.2404 -5.3240 +#> 5.6476 5.1706 +#> 1.4154 2.3173 +#> +#> (2,1,3,.,.) = +#> -0.5342 -1.8588 +#> -6.3230 -2.3969 +#> -2.6561 -1.8841 +#> 2.2272 -2.2146 +#> -7.2355 -9.8852 +#> +#> (3,1,3,.,.) = +#> 6.9817 -1.9044 +#> -10.6065 -3.0032 +#> 6.2727 10.5124 +#> 2.0892 -0.2945 +#> 1.4965 3.6209 +#> +#> (1,2,3,.,.) = +#> -2.7138 0.5805 +#> 3.8351 -13.1060 +#> -2.0923 8.4118 +#> 6.7002 2.8014 +#> -2.0329 -5.4260 +#> +#> (2,2,3,.,.) = +#> -4.7038 4.4041 +#> 0.6079 6.9384 +#> -2.4581 -2.0337 +#> -2.4631 5.5419 +#> -2.4616 -2.3076 +#> +#> (3,2,3,.,.) = +#> 1.0971 1.5684 +#> 4.6985 3.6660 +#> 1.7358 2.5220 +#> -2.9059 -3.8360 +#> 8.5191 -0.6597 +#> +#> (1,3,3,.,.) = +#> 1.2912 -4.1150 +#> 5.9200 -9.2185 +#> 2.7708 -11.2848 +#> -5.2274 -9.0754 +#> -5.9556 -8.3723 +#> +#> (2,3,3,.,.) = +#> 3.6284 -9.7751 +#> -6.4936 0.4896 +#> 1.0783 3.2584 +#> -1.2375 4.0910 +#> 8.7281 11.1889 +#> +#> (3,3,3,.,.) = +#> -2.9392 -0.8457 +#> -5.1092 -3.4291 +#> -6.7226 -2.2762 +#> -6.7354 0.7375 +#> -3.6834 -10.3393 +#> +#> (1,1,4,.,.) = +#> -0.4641 5.1642 +#> -4.9807 -2.8183 +#> -0.9600 1.1617 +#> 13.1615 -5.7607 +#> -5.8192 6.1872 +#> +#> (2,1,4,.,.) = +#> -0.6880 12.9307 +#> -8.0335 -1.1752 +#> 6.2135 2.0547 +#> 8.6658 -5.6305 +#> -7.1765 -4.2632 +#> +#> (3,1,4,.,.) = +#> -3.0057 -3.1364 +#> 4.9857 4.4488 +#> 1.3519 9.6340 +#> -9.3729 4.8508 +#> 0.7863 3.1354 +#> +#> (1,2,4,.,.) = +#> -5.5472 0.7396 +#> 5.5568 -2.7190 +#> 2.1487 3.5182 +#> 2.2908 5.8565 +#> -2.7511 0.1199 +#> +#> (2,2,4,.,.) = +#> 0.1053 -9.7559 +#> 2.6366 2.9785 +#> 5.8546 -8.3557 +#> 2.3308 -0.3229 +#> -0.1616 -2.8580 +#> +#> (3,2,4,.,.) 
= +#> -8.8217 -4.0197 +#> -1.3547 2.0932 +#> 5.0738 -2.1672 +#> -1.6119 7.2245 +#> -1.4150 -0.6509 +#> +#> (1,3,4,.,.) = +#> -1.2559 2.1416 +#> -0.0231 4.1475 +#> -3.8181 2.3299 +#> 3.7565 0.7653 +#> 1.8683 -3.8644 +#> +#> (2,3,4,.,.) = +#> 0.4839 6.4445 +#> 3.1534 -8.8380 +#> 3.7889 6.3242 +#> 9.0306 11.1551 +#> 5.7409 -1.1564 +#> +#> (3,3,4,.,.) = +#> -2.2988 8.8577 +#> 4.7663 -5.2979 +#> -4.7160 9.3649 +#> -4.3578 -7.4509 +#> -4.3874 2.7093 +#> +#> (1,1,5,.,.) = +#> -0.4679 1.3350 +#> -1.4923 -5.4231 +#> -2.3291 0.3742 +#> 2.6760 -0.3634 +#> -3.0602 8.7696 +#> +#> (2,1,5,.,.) = +#> -2.1765 4.7492 +#> 7.2170 3.6409 +#> -6.5629 -2.2038 +#> -5.8527 -0.0878 +#> -4.5430 0.7514 +#> +#> (3,1,5,.,.) = +#> 8.4563 6.2536 +#> -7.6490 -0.3940 +#> -2.2804 0.5806 +#> -9.5464 4.4593 +#> -5.9173 -3.0061 +#> +#> (1,2,5,.,.) = +#> 5.1614 -5.6453 +#> 4.8296 -15.8688 +#> 1.8169 -0.8523 +#> 10.0722 -2.0120 +#> 0.8588 2.5174 +#> +#> (2,2,5,.,.) = +#> -6.1863 -1.2811 +#> -5.0314 3.6125 +#> -6.2021 -2.8083 +#> -0.3612 -3.9482 +#> -3.0259 0.6619 +#> +#> (3,2,5,.,.) = +#> 5.1903 0.0372 +#> -1.9769 1.8278 +#> 4.1834 -11.1886 +#> -3.8679 -4.4589 +#> 2.2054 -3.4290 +#> +#> (1,3,5,.,.) = +#> -9.9394 5.2093 +#> -6.9233 -3.8417 +#> 7.3800 -2.2016 +#> 1.4532 4.9600 +#> -2.6033 -3.4185 +#> +#> (2,3,5,.,.) = +#> -6.9207 1.5530 +#> -3.9120 0.2946 +#> -0.9112 5.2904 +#> 3.2102 -2.0272 +#> 7.6431 -0.3506 +#> +#> (3,3,5,.,.) = +#> 3.1340 6.7375 +#> 1.4610 3.8716 +#> 2.8714 3.5657 +#> -4.2452 -3.6816 +#> 5.5423 2.7491 +#> [ CPUFloatType{3,3,5,5,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_finfo.html b/static/docs/reference/torch_finfo.html new file mode 100644 index 0000000000000000000000000000000000000000..cb9ed67f88e4cd6aba64c442435df6596a51d830 --- /dev/null +++ b/static/docs/reference/torch_finfo.html @@ -0,0 +1,239 @@ + + + + + + + + +Floating point type info — torch_finfo • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

A list that represents the numerical properties of a +floating point torch dtype.

    +
    + +
    torch_finfo(dtype)
    + +

    Arguments

    + + + + + + +
    dtype

    dtype to check information
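This page ships without an example block. The sketch below (an assumption, not part of the original page: it presumes torch is installed and that the returned list exposes `eps`, `max` and `min` fields following the PyTorch `finfo` convention) illustrates typical usage:

```r
if (torch_is_installed()) {

# Hypothetical illustration: inspect the numerical limits of float32.
# The field names (eps, max, min) are assumed to mirror PyTorch's finfo.
info <- torch_finfo(torch_float())
info$eps   # machine epsilon: smallest x such that 1 + x != 1
info$max   # largest representable finite value
info$min   # most negative representable finite value
}
```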

    + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_flatten.html b/static/docs/reference/torch_flatten.html new file mode 100644 index 0000000000000000000000000000000000000000..daccdc19295d52e79ec800ed4f7b4b163dfc0985 --- /dev/null +++ b/static/docs/reference/torch_flatten.html @@ -0,0 +1,270 @@ + + + + + + + + +Flatten — torch_flatten • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Flatten

    +
    + +
    torch_flatten(self, dims, start_dim = 1L, end_dim = -1L, out_dim)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dims

if the tensor is named, you can pass the names of the dimensions to +flatten

    start_dim

    (int) the first dim to flatten

    end_dim

    (int) the last dim to flatten

    out_dim

    the name of the resulting dimension if a named tensor.

    + +

    flatten(input, start_dim=0, end_dim=-1) -> Tensor

    + + + + +

    Flattens a contiguous range of dims in a tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_tensor(matrix(c(1, 2), ncol = 2)) +torch_flatten(t) +torch_flatten(t, start_dim=2) +} +
    #> torch_tensor +#> 1 2 +#> [ CPUFloatType{1,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_flip.html b/static/docs/reference/torch_flip.html new file mode 100644 index 0000000000000000000000000000000000000000..e1146db25869b678c38e6530b383e6c389a17ec6 --- /dev/null +++ b/static/docs/reference/torch_flip.html @@ -0,0 +1,263 @@ + + + + + + + + +Flip — torch_flip • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Flip

    +
    + +
    torch_flip(self, dims)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dims

(a list or tuple) the axes to flip on

    + +

    flip(input, dims) -> Tensor

    + + + + +

Reverses the order of an n-D tensor along the given axes in dims.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(0, 8)$view(c(2, 2, 2)) +x +torch_flip(x, c(1, 2)) +} +
    #> torch_tensor +#> (1,.,.) = +#> 6 7 +#> 4 5 +#> +#> (2,.,.) = +#> 2 3 +#> 0 1 +#> [ CPUFloatType{2,2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_floor.html b/static/docs/reference/torch_floor.html new file mode 100644 index 0000000000000000000000000000000000000000..ee0a1dac547b05da896b46aeca8ac70c8d0e396d --- /dev/null +++ b/static/docs/reference/torch_floor.html @@ -0,0 +1,260 @@ + + + + + + + + +Floor — torch_floor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Floor

    +
    + +
    torch_floor(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    floor(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the floor of the elements of input, +the largest integer less than or equal to each element.

    +

    $$ + \mbox{out}_{i} = \left\lfloor \mbox{input}_{i} \right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_floor(a) +} +
    #> torch_tensor +#> 0 +#> -1 +#> -1 +#> -1 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_floor_divide.html b/static/docs/reference/torch_floor_divide.html new file mode 100644 index 0000000000000000000000000000000000000000..10c57ef95b84fd6d306b1c8d44b27defe62c0fd5 --- /dev/null +++ b/static/docs/reference/torch_floor_divide.html @@ -0,0 +1,263 @@ + + + + + + + + +Floor_divide — torch_floor_divide • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Floor_divide

    +
    + +
    torch_floor_divide(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the numerator tensor

    other

    (Tensor or Scalar) the denominator

    + +

    floor_divide(input, other, out=NULL) -> Tensor

    + + + + +

Returns the division of the inputs rounded down to the nearest integer. See torch_div +for type promotion and broadcasting rules.

    +

    $$ + \mbox{{out}}_i = \left\lfloor \frac{{\mbox{{input}}_i}}{{\mbox{{other}}_i}} \right\rfloor +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(c(4.0, 3.0)) +b = torch_tensor(c(2.0, 2.0)) +torch_floor_divide(a, b) +torch_floor_divide(a, 1.4) +} +
    #> torch_tensor +#> 2 +#> 2 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_fmod.html b/static/docs/reference/torch_fmod.html new file mode 100644 index 0000000000000000000000000000000000000000..9bd9cde04b17c01b4f0a4365310fbe83dfef6a3d --- /dev/null +++ b/static/docs/reference/torch_fmod.html @@ -0,0 +1,264 @@ + + + + + + + + +Fmod — torch_fmod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Fmod

    +
    + +
    torch_fmod(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the dividend

    other

    (Tensor or float) the divisor, which may be either a number or a tensor of the same shape as the dividend

    + +

    fmod(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise remainder of division.

    +

The dividend and divisor may contain both integer and floating point +numbers. The remainder has the same sign as the dividend input.

    +

When other is a tensor, the shapes of input and +other must be broadcastable.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_fmod(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2) +torch_fmod(torch_tensor(c(1., 2, 3, 4, 5)), 1.5) +} +
    #> torch_tensor +#> 1.0000 +#> 0.5000 +#> 0.0000 +#> 1.0000 +#> 0.5000 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_frac.html b/static/docs/reference/torch_frac.html new file mode 100644 index 0000000000000000000000000000000000000000..eafa181a010efbc0bb4a6d8d9ad397c45ba2fbe4 --- /dev/null +++ b/static/docs/reference/torch_frac.html @@ -0,0 +1,256 @@ + + + + + + + + +Frac — torch_frac • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Frac

    +
    + +
    torch_frac(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor.

    + +

    frac(input, out=NULL) -> Tensor

    + + + + +

    Computes the fractional portion of each element in input.

    +

    $$ + \mbox{out}_{i} = \mbox{input}_{i} - \left\lfloor |\mbox{input}_{i}| \right\rfloor * \mbox{sgn}(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_frac(torch_tensor(c(1, 2.5, -3.2))) +} +
    #> torch_tensor +#> 0.0000 +#> 0.5000 +#> -0.2000 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_full.html b/static/docs/reference/torch_full.html new file mode 100644 index 0000000000000000000000000000000000000000..6a26b0f00041a15d674d64106b207795ded8c9ca --- /dev/null +++ b/static/docs/reference/torch_full.html @@ -0,0 +1,293 @@ + + + + + + + + +Full — torch_full • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Full

    +
    + +
    torch_full(
    +  size,
    +  fill_value,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    size

    (int...) a list, tuple, or torch_Size of integers defining the shape of the output tensor.

    fill_value

(Scalar) the number to fill the output tensor with.

    names

    optional names of the dimensions

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    full(size, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a tensor of size size filled with fill_value.

    +

    Warning

    + + + +

    In PyTorch 1.5 a bool or integral fill_value will produce a warning if +dtype or out are not set. +In a future PyTorch release, when dtype and out are not set +a bool fill_value will return a tensor of torch.bool dtype, +and an integral fill_value will return a tensor of torch.long dtype.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_full(list(2, 3), 3.141592) +} +
    #> torch_tensor +#> 3.1416 3.1416 3.1416 +#> 3.1416 3.1416 3.1416 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_full_like.html b/static/docs/reference/torch_full_like.html new file mode 100644 index 0000000000000000000000000000000000000000..9113e685c795a3ca7841a5a5b07ac4a80ccfc57c --- /dev/null +++ b/static/docs/reference/torch_full_like.html @@ -0,0 +1,278 @@ + + + + + + + + +Full_like — torch_full_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Full_like

    +
    + +
    torch_full_like(
    +  input,
    +  fill_value,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    fill_value

    the number to fill the output tensor with.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    full_like(input, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

    + + + + +

    memory_format=torch.preserve_format) -> Tensor

    +

Returns a tensor with the same size as input filled with fill_value. +torch_full_like(input, fill_value) is equivalent to +torch_full(input$size(), fill_value, dtype = input$dtype, layout = input$layout, device = input$device).
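This page has no example block; a minimal sketch, assuming torch is installed:

```r
if (torch_is_installed()) {

# full_like copies the shape (and, by default, the dtype and device) of
# its input tensor and fills the new tensor with the given value.
x <- torch_randn(c(2, 3))
torch_full_like(x, 7)   # a 2x3 tensor, every element equal to 7
}
```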

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_gather.html b/static/docs/reference/torch_gather.html new file mode 100644 index 0000000000000000000000000000000000000000..153778c323cef6be6d0a9f1206a9982de16060eb --- /dev/null +++ b/static/docs/reference/torch_gather.html @@ -0,0 +1,275 @@ + + + + + + + + +Gather — torch_gather • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gather

    +
    + +
    torch_gather(self, dim, index, sparse_grad = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the source tensor

    dim

    (int) the axis along which to index

    index

    (LongTensor) the indices of elements to gather

    sparse_grad

    (bool,optional) If TRUE, gradient w.r.t. input will be a sparse tensor.

    + +

    gather(input, dim, index, sparse_grad=FALSE) -> Tensor

    + + + + +

    Gathers values along an axis specified by dim.

    +

For a 3-D tensor the output is specified by:

    out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
    +out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
    +out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2
    +
    + +

    If input is an n-dimensional tensor with size +\((x_0, x_1..., x_{i-1}, x_i, x_{i+1}, ..., x_{n-1})\) +and dim = i, then index must be an \(n\)-dimensional tensor with +size \((x_0, x_1, ..., x_{i-1}, y, x_{i+1}, ..., x_{n-1})\) where \(y \geq 1\) +and out will have the same size as index.

    + +

    Examples

    +
    if (torch_is_installed()) { + +t = torch_tensor(matrix(c(1,2,3,4), ncol = 2, byrow = TRUE)) +torch_gather(t, 2, torch_tensor(matrix(c(1,1,2,1), ncol = 2, byrow=TRUE), dtype = torch_int64())) +} +
    #> torch_tensor +#> 1 1 +#> 4 3 +#> [ CPUFloatType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ge.html b/static/docs/reference/torch_ge.html new file mode 100644 index 0000000000000000000000000000000000000000..5add98bfb35e3bfdd14cbf3119fff8f462e3d2f2 --- /dev/null +++ b/static/docs/reference/torch_ge.html @@ -0,0 +1,259 @@ + + + + + + + + +Ge — torch_ge • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ge

    +
    + +
    torch_ge(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    ge(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} \geq \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_ge(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 1 1 +#> 0 1 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_generator.html b/static/docs/reference/torch_generator.html new file mode 100644 index 0000000000000000000000000000000000000000..e205f8f225e6bdc2dac471ac70d08f6fbbb83c1e --- /dev/null +++ b/static/docs/reference/torch_generator.html @@ -0,0 +1,246 @@ + + + + + + + + +Create a Generator object — torch_generator • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

A torch_generator is an object which manages the state of the algorithm +that produces pseudo-random numbers. It is used as a keyword argument in many +in-place random sampling functions.

    +
    + +
    torch_generator()
    + + + +

    Examples

    +
if (torch_is_installed()) { + +# create a generator, then query and set its seed +generator <- torch_generator() +generator$current_seed() +generator$set_current_seed(1234567L) +generator$current_seed() + + +} +
    #> integer64 +#> [1] 1234567
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_geqrf.html b/static/docs/reference/torch_geqrf.html new file mode 100644 index 0000000000000000000000000000000000000000..506785d1ab30df48d8ab97fbd8701c94295f5495 --- /dev/null +++ b/static/docs/reference/torch_geqrf.html @@ -0,0 +1,250 @@ + + + + + + + + +Geqrf — torch_geqrf • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Geqrf

    +
    + +
    torch_geqrf(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input matrix

    + +

    geqrf(input, out=NULL) -> (Tensor, Tensor)

    + + + + +

This is a low-level function for calling LAPACK directly. This function +returns a pair of tensors (a, tau) as defined in the LAPACK documentation for geqrf.

    +

    You'll generally want to use torch_qr instead.

    +

    Computes a QR decomposition of input, but without constructing +\(Q\) and \(R\) as explicit separate matrices.

    +

    Rather, this directly calls the underlying LAPACK function ?geqrf +which produces a sequence of 'elementary reflectors'.

    +

See the LAPACK documentation for geqrf for further details.
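As a rough sketch (assuming torch is installed, and that the result is the two-element list documented above), the packed reflector representation can be inspected and compared against the explicit factorization from torch_qr:

```r
if (torch_is_installed()) {
  a <- torch_geqrf(torch_randn(c(3, 3)))
  # a[[1]]: R in the upper triangle, elementary reflectors packed below it
  a[[1]]
  # a[[2]]: tau, the scalar factors of the elementary reflectors
  a[[2]]
  # torch_qr() materializes the same factorization as explicit Q and R matrices
}
```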

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ger.html b/static/docs/reference/torch_ger.html new file mode 100644 index 0000000000000000000000000000000000000000..2a20545da45c806f134f7da7d76b3eb40157c9ed --- /dev/null +++ b/static/docs/reference/torch_ger.html @@ -0,0 +1,265 @@ + + + + + + + + +Ger — torch_ger • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ger

    +
    + +
    torch_ger(self, vec2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) 1-D input vector

    vec2

    (Tensor) 1-D input vector

    + +

    Note

    + +

This function does not broadcast.

    +

    ger(input, vec2, out=NULL) -> Tensor

    + + + + +

    Outer product of input and vec2. +If input is a vector of size \(n\) and vec2 is a vector of +size \(m\), then out must be a matrix of size \((n \times m)\).

    + +

    Examples

    +
    if (torch_is_installed()) { + +v1 = torch_arange(1., 5.) +v2 = torch_arange(1., 4.) +torch_ger(v1, v2) +} +
    #> torch_tensor +#> 1 2 3 +#> 2 4 6 +#> 3 6 9 +#> 4 8 12 +#> [ CPUFloatType{4,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_gt.html b/static/docs/reference/torch_gt.html new file mode 100644 index 0000000000000000000000000000000000000000..98edc7761725fa78887f50fb12d90f5b52a38c3e --- /dev/null +++ b/static/docs/reference/torch_gt.html @@ -0,0 +1,259 @@ + + + + + + + + +Gt — torch_gt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Gt

    +
    + +
    torch_gt(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    gt(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} > \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_gt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 0 1 +#> 0 0 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_hamming_window.html b/static/docs/reference/torch_hamming_window.html new file mode 100644 index 0000000000000000000000000000000000000000..bceab5f03b34cfad4f6ef5fa62fda9c542462cc1 --- /dev/null +++ b/static/docs/reference/torch_hamming_window.html @@ -0,0 +1,301 @@ + + + + + + + + +Hamming_window — torch_hamming_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Hamming_window

    +
    + +
    torch_hamming_window(
    +  window_length,
    +  periodic = TRUE,
    +  alpha = 0.54,
    +  beta = 0.46,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    alpha

    (float, optional) The coefficient \(\alpha\) in the equation above

    beta

    (float, optional) The coefficient \(\beta\) in the equation above

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
    If `window_length` \eqn{=1}, the returned window contains a single value 1.
    +
    + +
    This is a generalized version of `torch_hann_window`.
    +
    + +

    hamming_window(window_length, periodic=TRUE, alpha=0.54, beta=0.46, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Hamming window function.

    +

    $$ + w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right), +$$ +where \(N\) is the full window size.

    +

The input window_length is a positive integer controlling the +returned window size. The periodic flag determines whether the returned +window trims off the last duplicate value from the symmetric window and is +ready to be used as a periodic window with functions like +torch_stft. Therefore, if periodic is TRUE, the \(N\) in the +above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have +torch_hamming_window(L, periodic=TRUE) equal to +torch_hamming_window(L + 1, periodic=FALSE)[:-1].
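A minimal sketch (assuming torch is installed); the returned values follow the \(\alpha - \beta \cos(2\pi n / (N-1))\) formula above, with the default \(\alpha = 0.54\), \(\beta = 0.46\):

```r
if (torch_is_installed()) {
  # periodic window (the default), suitable for torch_stft
  torch_hamming_window(5)
  # symmetric variant; the endpoints equal alpha - beta = 0.08
  torch_hamming_window(5, periodic = FALSE)
}
```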

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_hann_window.html b/static/docs/reference/torch_hann_window.html new file mode 100644 index 0000000000000000000000000000000000000000..2122ae2b50ce316ba3e90d367b0bee58e194c4e4 --- /dev/null +++ b/static/docs/reference/torch_hann_window.html @@ -0,0 +1,289 @@ + + + + + + + + +Hann_window — torch_hann_window • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Hann_window

    +
    + +
    torch_hann_window(
    +  window_length,
    +  periodic = TRUE,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    window_length

    (int) the size of returned window

    periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

    layout

    (torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    Note

    + + +
    If `window_length` \eqn{=1}, the returned window contains a single value 1.
    +
    + +

    hann_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Hann window function.

    +

    $$ + w[n] = \frac{1}{2}\ \left[1 - \cos \left( \frac{2 \pi n}{N - 1} \right)\right] = + \sin^2 \left( \frac{\pi n}{N - 1} \right), +$$ +where \(N\) is the full window size.

    +

The input window_length is a positive integer controlling the +returned window size. The periodic flag determines whether the returned +window trims off the last duplicate value from the symmetric window and is +ready to be used as a periodic window with functions like +torch_stft. Therefore, if periodic is TRUE, the \(N\) in the +above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have +torch_hann_window(L, periodic=TRUE) equal to +torch_hann_window(L + 1, periodic=FALSE)[:-1].
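A minimal sketch (assuming torch is installed); the values follow the \(\sin^2(\pi n / (N-1))\) formula above:

```r
if (torch_is_installed()) {
  # periodic Hann window (the default), as typically passed to torch_stft
  torch_hann_window(5)
  # symmetric variant; the endpoints are exactly 0
  torch_hann_window(5, periodic = FALSE)
}
```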

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_histc.html b/static/docs/reference/torch_histc.html new file mode 100644 index 0000000000000000000000000000000000000000..3fb1d585f79b2f01a61b58ec4e38af2eb26e2d2d --- /dev/null +++ b/static/docs/reference/torch_histc.html @@ -0,0 +1,269 @@ + + + + + + + + +Histc — torch_histc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Histc

    +
    + +
    torch_histc(self, bins = 100L, min = 0L, max = 0L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    bins

    (int) number of histogram bins

    min

    (int) lower end of the range (inclusive)

    max

    (int) upper end of the range (inclusive)

    + +

    histc(input, bins=100, min=0, max=0, out=NULL) -> Tensor

    + + + + +

    Computes the histogram of a tensor.

    +

The elements are sorted into equal-width bins between min and +max. If min and max are both zero, the minimum and +maximum values of the data are used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_histc(torch_tensor(c(1., 2, 1)), bins=4, min=0, max=3) +} +
    #> torch_tensor +#> 0 +#> 2 +#> 1 +#> 0 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ifft.html b/static/docs/reference/torch_ifft.html new file mode 100644 index 0000000000000000000000000000000000000000..833691b3aeeec3d90cd2904d84c2ef7f307cbffc --- /dev/null +++ b/static/docs/reference/torch_ifft.html @@ -0,0 +1,308 @@ + + + + + + + + +Ifft — torch_ifft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ifft

    +
    + +
    torch_ifft(self, signal_ndim, normalized = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim + 1 dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    + +

    Note

    + + +
    For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
    +repeatedly running FFT methods on tensors of same geometry with same
    +configuration. See cufft-plan-cache for more details on how to
    +monitor and control the cache.
    +
    + +

    ifft(input, signal_ndim, normalized=False) -> Tensor

    + + + + +

    Complex-to-complex Inverse Discrete Fourier Transform

    +

    This method computes the complex-to-complex inverse discrete Fourier +transform. Ignoring the batch dimensions, it computes the following +expression:

    +

    $$ + X[\omega_1, \dots, \omega_d] = + \frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] + e^{\ j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}}, +$$ +where \(d\) = signal_ndim is number of dimensions for the +signal, and \(N_i\) is the size of signal dimension \(i\).

    +

The argument specifications are almost identical to those of torch_fft. +However, if normalized is set to TRUE, this instead returns the +results multiplied by \(\sqrt{\prod_{i=1}^d N_i}\), to become a unitary +operator. Therefore, to invert a torch_fft, the normalized +argument should be set identically for torch_fft.

    +

    Returns the real and the imaginary parts together as one tensor of the same +shape of input.

    +

    The inverse of this function is torch_fft.

    +

    Warning

    + + + +

    For CPU tensors, this method is currently only available with MKL. Use +torch_backends.mkl.is_available to check if MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 3, 2)) +x +y = torch_fft(x, 2) +torch_ifft(y, 2) # recover x +} +
    #> torch_tensor +#> (1,.,.) = +#> -1.1864 0.4079 +#> -0.7702 0.2462 +#> 0.1156 1.5071 +#> +#> (2,.,.) = +#> -1.1418 -0.3572 +#> -0.6047 0.3412 +#> -0.5722 -1.1495 +#> +#> (3,.,.) = +#> 0.3688 0.8228 +#> -0.6308 0.5165 +#> 1.4051 0.6681 +#> [ CPUFloatType{3,3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_iinfo.html b/static/docs/reference/torch_iinfo.html new file mode 100644 index 0000000000000000000000000000000000000000..4df9fdd783e317d5746a02f26d1b6c14132a09e8 --- /dev/null +++ b/static/docs/reference/torch_iinfo.html @@ -0,0 +1,239 @@ + + + + + + + + +Integer type info — torch_iinfo • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

A list that represents the numerical properties of an integer +type.

    +
    + +
    torch_iinfo(dtype)
    + +

    Arguments

    + + + + + + +
    dtype

    dtype to get information from.

    + + +
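A brief sketch (assuming torch is installed; the field names shown are assumptions about the returned list):

```r
if (torch_is_installed()) {
  info <- torch_iinfo(torch_int32())
  info$max  # largest representable 32-bit integer, 2^31 - 1
  info$min  # smallest representable 32-bit integer, -2^31
}
```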
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_imag.html b/static/docs/reference/torch_imag.html new file mode 100644 index 0000000000000000000000000000000000000000..8f35c0bc12897b58c7d1b411006d982d46a4dc1c --- /dev/null +++ b/static/docs/reference/torch_imag.html @@ -0,0 +1,258 @@ + + + + + + + + +Imag — torch_imag • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Imag

    +
    + +
    torch_imag(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    imag(input) -> Tensor

    + + + + +

    Returns the imaginary part of the input tensor.

    +

    Warning

    + + + +

    Not yet implemented.

    +

    $$ + \mbox{out}_{i} = imag(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_imag(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i))) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_index_select.html b/static/docs/reference/torch_index_select.html new file mode 100644 index 0000000000000000000000000000000000000000..7ee713515943fc8c80ee012205f3a69044ff11f8 --- /dev/null +++ b/static/docs/reference/torch_index_select.html @@ -0,0 +1,275 @@ + + + + + + + + +Index_select — torch_index_select • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Index_select

    +
    + +
    torch_index_select(self, dim, index)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension in which we index

    index

    (LongTensor) the 1-D tensor containing the indices to index

    + +

    Note

    + +

    The returned tensor does not use the same storage as the original +tensor. If out has a different shape than expected, we +silently change it to the correct shape, reallocating the underlying +storage if necessary.

    +

    index_select(input, dim, index, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor which indexes the input tensor along dimension +dim using the entries in index which is a LongTensor.

    +

The returned tensor has the same number of dimensions as the original tensor +(input). The dim-th dimension has the same size as the length +of index; other dimensions have the same size as in the original tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 4)) +x +indices = torch_tensor(c(1, 3), dtype = torch_int64()) +torch_index_select(x, 1, indices) +torch_index_select(x, 2, indices) +} +
    #> torch_tensor +#> 0.1471 0.1196 +#> 1.7271 0.1422 +#> -0.8387 0.1700 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_inverse.html b/static/docs/reference/torch_inverse.html new file mode 100644 index 0000000000000000000000000000000000000000..21df2b28a13b8eacfad73963f55a15e28e936242 --- /dev/null +++ b/static/docs/reference/torch_inverse.html @@ -0,0 +1,268 @@ + + + + + + + + +Inverse — torch_inverse • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Inverse

    +
    + +
    torch_inverse(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions

    + +

    Note

    + + +
    Irrespective of the original strides, the returned tensors will be
    +transposed, i.e. with strides like `input.contiguous().transpose(-2, -1).stride()`
    +
    + +

    inverse(input, out=NULL) -> Tensor

    + + + + +

    Takes the inverse of the square matrix input. input can be batches +of 2D square tensors, in which case this function would return a tensor composed of +individual inverses.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +x = torch_rand(c(4, 4)) +y = torch_inverse(x) +z = torch_mm(x, y) +z +torch_max(torch_abs(z - torch_eye(4))) # Max non-zero +# Batched inverse example +x = torch_randn(c(2, 3, 4, 4)) +y = torch_inverse(x) +z = torch_matmul(x, y) +torch_max(torch_abs(z - torch_eye(4)$expand_as(x))) # Max non-zero +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_irfft.html b/static/docs/reference/torch_irfft.html new file mode 100644 index 0000000000000000000000000000000000000000..1ce3839faf30cc0ede0450bb0cd33c72f8a39043 --- /dev/null +++ b/static/docs/reference/torch_irfft.html @@ -0,0 +1,330 @@ + + + + + + + + +Irfft — torch_irfft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Irfft

    +
    + +
    torch_irfft(
    +  self,
    +  signal_ndim,
    +  normalized = FALSE,
    +  onesided = TRUE,
    +  signal_sizes = list()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim + 1 dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    onesided

(bool, optional) controls whether input was halved to avoid redundancy, e.g., by torch_rfft(). Default: TRUE

    signal_sizes

    (list or torch.Size, optional) the size of the original signal (without batch dimension). Default: NULL

    + +

    Note

    + + +
Due to the conjugate symmetry, `input` does not need to contain the full
    +complex frequency values. Roughly half of the values will be sufficient, as
+is the case when `input` is given by [torch_rfft()] with
+`rfft(signal, onesided=TRUE)`. In such a case, set the `onesided`
    +argument of this method to `TRUE`. Moreover, the original signal shape
+information can sometimes be lost; optionally set `signal_sizes` to be
    +the size of the original signal (without the batch dimensions if in batched
    +mode) to recover it with correct shape.
    +
+Therefore, to invert a [torch_rfft()], the `normalized` and
    +`onesided` arguments should be set identically for [torch_irfft()],
+and preferably `signal_sizes` is given to avoid size mismatch. See the
    +example below for a case of size mismatch.
    +
    +See [torch_rfft()] for details on conjugate symmetry.
    +
    + +

    The inverse of this function is torch_rfft().

    +
    For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
    +repeatedly running FFT methods on tensors of same geometry with same
    +configuration. See cufft-plan-cache for more details on how to
    +monitor and control the cache.
    +
    + +

    irfft(input, signal_ndim, normalized=False, onesided=TRUE, signal_sizes=NULL) -> Tensor

    + + + + +

    Complex-to-real Inverse Discrete Fourier Transform

    +

    This method computes the complex-to-real inverse discrete Fourier transform. +It is mathematically equivalent with torch_ifft with differences only in +formats of the input and output.

    +

The argument specifications are almost identical to those of torch_ifft. +Similar to torch_ifft, if normalized is set to TRUE, +this normalizes the result by multiplying it with +\(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is unitary, where +\(N_i\) is the size of signal dimension \(i\).

    +

    Warning

    + + + +

    Generally speaking, input to this function should contain values +following conjugate symmetry. Note that even if onesided is +TRUE, often symmetry on some part is still needed. When this +requirement is not satisfied, the behavior of torch_irfft is +undefined. Since torch_autograd.gradcheck estimates numerical +Jacobian with point perturbations, torch_irfft will almost +certainly fail the check.

    + +

    For CPU tensors, this method is currently only available with MKL. Use +torch_backends.mkl.is_available to check if MKL is installed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(4, 4)) +torch_rfft(x, 2, onesided=TRUE) +x = torch_randn(c(4, 5)) +torch_rfft(x, 2, onesided=TRUE) +y = torch_rfft(x, 2, onesided=TRUE) +torch_irfft(y, 2, onesided=TRUE, signal_sizes=c(4,5)) # recover x +} +
    #> torch_tensor +#> -1.2099 1.7469 -1.3247 0.0171 0.0024 +#> 0.4107 0.2902 -2.0005 -0.8215 -1.4046 +#> -1.2626 0.0016 2.6169 -1.2084 -0.1029 +#> 0.0418 1.5131 0.9458 -0.2751 -0.5556 +#> [ CPUFloatType{4,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_is_complex.html b/static/docs/reference/torch_is_complex.html new file mode 100644 index 0000000000000000000000000000000000000000..ef61d88fc0d6dd314c6f70f1701772a075246839 --- /dev/null +++ b/static/docs/reference/torch_is_complex.html @@ -0,0 +1,244 @@ + + + + + + + + +Is_complex — torch_is_complex • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Is_complex

    +
    + +
    torch_is_complex(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the PyTorch tensor to test

    + +

    is_complex(input) -> (bool)

    + + + + +

Returns TRUE if the data type of input is a complex data type, i.e., +one of torch_complex64 and torch_complex128.
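A one-line sketch (assuming torch is installed):

```r
if (torch_is_installed()) {
  # tensors created from R doubles default to a float dtype, not complex
  torch_is_complex(torch_tensor(c(1, 2)))  # FALSE
}
```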

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_is_floating_point.html b/static/docs/reference/torch_is_floating_point.html new file mode 100644 index 0000000000000000000000000000000000000000..30c95bb71f37a1d1fb85a5a3a299ee36c4478f0c --- /dev/null +++ b/static/docs/reference/torch_is_floating_point.html @@ -0,0 +1,244 @@ + + + + + + + + +Is_floating_point — torch_is_floating_point • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Is_floating_point

    +
    + +
    torch_is_floating_point(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the PyTorch tensor to test

    + +

    is_floating_point(input) -> (bool)

    + + + + +

Returns TRUE if the data type of input is a floating point data type, i.e., +one of torch_float64, torch_float32 and torch_float16.
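A minimal sketch (assuming torch is installed):

```r
if (torch_is_installed()) {
  # R doubles become float tensors by default
  torch_is_floating_point(torch_tensor(c(1.5, 2.5)))                 # TRUE
  # an explicitly integer-typed tensor is not floating point
  torch_is_floating_point(torch_tensor(1L, dtype = torch_int32()))   # FALSE
}
```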

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_is_installed.html b/static/docs/reference/torch_is_installed.html new file mode 100644 index 0000000000000000000000000000000000000000..2cf006e6716a5199baf7895b3222befc2bb1e130 --- /dev/null +++ b/static/docs/reference/torch_is_installed.html @@ -0,0 +1,229 @@ + + + + + + + + +Verifies if torch is installed — torch_is_installed • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Verifies if torch is installed

    +
    + +
    torch_is_installed()
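A sketch of the typical guard pattern used throughout these examples (install_torch() is the package's installer helper):

```r
if (torch_is_installed()) {
  # the native libtorch backend is available; tensors can be created
  message("torch backend is available")
} else {
  # the R package is loaded but libtorch is not yet downloaded
  message("run install_torch() first")
}
```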
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_isfinite.html b/static/docs/reference/torch_isfinite.html new file mode 100644 index 0000000000000000000000000000000000000000..59316680032af4548798f988bd20c178d83910e6 --- /dev/null +++ b/static/docs/reference/torch_isfinite.html @@ -0,0 +1,255 @@ + + + + + + + + +Isfinite — torch_isfinite • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Isfinite

    +
    + +
    torch_isfinite(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) A tensor to check

    + +

    TEST

    + + + + +

    Returns a new tensor with boolean elements representing if each element is Finite or not.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_isfinite(torch_tensor(c(1, Inf, 2, -Inf, NaN))) +} +
    #> torch_tensor +#> 1 +#> 0 +#> 1 +#> 0 +#> 0 +#> [ CPUBoolType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_isinf.html b/static/docs/reference/torch_isinf.html new file mode 100644 index 0000000000000000000000000000000000000000..6b703dfe6d4bdb7929e22870f1fae4d8c1b0ecc1 --- /dev/null +++ b/static/docs/reference/torch_isinf.html @@ -0,0 +1,255 @@ + + + + + + + + +Isinf — torch_isinf • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Isinf

    +
    + +
    torch_isinf(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) A tensor to check

    + +

    TEST

    + + + + +

    Returns a new tensor with boolean elements representing if each element is +/-INF or not.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_isinf(torch_tensor(c(1, Inf, 2, -Inf, NaN))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 0 +#> 1 +#> 0 +#> [ CPUBoolType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_isnan.html b/static/docs/reference/torch_isnan.html new file mode 100644 index 0000000000000000000000000000000000000000..a498c097c004dcca08405225c83d7f5fa14abd85 --- /dev/null +++ b/static/docs/reference/torch_isnan.html @@ -0,0 +1,253 @@ + + + + + + + + +Isnan — torch_isnan • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Isnan

    +
    + +
    torch_isnan(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) A tensor to check

    + +

    TEST

    + + + + +

    Returns a new tensor with boolean elements representing if each element is NaN or not.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_isnan(torch_tensor(c(1, NaN, 2))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 0 +#> [ CPUBoolType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_kthvalue.html b/static/docs/reference/torch_kthvalue.html new file mode 100644 index 0000000000000000000000000000000000000000..1a2abf400c2f601127690d368833469f2e24a5fc --- /dev/null +++ b/static/docs/reference/torch_kthvalue.html @@ -0,0 +1,283 @@ + + + + + + + + +Kthvalue — torch_kthvalue • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Kthvalue

    +
    + +
    torch_kthvalue(self, k, dim = -1L, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    k

    (int) k for the k-th smallest element

    dim

    (int, optional) the dimension to find the kth value along

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    kthvalue(input, k, dim=NULL, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

Returns a list (values, indices) where values is the k-th +smallest element of each row of the input tensor in the given dimension +dim, and indices is the index location of each element found.

    +

    If dim is not given, the last dimension of the input is chosen.

    +

    If keepdim is TRUE, both the values and indices tensors +are the same size as input, except in the dimension dim where +they are of size 1. Otherwise, dim is squeezed +(see torch_squeeze), resulting in both the values and +indices tensors having 1 fewer dimension than the input tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(1., 6.) +x +torch_kthvalue(x, 4) +x=torch_arange(1.,7.)$resize_(c(2,3)) +x +torch_kthvalue(x, 2, 1, TRUE) +} +
    #> [[1]] +#> torch_tensor +#> 4 5 6 +#> [ CPUFloatType{1,3} ] +#> +#> [[2]] +#> torch_tensor +#> 1 1 1 +#> [ CPULongType{1,3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_layout.html b/static/docs/reference/torch_layout.html new file mode 100644 index 0000000000000000000000000000000000000000..d557649615b38941db1a2acebeefb9085d91fa82 --- /dev/null +++ b/static/docs/reference/torch_layout.html @@ -0,0 +1,231 @@ + + + + + + + + +Creates the corresponding layout — torch_layout • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Creates the corresponding layout

    +
    + +
    torch_strided()
    +
    +torch_sparse_coo()
    + + + +
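A minimal sketch of how a layout object is used, relying on the `layout` argument that creation functions such as `torch_linspace()` accept:

```r
if (torch_is_installed()) {
  # torch_strided() is the default dense layout; pass it to any
  # creation function that accepts a `layout` argument.
  x <- torch_linspace(0, 1, steps = 5, layout = torch_strided())
  x
}
```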
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_le.html b/static/docs/reference/torch_le.html new file mode 100644 index 0000000000000000000000000000000000000000..9f5263505b3330c105394bc5119fc1a1ef9dc805 --- /dev/null +++ b/static/docs/reference/torch_le.html @@ -0,0 +1,259 @@ + + + + + + + + +Le — torch_le • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Le

    +
    + +
    torch_le(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    le(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} \leq \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_le(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 1 0 +#> 1 1 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_lerp.html b/static/docs/reference/torch_lerp.html new file mode 100644 index 0000000000000000000000000000000000000000..59fb1fd7448345f5eb1e7beda0c3740e75082271 --- /dev/null +++ b/static/docs/reference/torch_lerp.html @@ -0,0 +1,274 @@ + + + + + + + + +Lerp — torch_lerp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lerp

    +
    + +
    torch_lerp(self, end, weight)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor with the starting points

    end

    (Tensor) the tensor with the ending points

    weight

    (float or tensor) the weight for the interpolation formula

    + +

    lerp(input, end, weight, out=NULL)

    + + + + +

    Does a linear interpolation of two tensors, start (given by input) and end, based +on a scalar or tensor weight, and returns the resulting tensor.

    +

    $$ + \mbox{out}_i = \mbox{start}_i + \mbox{weight}_i \times (\mbox{end}_i - \mbox{start}_i) +$$ +The shapes of start and end must be +broadcastable . If weight is a tensor, then +the shapes of weight, start, and end must be broadcastable .
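The formula above can be sketched in base R; `lerp` here is a hypothetical helper illustrating the arithmetic, not the torch implementation:

```r
# Hypothetical base-R sketch of the interpolation formula above.
lerp <- function(start, end, weight) {
  start + weight * (end - start)
}
lerp(c(1, 2, 3, 4), rep(10, 4), 0.5)
# 5.5 6.0 6.5 7.0 -- matching the torch_lerp() example output
```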

    + +

    Examples

    +
    if (torch_is_installed()) { + +start = torch_arange(1., 5.) +end = torch_empty(4)$fill_(10) +start +end +torch_lerp(start, end, 0.5) +torch_lerp(start, end, torch_full_like(start, 0.5)) +} +
    #> torch_tensor +#> 5.5000 +#> 6.0000 +#> 6.5000 +#> 7.0000 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_lgamma.html b/static/docs/reference/torch_lgamma.html new file mode 100644 index 0000000000000000000000000000000000000000..a84df0c4fe9935cbb9afca4d720bb5f949247fa2 --- /dev/null +++ b/static/docs/reference/torch_lgamma.html @@ -0,0 +1,257 @@ + + + + + + + + +Lgamma — torch_lgamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lgamma

    +
    + +
    torch_lgamma(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    lgamma(input, out=NULL) -> Tensor

    + + + + +

    Computes the logarithm of the gamma function on input.

    +

    $$ + \mbox{out}_{i} = \log \Gamma(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_arange(0.5, 2, 0.5) +torch_lgamma(a) +} +
    #> torch_tensor +#> 0.5724 +#> 0.0000 +#> -0.1208 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_linspace.html b/static/docs/reference/torch_linspace.html new file mode 100644 index 0000000000000000000000000000000000000000..3b0cfa6d1603c420109d78e3c777c71fa27e1340 --- /dev/null +++ b/static/docs/reference/torch_linspace.html @@ -0,0 +1,288 @@ + + + + + + + + +Linspace — torch_linspace • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Linspace

    +
    + +
    torch_linspace(
    +  start,
    +  end,
    +  steps = 100,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (float) the starting value for the set of points

    end

    (float) the ending value for the set of points

    steps

    (int) number of points to sample between start and end. Default: 100.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    linspace(start, end, steps=100, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a one-dimensional tensor of steps +equally spaced points between start and end.

    +

    The output tensor is 1-D of size steps.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_linspace(3, 10, steps=5) +torch_linspace(-10, 10, steps=5) +torch_linspace(start=-10, end=10, steps=5) +torch_linspace(start=-10, end=10, steps=1) +} +
    #> torch_tensor +#> -10 +#> [ CPUFloatType{1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_load.html b/static/docs/reference/torch_load.html new file mode 100644 index 0000000000000000000000000000000000000000..a7506e056c13ed5762035e60102d370ad3e1045d --- /dev/null +++ b/static/docs/reference/torch_load.html @@ -0,0 +1,241 @@ + + + + + + + + +Loads a saved object — torch_load • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Loads a saved object

    +
    + +
    torch_load(path)
    + +

    Arguments

    + + + + + + +
    path

    a path to the saved object

    + +

    See also

    + +

    Other torch_save: +torch_save()
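A minimal round-trip sketch, pairing `torch_load()` with `torch_save()` from the same family (the `.pt` file extension is an illustrative choice, not a requirement):

```r
if (torch_is_installed()) {
  x <- torch_randn(3)
  path <- tempfile(fileext = ".pt")
  torch_save(x, path)    # save the tensor ...
  y <- torch_load(path)  # ... and load it back
}
```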

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_log.html b/static/docs/reference/torch_log.html new file mode 100644 index 0000000000000000000000000000000000000000..0d30ca684e27ceeb3f14a00e4b5a3e0da2ba41dc --- /dev/null +++ b/static/docs/reference/torch_log.html @@ -0,0 +1,261 @@ + + + + + + + + +Log — torch_log • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log

    +
    + +
    torch_log(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    log(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the natural logarithm of the elements +of input.

    +

    $$ + y_{i} = \log_{e} (x_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_log(a) +} +
    #> torch_tensor +#> nan +#> 0.8911 +#> 0.0003 +#> nan +#> -1.5295 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_log10.html b/static/docs/reference/torch_log10.html new file mode 100644 index 0000000000000000000000000000000000000000..6b37e888b5bc4095cd9a74725c83e2c266b3d67a --- /dev/null +++ b/static/docs/reference/torch_log10.html @@ -0,0 +1,261 @@ + + + + + + + + +Log10 — torch_log10 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log10

    +
    + +
    torch_log10(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    log10(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the logarithm to the base 10 of the elements +of input.

    +

    $$ + y_{i} = \log_{10} (x_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_rand(5) +a +torch_log10(a) +} +
    #> torch_tensor +#> -0.2865 +#> -0.4904 +#> -0.9127 +#> -0.2747 +#> -0.2714 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_log1p.html b/static/docs/reference/torch_log1p.html new file mode 100644 index 0000000000000000000000000000000000000000..54c5c5f1210ba70a0e269aeceb0e20cec111dc16 --- /dev/null +++ b/static/docs/reference/torch_log1p.html @@ -0,0 +1,264 @@ + + + + + + + + +Log1p — torch_log1p • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log1p

    +
    + +
    torch_log1p(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    Note

    + +

    This function is more accurate than torch_log for small values of input.
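Base R's own `log1p()` illustrates the same accuracy point:

```r
x <- 1e-10
log(1 + x)  # computing 1 + x first rounds away most of x's precision
log1p(x)    # stays accurate for small x
```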

    +

    log1p(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the natural logarithm of (1 + input).

    +

    $$ + y_i = \log_{e} (x_i + 1) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_log1p(a) +} +
    #> torch_tensor +#> 0.6781 +#> 0.2969 +#> 0.6634 +#> 0.5704 +#> -0.1361 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_log2.html b/static/docs/reference/torch_log2.html new file mode 100644 index 0000000000000000000000000000000000000000..6ed88a77ab04731040b774b7b34f173744b20537 --- /dev/null +++ b/static/docs/reference/torch_log2.html @@ -0,0 +1,261 @@ + + + + + + + + +Log2 — torch_log2 • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Log2

    +
    + +
    torch_log2(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    log2(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the logarithm to the base 2 of the elements +of input.

    +

    $$ + y_{i} = \log_{2} (x_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_rand(5) +a +torch_log2(a) +} +
    #> torch_tensor +#> -0.6302 +#> -6.7372 +#> -0.0043 +#> -1.2060 +#> -1.4108 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logdet.html b/static/docs/reference/torch_logdet.html new file mode 100644 index 0000000000000000000000000000000000000000..f56fab2774565600d429111fe01e2875556d673b --- /dev/null +++ b/static/docs/reference/torch_logdet.html @@ -0,0 +1,269 @@ + + + + + + + + +Logdet — torch_logdet • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logdet

    +
    + +
    torch_logdet(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

    + +

    Note

    + + +
    Result is `-inf` if `input` has zero determinant, and is `NaN` if
    +`input` has negative determinant.
    +
    + +
    Backward through `logdet` internally uses SVD results when `input`
    +is not invertible. In this case, double backward through `logdet` will
+be unstable when `input` doesn't have distinct singular values. See
    +`~torch.svd` for details.
    +
    + +

    logdet(input) -> Tensor

    + + + + +

    Calculates the log determinant of a square matrix or of batches of square matrices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(3, 3)) +torch_det(A) +torch_logdet(A) +A +A$det() +A$det()$log() +} +
    #> torch_tensor +#> nan +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logical_and.html b/static/docs/reference/torch_logical_and.html new file mode 100644 index 0000000000000000000000000000000000000000..fc1082267897bf266bb1e68b7d0e80688ad25664 --- /dev/null +++ b/static/docs/reference/torch_logical_and.html @@ -0,0 +1,260 @@ + + + + + + + + +Logical_and — torch_logical_and • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_and

    +
    + +
    torch_logical_and(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the tensor to compute AND with

    + +

    logical_and(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical AND of the given input tensors. Zeros are treated as FALSE and nonzeros are +treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_and(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE))) +a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8()) +b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8()) +torch_logical_and(a, b) +if (FALSE) { +torch_logical_and(a, b, out=torch_empty(4, dtype=torch_bool())) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logical_not.html b/static/docs/reference/torch_logical_not.html new file mode 100644 index 0000000000000000000000000000000000000000..bc9b3b63efc6239a9292cc298cbac1e1139eeab3 --- /dev/null +++ b/static/docs/reference/torch_logical_not.html @@ -0,0 +1,255 @@ + + + + + + + + +Logical_not — torch_logical_not • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_not

    +
    + + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    logical_not(input, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool +dtype. If the input tensor is not a bool tensor, zeros are treated as FALSE and non-zeros are treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_not(torch_tensor(c(TRUE, FALSE))) +torch_logical_not(torch_tensor(c(0, 1, -10), dtype=torch_int8())) +torch_logical_not(torch_tensor(c(0., 1.5, -10.), dtype=torch_double())) +} +
    #> torch_tensor +#> 1 +#> 0 +#> 0 +#> [ CPUBoolType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logical_or.html b/static/docs/reference/torch_logical_or.html new file mode 100644 index 0000000000000000000000000000000000000000..6a7329686d38e52f458421b938930008d9d351cb --- /dev/null +++ b/static/docs/reference/torch_logical_or.html @@ -0,0 +1,262 @@ + + + + + + + + +Logical_or — torch_logical_or • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_or

    +
    + +
    torch_logical_or(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the tensor to compute OR with

    + +

    logical_or(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical OR of the given input tensors. Zeros are treated as FALSE and nonzeros are +treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_or(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE))) +a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8()) +b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8()) +torch_logical_or(a, b) +if (FALSE) { +torch_logical_or(a$double(), b$double()) +torch_logical_or(a$double(), b) +torch_logical_or(a, b, out=torch_empty(4, dtype=torch_bool())) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logical_xor.html b/static/docs/reference/torch_logical_xor.html new file mode 100644 index 0000000000000000000000000000000000000000..9ca24a8049164fdaa0eaf521871f62cc11cf012c --- /dev/null +++ b/static/docs/reference/torch_logical_xor.html @@ -0,0 +1,264 @@ + + + + + + + + +Logical_xor — torch_logical_xor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logical_xor

    +
    + +
    torch_logical_xor(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    other

    (Tensor) the tensor to compute XOR with

    + +

    logical_xor(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise logical XOR of the given input tensors. Zeros are treated as FALSE and nonzeros are +treated as TRUE.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logical_xor(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE))) +a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8()) +b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8()) +torch_logical_xor(a, b) +torch_logical_xor(a$to(dtype=torch_double()), b$to(dtype=torch_double())) +torch_logical_xor(a$to(dtype=torch_double()), b) +} +
    #> torch_tensor +#> 1 +#> 1 +#> 0 +#> 0 +#> [ CPUBoolType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logspace.html b/static/docs/reference/torch_logspace.html new file mode 100644 index 0000000000000000000000000000000000000000..bdd37da5280fb1773a4ed78d04cac190dfb5abcf --- /dev/null +++ b/static/docs/reference/torch_logspace.html @@ -0,0 +1,294 @@ + + + + + + + + +Logspace — torch_logspace • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logspace

    +
    + +
    torch_logspace(
    +  start,
    +  end,
    +  steps = 100,
    +  base = 10,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (float) the starting value for the set of points

    end

    (float) the ending value for the set of points

    steps

    (int) number of points to sample between start and end. Default: 100.

    base

    (float) base of the logarithm function. Default: 10.0.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    logspace(start, end, steps=100, base=10.0, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a one-dimensional tensor of steps points +logarithmically spaced with base base between +\({\mbox{base}}^{\mbox{start}}\) and \({\mbox{base}}^{\mbox{end}}\).

    +

    The output tensor is 1-D of size steps.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_logspace(start=-10, end=10, steps=5) +torch_logspace(start=0.1, end=1.0, steps=5) +torch_logspace(start=0.1, end=1.0, steps=1) +torch_logspace(start=2, end=2, steps=1, base=2) +} +
    #> torch_tensor +#> 4 +#> [ CPUFloatType{1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_logsumexp.html b/static/docs/reference/torch_logsumexp.html new file mode 100644 index 0000000000000000000000000000000000000000..c5a19fbd151e0b9894c6b292a81d9c238a851419 --- /dev/null +++ b/static/docs/reference/torch_logsumexp.html @@ -0,0 +1,272 @@ + + + + + + + + +Logsumexp — torch_logsumexp • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Logsumexp

    +
    + +
    torch_logsumexp(self, dim, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    logsumexp(input, dim, keepdim=False, out=NULL)

    + + + + +

    Returns the log of summed exponentials of each row of the input +tensor in the given dimension dim. The computation is numerically +stabilized.

    +

    For summation index \(j\) given by dim and other indices \(i\), the result is

    +

    $$ + \mbox{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij}) +$$

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).
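The numerical stabilization mentioned above is the usual log-sum-exp trick: subtract the maximum before exponentiating so that `exp()` cannot overflow. A base-R sketch of the idea (a hypothetical helper, not the torch implementation):

```r
# Hypothetical base-R sketch of the stabilized log-sum-exp computation.
logsumexp <- function(x) {
  m <- max(x)
  m + log(sum(exp(x - m)))
}
logsumexp(c(1000, 1000))  # finite, whereas log(sum(exp(x))) overflows to Inf
```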

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +torch_logsumexp(a, 1) +} +
    #> torch_tensor +#> 1.7933 +#> 0.4143 +#> 0.9287 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_lstsq.html b/static/docs/reference/torch_lstsq.html new file mode 100644 index 0000000000000000000000000000000000000000..bc9849373f3bf879696a9fc8c829b0eea5aa5c6e --- /dev/null +++ b/static/docs/reference/torch_lstsq.html @@ -0,0 +1,298 @@ + + + + + + + + +Lstsq — torch_lstsq • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lstsq

    +
    + +
    torch_lstsq(self, A)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the matrix \(B\)

    A

    (Tensor) the \(m\) by \(n\) matrix \(A\)

    + +

    Note

    + + +
    The case when \(m < n\) is not supported on the GPU.
    +
    + +

    lstsq(input, A, out=NULL) -> Tensor

    + + + + +

    Computes the solution to the least squares and least norm problems for a full +rank matrix \(A\) of size \((m \times n)\) and a matrix \(B\) of +size \((m \times k)\).

    +

    If \(m \geq n\), torch_lstsq() solves the least-squares problem:

    +

    $$ + \begin{array}{ll} + \min_X & \|AX-B\|_2. + \end{array} +$$ +If \(m < n\), torch_lstsq() solves the least-norm problem:

    +

    $$ + \begin{array}{llll} + \min_X & \|X\|_2 & \mbox{subject to} & AX = B. + \end{array} +$$ +Returned tensor \(X\) has shape \((\mbox{max}(m, n) \times k)\). The first \(n\) +rows of \(X\) contains the solution. If \(m \geq n\), the residual sum of squares +for the solution in each column is given by the sum of squares of elements in the +remaining \(m - n\) rows of that column.
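For the least-squares case (\(m \geq n\)), the first \(n\) rows of the result can be cross-checked in base R, since `qr.solve()` also computes a least-squares solution for an overdetermined full-column-rank system:

```r
# Base-R cross-check of the least-squares case (m >= n), using the
# same A and B as the example below.
A <- rbind(c(1, 1, 1), c(2, 3, 4), c(3, 5, 2), c(4, 2, 5), c(5, 4, 3))
B <- rbind(c(-10, -3), c(12, 14), c(14, 12), c(16, 16), c(18, 16))
qr.solve(A, B)  # compare with the first n rows of torch_lstsq()'s X
```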

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_tensor(rbind( + c(1,1,1), + c(2,3,4), + c(3,5,2), + c(4,2,5), + c(5,4,3) +)) +B = torch_tensor(rbind( + c(-10, -3), + c(12, 14), + c(14, 12), + c(16, 16), + c(18, 16) +)) +out = torch_lstsq(B, A) +out[[1]] +} +
    #> torch_tensor +#> 2.0000 1.0000 +#> 1.0000 1.0000 +#> 1.0000 2.0000 +#> 10.9635 4.8501 +#> 8.9332 5.2418 +#> [ CPUFloatType{5,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_lt.html b/static/docs/reference/torch_lt.html new file mode 100644 index 0000000000000000000000000000000000000000..ebda53accfeba8de2e395e8b7e1c269055b4d71d --- /dev/null +++ b/static/docs/reference/torch_lt.html @@ -0,0 +1,259 @@ + + + + + + + + +Lt — torch_lt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lt

    +
    + +
    torch_lt(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    lt(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(\mbox{input} < \mbox{other}\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_lt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 0 0 +#> 1 0 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_lu.html b/static/docs/reference/torch_lu.html new file mode 100644 index 0000000000000000000000000000000000000000..c8eb4f82a3a7cfe7ebf0e58c0a203954d4ccd975 --- /dev/null +++ b/static/docs/reference/torch_lu.html @@ -0,0 +1,281 @@ + + + + + + + + +LU — torch_lu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Computes the LU factorization of a matrix or batches of matrices A. Returns a +list containing the LU factorization and pivots of A. Pivoting is done if pivot +is set to TRUE.

    +
    + +
    torch_lu(A, pivot = TRUE, get_infos = FALSE, out = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    A

    (Tensor) the tensor to factor of size \((*, m, n)\), where \(*\) is zero or more batch dimensions

    pivot

    (bool, optional) – controls whether pivoting is done. Default: TRUE

    get_infos

    (bool, optional) – if set to TRUE, returns an info IntTensor. Default: FALSE

    out

    (tuple, optional) – optional output tuple. If get_infos is TRUE, then the elements +in the tuple are Tensor, IntTensor, and IntTensor. If get_infos is FALSE, then the +elements in the tuple are Tensor, IntTensor. Default: NULL

    + + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(2, 3, 3)) +torch_lu(A) + +} +
    #> [[1]] +#> torch_tensor +#> (1,.,.) = +#> 0.3564 0.0937 -0.2445 +#> 0.3677 1.8428 -0.7129 +#> -0.8647 0.0045 0.4332 +#> +#> (2,.,.) = +#> 1.5740 -0.5029 -0.3207 +#> 0.2765 -0.6417 0.4304 +#> 0.1308 0.1404 0.1848 +#> [ CPUFloatType{2,3,3} ] +#> +#> [[2]] +#> torch_tensor +#> 3 3 3 +#> 3 2 3 +#> [ CPUIntType{2,3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_lu_solve.html b/static/docs/reference/torch_lu_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..81d46bfbd7cbb36d078e9e6bebfe562c529a302c --- /dev/null +++ b/static/docs/reference/torch_lu_solve.html @@ -0,0 +1,263 @@ + + + + + + + + +Lu_solve — torch_lu_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Lu_solve

    +
    + +
    torch_lu_solve(self, LU_data, LU_pivots)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the RHS tensor of size \((*, m, k)\), where \(*\) is zero or more batch dimensions.

    LU_data

    (Tensor) the pivoted LU factorization of A from torch_lu of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

    LU_pivots

    (IntTensor) the pivots of the LU factorization from torch_lu of size \((*, m)\), where \(*\) is zero or more batch dimensions. The batch dimensions of LU_pivots must be equal to the batch dimensions of LU_data.

    + +

    lu_solve(input, LU_data, LU_pivots, out=NULL) -> Tensor

    + + + + +

    Returns the LU solve of the linear system \(Ax = b\) using the partially pivoted +LU factorization of A from torch_lu.

    + +

    Examples

    +
    if (torch_is_installed()) { +A = torch_randn(c(2, 3, 3)) +b = torch_randn(c(2, 3, 1)) +out = torch_lu(A) +x = torch_lu_solve(b, out[[1]], out[[2]]) +torch_norm(torch_bmm(A, x) - b) +} +
    #> torch_tensor +#> 1.58402e-07 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_manual_seed.html b/static/docs/reference/torch_manual_seed.html new file mode 100644 index 0000000000000000000000000000000000000000..37e25c5a87e114279333c7ce677f02ede48301a4 --- /dev/null +++ b/static/docs/reference/torch_manual_seed.html @@ -0,0 +1,237 @@ + + + + + + + + +Sets the seed for generating random numbers. — torch_manual_seed • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sets the seed for generating random numbers.

    +
    + +
    torch_manual_seed(seed)
    + +

    Arguments

    + + + + + + +
    seed

    integer seed.

    + + +
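This page lacks an example; a minimal sketch of how seeding makes random number generation reproducible, following the `if (torch_is_installed())` convention used by the other examples in these docs (`torch_equal` is assumed here to test exact element-wise equality):

```r
if (torch_is_installed()) {
  # Re-seeding with the same value should reproduce the same draws
  torch_manual_seed(42)
  a = torch_randn(c(2, 2))
  torch_manual_seed(42)
  b = torch_randn(c(2, 2))
  torch_equal(a, b)  # TRUE if the default generator honors the seed
}
```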
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_masked_select.html b/static/docs/reference/torch_masked_select.html new file mode 100644 index 0000000000000000000000000000000000000000..6108bfdfd1bd4e65d4becab979a0dc928d41790d --- /dev/null +++ b/static/docs/reference/torch_masked_select.html @@ -0,0 +1,269 @@ + + + + + + + + +Masked_select — torch_masked_select • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Masked_select

    +
    + +
    torch_masked_select(self, mask)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    mask

    (BoolTensor) the tensor containing the binary mask to index with

    + +

    Note

    + +

The returned tensor does not use the same storage +as the original tensor.

    +

    masked_select(input, mask, out=NULL) -> Tensor

    + + + + +

    Returns a new 1-D tensor which indexes the input tensor according to +the boolean mask mask which is a BoolTensor.

    +

    The shapes of the mask tensor and the input tensor don't need +to match, but they must be broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(3, 4)) +x +mask = x$ge(0.5) +mask +torch_masked_select(x, mask) +} +
    #> torch_tensor +#> 1.2190 +#> 1.2591 +#> 2.0310 +#> 0.7883 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_matmul.html b/static/docs/reference/torch_matmul.html new file mode 100644 index 0000000000000000000000000000000000000000..c705dc239d5ae04d856e425c968380ad27390459 --- /dev/null +++ b/static/docs/reference/torch_matmul.html @@ -0,0 +1,347 @@ + + + + + + + + +Matmul — torch_matmul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Matmul

    +
    + +
    torch_matmul(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first tensor to be multiplied

    other

    (Tensor) the second tensor to be multiplied

    + +

    Note

    + + +
    The 1-dimensional dot product version of this function does not support an `out` parameter.
    +
    + +

    matmul(input, other, out=NULL) -> Tensor

    + + + + +

    Matrix product of two tensors.

    +

    The behavior depends on the dimensionality of the tensors as follows:

      +
    • If both tensors are 1-dimensional, the dot product (scalar) is returned.

    • +
    • If both arguments are 2-dimensional, the matrix-matrix product is returned.

    • +
    • If the first argument is 1-dimensional and the second argument is 2-dimensional, +a 1 is prepended to its dimension for the purpose of the matrix multiply. +After the matrix multiply, the prepended dimension is removed.

    • +
    • If the first argument is 2-dimensional and the second argument is 1-dimensional, +the matrix-vector product is returned.

    • +
• If both arguments are at least 1-dimensional and at least one argument is +N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first +argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the +batched matrix multiply and removed after. If the second argument is 1-dimensional, a +1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. +The non-matrix (i.e. batch) dimensions are broadcasted (and thus +must be broadcastable). For example, if input is a +\((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) +tensor, out will be a \((j \times k \times n \times p)\) tensor.

    • +
    + + +

    Examples

    +
    if (torch_is_installed()) { + +# vector x vector +tensor1 = torch_randn(c(3)) +tensor2 = torch_randn(c(3)) +torch_matmul(tensor1, tensor2) +# matrix x vector +tensor1 = torch_randn(c(3, 4)) +tensor2 = torch_randn(c(4)) +torch_matmul(tensor1, tensor2) +# batched matrix x broadcasted vector +tensor1 = torch_randn(c(10, 3, 4)) +tensor2 = torch_randn(c(4)) +torch_matmul(tensor1, tensor2) +# batched matrix x batched matrix +tensor1 = torch_randn(c(10, 3, 4)) +tensor2 = torch_randn(c(10, 4, 5)) +torch_matmul(tensor1, tensor2) +# batched matrix x broadcasted matrix +tensor1 = torch_randn(c(10, 3, 4)) +tensor2 = torch_randn(c(4, 5)) +torch_matmul(tensor1, tensor2) +} +
    #> torch_tensor +#> (1,.,.) = +#> -0.2185 0.1251 0.5783 -0.7380 1.9290 +#> -0.1489 0.9409 0.2506 0.7444 -0.3201 +#> 0.2165 2.5157 0.8195 -1.2013 2.7602 +#> +#> (2,.,.) = +#> -0.6251 1.0054 1.5308 1.0346 1.7585 +#> 0.7581 -0.5255 -1.6358 -0.3028 -2.8103 +#> 1.9208 0.9674 -2.9281 1.9148 -7.9955 +#> +#> (3,.,.) = +#> -0.1272 1.0480 0.5100 -0.5049 1.5373 +#> -0.6784 -2.4129 0.4735 0.3082 0.6775 +#> -0.9108 -1.5962 1.0820 -0.0427 2.2469 +#> +#> (4,.,.) = +#> 0.1000 1.6520 0.6610 1.2012 -0.2146 +#> -0.9429 -1.2208 0.9296 -1.5397 3.7074 +#> 0.6559 -0.7159 -1.7710 -0.6774 -2.5811 +#> +#> (5,.,.) = +#> -0.1291 1.6498 1.3190 0.7676 1.5068 +#> 0.1177 -0.1660 -0.6345 1.6803 -3.0527 +#> 0.5094 -2.0094 -1.7119 2.0194 -5.5159 +#> +#> (6,.,.) = +#> 0.0387 1.2627 0.4658 -0.2528 1.1118 +#> -0.0531 -0.8584 -0.3116 -1.4711 1.1272 +#> 0.7294 -0.9226 -1.9783 1.3631 -5.2894 +#> +#> (7,.,.) = +#> -0.7140 -1.2579 0.6561 -1.1249 2.6681 +#> 0.8946 1.1603 -1.5402 0.5578 -3.6267 +#> -0.8431 0.8911 1.9896 0.9616 2.7199 +#> +#> (8,.,.) = +#> 0.7925 0.9144 -1.5220 -0.5466 -2.3076 +#> -0.6783 0.0011 1.2162 -0.9383 3.4450 +#> 0.2356 -0.6907 -0.6703 -0.1563 -1.0982 +#> +#> (9,.,.) = +#> -0.7889 0.4032 1.3563 -0.5223 3.2740 +#> 0.3636 -0.9006 -1.1470 -1.1717 -0.8283 +#> -0.4663 -0.2173 0.5950 4.1339 -3.4726 +#> +#> (10,.,.) = +#> -0.7729 1.7140 2.1228 2.9509 0.6895 +#> 0.4515 -0.0429 -1.0629 -2.8544 1.2142 +#> -1.5890 -0.3097 2.8757 1.6369 3.7579 +#> [ CPUFloatType{10,3,5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_matrix_power.html b/static/docs/reference/torch_matrix_power.html new file mode 100644 index 0000000000000000000000000000000000000000..16f41b3166fa74f60aab14ef287ddd7140b9108c --- /dev/null +++ b/static/docs/reference/torch_matrix_power.html @@ -0,0 +1,268 @@ + + + + + + + + +Matrix_power — torch_matrix_power • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Matrix_power

    +
    + +
    torch_matrix_power(self, n)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    n

    (int) the power to raise the matrix to

    + +

    matrix_power(input, n) -> Tensor

    + + + + +

Returns the matrix raised to the power n for square matrices. +For a batch of matrices, each individual matrix is raised to the power n.

    +

If n is negative, then the inverse of the matrix (if invertible) is +raised to the power abs(n). For a batch of matrices, the batched inverse +(if invertible) is raised to the power abs(n). If n is 0, then an identity matrix +is returned.
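As a sketch of the zero and negative powers (assuming torch is installed; the matrix is random, so invertibility is assumed rather than guaranteed):

```r
if (torch_is_installed()) {
  a = torch_randn(c(2, 2))
  # n = 0 returns the identity matrix
  torch_matrix_power(a, 0)
  # composing positive and negative powers should land close to the identity
  torch_matmul(torch_matrix_power(a, 2), torch_matrix_power(a, -2))
}
```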

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(2, 2, 2)) +a +torch_matrix_power(a, 3) +} +
    #> torch_tensor +#> (1,.,.) = +#> -0.4133 -0.7320 +#> -0.0927 0.0367 +#> +#> (2,.,.) = +#> -1.2180 2.0019 +#> -4.7379 -8.9109 +#> [ CPUFloatType{2,2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_matrix_rank.html b/static/docs/reference/torch_matrix_rank.html new file mode 100644 index 0000000000000000000000000000000000000000..95a00df787a103dab2a7ab3262429ff07bcb8351 --- /dev/null +++ b/static/docs/reference/torch_matrix_rank.html @@ -0,0 +1,268 @@ + + + + + + + + +Matrix_rank — torch_matrix_rank • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Matrix_rank

    +
    + +
    torch_matrix_rank(self, tol, symmetric = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input 2-D tensor

    tol

    (float, optional) the tolerance value. Default: NULL

    symmetric

    (bool, optional) indicates whether input is symmetric. Default: FALSE

    + +

    matrix_rank(input, tol=NULL, symmetric=False) -> Tensor

    + + + + +

Returns the numerical rank of a 2-D tensor. By default the matrix rank is +computed using SVD. If symmetric is TRUE, +then input is assumed to be symmetric, and the +rank is computed from the eigenvalues.

    +

    tol is the threshold below which the singular values (or the eigenvalues +when symmetric is TRUE) are considered to be 0. If tol is not +specified, tol is set to S.max() * max(S.size()) * eps where S is the +singular values (or the eigenvalues when symmetric is TRUE), and eps +is the epsilon value for the datatype of input.
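As an illustration of the tolerance behavior (assuming torch is installed): a rank-deficient matrix such as a matrix of all ones has only one singular value above the default threshold, so its reported rank should be 1.

```r
if (torch_is_installed()) {
  a = torch_ones(c(4, 4))  # rank-1 matrix: all rows identical
  torch_matrix_rank(a)
}
```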

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_eye(10) +torch_matrix_rank(a) +} +
    #> torch_tensor +#> 10 +#> [ CPULongType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_max.html b/static/docs/reference/torch_max.html new file mode 100644 index 0000000000000000000000000000000000000000..265c01dd6e2dcf51f584c2912c9baba180cbebaf --- /dev/null +++ b/static/docs/reference/torch_max.html @@ -0,0 +1,320 @@ + + + + + + + + +Max — torch_max • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Max

    +
    + + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not. Default: FALSE.

    out

    (tuple, optional) the result tuple of two output tensors (max, max_indices)

    other

    (Tensor) the second input tensor

    + +

    Note

    + +

    When the shapes do not match, the shape of the returned output tensor +follows the broadcasting rules .

    +

    max(input) -> Tensor

    + + + + +

    Returns the maximum value of all elements in the input tensor.

    +

    max(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the maximum +value of each row of the input tensor in the given dimension +dim. And indices is the index location of each maximum value found +(argmax).

    +

    Warning

    + + + +

    indices does not necessarily contain the first occurrence of each +maximal value found, unless it is unique. +The exact implementation details are device-specific. +Do not expect the same result when run on CPU and GPU in general.

    +

    If keepdim is TRUE, the output tensors are of the same size +as input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting +in the output tensors having 1 fewer dimension than input.

    +

    max(input, other, out=NULL) -> Tensor

    + + + + +

    Each element of the tensor input is compared with the corresponding +element of the tensor other and an element-wise maximum is taken.

    +

    The shapes of input and other don't need to match, +but they must be broadcastable .

    +

    $$ + \mbox{out}_i = \max(\mbox{tensor}_i, \mbox{other}_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_max(a) + + +a = torch_randn(c(4, 4)) +a +torch_max(a, dim = 1) + + +a = torch_randn(c(4)) +a +b = torch_randn(c(4)) +b +torch_max(a, other = b) +} +
    #> torch_tensor +#> 0.4118 +#> -0.1116 +#> 0.7360 +#> 0.9171 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_mean.html b/static/docs/reference/torch_mean.html new file mode 100644 index 0000000000000000000000000000000000000000..00cd66b50408922c4e88fbeaf12b21a532ac9e29 --- /dev/null +++ b/static/docs/reference/torch_mean.html @@ -0,0 +1,283 @@ + + + + + + + + +Mean — torch_mean • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mean

    +
    + +
    torch_mean(self, dim, keepdim = FALSE, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    dtype

    the resulting data type.

    + +

    mean(input) -> Tensor

    + + + + +

    Returns the mean value of all elements in the input tensor.

    +

    mean(input, dim, keepdim=False, out=NULL) -> Tensor

    + + + + +

    Returns the mean value of each row of the input tensor in the given +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_mean(a) + + +a = torch_randn(c(4, 4)) +a +torch_mean(a, 1) +torch_mean(a, 1, TRUE) +} +
    #> torch_tensor +#> 0.1962 0.3079 -0.5549 0.3102 +#> [ CPUFloatType{1,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_median.html b/static/docs/reference/torch_median.html new file mode 100644 index 0000000000000000000000000000000000000000..d79258ad2d8943180f4d6ed021f4b796ae9e7ccf --- /dev/null +++ b/static/docs/reference/torch_median.html @@ -0,0 +1,294 @@ + + + + + + + + +Median — torch_median • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Median

    +
    + +
    torch_median(self, dim, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    median(input) -> Tensor

    + + + + +

    Returns the median value of all elements in the input tensor.

    +

    median(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the median +value of each row of the input tensor in the given dimension +dim. And indices is the index location of each median value found.

    +

    By default, dim is the last dimension of the input tensor.

    +

If keepdim is TRUE, the output tensors are of the same size +as input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in +the output tensors having 1 fewer dimension than input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_median(a) + + +a = torch_randn(c(4, 5)) +a +torch_median(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 0.1832 +#> -0.8122 +#> -1.1052 +#> -1.2011 +#> -0.0244 +#> [ CPUFloatType{5} ] +#> +#> [[2]] +#> torch_tensor +#> 3 +#> 0 +#> 3 +#> 0 +#> 1 +#> [ CPULongType{5} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_memory_format.html b/static/docs/reference/torch_memory_format.html new file mode 100644 index 0000000000000000000000000000000000000000..5b23723e1d58df084b265d90852e046dea352298 --- /dev/null +++ b/static/docs/reference/torch_memory_format.html @@ -0,0 +1,233 @@ + + + + + + + + +Memory format — torch_memory_format • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns the correspondent memory format.

    +
    + +
    torch_contiguous_format()
    +
    +torch_preserve_format()
    +
    +torch_channels_last_format()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_meshgrid.html b/static/docs/reference/torch_meshgrid.html new file mode 100644 index 0000000000000000000000000000000000000000..5a4691b7e5dbab3ecac807e762f95158c0c8aa48 --- /dev/null +++ b/static/docs/reference/torch_meshgrid.html @@ -0,0 +1,268 @@ + + + + + + + + +Meshgrid — torch_meshgrid • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Meshgrid

    +
    + +
    torch_meshgrid(tensors)
    + +

    Arguments

    + + + + + + +
    tensors

(list of Tensor) list of scalars or 1-dimensional tensors. Scalars will be +treated as tensors of size (1,).

    + +

meshgrid(tensors) -> list of Tensors

    + + + + +

Take \(N\) tensors, each of which can be either a scalar or a 1-dimensional +vector, and create \(N\) N-dimensional grids, where the \(i\)-th grid is defined by +expanding the \(i\)-th input over dimensions defined by the other inputs.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(c(1, 2, 3)) +y = torch_tensor(c(4, 5, 6)) +out = torch_meshgrid(list(x, y)) +out +} +
    #> [[1]] +#> torch_tensor +#> 1 1 1 +#> 2 2 2 +#> 3 3 3 +#> [ CPUFloatType{3,3} ] +#> +#> [[2]] +#> torch_tensor +#> 4 5 6 +#> 4 5 6 +#> 4 5 6 +#> [ CPUFloatType{3,3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_min.html b/static/docs/reference/torch_min.html new file mode 100644 index 0000000000000000000000000000000000000000..f193a5f63bd096edd7d091f0ba1f978cd7ca4aa0 --- /dev/null +++ b/static/docs/reference/torch_min.html @@ -0,0 +1,321 @@ + + + + + + + + +Min — torch_min • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Min

    +
    + + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    out

    (tuple, optional) the tuple of two output tensors (min, min_indices)

    other

    (Tensor) the second input tensor

    + +

    Note

    + +

    When the shapes do not match, the shape of the returned output tensor +follows the broadcasting rules .

    +

    min(input) -> Tensor

    + + + + +

    Returns the minimum value of all elements in the input tensor.

    +

    min(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the minimum +value of each row of the input tensor in the given dimension +dim. And indices is the index location of each minimum value found +(argmin).

    +

    Warning

    + + + +

    indices does not necessarily contain the first occurrence of each +minimal value found, unless it is unique. +The exact implementation details are device-specific. +Do not expect the same result when run on CPU and GPU in general.

    +

    If keepdim is TRUE, the output tensors are of the same size as +input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in +the output tensors having 1 fewer dimension than input.

    +

    min(input, other, out=NULL) -> Tensor

    + + + + +

    Each element of the tensor input is compared with the corresponding +element of the tensor other and an element-wise minimum is taken. +The resulting tensor is returned.

    +

    The shapes of input and other don't need to match, +but they must be broadcastable .

    +

    $$ + \mbox{out}_i = \min(\mbox{tensor}_i, \mbox{other}_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_min(a) + + +a = torch_randn(c(4, 4)) +a +torch_min(a, dim = 1) + + +a = torch_randn(c(4)) +a +b = torch_randn(c(4)) +b +torch_min(a, other = b) +} +
    #> torch_tensor +#> -0.5633 +#> -0.5903 +#> -2.0560 +#> -1.0671 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_mm.html b/static/docs/reference/torch_mm.html new file mode 100644 index 0000000000000000000000000000000000000000..ccb01a9d0c64f5591a3b744cd633a3e821018949 --- /dev/null +++ b/static/docs/reference/torch_mm.html @@ -0,0 +1,264 @@ + + + + + + + + +Mm — torch_mm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mm

    +
    + +
    torch_mm(self, mat2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first matrix to be multiplied

    mat2

    (Tensor) the second matrix to be multiplied

    + +

    Note

    + +

    This function does not broadcast . +For broadcasting matrix products, see torch_matmul.

    +

    mm(input, mat2, out=NULL) -> Tensor

    + + + + +

    Performs a matrix multiplication of the matrices input and mat2.

    +

    If input is a \((n \times m)\) tensor, mat2 is a +\((m \times p)\) tensor, out will be a \((n \times p)\) tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +mat1 = torch_randn(c(2, 3)) +mat2 = torch_randn(c(3, 3)) +torch_mm(mat1, mat2) +} +
    #> torch_tensor +#> 0.1067 -2.8593 1.3572 +#> -2.8347 0.9182 1.1475 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_mode.html b/static/docs/reference/torch_mode.html new file mode 100644 index 0000000000000000000000000000000000000000..a95d01c6c2a713ec61f396455bd574c95a1588d8 --- /dev/null +++ b/static/docs/reference/torch_mode.html @@ -0,0 +1,279 @@ + + + + + + + + +Mode — torch_mode • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mode

    +
    + +
    torch_mode(self, dim = -1L, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    Note

    + +

    This function is not defined for torch_cuda.Tensor yet.

    +

    mode(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

    + + + + +

    Returns a namedtuple (values, indices) where values is the mode +value of each row of the input tensor in the given dimension +dim, i.e. a value which appears most often +in that row, and indices is the index location of each mode value found.

    +

    By default, dim is the last dimension of the input tensor.

    +

    If keepdim is TRUE, the output tensors are of the same size as +input except in the dimension dim where they are of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting +in the output tensors having 1 fewer dimension than input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randint(0, 50, size = list(5)) +a +torch_mode(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 34 +#> [ CPUFloatType{} ] +#> +#> [[2]] +#> torch_tensor +#> 2 +#> [ CPULongType{} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_mul.html b/static/docs/reference/torch_mul.html new file mode 100644 index 0000000000000000000000000000000000000000..02940046fb9618b430ebf505e8dca6892d0e4bbe --- /dev/null +++ b/static/docs/reference/torch_mul.html @@ -0,0 +1,282 @@ + + + + + + + + +Mul — torch_mul • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mul

    +
    + +
    torch_mul(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the first multiplicand tensor

    other

    (Tensor) the second multiplicand tensor

    + +

    mul(input, other, out=NULL)

    + + + + +

Multiplies each element of the tensor input with the scalar +other and returns a new resulting tensor.

    +

$$ + \mbox{out}_i = \mbox{other} \times \mbox{input}_i +$$ +If input is of type FloatTensor or DoubleTensor, other +should be a real number, otherwise it should be an integer.

    + + +

    Each element of the tensor input is multiplied by the corresponding +element of the Tensor other. The resulting tensor is returned.

    +

    The shapes of input and other must be +broadcastable .

    +

    $$ + \mbox{out}_i = \mbox{input}_i \times \mbox{other}_i +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3)) +a +torch_mul(a, 100) + + +a = torch_randn(c(4, 1)) +a +b = torch_randn(c(1, 4)) +b +torch_mul(a, b) +} +
    #> torch_tensor +#> 0.9656 0.2508 -0.9725 0.1089 +#> -0.4285 -0.1113 0.4315 -0.0483 +#> -0.5136 -0.1334 0.5173 -0.0579 +#> 0.3787 0.0984 -0.3814 0.0427 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_multinomial.html b/static/docs/reference/torch_multinomial.html new file mode 100644 index 0000000000000000000000000000000000000000..6055e831dd89f9d369803090f25cd8b08d64ef78 --- /dev/null +++ b/static/docs/reference/torch_multinomial.html @@ -0,0 +1,291 @@ + + + + + + + + +Multinomial — torch_multinomial • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Multinomial

    +
    + +
    torch_multinomial(self, num_samples, replacement = FALSE, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor containing probabilities

    num_samples

    (int) number of samples to draw

    replacement

    (bool, optional) whether to draw with replacement or not

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

    Note

    + + +
    The rows of `input` do not need to sum to one (in which case we use
    +the values as weights), but must be non-negative, finite and have
    +a non-zero sum.
    +
    + +

    Indices are ordered from left to right according to when each was sampled +(first samples are placed in first column).

    +

    If input is a vector, out is a vector of size num_samples.

    +

If input is a matrix with m rows, out is a matrix of shape +\((m \times \mbox{num\_samples})\).

    +

    If replacement is TRUE, samples are drawn with replacement.

    +

    If not, they are drawn without replacement, which means that when a +sample index is drawn for a row, it cannot be drawn again for that row.

    +
    When drawn without replacement, `num_samples` must be lower than
    +number of non-zero elements in `input` (or the min number of non-zero
    +elements in each row of `input` if it is a matrix).
    +
    + +

    multinomial(input, num_samples, replacement=False, *, generator=NULL, out=NULL) -> LongTensor

    + + + + +

    Returns a tensor where each row contains num_samples indices sampled +from the multinomial probability distribution located in the corresponding row +of tensor input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +weights = torch_tensor(c(0, 10, 3, 0), dtype=torch_float()) # create a tensor of weights +torch_multinomial(weights, 2) +torch_multinomial(weights, 4, replacement=TRUE) +} +
    #> torch_tensor +#> 1 +#> 1 +#> 2 +#> 1 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_mv.html b/static/docs/reference/torch_mv.html new file mode 100644 index 0000000000000000000000000000000000000000..0a7c6e5e404af9d17e74b081541eb68f565f9aa8 --- /dev/null +++ b/static/docs/reference/torch_mv.html @@ -0,0 +1,264 @@ + + + + + + + + +Mv — torch_mv • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mv

    +
    + +
    torch_mv(self, vec)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) matrix to be multiplied

    vec

    (Tensor) vector to be multiplied

    + +

    Note

    + +

    This function does not broadcast .

    +

    mv(input, vec, out=NULL) -> Tensor

    + + + + +

    Performs a matrix-vector product of the matrix input and the vector +vec.

    +

    If input is a \((n \times m)\) tensor, vec is a 1-D tensor of +size \(m\), out will be 1-D of size \(n\).

    + +

    Examples

    +
    if (torch_is_installed()) { + +mat = torch_randn(c(2, 3)) +vec = torch_randn(c(3)) +torch_mv(mat, vec) +} +
    #> torch_tensor +#> -5.7310 +#> 1.3023 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_mvlgamma.html b/static/docs/reference/torch_mvlgamma.html new file mode 100644 index 0000000000000000000000000000000000000000..6595d4c971bfdcfacfc07e5851dbc052a5c45b3b --- /dev/null +++ b/static/docs/reference/torch_mvlgamma.html @@ -0,0 +1,264 @@ + + + + + + + + +Mvlgamma — torch_mvlgamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Mvlgamma

    +
    + +
    torch_mvlgamma(self, p)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compute the multivariate log-gamma function

    p

    (int) the number of dimensions

    + +

    mvlgamma(input, p) -> Tensor

    + + + + +

Computes the multivariate log-gamma function +(https://en.wikipedia.org/wiki/Multivariate_gamma_function) with dimension +\(p\) element-wise, given by

    +

    $$ + \log(\Gamma_{p}(a)) = C + \displaystyle \sum_{i=1}^{p} \log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right) +$$ +where \(C = \log(\pi) \times \frac{p (p - 1)}{4}\) and \(\Gamma(\cdot)\) is the Gamma function.

    +

All elements must be greater than \(\frac{p - 1}{2}\), otherwise an error is thrown.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_empty(c(2, 3))$uniform_(1, 2) +a +torch_mvlgamma(a, 2) +} +
    #> torch_tensor +#> 0.4040 0.4059 0.7450 +#> 0.3997 0.8720 0.4162 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_narrow.html b/static/docs/reference/torch_narrow.html new file mode 100644 index 0000000000000000000000000000000000000000..887af9455e2af6e5efe0465dc93204eeac2b7021 --- /dev/null +++ b/static/docs/reference/torch_narrow.html @@ -0,0 +1,269 @@ + + + + + + + + +Narrow — torch_narrow • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Narrow

    +
    + +
    torch_narrow(self, dim, start, length)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the tensor to narrow

    dim

    (int) the dimension along which to narrow

    start

(int) the index at which to start narrowing

    length

(int) the length of the narrowed section

    + +

    narrow(input, dim, start, length) -> Tensor

    + + + + +

Returns a new tensor that is a narrowed version of the input tensor. Dimension +dim spans the input from start to start + length. The +returned tensor and the input tensor share the same underlying storage.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(matrix(c(1:9), ncol = 3, byrow= TRUE)) +torch_narrow(x, 1, torch_tensor(0L)$sum(dim = 1), 2) +torch_narrow(x, 2, torch_tensor(1L)$sum(dim = 1), 2) +} +
    #> torch_tensor +#> 2 3 +#> 5 6 +#> 8 9 +#> [ CPULongType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ne.html b/static/docs/reference/torch_ne.html new file mode 100644 index 0000000000000000000000000000000000000000..3696a16260b11dbb43f0a7a0d71476d66b1a0f06 --- /dev/null +++ b/static/docs/reference/torch_ne.html @@ -0,0 +1,259 @@ + + + + + + + + +Ne — torch_ne • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ne

    +
    + +
    torch_ne(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to compare

    other

    (Tensor or float) the tensor or value to compare

    + +

    ne(input, other, out=NULL) -> Tensor

    + + + + +

    Computes \(input \neq other\) element-wise.

    +

    The second argument can be a number or a tensor whose shape is +broadcastable with the first argument.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_ne(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), + torch_tensor(matrix(rep(c(1,4), each = 2), ncol = 2, byrow=TRUE))) +} +
    #> torch_tensor +#> 0 1 +#> 1 0 +#> [ CPUBoolType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_neg.html b/static/docs/reference/torch_neg.html new file mode 100644 index 0000000000000000000000000000000000000000..d8e7d5b458fe1df32ffe5b30aaf8d37c43755cd9 --- /dev/null +++ b/static/docs/reference/torch_neg.html @@ -0,0 +1,260 @@ + + + + + + + + +Neg — torch_neg • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Neg

    +
    + +
    torch_neg(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    neg(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the negative of the elements of input.

    +

    $$ + \mbox{out} = -1 \times \mbox{input} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5)) +a +torch_neg(a) +} +
    #> torch_tensor +#> 0.0573 +#> 0.4788 +#> 0.7503 +#> -0.4747 +#> -0.4677 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_nonzero.html b/static/docs/reference/torch_nonzero.html new file mode 100644 index 0000000000000000000000000000000000000000..e4f873c2374dd913650445feff23f2b6ee68980f --- /dev/null +++ b/static/docs/reference/torch_nonzero.html @@ -0,0 +1,284 @@ + + + + + + + + +Nonzero — torch_nonzero • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Nonzero

    +
    + +
    torch_nonzero(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    Note

    + + +
torch_nonzero(..., as_tuple=FALSE) (default) returns a
+2-D tensor where each row is the index for a nonzero value.
+
+torch_nonzero(..., as_tuple=TRUE) returns a tuple of 1-D
+index tensors, allowing for advanced indexing, so x[x$nonzero(as_tuple=TRUE)]
+gives all nonzero values of tensor x. Of the returned tuple, each index tensor
+contains nonzero indices for a certain dimension.
    +
    +See below for more details on the two behaviors.
    +
    + +

    nonzero(input, *, out=NULL, as_tuple=False) -> LongTensor or tuple of LongTensors

    + + + + +

    When as_tuple is FALSE (default):

    +

    Returns a tensor containing the indices of all non-zero elements of +input. Each row in the result contains the indices of a non-zero +element in input. The result is sorted lexicographically, with +the last index changing the fastest (C-style).

    +

    If input has \(n\) dimensions, then the resulting indices tensor +out is of size \((z \times n)\), where \(z\) is the total number of +non-zero elements in the input tensor.

    +

    When as_tuple is TRUE:

    +

    Returns a tuple of 1-D tensors, one for each dimension in input, +each containing the indices (in that dimension) of all non-zero elements of +input .

    +

    If input has \(n\) dimensions, then the resulting tuple contains \(n\) +tensors of size \(z\), where \(z\) is the total number of +non-zero elements in the input tensor.

    +

    As a special case, when input has zero dimensions and a nonzero scalar +value, it is treated as a one-dimensional tensor with one element.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_nonzero(torch_tensor(c(1, 1, 1, 0, 1))) +} +
    #> torch_tensor +#> 0 +#> 1 +#> 2 +#> 4 +#> [ CPULongType{4,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_norm.html b/static/docs/reference/torch_norm.html new file mode 100644 index 0000000000000000000000000000000000000000..94a942e1f9bda3330fdc4514d64e257b2d9bb5fa --- /dev/null +++ b/static/docs/reference/torch_norm.html @@ -0,0 +1,274 @@ + + + + + + + + +Norm — torch_norm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Norm

    +
    + +
    torch_norm(self, p = 2L, dim, keepdim = FALSE, dtype)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor

    p

(int, float, inf, -inf, 'fro', 'nuc', optional) the order of norm. Default: 'fro'. The following norms can be calculated: NULL gives the Frobenius norm for matrices and the 2-norm for vectors; 'fro' gives the Frobenius norm (matrices only); 'nuc' gives the nuclear norm (matrices only); any other value is treated as a vector norm when dim is NULL, computed as sum(abs(x)^ord)^(1/ord).

    dim

(int, 2-tuple of ints, 2-list of ints, optional) If it is an int, the vector norm will be calculated; if it is a 2-tuple of ints, the matrix norm will be calculated. If the value is NULL, the matrix norm will be calculated when the input tensor has exactly two dimensions, and the vector norm when it has exactly one dimension. If the input tensor has more than two dimensions, the vector norm is applied to the last dimension.

    keepdim

(bool, optional) whether the output tensors have dim retained or not. Ignored if dim = NULL and out = NULL. Default: FALSE.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to 'dtype' while performing the operation. Default: NULL.

    + +

norm(input, p='fro', dim=NULL, keepdim=FALSE, out=NULL, dtype=NULL) -> Tensor

    + + + + +

    Returns the matrix norm or vector norm of a given tensor.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_arange(0, 9, dtype = torch_float()) +b = a$reshape(list(3, 3)) +torch_norm(a) +torch_norm(b) +torch_norm(a, Inf) +torch_norm(b, Inf) + +} +
    #> torch_tensor +#> 8 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_normal.html b/static/docs/reference/torch_normal.html new file mode 100644 index 0000000000000000000000000000000000000000..0387079e12ad31bbe4e3a73072c1776f6818d78e --- /dev/null +++ b/static/docs/reference/torch_normal.html @@ -0,0 +1,304 @@ + + + + + + + + +Normal — torch_normal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Normal

    +
    + +
    torch_normal(mean, std = 1L, size, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    mean

    (Tensor) the tensor of per-element means

    std

    (Tensor) the tensor of per-element standard deviations

    size

    (int...) a sequence of integers defining the shape of the output tensor.

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

    Note

    + +

When the shapes do not match, the shape of mean +is used as the shape for the returned output tensor.

    +

    normal(mean, std, *, generator=NULL, out=NULL) -> Tensor

    + + + + +

    Returns a tensor of random numbers drawn from separate normal distributions +whose mean and standard deviation are given.

    +

    The mean is a tensor with the mean of +each output element's normal distribution

    +

    The std is a tensor with the standard deviation of +each output element's normal distribution

    +

The shapes of mean and std don't need to match, but the +total number of elements in each tensor needs to be the same.

    +

    normal(mean=0.0, std, out=NULL) -> Tensor

    + + + + +

    Similar to the function above, but the means are shared among all drawn +elements.

    +

    normal(mean, std=1.0, out=NULL) -> Tensor

    + + + + +

    Similar to the function above, but the standard-deviations are shared among +all drawn elements.

    +

    normal(mean, std, size, *, out=NULL) -> Tensor

    + + + + +

    Similar to the function above, but the means and standard deviations are shared +among all drawn elements. The resulting tensor has size given by size.
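For instance, the size-based overload can be sketched as follows (a hedged example; the drawn values vary from run to run since sampling is random):

```r
if (torch_is_installed()) {
  # draw a 1 x 4 tensor from a normal distribution with mean 2 and sd 3
  torch_normal(2, 3, size = list(1, 4))
}
```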

    + +

    Examples

    +
    if (torch_is_installed()) { + +if (FALSE) { +torch_normal(mean=0, std=torch_arange(1, 0, -0.1)) + + +torch_normal(mean=0.5, std=torch_arange(1., 6.)) + + +torch_normal(mean=torch_arange(1., 6.)) + + +torch_normal(2, 3, size=list(1, 4)) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ones.html b/static/docs/reference/torch_ones.html new file mode 100644 index 0000000000000000000000000000000000000000..e7706013ec4304a744cb2abb33c67cd9811c2fb8 --- /dev/null +++ b/static/docs/reference/torch_ones.html @@ -0,0 +1,284 @@ + + + + + + + + +Ones — torch_ones • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ones

    +
    + +
    torch_ones(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

    (int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

    names

    optional names for the dimensions

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    ones(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a tensor filled with the scalar value 1, with the shape defined +by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_ones(c(2, 3)) +torch_ones(c(5)) +} +
    #> torch_tensor +#> 1 +#> 1 +#> 1 +#> 1 +#> 1 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ones_like.html b/static/docs/reference/torch_ones_like.html new file mode 100644 index 0000000000000000000000000000000000000000..56cad96be5bd809f0bd4c69bae8a1a373f4f467c --- /dev/null +++ b/static/docs/reference/torch_ones_like.html @@ -0,0 +1,289 @@ + + + + + + + + +Ones_like — torch_ones_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ones_like

    +
    + +
    torch_ones_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    ones_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

    Returns a tensor filled with the scalar value 1, with the same size as +input. torch_ones_like(input) is equivalent to +torch_ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

    +

    Warning

    + + + +

    As of 0.4, this function does not support an out keyword. As an alternative, +the old torch_ones_like(input, out=output) is equivalent to +torch_ones(input.size(), out=output).

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_empty(c(2, 3)) +torch_ones_like(input) +} +
    #> torch_tensor +#> 1 1 1 +#> 1 1 1 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_orgqr.html b/static/docs/reference/torch_orgqr.html new file mode 100644 index 0000000000000000000000000000000000000000..ab879c438cd24475978bb6dc73e4e214fd4443ef --- /dev/null +++ b/static/docs/reference/torch_orgqr.html @@ -0,0 +1,250 @@ + + + + + + + + +Orgqr — torch_orgqr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Orgqr

    +
    + +
    torch_orgqr(self, input2)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the a from torch_geqrf.

    input2

    (Tensor) the tau from torch_geqrf.

    + +

    orgqr(input, input2) -> Tensor

    + + + + +

    Computes the orthogonal matrix Q of a QR factorization, from the (input, input2) +tuple returned by torch_geqrf.

    +

This directly calls the underlying LAPACK function ?orgqr. +See the LAPACK documentation for orgqr for further details.
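A minimal sketch of the round trip through torch_geqrf (assuming, as in the torch_qr examples, that the returned pair can be accessed by position):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(4, 3))
  qr <- torch_geqrf(a)
  # reconstruct the orthogonal factor Q from the (a, tau) pair
  q <- torch_orgqr(qr[[1]], qr[[2]])
  # Q has orthonormal columns, so t(Q) Q is close to the identity
  torch_mm(q$t(), q)
}
```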

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_ormqr.html b/static/docs/reference/torch_ormqr.html new file mode 100644 index 0000000000000000000000000000000000000000..027e7c29997c310ae77f6febf61fda68f07ce0c6 --- /dev/null +++ b/static/docs/reference/torch_ormqr.html @@ -0,0 +1,262 @@ + + + + + + + + +Ormqr — torch_ormqr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Ormqr

    +
    + +
    torch_ormqr(self, input2, input3, left = TRUE, transpose = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the a from torch_geqrf.

    input2

    (Tensor) the tau from torch_geqrf.

    input3

    (Tensor) the matrix to be multiplied.

    left

    see LAPACK documentation

    transpose

    see LAPACK documentation

    + +

    ormqr(input, input2, input3, left=TRUE, transpose=False) -> Tensor

    + + + + +

    Multiplies mat (given by input3) by the orthogonal Q matrix of the QR factorization +formed by torch_geqrf() that is represented by (a, tau) (given by (input, input2)).

    +

    This directly calls the underlying LAPACK function ?ormqr. +See LAPACK documentation for ormqr for further details.
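A minimal sketch (assuming torch_geqrf returns the (a, tau) pair by position), multiplying a matrix by Q without forming Q explicitly:

```r
if (torch_is_installed()) {
  a <- torch_randn(c(3, 3))
  qr <- torch_geqrf(a)
  x <- torch_randn(c(3, 2))
  # computes Q %*% x using the implicit representation of Q in (a, tau)
  torch_ormqr(qr[[1]], qr[[2]], x)
}
```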

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_pdist.html b/static/docs/reference/torch_pdist.html new file mode 100644 index 0000000000000000000000000000000000000000..32cabdc54cbd0c4bfc31cb14a3f92b15abafad58 --- /dev/null +++ b/static/docs/reference/torch_pdist.html @@ -0,0 +1,256 @@ + + + + + + + + +Pdist — torch_pdist • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pdist

    +
    + +
    torch_pdist(self, p = 2L)
    + +

    Arguments

    + + + + + + + + + + +
    self

(Tensor) input tensor of shape \(N \times M\).

    p

(float) p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).

    + +

    pdist(input, p=2) -> Tensor

    + + + + +

Computes the p-norm distance between every pair of row vectors in the input. +This is identical to the upper triangular portion, excluding the diagonal, of the full +pairwise distance matrix (as computed by torch_cdist(input, input, p = p)). This function will be faster +if the rows are contiguous.

    +

    If input has shape \(N \times M\) then the output will have shape +\(\frac{1}{2} N (N - 1)\).

    +

    This function is equivalent to scipy.spatial.distance.pdist(input, 'minkowski', p=p) if \(p \in (0, \infty)\). When \(p = 0\) it is +equivalent to scipy.spatial.distance.pdist(input, 'hamming') * M. +When \(p = \infty\), the closest scipy function is +scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()).
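For example (a sketch; the distances vary since the input is random), an input with \(N = 4\) rows yields \(4 \times 3 / 2 = 6\) pairwise distances:

```r
if (torch_is_installed()) {
  x <- torch_randn(c(4, 5))
  # flattened upper triangle (excluding the diagonal) of pairwise
  # Euclidean distances: a 1-D tensor with 6 elements
  torch_pdist(x)
}
```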

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_pinverse.html b/static/docs/reference/torch_pinverse.html new file mode 100644 index 0000000000000000000000000000000000000000..9e1f439af0765bd734ebc6914b4efc1cd6831e06 --- /dev/null +++ b/static/docs/reference/torch_pinverse.html @@ -0,0 +1,283 @@ + + + + + + + + +Pinverse — torch_pinverse • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pinverse

    +
    + +
    torch_pinverse(self, rcond = 0)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) The input tensor of size \((*, m, n)\) where \(*\) is zero or more batch dimensions

    rcond

    (float) A floating point value to determine the cutoff for small singular values. Default: 1e-15

    + +

    Note

    + + +
    This method is implemented using the Singular Value Decomposition.
    +
    + +
The pseudo-inverse is not necessarily a continuous function in the elements of the matrix.
+Therefore, derivatives do not always exist, and exist only for a constant rank.
+However, this method is backprop-able because it is implemented using SVD results, and
+could be unstable. Double-backward will also be unstable due to the internal use of SVD.
+See torch_svd for more details.
    +
    + +

    pinverse(input, rcond=1e-15) -> Tensor

    + + + + +

Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. +Please look at the Moore-Penrose inverse for more details

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randn(c(3, 5)) +input +torch_pinverse(input) +# Batched pinverse example +a = torch_randn(c(2,6,3)) +b = torch_pinverse(a) +torch_matmul(b, a) +} +
    #> torch_tensor +#> (1,.,.) = +#> 1.0000e+00 -5.9605e-08 1.0431e-07 +#> 2.9802e-08 1.0000e+00 2.2352e-08 +#> 2.9802e-08 8.9407e-08 1.0000e+00 +#> +#> (2,.,.) = +#> 1.0000e+00 -5.9605e-08 -1.7136e-07 +#> -7.4506e-09 1.0000e+00 -1.2666e-07 +#> -8.1956e-08 -2.1607e-07 1.0000e+00 +#> [ CPUFloatType{2,3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_pixel_shuffle.html b/static/docs/reference/torch_pixel_shuffle.html new file mode 100644 index 0000000000000000000000000000000000000000..e75f74e62208c4b4eb3ac26c61d69924854539b8 --- /dev/null +++ b/static/docs/reference/torch_pixel_shuffle.html @@ -0,0 +1,255 @@ + + + + + + + + +Pixel_shuffle — torch_pixel_shuffle • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pixel_shuffle

    +
    + +
    torch_pixel_shuffle(self, upscale_factor)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor

    upscale_factor

    (int) factor to increase spatial resolution by

    + +

pixel_shuffle(input, upscale_factor) -> Tensor

    + +


    +

    Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a +tensor of shape \((*, C, H \times r, W \times r)\).

    +

See nn_pixel_shuffle() for details.

    + +

    Examples

    +
    if (torch_is_installed()) { + +input = torch_randn(c(1, 9, 4, 4)) +output = nnf_pixel_shuffle(input, 3) +print(output$size()) +} +
    #> [1] 1 1 12 12
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_poisson.html b/static/docs/reference/torch_poisson.html new file mode 100644 index 0000000000000000000000000000000000000000..f0d8e6b1313a33599c318d11cf654fb79585e885 --- /dev/null +++ b/static/docs/reference/torch_poisson.html @@ -0,0 +1,264 @@ + + + + + + + + +Poisson — torch_poisson • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Poisson

    +
    + +
    torch_poisson(self, generator = NULL)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor containing the rates of the Poisson distribution

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    + +

poisson(input, *, generator=NULL) -> Tensor

    + + + + +

    Returns a tensor of the same size as input with each element +sampled from a Poisson distribution with rate parameter given by the corresponding +element in input i.e.,

    +

    $$ + \mbox{out}_i \sim \mbox{Poisson}(\mbox{input}_i) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +rates = torch_rand(c(4, 4)) * 5 # rate parameter between 0 and 5 +torch_poisson(rates) +} +
    #> torch_tensor +#> 1 0 5 2 +#> 8 2 0 4 +#> 3 0 3 0 +#> 3 6 6 1 +#> [ CPUFloatType{4,4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_polygamma.html b/static/docs/reference/torch_polygamma.html new file mode 100644 index 0000000000000000000000000000000000000000..1eedf48a9eeb6cb2e3b9598a2a2ebb8ad7246db2 --- /dev/null +++ b/static/docs/reference/torch_polygamma.html @@ -0,0 +1,265 @@ + + + + + + + + +Polygamma — torch_polygamma • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Polygamma

    +
    + +
    torch_polygamma(n, self)
    + +

    Arguments

    + + + + + + + + + + +
    n

    (int) the order of the polygamma function

    self

    (Tensor) the input tensor.

    + +

    Note

    + + +
This function is not implemented for \(n \geq 2\).
    +
    + +

    polygamma(n, input, out=NULL) -> Tensor

    + + + + +

    Computes the \(n^{th}\) derivative of the digamma function on input. +\(n \geq 0\) is called the order of the polygamma function.

    +

    $$ + \psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +a = torch_tensor(c(1, 0.5)) +torch_polygamma(1, a) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_pow.html b/static/docs/reference/torch_pow.html new file mode 100644 index 0000000000000000000000000000000000000000..d13762eebd11b95f2418ad1db1fddd6409df9dc6 --- /dev/null +++ b/static/docs/reference/torch_pow.html @@ -0,0 +1,294 @@ + + + + + + + + +Pow — torch_pow • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Pow

    +
    + +
    torch_pow(self, exponent)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (float) the scalar base value for the power operation

    exponent

    (float or tensor) the exponent value

    + +

    pow(input, exponent, out=NULL) -> Tensor

    + + + + +

    Takes the power of each element in input with exponent and +returns a tensor with the result.

    +

    exponent can be either a single float number or a Tensor +with the same number of elements as input.

    +

    When exponent is a scalar value, the operation applied is:

    +

    $$ + \mbox{out}_i = x_i^{\mbox{exponent}} +$$ +When exponent is a tensor, the operation applied is:

    +

    $$ + \mbox{out}_i = x_i^{\mbox{exponent}_i} +$$ +When exponent is a tensor, the shapes of input +and exponent must be broadcastable .

    +

    pow(self, exponent, out=NULL) -> Tensor

    + + + + +

self is a scalar float value, and exponent is a tensor. +The returned tensor out is of the same shape as exponent.

    +

    The operation applied is:

    +

    $$ + \mbox{out}_i = \mbox{self} ^ {\mbox{exponent}_i} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_pow(a, 2) +exp = torch_arange(1., 5.) +a = torch_arange(1., 5.) +a +exp +torch_pow(a, exp) + + +exp = torch_arange(1., 5.) +base = 2 +torch_pow(base, exp) +} +
    #> torch_tensor +#> 2 +#> 4 +#> 8 +#> 16 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_prod.html b/static/docs/reference/torch_prod.html new file mode 100644 index 0000000000000000000000000000000000000000..e14b48c631732d203f95a3e79d760d4db45191f1 --- /dev/null +++ b/static/docs/reference/torch_prod.html @@ -0,0 +1,283 @@ + + + + + + + + +Prod — torch_prod • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Prod

    +
    + +
    torch_prod(self, dim, keepdim = FALSE, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the dimension to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    prod(input, dtype=NULL) -> Tensor

    + + + + +

    Returns the product of all elements in the input tensor.

    +

    prod(input, dim, keepdim=False, dtype=NULL) -> Tensor

    + + + + +

    Returns the product of each row of the input tensor in the given +dimension dim.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in +the output tensor having 1 fewer dimension than input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_prod(a) + + +a = torch_randn(c(4, 2)) +a +torch_prod(a, 1) +} +
    #> torch_tensor +#> 0.001 * +#> -1.8251 +#> -120.0931 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_promote_types.html b/static/docs/reference/torch_promote_types.html new file mode 100644 index 0000000000000000000000000000000000000000..5de63b432972ee366e63fa8a73a7deeac45928fe --- /dev/null +++ b/static/docs/reference/torch_promote_types.html @@ -0,0 +1,257 @@ + + + + + + + + +Promote_types — torch_promote_types • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Promote_types

    +
    + +
    torch_promote_types(type1, type2)
    + +

    Arguments

    + + + + + + + + + + +
    type1

    (torch.dtype)

    type2

    (torch.dtype)

    + +

    promote_types(type1, type2) -> dtype

    + + + + +

    Returns the torch_dtype with the smallest size and scalar kind that is +not smaller nor of lower kind than either type1 or type2. See type promotion +documentation for more information on the type +promotion logic.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_promote_types(torch_int32(), torch_float32()) +torch_promote_types(torch_uint8(), torch_long()) +} +
    #> torch_Long
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_qr.html b/static/docs/reference/torch_qr.html new file mode 100644 index 0000000000000000000000000000000000000000..c7849a5694322bb0a77f00936a01a13d61765310 --- /dev/null +++ b/static/docs/reference/torch_qr.html @@ -0,0 +1,274 @@ + + + + + + + + +Qr — torch_qr • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Qr

    +
    + +
    torch_qr(self, some = TRUE)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of matrices of dimension \(m \times n\).

    some

    (bool, optional) Set to TRUE for reduced QR decomposition and FALSE for complete QR decomposition.

    + +

    Note

    + +

Precision may be lost if the magnitudes of the elements of input are large.

    +

    While it should always give you a valid decomposition, it may not +give you the same one across platforms - it will depend on your +LAPACK implementation.

    +

    qr(input, some=TRUE, out=NULL) -> (Tensor, Tensor)

    + + + + +

    Computes the QR decomposition of a matrix or a batch of matrices input, +and returns a namedtuple (Q, R) of tensors such that \(\mbox{input} = Q R\) +with \(Q\) being an orthogonal matrix or batch of orthogonal matrices and +\(R\) being an upper triangular matrix or batch of upper triangular matrices.

    +

    If some is TRUE, then this function returns the thin (reduced) QR factorization. +Otherwise, if some is FALSE, this function returns the complete QR factorization.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_tensor(matrix(c(12., -51, 4, 6, 167, -68, -4, 24, -41), ncol = 3, byrow = TRUE)) +out = torch_qr(a) +q = out[[1]] +r = out[[2]] +torch_mm(q, r)$round() +torch_mm(q$t(), q)$round() +} +
    #> torch_tensor +#> 1 0 0 +#> 0 1 0 +#> 0 0 1 +#> [ CPUFloatType{3,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_qscheme.html b/static/docs/reference/torch_qscheme.html new file mode 100644 index 0000000000000000000000000000000000000000..557b34642751fff9225ac6ac207d70fa32411b8d --- /dev/null +++ b/static/docs/reference/torch_qscheme.html @@ -0,0 +1,235 @@ + + + + + + + + +Creates the corresponding Scheme object — torch_qscheme • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Creates the corresponding quantization scheme object

    +
    + +
    torch_per_channel_affine()
    +
    +torch_per_tensor_affine()
    +
    +torch_per_channel_symmetric()
    +
    +torch_per_tensor_symmetric()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_quantize_per_channel.html b/static/docs/reference/torch_quantize_per_channel.html new file mode 100644 index 0000000000000000000000000000000000000000..590d80daec58fe18bb62e3fa0ba70fc2a541d1a9 --- /dev/null +++ b/static/docs/reference/torch_quantize_per_channel.html @@ -0,0 +1,271 @@ + + + + + + + + +Quantize_per_channel — torch_quantize_per_channel • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Quantize_per_channel

    +
    + +
    torch_quantize_per_channel(self, scales, zero_points, axis, dtype)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) float tensor to quantize

    scales

    (Tensor) float 1D tensor of scales to use, size should match input.size(axis)

    zero_points

(Tensor) integer 1D tensor of offsets to use, size should match input.size(axis)

    axis

(int) dimension on which to apply per-channel quantization

    dtype

    (torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch.qint8, torch.qint32

    + +

    quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor

    + + + + +

Converts a float tensor to a per-channel quantized tensor with the given scales and zero points.

    + +

    Examples

    +
    if (torch_is_installed()) { +x = torch_tensor(matrix(c(-1.0, 0.0, 1.0, 2.0), ncol = 2, byrow = TRUE)) +torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), + torch_tensor(c(10L, 0L)), 0, torch_quint8()) +torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), + torch_tensor(c(10L, 0L)), 0, torch_quint8())$int_repr() +} +
    #> torch_tensor +#> 0 10 +#> 100 200 +#> [ CPUByteType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_quantize_per_tensor.html b/static/docs/reference/torch_quantize_per_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..499139caa61b130b20da8328eda3958ad128a160 --- /dev/null +++ b/static/docs/reference/torch_quantize_per_tensor.html @@ -0,0 +1,266 @@ + + + + + + + + +Quantize_per_tensor — torch_quantize_per_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Quantize_per_tensor

    +
    + +
    torch_quantize_per_tensor(self, scale, zero_point, dtype)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) float tensor to quantize

    scale

    (float) scale to apply in quantization formula

    zero_point

    (int) offset in integer value that maps to float zero

    dtype

(torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch_qint8, torch_qint32

    + +

    quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor

    + + + + +

Converts a float tensor to a quantized tensor with the given scale and zero point.

    + +

    Examples

    +
    if (torch_is_installed()) { +torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8()) +torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8())$int_repr() +} +
    #> torch_tensor +#> 0 +#> 10 +#> 20 +#> 30 +#> [ CPUByteType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_rand.html b/static/docs/reference/torch_rand.html new file mode 100644 index 0000000000000000000000000000000000000000..c6fef07b66971d56bbfe7d8bdf1db33688475407 --- /dev/null +++ b/static/docs/reference/torch_rand.html @@ -0,0 +1,282 @@ + + + + + + + + +Rand — torch_rand • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rand

    +
    + +
    torch_rand(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list.

    names

    optional dimension names

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    rand(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

Returns a tensor filled with random numbers from a uniform distribution on the interval \([0, 1)\).

    +

    The shape of the tensor is defined by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_rand(4) +torch_rand(c(2, 3)) +} +
    #> torch_tensor +#> 0.4034 0.1802 0.2925 +#> 0.6916 0.6648 0.9430 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_rand_like.html b/static/docs/reference/torch_rand_like.html new file mode 100644 index 0000000000000000000000000000000000000000..22de44417ea6a5dced8c229c44fca41729bf72ad --- /dev/null +++ b/static/docs/reference/torch_rand_like.html @@ -0,0 +1,273 @@ + + + + + + + + +Rand_like — torch_rand_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Rand_like

    +
    + +
    torch_rand_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    rand_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

    Returns a tensor with the same size as input that is filled with +random numbers from a uniform distribution on the interval \([0, 1)\). +torch_rand_like(input) is equivalent to +torch_rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
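As a minimal sketch (the input shape below is only illustrative), `torch_rand_like` draws uniform values while reusing the metadata of its input:

```r
if (torch_is_installed()) {
  x <- torch_empty(c(2, 3))  # any tensor; only its shape/dtype/device are used
  y <- torch_rand_like(x)    # uniform values in [0, 1), same shape as x
  y$shape                    # same shape as x
}
```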

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_randint.html b/static/docs/reference/torch_randint.html new file mode 100644 index 0000000000000000000000000000000000000000..15ab1288a6249a616cd9e76dd12970208766ed23 --- /dev/null +++ b/static/docs/reference/torch_randint.html @@ -0,0 +1,302 @@ + + + + + + + + +Randint — torch_randint • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randint

    +
    + +
    torch_randint(
    +  low,
    +  high,
    +  size,
    +  generator = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    low

    (int, optional) Lowest integer to be drawn from the distribution. Default: 0.

    high

    (int) One above the highest integer to be drawn from the distribution.

    size

(list) a list defining the shape of the output tensor.

    generator

    (torch.Generator, optional) a pseudorandom number generator for sampling

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    memory format for the resulting tensor.

    + +

    randint(low=0, high, size, *, generator=NULL, out=NULL, \

    + + + + +

    dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    +

    Returns a tensor filled with random integers generated uniformly +between low (inclusive) and high (exclusive).

    +

    The shape of the tensor is defined by the variable argument size.

    +

Note: with the global dtype default (torch_float32), this function returns a tensor with dtype torch_int64.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_randint(3, 5, list(3)) +torch_randint(0, 10, size = list(2, 2)) +torch_randint(3, 10, list(2, 2)) +} +
    #> torch_tensor +#> 7 8 +#> 8 9 +#> [ CPUFloatType{2,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_randint_like.html b/static/docs/reference/torch_randint_like.html new file mode 100644 index 0000000000000000000000000000000000000000..2459fc32ab684e5ed43c3a974ae2ac74ff056da4 --- /dev/null +++ b/static/docs/reference/torch_randint_like.html @@ -0,0 +1,281 @@ + + + + + + + + +Randint_like — torch_randint_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randint_like

    +
    + +
    torch_randint_like(
    +  input,
    +  low,
    +  high,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    low

    (int, optional) Lowest integer to be drawn from the distribution. Default: 0.

    high

    (int) One above the highest integer to be drawn from the distribution.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    randint_like(input, low=0, high, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

    + + + + +

    memory_format=torch.preserve_format) -> Tensor

    +

    Returns a tensor with the same shape as Tensor input filled with +random integers generated uniformly between low (inclusive) and +high (exclusive).

    +

Note: with the global dtype default (torch_float32), this function returns a tensor with dtype torch_int64.
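A short sketch (the input shape and bounds below are illustrative): the integers are drawn into a tensor shaped like the input.

```r
if (torch_is_installed()) {
  x <- torch_zeros(c(2, 2))
  torch_randint_like(x, low = 0, high = 10)  # 2x2 tensor with entries in [0, 10)
}
```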

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_randn.html b/static/docs/reference/torch_randn.html new file mode 100644 index 0000000000000000000000000000000000000000..38ec31367f5a19f4c67312c2852f29fc1d0a1840 --- /dev/null +++ b/static/docs/reference/torch_randn.html @@ -0,0 +1,286 @@ + + + + + + + + +Randn — torch_randn • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randn

    +
    + +
    torch_randn(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list.

    names

    optional names for the dimensions

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    randn(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a tensor filled with random numbers from a normal distribution +with mean 0 and variance 1 (also called the standard normal +distribution).

    +

    $$ + \mbox{out}_{i} \sim \mathcal{N}(0, 1) +$$ +The shape of the tensor is defined by the variable argument size.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_randn(c(4)) +torch_randn(c(2, 3)) +} +
    #> torch_tensor +#> 1.7302 1.3721 -0.0691 +#> 0.4933 -0.7643 -0.5334 +#> [ CPUFloatType{2,3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_randn_like.html b/static/docs/reference/torch_randn_like.html new file mode 100644 index 0000000000000000000000000000000000000000..452aab9d39021baaa753f6d8efb0c2cd5a6ddbab --- /dev/null +++ b/static/docs/reference/torch_randn_like.html @@ -0,0 +1,273 @@ + + + + + + + + +Randn_like — torch_randn_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randn_like

    +
    + +
    torch_randn_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

    (torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

    (torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

    (torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

    randn_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

    + + + + +

    Returns a tensor with the same size as input that is filled with +random numbers from a normal distribution with mean 0 and variance 1. +torch_randn_like(input) is equivalent to +torch_randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
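As a minimal sketch (the input shape below is illustrative), `torch_randn_like` mirrors the input's metadata while drawing fresh standard-normal values:

```r
if (torch_is_installed()) {
  x <- torch_empty(c(2, 3))
  torch_randn_like(x)  # standard-normal draws, same shape/dtype/device as x
}
```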

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_randperm.html b/static/docs/reference/torch_randperm.html new file mode 100644 index 0000000000000000000000000000000000000000..452ba130f5bece594e3e25187a97719fee6c6018 --- /dev/null +++ b/static/docs/reference/torch_randperm.html @@ -0,0 +1,276 @@ + + + + + + + + +Randperm — torch_randperm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Randperm

    +
    + +
    torch_randperm(
    +  n,
    +  dtype = torch_int64(),
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    n

    (int) the upper bound (exclusive)

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: torch_int64.

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    randperm(n, out=NULL, dtype=torch.int64, layout=torch.strided, device=NULL, requires_grad=False) -> LongTensor

    + + + + +

    Returns a random permutation of integers from 0 to n - 1.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_randperm(4) +} +
    #> torch_tensor +#> 3 +#> 2 +#> 0 +#> 1 +#> [ CPULongType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_range.html b/static/docs/reference/torch_range.html new file mode 100644 index 0000000000000000000000000000000000000000..712e5c94149ab72deaa8e3179aad097145a02d50 --- /dev/null +++ b/static/docs/reference/torch_range.html @@ -0,0 +1,299 @@ + + + + + + + + +Range — torch_range • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Range

    +
    + +
    torch_range(
    +  start,
    +  end,
    +  step = 1,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    start

    (float) the starting value for the set of points. Default: 0.

    end

    (float) the ending value for the set of points

    step

    (float) the gap between each pair of adjacent points. Default: 1.

    dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, the data type is inferred from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see torch_get_default_dtype. Otherwise, the dtype is inferred to be torch_int64.

    layout

    (torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

    range(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

    + + + + +

    Returns a 1-D tensor of size \(\left\lfloor \frac{\mbox{end} - \mbox{start}}{\mbox{step}} \right\rfloor + 1\) +with values from start to end with step step. Step is +the gap between two values in the tensor.

    +

    $$ + \mbox{out}_{i+1} = \mbox{out}_i + \mbox{step}. +$$

    +

    Warning

    + + + +

    This function is deprecated in favor of torch_arange.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_range(1, 4) +torch_range(1, 4, 0.5) +} +
    #> Warning: This function is deprecated in favor of torch_arange.
    #> Warning: This function is deprecated in favor of torch_arange.
    #> torch_tensor +#> 1.0000 +#> 1.5000 +#> 2.0000 +#> 2.5000 +#> 3.0000 +#> 3.5000 +#> [ CPUFloatType{6} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_real.html b/static/docs/reference/torch_real.html new file mode 100644 index 0000000000000000000000000000000000000000..77c9ca9000cf64173c733b3cc22594e2e8d1b267 --- /dev/null +++ b/static/docs/reference/torch_real.html @@ -0,0 +1,260 @@ + + + + + + + + +Real — torch_real • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Real

    +
    + +
    torch_real(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    real(input) -> Tensor

    + + + + +

    Returns the real part of the input tensor. If +input is a real (non-complex) tensor, this function just +returns it.

    +

    Warning

    + + + +

    Not yet implemented for complex tensors.

    +

    $$ + \mbox{out}_{i} = real(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +torch_real(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i))) +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_reciprocal.html b/static/docs/reference/torch_reciprocal.html new file mode 100644 index 0000000000000000000000000000000000000000..ea420f9fb7cd002ee6b75d4ff7e2996981c44702 --- /dev/null +++ b/static/docs/reference/torch_reciprocal.html @@ -0,0 +1,259 @@ + + + + + + + + +Reciprocal — torch_reciprocal • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Reciprocal

    +
    + +
    torch_reciprocal(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    reciprocal(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the reciprocal of the elements of input

    +

    $$ + \mbox{out}_{i} = \frac{1}{\mbox{input}_{i}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_reciprocal(a) +} +
    #> torch_tensor +#> 5.6343 +#> -0.4645 +#> 4.8170 +#> 2.1907 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_reduction.html b/static/docs/reference/torch_reduction.html new file mode 100644 index 0000000000000000000000000000000000000000..4d2de320fd508ce8c0f3b3b1701c29ffbcda19bb --- /dev/null +++ b/static/docs/reference/torch_reduction.html @@ -0,0 +1,233 @@ + + + + + + + + +Creates the reduction objet — torch_reduction • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Creates the reduction object

    +
    + +
    torch_reduction_sum()
    +
    +torch_reduction_mean()
    +
    +torch_reduction_none()
    + + + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_relu.html b/static/docs/reference/torch_relu.html new file mode 100644 index 0000000000000000000000000000000000000000..9eedfcae4ae325029caa6970dd055fb1141280b9 --- /dev/null +++ b/static/docs/reference/torch_relu.html @@ -0,0 +1,243 @@ + + + + + + + + +Relu — torch_relu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Relu

    +
    + +
    torch_relu(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    relu(input) -> Tensor

    + + + + +

Computes the ReLU transformation, \(\max(0, x)\), element-wise.
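A short sketch: ReLU clamps negative entries to zero and leaves non-negative entries unchanged.

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, 0.5, 2))
  torch_relu(x)  # 0.0, 0.5, 2.0: negative values are clamped to zero
}
```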

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_relu_.html b/static/docs/reference/torch_relu_.html new file mode 100644 index 0000000000000000000000000000000000000000..572dddd535330ac98a399d50b4d3e5863290dae4 --- /dev/null +++ b/static/docs/reference/torch_relu_.html @@ -0,0 +1,243 @@ + + + + + + + + +Relu_ — torch_relu_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Relu_

    +
    + +
    torch_relu_(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    relu_(input) -> Tensor

    + + + + +

    In-place version of torch_relu().
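A short sketch contrasting the in-place variant with `torch_relu()`: the input tensor itself is overwritten rather than a new tensor being allocated.

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, 0.5, 2))
  torch_relu_(x)  # returns x itself, modified in place
  x               # now holds 0.0, 0.5, 2.0
}
```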

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_remainder.html b/static/docs/reference/torch_remainder.html new file mode 100644 index 0000000000000000000000000000000000000000..351ef4af7ec1089daa4a689789d530acdc8f70ce --- /dev/null +++ b/static/docs/reference/torch_remainder.html @@ -0,0 +1,264 @@ + + + + + + + + +Remainder — torch_remainder • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Remainder

    +
    + +
    torch_remainder(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the dividend

    other

    (Tensor or float) the divisor that may be either a number or a Tensor of the same shape as the dividend

    + +

    remainder(input, other, out=NULL) -> Tensor

    + + + + +

    Computes the element-wise remainder of division.

    +

The dividend and divisor may be both integer and floating point numbers. The remainder has the same sign as the divisor.

    +

    When other is a tensor, the shapes of input and +other must be broadcastable .

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_remainder(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2) +torch_remainder(torch_tensor(c(1., 2, 3, 4, 5)), 1.5) +} +
    #> torch_tensor +#> 1.0000 +#> 0.5000 +#> 0.0000 +#> 1.0000 +#> 0.5000 +#> [ CPUFloatType{5} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_renorm.html b/static/docs/reference/torch_renorm.html new file mode 100644 index 0000000000000000000000000000000000000000..3f28bd7b59e7eb0f2ba64e913002d41498b70688 --- /dev/null +++ b/static/docs/reference/torch_renorm.html @@ -0,0 +1,273 @@ + + + + + + + + +Renorm — torch_renorm • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Renorm

    +
    + +
    torch_renorm(self, p, dim, maxnorm)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    p

    (float) the power for the norm computation

    dim

    (int) the dimension to slice over to get the sub-tensors

    maxnorm

    (float) the maximum norm to keep each sub-tensor under

    + +

    Note

    + +

    If the norm of a row is lower than maxnorm, the row is unchanged

    +

    renorm(input, p, dim, maxnorm, out=NULL) -> Tensor

    + + + + +

    Returns a tensor where each sub-tensor of input along dimension +dim is normalized such that the p-norm of the sub-tensor is lower +than the value maxnorm

    + +

    Examples

    +
if (torch_is_installed()) {
x = torch_ones(c(3, 3))
x[2,]$fill_(2)
x[3,]$fill_(3)
x
torch_renorm(x, 1, 1, 5)
}
#> torch_tensor
#>  1.0000  1.0000  1.0000
#>  1.6667  1.6667  1.6667
#>  1.6667  1.6667  1.6667
#> [ CPUFloatType{3,3} ]
diff --git a/static/docs/reference/torch_repeat_interleave.html b/static/docs/reference/torch_repeat_interleave.html
Repeat_interleave — torch_repeat_interleave • torch

    Repeat_interleave

    +
    + +
    torch_repeat_interleave(self, repeats, dim = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    repeats

    (Tensor or int) The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

    dim

    (int, optional) The dimension along which to repeat values. By default, use the flattened input array, and return a flat output array.

    + +

repeat_interleave(input, repeats, dim=NULL) -> Tensor

Repeat elements of a tensor.

Warning

This is different from `torch_Tensor.repeat` but similar to `numpy.repeat`.

repeat_interleave(repeats) -> Tensor

If repeats is tensor([n1, n2, n3, ...]), then the output will be tensor([0, 0, ..., 1, 1, ..., 2, 2, ..., ...]), where 0 appears n1 times, 1 appears n2 times, 2 appears n3 times, etc.

    + +

    Examples

    +
if (torch_is_installed()) {
if (FALSE) {
x = torch_tensor(c(1, 2, 3))
x$repeat_interleave(2)
y = torch_tensor(matrix(c(1, 2, 3, 4), ncol = 2, byrow=TRUE))
torch_repeat_interleave(y, 2)
torch_repeat_interleave(y, 3, dim=1)
torch_repeat_interleave(y, torch_tensor(c(1, 2)), dim=1)
}
}
diff --git a/static/docs/reference/torch_reshape.html b/static/docs/reference/torch_reshape.html
Reshape — torch_reshape • torch

    Reshape

    +
    + +
    torch_reshape(self, shape)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to be reshaped

    shape

    (tuple of ints) the new shape

    + +

reshape(input, shape) -> Tensor

Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

See torch_Tensor.view on when it is possible to return a view.

A single dimension may be -1, in which case it's inferred from the remaining dimensions and the number of elements in input.

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_arange(0, 4)
torch_reshape(a, list(2, 2))
b = torch_tensor(matrix(c(0, 1, 2, 3), ncol = 2, byrow=TRUE))
torch_reshape(b, list(-1))
}
#> torch_tensor
#>  0
#>  1
#>  2
#>  3
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_result_type.html b/static/docs/reference/torch_result_type.html
Result_type — torch_result_type • torch

    Result_type

    +
    + +
    torch_result_type(tensor1, tensor2)
    + +

    Arguments

    + + + + + + + + + + +
    tensor1

    (Tensor or Number) an input tensor or number

    tensor2

    (Tensor or Number) an input tensor or number

    + +

result_type(tensor1, tensor2) -> dtype

Returns the torch_dtype that would result from performing an arithmetic operation on the provided input tensors. See the type promotion documentation for more information on the type promotion logic.

    + +

    Examples

    +
if (torch_is_installed()) {

torch_result_type(tensor1 = torch_tensor(c(1, 2), dtype=torch_int()), tensor2 = 1)
}
#> torch_Float
diff --git a/static/docs/reference/torch_rfft.html b/static/docs/reference/torch_rfft.html
Rfft — torch_rfft • torch

    Rfft

    +
    + +
    torch_rfft(self, signal_ndim, normalized = FALSE, onesided = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of at least signal_ndim dimensions

    signal_ndim

    (int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

    normalized

    (bool, optional) controls whether to return normalized results. Default: FALSE

    onesided

    (bool, optional) controls whether to return half of results to avoid redundancy. Default: TRUE

    + +

    Note

    + + +
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cufft-plan-cache for more details on how to monitor and control the cache.

rfft(input, signal_ndim, normalized=FALSE, onesided=TRUE) -> Tensor

Real-to-complex Discrete Fourier Transform

This method computes the real-to-complex discrete Fourier transform. It is mathematically equivalent to torch_fft, with differences only in the formats of the input and output.

This method supports 1D, 2D and 3D real-to-complex transforms, indicated by signal_ndim. input must be a tensor with at least signal_ndim dimensions, with an optionally arbitrary number of leading batch dimensions. If normalized is set to TRUE, this normalizes the result by dividing it by \(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is unitary, where \(N_i\) is the size of signal dimension \(i\).

The real-to-complex Fourier transform results follow conjugate symmetry:

$$
X[\omega_1, \dots, \omega_d] = X^*[N_1 - \omega_1, \dots, N_d - \omega_d],
$$

where the index arithmetic is computed modulo the size of the corresponding dimension, \(\ ^*\) is the conjugate operator, and \(d\) = signal_ndim. The onesided flag controls whether to avoid redundancy in the output results. If set to TRUE (default), the output will not be the full complex result of shape \((*, 2)\), where \(*\) is the shape of input; instead, the last signal dimension is halved to size \(\lfloor \frac{N_d}{2} \rfloor + 1\).

The inverse of this function is torch_irfft.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch_backends.mkl.is_available to check if MKL is installed.

    Examples

    +
if (torch_is_installed()) {

x = torch_randn(c(5, 5))
torch_rfft(x, 2)
torch_rfft(x, 2, onesided=FALSE)
}
#> torch_tensor
#> (1,.,.) =
#>  -0.4625  0.0000
#>  -5.1624  3.2586
#>  -6.0212  1.9752
#>  -6.0212 -1.9752
#>  -5.1624 -3.2586
#>
#> (2,.,.) =
#>   1.6002 -2.9581
#>   2.1200  0.1343
#>   3.0293 -7.1277
#>   3.5287  3.5903
#>  -0.8434 -0.1112
#>
#> (3,.,.) =
#>  -3.0107 -2.3239
#>   0.9071  0.0358
#>  -0.0607 -0.0974
#>  -2.2262  2.3303
#>   1.6252  1.9184
#>
#> (4,.,.) =
#>  -3.0107  2.3239
#>   1.6252 -1.9184
#>  -2.2262 -2.3303
#>  -0.0607  0.0974
#>   0.9071 -0.0358
#>
#> (5,.,.) =
#>   1.6002  2.9581
#>  -0.8434  0.1112
#>   3.5287 -3.5903
#>   3.0293  7.1277
#>   2.1200 -0.1343
#> [ CPUFloatType{5,5,2} ]
diff --git a/static/docs/reference/torch_roll.html b/static/docs/reference/torch_roll.html
Roll — torch_roll • torch

    Roll

    +
    + +
    torch_roll(self, shifts, dims = list())
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    shifts

    (int or tuple of ints) The number of places by which the elements of the tensor are shifted. If shifts is a tuple, dims must be a tuple of the same size, and each dimension will be rolled by the corresponding value

    dims

    (int or tuple of ints) Axis along which to roll

    + +

roll(input, shifts, dims=NULL) -> Tensor

Roll the tensor along the given dimension(s). Elements that are shifted beyond the last position are re-introduced at the first position. If a dimension is not specified, the tensor will be flattened before rolling and then restored to the original shape.

    + +

    Examples

    +
if (torch_is_installed()) {

x = torch_tensor(c(1, 2, 3, 4, 5, 6, 7, 8))$view(c(4, 2))
x
torch_roll(x, 1, 1)
torch_roll(x, -1, 1)
torch_roll(x, shifts=list(2, 1), dims=list(1, 2))
}
#> torch_tensor
#>  6  5
#>  8  7
#>  2  1
#>  4  3
#> [ CPUFloatType{4,2} ]
diff --git a/static/docs/reference/torch_rot90.html b/static/docs/reference/torch_rot90.html
Rot90 — torch_rot90 • torch

    Rot90

    +
    + +
    torch_rot90(self, k = 1L, dims = c(0, 1))
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    k

    (int) number of times to rotate

    dims

    (a list or tuple) axis to rotate

    + +

rot90(input, k, dims) -> Tensor

Rotate an n-D tensor by 90 degrees in the plane specified by the dims axes. The rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0.

    + +

    Examples

    +
if (torch_is_installed()) {

x = torch_arange(0, 4)$view(c(2, 2))
x
torch_rot90(x, 1, c(1, 2))
x = torch_arange(0, 8)$view(c(2, 2, 2))
x
torch_rot90(x, 1, c(1, 2))
}
#> torch_tensor
#> (1,.,.) =
#>   2  3
#>   6  7
#>
#> (2,.,.) =
#>   0  1
#>   4  5
#> [ CPUFloatType{2,2,2} ]
diff --git a/static/docs/reference/torch_round.html b/static/docs/reference/torch_round.html
Round — torch_round • torch

    Round

    +
    + +
    torch_round(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    round(input, out=NULL) -> Tensor

    + + + + +

Returns a new tensor with each of the elements of input rounded to the closest integer.

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_round(a)
}
#> torch_tensor
#>  1
#>  1
#>  0
#>  1
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_rrelu_.html b/static/docs/reference/torch_rrelu_.html
Rrelu_ — torch_rrelu_ • torch

    Rrelu_

    +
    + +
torch_rrelu_(
  self,
  lower = 0.125,
  upper = 0.333333,
  training = FALSE,
  generator = NULL
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    the input tensor

    lower

    lower bound of the uniform distribution. Default: 1/8

    upper

    upper bound of the uniform distribution. Default: 1/3

    training

(bool) whether it's a training pass. Default: FALSE

    generator

    random number generator

    + +

rrelu_(input, lower=1/8, upper=1/3, training=FALSE) -> Tensor

In-place version of torch_rrelu.
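A minimal sketch of calling it, assuming that outside of training the negative slope is the fixed average of lower and upper (the input values are illustrative):

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, 0, 1))
  # modifies x in place; with training = FALSE, negative inputs are
  # scaled by the fixed factor (lower + upper) / 2
  torch_rrelu_(x, lower = 0.125, upper = 1/3, training = FALSE)
  x
}
```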

diff --git a/static/docs/reference/torch_rsqrt.html b/static/docs/reference/torch_rsqrt.html
Rsqrt — torch_rsqrt • torch

    Rsqrt

    +
    + +
    torch_rsqrt(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    rsqrt(input, out=NULL) -> Tensor

    + + + + +

Returns a new tensor with the reciprocal of the square-root of each of the elements of input.

$$
\mbox{out}_{i} = \frac{1}{\sqrt{\mbox{input}_{i}}}
$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_rsqrt(a)
}
#> torch_tensor
#>     nan
#>  1.0875
#>  1.7124
#>  1.1438
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_save.html b/static/docs/reference/torch_save.html
Saves an object to a disk file. — torch_save • torch

This function is experimental; don't use it for long-term storage.

    +
    + +
    torch_save(obj, path, ...)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    obj

    the saved object

    path

    a connection or the name of the file to save.

    ...

    not currently used.

    + +

    See also

    + +

Other torch_save: torch_load()
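A sketch of a save/load round trip with torch_save() and torch_load() (the temporary file path is illustrative):

```r
if (torch_is_installed()) {
  x <- torch_randn(c(2, 2))
  path <- tempfile(fileext = ".pt")  # hypothetical path
  torch_save(x, path)
  y <- torch_load(path)
  y  # same values as x
}
```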

diff --git a/static/docs/reference/torch_selu.html b/static/docs/reference/torch_selu.html
Selu — torch_selu • torch

    Selu

    +
    + +
    torch_selu(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    selu(input) -> Tensor

    + + + + +

Computes the SELU transformation element-wise: \(\mbox{selu}(x) = scale * (\max(0, x) + \min(0, \alpha * (e^x - 1)))\), where \(\alpha \approx 1.6733\) and \(scale \approx 1.0507\).
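A quick illustration, assuming the standard SELU constants:

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, 0, 1))
  # positive inputs are scaled by ~1.0507;
  # negative inputs saturate towards -scale * alpha
  torch_selu(x)
}
```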

diff --git a/static/docs/reference/torch_selu_.html b/static/docs/reference/torch_selu_.html
Selu_ — torch_selu_ • torch

    Selu_

    +
    + +
    torch_selu_(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    selu_(input) -> Tensor

    + + + + +

    In-place version of torch_selu().
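A minimal sketch of the in-place behavior (input values are illustrative):

```r
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, 0, 1))
  torch_selu_(x)  # modifies x in place
  x               # x now holds the selu-transformed values
}
```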

diff --git a/static/docs/reference/torch_sigmoid.html b/static/docs/reference/torch_sigmoid.html
Sigmoid — torch_sigmoid • torch

    Sigmoid

    +
    + +
    torch_sigmoid(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sigmoid(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the sigmoid of the elements of input.

    +

$$
\mbox{out}_{i} = \frac{1}{1 + e^{-\mbox{input}_{i}}}
$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_sigmoid(a)
}
#> torch_tensor
#>  0.4169
#>  0.4884
#>  0.4421
#>  0.6942
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_sign.html b/static/docs/reference/torch_sign.html
Sign — torch_sign • torch

    Sign

    +
    + +
    torch_sign(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sign(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the signs of the elements of input.

    +

$$
\mbox{out}_{i} = \mbox{sgn}(\mbox{input}_{i})
$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_tensor(c(0.7, -1.2, 0., 2.3))
a
torch_sign(a)
}
#> torch_tensor
#>  1
#> -1
#>  0
#>  1
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_sin.html b/static/docs/reference/torch_sin.html
Sin — torch_sin • torch

    Sin

    +
    + +
    torch_sin(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sin(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the sine of the elements of input.

    +

$$
\mbox{out}_{i} = \sin(\mbox{input}_{i})
$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_sin(a)
}
#> torch_tensor
#> -0.2805
#>  0.5482
#>  0.8525
#> -0.6915
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_sinh.html b/static/docs/reference/torch_sinh.html
Sinh — torch_sinh • torch

    Sinh

    +
    + +
    torch_sinh(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sinh(input, out=NULL) -> Tensor

    + + + + +

Returns a new tensor with the hyperbolic sine of the elements of input.

$$
\mbox{out}_{i} = \sinh(\mbox{input}_{i})
$$

    + +

    Examples

    +
if (torch_is_installed()) {

a = torch_randn(c(4))
a
torch_sinh(a)
}
#> torch_tensor
#> -0.7905
#>  1.1799
#>  1.2017
#> -2.5987
#> [ CPUFloatType{4} ]
diff --git a/static/docs/reference/torch_slogdet.html b/static/docs/reference/torch_slogdet.html
Slogdet — torch_slogdet • torch

    Slogdet

    +
    + +
    torch_slogdet(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

    + +

    Note

    + + +
If `input` has zero determinant, this returns `(0, -inf)`.

Backward through `slogdet` internally uses SVD results when `input` is not invertible. In this case, double backward through `slogdet` will be unstable when `input` doesn't have distinct singular values. See torch_svd for details.

slogdet(input) -> (Tensor, Tensor)

Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices.

    + +

    Examples

    +
if (torch_is_installed()) {

A = torch_randn(c(3, 3))
A
torch_det(A)
torch_logdet(A)
torch_slogdet(A)
}
#> [[1]]
#> torch_tensor
#> 1
#> [ CPUFloatType{} ]
#>
#> [[2]]
#> torch_tensor
#> -0.447307
#> [ CPUFloatType{} ]
diff --git a/static/docs/reference/torch_solve.html b/static/docs/reference/torch_solve.html
Solve — torch_solve • torch

    Solve

    +
    + +
    torch_solve(self, A)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) input matrix \(B\) of size \((*, m, k)\) , where \(*\) is zero or more batch dimensions.

    A

    (Tensor) input square matrix of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

    + +

    Note

    + + +
Irrespective of the original strides, the returned matrices `solution` and `LU` will be transposed, i.e. with strides like `B$contiguous()$transpose(-1, -2)$stride()` and `A$contiguous()$transpose(-1, -2)$stride()` respectively.

solve(input, A) -> (Tensor, Tensor)

This function returns the solution to the system of linear equations represented by \(AX = B\) and the LU factorization of A, in that order, as a namedtuple solution, LU.

LU contains the L and U factors for the LU factorization of A.

torch_solve(B, A) can take in 2D inputs B, A or inputs that are batches of 2D matrices. If the inputs are batches, it returns batched outputs solution, LU.

    Examples

    +
if (torch_is_installed()) {

A = torch_tensor(rbind(c(6.80, -2.11, 5.66, 5.97, 8.23),
                       c(-6.05, -3.30, 5.36, -4.44, 1.08),
                       c(-0.45, 2.58, -2.70, 0.27, 9.04),
                       c(8.32, 2.71, 4.35, -7.17, 2.14),
                       c(-9.67, -5.14, -7.26, 6.08, -6.87)))$t()
B = torch_tensor(rbind(c(4.02, 6.19, -8.22, -7.57, -3.03),
                       c(-1.56, 4.00, -8.67, 1.75, 2.86),
                       c(9.81, -4.09, -4.57, -8.61, 8.99)))$t()
out = torch_solve(B, A)
X = out[[1]]
LU = out[[2]]
torch_dist(B, torch_mm(A, X))
# Batched solver example
A = torch_randn(c(2, 3, 1, 4, 4))
B = torch_randn(c(2, 3, 1, 4, 6))
out = torch_solve(B, A)
X = out[[1]]
LU = out[[2]]
torch_dist(B, A$matmul(X))
}
#> torch_tensor
#> 1.29687e-05
#> [ CPUFloatType{} ]
diff --git a/static/docs/reference/torch_sort.html b/static/docs/reference/torch_sort.html
Sort — torch_sort • torch

    Sort

    +
    + +
    torch_sort(self, dim = -1L, descending = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int, optional) the dimension to sort along

    descending

    (bool, optional) controls the sorting order (ascending or descending)

    + +

sort(input, dim=-1, descending=FALSE) -> (Tensor, LongTensor)

Sorts the elements of the input tensor along a given dimension in ascending order by value.

If dim is not given, the last dimension of the input is chosen.

If descending is TRUE, then the elements are sorted in descending order by value.

A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor.

    Examples

    +
if (torch_is_installed()) {

x = torch_randn(c(3, 4))
out = torch_sort(x)
out
out = torch_sort(x, 1)
out
}
#> [[1]]
#> torch_tensor
#> -0.7961  0.1753 -0.2432 -1.1334
#> -0.2084  0.5509  0.3876 -0.9865
#>  0.1948  0.9346  0.5226 -0.1232
#> [ CPUFloatType{3,4} ]
#>
#> [[2]]
#> torch_tensor
#>  0  1  0  2
#>  2  2  1  0
#>  1  0  2  1
#> [ CPULongType{3,4} ]
diff --git a/static/docs/reference/torch_sparse_coo_tensor.html b/static/docs/reference/torch_sparse_coo_tensor.html
Sparse_coo_tensor — torch_sparse_coo_tensor • torch

    Sparse_coo_tensor

    +
    + +
torch_sparse_coo_tensor(
  indices,
  values,
  size = NULL,
  dtype = NULL,
  device = NULL,
  requires_grad = FALSE
)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    indices

    (array_like) Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch_LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.

    values

    (array_like) Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

    size

    (list, tuple, or torch.Size, optional) Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, infers data type from values.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

sparse_coo_tensor(indices, values, size=NULL, dtype=NULL, device=NULL, requires_grad=FALSE) -> Tensor

Constructs a sparse tensor in COO(rdinate) format with non-zero elements at the given indices with the given values. A sparse tensor can be uncoalesced; in that case, there are duplicate coordinates in the indices, and the value at that index is the sum of all duplicate value entries: torch_sparse_.

    + +

    Examples

    +
if (torch_is_installed()) {

i = torch_tensor(matrix(c(1, 2, 2, 3, 1, 3), ncol = 3, byrow = TRUE), dtype=torch_int64())
v = torch_tensor(c(3, 4, 5), dtype=torch_float32())
torch_sparse_coo_tensor(i, v)
torch_sparse_coo_tensor(i, v, c(2, 4))

# create empty sparse tensors
S = torch_sparse_coo_tensor(
  torch_empty(c(1, 0), dtype = torch_int64()),
  torch_tensor(numeric(), dtype = torch_float32()),
  c(1)
)
S = torch_sparse_coo_tensor(
  torch_empty(c(1, 0), dtype = torch_int64()),
  torch_empty(c(0, 2)),
  c(1, 2)
)
}
diff --git a/static/docs/reference/torch_split.html b/static/docs/reference/torch_split.html
Split — torch_split • torch

    Split

    +
    + +
    torch_split(self, split_size, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) tensor to split.

    split_size

(int or list of ints) size of a single chunk or list of sizes for each chunk

    dim

    (int) dimension along which to split the tensor.

    + +

split(input, split_size_or_sections, dim=0) -> List of Tensors

    + + + + +

    Splits the tensor into chunks. Each chunk is a view of the original tensor.

    If `split_size_or_sections` is an integer type, then `tensor` will
    +be split into equally sized chunks (if possible). Last chunk will be smaller if
    +the tensor size along the given dimension `dim` is not divisible by
    +`split_size`.
    +
    +If `split_size_or_sections` is a list, then `tensor` will be split
    +into `len(split_size_or_sections)` chunks with sizes in `dim` according
    +to `split_size_or_sections`.
    +
    + + +
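Examples

A minimal usage sketch in the style of the other Examples sections (illustrative values only, not taken from the original page):

```r
if (torch_is_installed()) {

x = torch_randn(c(10))
# chunks of (at most) 4 elements along the first dimension: sizes 4, 4 and 2
torch_split(x, 4)
# explicit chunk sizes, which must sum to the size of the split dimension
torch_split(x, c(2, 3, 5))
}
```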
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_sqrt.html b/static/docs/reference/torch_sqrt.html new file mode 100644 index 0000000000000000000000000000000000000000..b8b49a160f400ecb994a36a6d940efac48826c9a --- /dev/null +++ b/static/docs/reference/torch_sqrt.html @@ -0,0 +1,259 @@ + + + + + + + + +Sqrt — torch_sqrt • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sqrt

    +
    + +
    torch_sqrt(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    sqrt(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the square-root of the elements of input.

    +

    $$ + \mbox{out}_{i} = \sqrt{\mbox{input}_{i}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_sqrt(a) +} +
    #> torch_tensor +#> nan +#> 0.2255 +#> 0.5333 +#> 0.3449 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_square.html b/static/docs/reference/torch_square.html new file mode 100644 index 0000000000000000000000000000000000000000..c6decf0732f4c1d01a7441ca8c433e780c02f33a --- /dev/null +++ b/static/docs/reference/torch_square.html @@ -0,0 +1,256 @@ + + + + + + + + +Square — torch_square • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Square

    +
    + +
    torch_square(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    square(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the square of the elements of input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_square(a) +} +
    #> torch_tensor +#> 1.5194 +#> 0.2682 +#> 1.6344 +#> 0.8489 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_squeeze.html b/static/docs/reference/torch_squeeze.html new file mode 100644 index 0000000000000000000000000000000000000000..323d22f26ef3a273af3e7ee809f0427517f0cb93 --- /dev/null +++ b/static/docs/reference/torch_squeeze.html @@ -0,0 +1,283 @@ + + + + + + + + +Squeeze — torch_squeeze • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Squeeze

    +
    + +
    torch_squeeze(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int, optional) if given, the input will be squeezed only in this dimension

    + +

    Note

    + +

    The returned tensor shares the storage with the input tensor, +so changing the contents of one will change the contents of the other.

    +

    squeeze(input, dim=NULL, out=NULL) -> Tensor

    + + + + +

    Returns a tensor with all the dimensions of input of size 1 removed.

    +

    For example, if input is of shape: +\((A \times 1 \times B \times C \times 1 \times D)\) then the out tensor +will be of shape: \((A \times B \times C \times D)\).

    +

    When dim is given, a squeeze operation is done only in the given +dimension. If input is of shape: \((A \times 1 \times B)\), +squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) +will squeeze the tensor to the shape \((A \times B)\).

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_zeros(c(2, 1, 2, 1, 2)) +x +y = torch_squeeze(x) +y +y = torch_squeeze(x, 1) +y +y = torch_squeeze(x, 2) +y +} +
    #> torch_tensor +#> (1,1,.,.) = +#> 0 0 +#> +#> (2,1,.,.) = +#> 0 0 +#> +#> (1,2,.,.) = +#> 0 0 +#> +#> (2,2,.,.) = +#> 0 0 +#> [ CPUFloatType{2,2,1,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_stack.html b/static/docs/reference/torch_stack.html new file mode 100644 index 0000000000000000000000000000000000000000..c29f7a7dee21b779ccb2b4ed7d37ecf1aafec5fe --- /dev/null +++ b/static/docs/reference/torch_stack.html @@ -0,0 +1,248 @@ + + + + + + + + +Stack — torch_stack • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Stack

    +
    + +
    torch_stack(tensors, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + +
    tensors

    (sequence of Tensors) sequence of tensors to concatenate

    dim

    (int) dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive)

    + +

    stack(tensors, dim=0, out=NULL) -> Tensor

    + + + + +

    Concatenates sequence of tensors along a new dimension.

    +

    All tensors need to be of the same size.
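Examples

A minimal usage sketch in the style of the other Examples sections (shapes chosen for illustration):

```r
if (torch_is_installed()) {

a = torch_randn(c(3, 4))
b = torch_randn(c(3, 4))
# insert a new first dimension: result shape (2, 3, 4)
torch_stack(list(a, b), dim = 1)$shape
# insert a new second dimension: result shape (3, 2, 4)
torch_stack(list(a, b), dim = 2)$shape
}
```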

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_std.html b/static/docs/reference/torch_std.html new file mode 100644 index 0000000000000000000000000000000000000000..4bdd78bf1f9b118461bfb9fdfcdd5acaa30c587d --- /dev/null +++ b/static/docs/reference/torch_std.html @@ -0,0 +1,289 @@ + + + + + + + + +Std — torch_std • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Std

    +
    + +
    torch_std(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    std(input, unbiased=TRUE) -> Tensor

    + + + + +

    Returns the standard-deviation of all elements in the input tensor.

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    +

std(input, dim, unbiased=TRUE, keepdim=FALSE, out=NULL) -> Tensor

    + + + + +

    Returns the standard-deviation of each row of the input tensor in the +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_std(a) + + +a = torch_randn(c(4, 4)) +a +torch_std(a, dim=1) +} +
    #> torch_tensor +#> 0.5958 +#> 0.5692 +#> 1.5368 +#> 0.7848 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_std_mean.html b/static/docs/reference/torch_std_mean.html new file mode 100644 index 0000000000000000000000000000000000000000..65ce94ee7793b0779393ac03bed4dae3c8b1941f --- /dev/null +++ b/static/docs/reference/torch_std_mean.html @@ -0,0 +1,299 @@ + + + + + + + + +Std_mean — torch_std_mean • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Std_mean

    +
    + +
    torch_std_mean(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    std_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

    + + + + +

    Returns the standard-deviation and mean of all elements in the input tensor.

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    +

std_mean(input, dim, unbiased=TRUE, keepdim=FALSE) -> (Tensor, Tensor)

    + + + + +

    Returns the standard-deviation and mean of each row of the input tensor in the +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the standard-deviation will be calculated +via the biased estimator. Otherwise, Bessel's correction will be used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_std_mean(a) + + +a = torch_randn(c(4, 4)) +a +torch_std_mean(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 1.4613 +#> 0.7194 +#> 0.5630 +#> 1.2705 +#> [ CPUFloatType{4} ] +#> +#> [[2]] +#> torch_tensor +#> 0.2608 +#> -0.1669 +#> -0.7519 +#> -1.0647 +#> [ CPUFloatType{4} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_stft.html b/static/docs/reference/torch_stft.html new file mode 100644 index 0000000000000000000000000000000000000000..1d6ef0f23dc29663385b3a1012e1ee2efca64cc0 --- /dev/null +++ b/static/docs/reference/torch_stft.html @@ -0,0 +1,341 @@ + + + + + + + + +Stft — torch_stft • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Stft

    +
    + +
    torch_stft(
    +  input,
    +  n_fft,
    +  hop_length = NULL,
    +  win_length = NULL,
    +  window = NULL,
    +  center = TRUE,
    +  pad_mode = "reflect",
    +  normalized = FALSE,
    +  onesided = TRUE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the input tensor

    n_fft

    (int) size of Fourier transform

    hop_length

    (int, optional) the distance between neighboring sliding window frames. Default: NULL (treated as equal to floor(n_fft / 4))

    win_length

    (int, optional) the size of window frame and STFT filter. Default: NULL (treated as equal to n_fft)

    window

    (Tensor, optional) the optional window function. Default: NULL (treated as window of all \(1\) s)

    center

    (bool, optional) whether to pad input on both sides so that the \(t\)-th frame is centered at time \(t \times \mbox{hop\_length}\). Default: TRUE

    pad_mode

    (string, optional) controls the padding method used when center is TRUE. Default: "reflect"

    normalized

(bool, optional) controls whether to return the normalized STFT results. Default: FALSE

    onesided

(bool, optional) controls whether to return half of the results to avoid redundancy. Default: TRUE

    + +

    Short-time Fourier transform (STFT).

    + + + + +

    Short-time Fourier transform (STFT).

    Ignoring the optional batch dimension, this method computes the following
    +expression:
    +
    + +

$$ + X[m, \omega] = \sum_{k = 0}^{\mbox{win\_length-1}}% + \mbox{window}[k]\ \mbox{input}[m \times \mbox{hop\_length} + k]\ % + \exp\left(- j \frac{2 \pi \cdot \omega k}{\mbox{win\_length}}\right), +$$ +where \(m\) is the index of the sliding window, and \(\omega\) is +the frequency, with \(0 \leq \omega < \mbox{n\_fft}\). When +onesided is the default value TRUE, only the non-redundant half of the spectrum is returned, as detailed below.

    * `input` must be either a 1-D time sequence or a 2-D batch of time
    +  sequences.
    +
    +* If `hop_length` is `NULL` (default), it is treated as equal to
    +  `floor(n_fft / 4)`.
    +
    +* If `win_length` is `NULL` (default), it is treated as equal to
    +  `n_fft`.
    +
    +* `window` can be a 1-D tensor of size `win_length`, e.g., from
    +  `torch_hann_window`. If `window` is `NULL` (default), it is
    +  treated as if having \eqn{1} everywhere in the window. If
    +  \eqn{\mbox{win\_length} < \mbox{n\_fft}}, `window` will be padded on
    +  both sides to length `n_fft` before being applied.
    +
    +* If `center` is `TRUE` (default), `input` will be padded on
    +  both sides so that the \eqn{t}-th frame is centered at time
    +  \eqn{t \times \mbox{hop\_length}}. Otherwise, the \eqn{t}-th frame
    +  begins at time  \eqn{t \times \mbox{hop\_length}}.
    +
    +* `pad_mode` determines the padding method used on `input` when
    +  `center` is `TRUE`. See `torch_nn.functional.pad` for
    +  all available options. Default is `"reflect"`.
    +
    +* If `onesided` is `TRUE` (default), only values for \eqn{\omega}
    +  in \eqn{\left[0, 1, 2, \dots, \left\lfloor \frac{\mbox{n\_fft}}{2} \right\rfloor + 1\right]}
    +  are returned because the real-to-complex Fourier transform satisfies the
    +  conjugate symmetry, i.e., \eqn{X[m, \omega] = X[m, \mbox{n\_fft} - \omega]^*}.
    +
    +* If `normalized` is `TRUE` (default is `FALSE`), the function
    +  returns the normalized STFT results, i.e., multiplied by \eqn{(\mbox{frame\_length})^{-0.5}}.
    +
    +Returns the real and the imaginary parts together as one tensor of size
    +\eqn{(* \times N \times T \times 2)}, where \eqn{*} is the optional
    +batch size of `input`, \eqn{N} is the number of frequencies where
    +STFT is applied, \eqn{T} is the total number of frames used, and each pair
    +in the last dimension represents a complex number as the real part and the
    +imaginary part.
    +
    + +
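Examples

A minimal usage sketch matching the signature documented above (values are arbitrary; the output layout is the real/imaginary pair described in the details):

```r
if (torch_is_installed()) {

signal = torch_randn(c(4000))  # a 1-D time sequence
spec = torch_stft(signal, n_fft = 512, window = torch_hann_window(512))
# with onesided = TRUE the frequency axis has floor(n_fft / 2) + 1 bins,
# and the last dimension (size 2) holds the real and imaginary parts
spec$shape
}
```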

    Warning

    + + + +

This function changed its signature at version 0.4.1. Calling it with the +previous signature may cause an error or return an incorrect result.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_sum.html b/static/docs/reference/torch_sum.html new file mode 100644 index 0000000000000000000000000000000000000000..6498a6e4f0cc96a8d1c862b1842a44fe97e1b393 --- /dev/null +++ b/static/docs/reference/torch_sum.html @@ -0,0 +1,287 @@ + + + + + + + + +Sum — torch_sum • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Sum

    +
    + +
    torch_sum(self, dim, keepdim = FALSE, dtype = NULL)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    keepdim

    (bool) whether the output tensor has dim retained or not.

    dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

    + +

    sum(input, dtype=NULL) -> Tensor

    + + + + +

    Returns the sum of all elements in the input tensor.

    +

sum(input, dim, keepdim=FALSE, dtype=NULL) -> Tensor

    + + + + +

    Returns the sum of each row of the input tensor in the given +dimension dim. If dim is a list of dimensions, +reduce over all of them.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_sum(a) + + +a = torch_randn(c(4, 4)) +a +torch_sum(a, 1) +b = torch_arange(0, 4 * 5 * 6)$view(c(4, 5, 6)) +torch_sum(b, list(2, 1)) +} +
    #> torch_tensor +#> 435 +#> 1335 +#> 2235 +#> 3135 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_svd.html b/static/docs/reference/torch_svd.html new file mode 100644 index 0000000000000000000000000000000000000000..1914c96943faa8aa3b19cc342f4206e047300f7f --- /dev/null +++ b/static/docs/reference/torch_svd.html @@ -0,0 +1,298 @@ + + + + + + + + +Svd — torch_svd • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Svd

    +
    + +
    torch_svd(self, some = TRUE, compute_uv = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of \(m \times n\) matrices.

    some

    (bool, optional) controls the shape of returned U and V

    compute_uv

(bool, optional) controls whether to compute U and V

    + +

    Note

    + +

The singular values are returned in descending order. If input is a batch of matrices, +then the singular values of each matrix in the batch are returned in descending order.

    +

    The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer +algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the MAGMA routine +gesdd as well.

    +

    Irrespective of the original strides, the returned matrix U +will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride()

    +

Extra care needs to be taken when backpropagating through the U and V +outputs. Such an operation is only stable when input is +full rank with all distinct singular values. Otherwise, NaN can +appear as the gradients are not properly defined. Also, note that +double backward will usually do an additional backward through U and +V even if the original backward is only on S.

    +

    When some = FALSE, the gradients on U[..., :, min(m, n):] +and V[..., :, min(m, n):] will be ignored in backward as those vectors +can be arbitrary bases of the subspaces.

    +

When compute_uv = FALSE, backward cannot be performed since U and V +from the forward pass are required for the backward operation.

    +

    svd(input, some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

    + + + + +

This function returns a namedtuple (U, S, V) which is the singular value +decomposition of an input real matrix or batch of real matrices input such that +\(input = U \times diag(S) \times V^T\).

    +

    If some is TRUE (default), the method returns the reduced singular value decomposition +i.e., if the last two dimensions of input are m and n, then the returned +U and V matrices will contain only \(min(n, m)\) orthonormal columns.

    +

    If compute_uv is FALSE, the returned U and V matrices will be zero matrices +of shape \((m \times m)\) and \((n \times n)\) respectively. some will be ignored here.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5, 3)) +a +out = torch_svd(a) +u = out[[1]] +s = out[[2]] +v = out[[3]] +torch_dist(a, torch_mm(torch_mm(u, torch_diag(s)), v$t())) +a_big = torch_randn(c(7, 5, 3)) +out = torch_svd(a_big) +u = out[[1]] +s = out[[2]] +v = out[[3]] +torch_dist(a_big, torch_matmul(torch_matmul(u, torch_diag_embed(s)), v$transpose(-2, -1))) +} +
    #> torch_tensor +#> 2.94448e-06 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_symeig.html b/static/docs/reference/torch_symeig.html new file mode 100644 index 0000000000000000000000000000000000000000..28c6582567f5076df82100c80690dd440a618a10 --- /dev/null +++ b/static/docs/reference/torch_symeig.html @@ -0,0 +1,290 @@ + + + + + + + + +Symeig — torch_symeig • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Symeig

    +
    + +
    torch_symeig(self, eigenvectors = FALSE, upper = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions consisting of symmetric matrices.

    eigenvectors

    (boolean, optional) controls whether eigenvectors have to be computed

    upper

    (boolean, optional) controls whether to consider upper-triangular or lower-triangular region

    + +

    Note

    + +

The eigenvalues are returned in ascending order. If input is a batch of matrices, +then the eigenvalues of each matrix in the batch are returned in ascending order.

    +

    Irrespective of the original strides, the returned matrix V will +be transposed, i.e. with strides V.contiguous().transpose(-1, -2).stride().

    +

Extra care needs to be taken when backpropagating through the outputs. Such an +operation is only stable when all eigenvalues are distinct. +Otherwise, NaN can appear as the gradients are not properly defined.

    +

symeig(input, eigenvectors=FALSE, upper=TRUE) -> (Tensor, Tensor)

    + + + + +

    This function returns eigenvalues and eigenvectors +of a real symmetric matrix input or a batch of real symmetric matrices, +represented by a namedtuple (eigenvalues, eigenvectors).

    +

    This function calculates all eigenvalues (and vectors) of input +such that \(\mbox{input} = V \mbox{diag}(e) V^T\).

    +

    The boolean argument eigenvectors defines computation of +both eigenvectors and eigenvalues or eigenvalues only.

    +

    If it is FALSE, only eigenvalues are computed. If it is TRUE, +both eigenvalues and eigenvectors are computed.

    +

    Since the input matrix input is supposed to be symmetric, +only the upper triangular portion is used by default.

    +

If upper is FALSE, then the lower triangular portion is used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(5, 5)) +a = a + a$t() # To make a symmetric +a +o = torch_symeig(a, eigenvectors=TRUE) +e = o[[1]] +v = o[[2]] +e +v +a_big = torch_randn(c(5, 2, 2)) +a_big = a_big + a_big$transpose(-2, -1) # To make a_big symmetric +o = a_big$symeig(eigenvectors=TRUE) +e = o[[1]] +v = o[[2]] +torch_allclose(torch_matmul(v, torch_matmul(e$diag_embed(), v$transpose(-2, -1))), a_big) +} +
    #> [1] TRUE
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_t.html b/static/docs/reference/torch_t.html new file mode 100644 index 0000000000000000000000000000000000000000..caf5489313d86a1ad77fa0b46dd783e9946ff9c1 --- /dev/null +++ b/static/docs/reference/torch_t.html @@ -0,0 +1,264 @@ + + + + + + + + +T — torch_t • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    T

    +
    + +
    torch_t(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    t(input) -> Tensor

    + + + + +

    Expects input to be <= 2-D tensor and transposes dimensions 0 +and 1.

    +

    0-D and 1-D tensors are returned as is. When input is a 2-D tensor this +is equivalent to transpose(input, 0, 1).

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(2,3)) +x +torch_t(x) +x = torch_randn(c(3)) +x +torch_t(x) +x = torch_randn(c(2, 3)) +x +torch_t(x) +} +
    #> torch_tensor +#> 0.9294 -0.8336 +#> 1.1379 0.4580 +#> -2.2674 0.2512 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_take.html b/static/docs/reference/torch_take.html new file mode 100644 index 0000000000000000000000000000000000000000..fc13a3a8e0a486ce2299a06dbb488ea0f47b45c2 --- /dev/null +++ b/static/docs/reference/torch_take.html @@ -0,0 +1,260 @@ + + + + + + + + +Take — torch_take • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Take

    +
    + +
    torch_take(self, index)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    index

    (LongTensor) the indices into tensor

    + +

    take(input, index) -> Tensor

    + + + + +

    Returns a new tensor with the elements of input at the given indices. +The input tensor is treated as if it were viewed as a 1-D tensor. The result +takes the same shape as the indices.

    + +

    Examples

    +
    if (torch_is_installed()) { + +src = torch_tensor(matrix(c(4,3,5,6,7,8), ncol = 3, byrow = TRUE)) +torch_take(src, torch_tensor(c(1, 2, 5), dtype = torch_int64())) +} +
    #> torch_tensor +#> 4 +#> 3 +#> 7 +#> [ CPUFloatType{3} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_tan.html b/static/docs/reference/torch_tan.html new file mode 100644 index 0000000000000000000000000000000000000000..33954955b11cbb0e6a6b01f23d0b3d534965c891 --- /dev/null +++ b/static/docs/reference/torch_tan.html @@ -0,0 +1,259 @@ + + + + + + + + +Tan — torch_tan • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tan

    +
    + +
    torch_tan(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    tan(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the tangent of the elements of input.

    +

    $$ + \mbox{out}_{i} = \tan(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_tan(a) +} +
    #> torch_tensor +#> -0.2036 +#> -0.6868 +#> 0.0959 +#> 1.0523 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_tanh.html b/static/docs/reference/torch_tanh.html new file mode 100644 index 0000000000000000000000000000000000000000..f397dc61c56e116268f7105657188c29cd27ba45 --- /dev/null +++ b/static/docs/reference/torch_tanh.html @@ -0,0 +1,260 @@ + + + + + + + + +Tanh — torch_tanh • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tanh

    +
    + +
    torch_tanh(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    tanh(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the hyperbolic tangent of the elements +of input.

    +

    $$ + \mbox{out}_{i} = \tanh(\mbox{input}_{i}) +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_tanh(a) +} +
    #> torch_tensor +#> 0.4501 +#> -0.2993 +#> 0.4767 +#> 0.4199 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_tensor.html b/static/docs/reference/torch_tensor.html new file mode 100644 index 0000000000000000000000000000000000000000..55df1a5151775bded575e601047f92c297eeebd6 --- /dev/null +++ b/static/docs/reference/torch_tensor.html @@ -0,0 +1,271 @@ + + + + + + + + +Converts R objects to a torch tensor — torch_tensor • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Converts R objects to a torch tensor

    +
    + +
    torch_tensor(
    +  data,
    +  dtype = NULL,
    +  device = NULL,
    +  requires_grad = FALSE,
    +  pin_memory = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    data

    an R atomic vector, matrix or array

    dtype

    a torch_dtype instance

    device

a device created with torch_device()

    requires_grad

    if autograd should record operations on the returned tensor.

    pin_memory

If set, the returned tensor will be allocated in pinned memory.

    + + +

    Examples

    +
    if (torch_is_installed()) { +torch_tensor(c(1,2,3,4)) +torch_tensor(c(1,2,3,4), dtype = torch_int()) + +} +
    #> torch_tensor +#> 1 +#> 2 +#> 3 +#> 4 +#> [ CPUIntType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_tensordot.html b/static/docs/reference/torch_tensordot.html new file mode 100644 index 0000000000000000000000000000000000000000..567ab398d080d088736197cc9feecdd0a3c3c73a --- /dev/null +++ b/static/docs/reference/torch_tensordot.html @@ -0,0 +1,260 @@ + + + + + + + + +Tensordot — torch_tensordot • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Returns a contraction of a and b over multiple dimensions. +tensordot implements a generalized matrix product.

    +
    + +
    torch_tensordot(a, b, dims = 2)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    a

    (Tensor) Left tensor to contract

    b

    (Tensor) Right tensor to contract

    dims

    (int or tuple of two lists of integers) number of dimensions to contract or explicit lists of dimensions for a and b respectively

    + + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_arange(start = 0, end = 60.)$reshape(c(3, 4, 5)) +b = torch_arange(start = 0, end = 24.)$reshape(c(4, 3, 2)) +torch_tensordot(a, b, dims = list(c(2, 1), c(1, 2))) +if (FALSE) { +a = torch_randn(3, 4, 5, device='cuda') +b = torch_randn(4, 5, 6, device='cuda') +c = torch_tensordot(a, b, dims=2)$cpu() +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_threshold_.html b/static/docs/reference/torch_threshold_.html new file mode 100644 index 0000000000000000000000000000000000000000..02a50c01ee0f71519ad3b8036c99c4189e442291 --- /dev/null +++ b/static/docs/reference/torch_threshold_.html @@ -0,0 +1,251 @@ + + + + + + + + +Threshold_ — torch_threshold_ • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Threshold_

    +
    + +
    torch_threshold_(self, threshold, value)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    input tensor

    threshold

    The value to threshold at

    value

    The value to replace with

    + +

    threshold_(input, threshold, value) -> Tensor

    + + + + +

    In-place version of torch_threshold.

    + +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_topk.html b/static/docs/reference/torch_topk.html new file mode 100644 index 0000000000000000000000000000000000000000..735d131fca4ab081895fc967988d27de51bab107 --- /dev/null +++ b/static/docs/reference/torch_topk.html @@ -0,0 +1,287 @@ + + + + + + + + +Topk — torch_topk • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Topk

    +
    + +
    torch_topk(self, k, dim = -1L, largest = TRUE, sorted = TRUE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    k

    (int) the k in "top-k"

    dim

    (int, optional) the dimension to sort along

    largest

    (bool, optional) controls whether to return largest or smallest elements

    sorted

    (bool, optional) controls whether to return the elements in sorted order

    + +

    topk(input, k, dim=NULL, largest=TRUE, sorted=TRUE) -> (Tensor, LongTensor)

    + + + + +

    Returns the k largest elements of the given input tensor along +a given dimension.

    +

    If dim is not given, the last dimension of the input is chosen.

    +

    If largest is FALSE then the k smallest elements are returned.

    +

    A namedtuple of (values, indices) is returned, where the indices are the indices +of the elements in the original input tensor.

    +

If the boolean option sorted is TRUE, the returned k elements are guaranteed to be sorted.
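The largest and sorted options can be illustrated as follows (a hypothetical sketch in the style of the examples on this page, assuming a working torch installation):

```r
if (torch_is_installed()) {
  x = torch_tensor(c(4., 1., 3., 2.))
  # smallest two elements instead of the largest
  torch_topk(x, 2, largest = FALSE)
  # with sorted = FALSE the two results may come back in any order
  torch_topk(x, 2, sorted = FALSE)
}
```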

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(1., 6.) +x +torch_topk(x, 3) +} +
    #> [[1]] +#> torch_tensor +#> 5 +#> 4 +#> 3 +#> [ CPUFloatType{3} ] +#> +#> [[2]] +#> torch_tensor +#> 4 +#> 3 +#> 2 +#> [ CPULongType{3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_trace.html b/static/docs/reference/torch_trace.html new file mode 100644 index 0000000000000000000000000000000000000000..fdb85b552c25d0687da248c97e60c6860e263430 --- /dev/null +++ b/static/docs/reference/torch_trace.html @@ -0,0 +1,253 @@ + + + + + + + + +Trace — torch_trace • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Trace

    +
    + +
    torch_trace(self)
    + +

    Arguments

    + + + + + + +
    self

    the input tensor

    + +

    trace(input) -> Tensor

    + + + + +

    Returns the sum of the elements of the diagonal of the input 2-D matrix.

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_arange(1., 10.)$view(c(3, 3)) +x +torch_trace(x) +} +
    #> torch_tensor +#> 15 +#> [ CPUFloatType{} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_transpose.html b/static/docs/reference/torch_transpose.html new file mode 100644 index 0000000000000000000000000000000000000000..34b09b098b4a9d97a9c82b617c6d63cafa64e49a --- /dev/null +++ b/static/docs/reference/torch_transpose.html @@ -0,0 +1,267 @@ + + + + + + + + +Transpose — torch_transpose • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Transpose

    +
    + +
    torch_transpose(self, dim0, dim1)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim0

    (int) the first dimension to be transposed

    dim1

    (int) the second dimension to be transposed

    + +

    transpose(input, dim0, dim1) -> Tensor

    + + + + +

    Returns a tensor that is a transposed version of input. +The given dimensions dim0 and dim1 are swapped.

    +

The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one changes the content of the other.
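The storage sharing noted above can be demonstrated directly; a minimal sketch, assuming a working torch installation:

```r
if (torch_is_installed()) {
  x = torch_zeros(c(2, 3))
  y = torch_transpose(x, 1, 2)
  y[1, 1] = 42  # write through the transposed view
  x[1, 1]       # the original tensor sees the change
}
```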

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_randn(c(2, 3)) +x +torch_transpose(x, 1, 2) +} +
    #> torch_tensor +#> 0.4633 -0.6867 +#> -1.8115 0.4476 +#> -1.5475 1.2365 +#> [ CPUFloatType{3,2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_trapz.html b/static/docs/reference/torch_trapz.html new file mode 100644 index 0000000000000000000000000000000000000000..6a977caf74be5edf59462f83a01b384c82448ac6 --- /dev/null +++ b/static/docs/reference/torch_trapz.html @@ -0,0 +1,274 @@ + + + + + + + + +Trapz — torch_trapz • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Trapz

    +
    + +
    torch_trapz(y, dx = 1L, x, dim = -1L)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    y

    (Tensor) The values of the function to integrate

    dx

    (float) The distance between points at which y is sampled.

    x

    (Tensor) The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively to the estimated integral (i.e., the convention \(\int_a^b f = -\int_b^a f\) is followed).

    dim

    (int) The dimension along which to integrate. By default, use the last dimension.

    + +

    trapz(y, x, *, dim=-1) -> Tensor

    + + + + +

    Estimate \(\int y\,dx\) along dim, using the trapezoid rule.

    +

    trapz(y, *, dx=1, dim=-1) -> Tensor

    + + + + +

    As above, but the sample points are spaced uniformly at a distance of dx.
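As an illustration of the dx overload, integrating y = x^2 on a uniform grid (a sketch, assuming a working torch installation; the result should approximate the analytic value 1/3):

```r
if (torch_is_installed()) {
  xs = torch_arange(start = 0, end = 1, step = 0.001)
  y = xs$pow(2)
  torch_trapz(y, dx = 0.001)  # should be close to 1/3
}
```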

    + +

    Examples

    +
    if (torch_is_installed()) { + +y = torch_randn(list(2, 3)) +y +x = torch_tensor(matrix(c(1, 3, 4, 1, 2, 3), ncol = 3, byrow=TRUE)) +torch_trapz(y, x = x) + +} +
    #> torch_tensor +#> -1.7562 +#> -0.8343 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_triangular_solve.html b/static/docs/reference/torch_triangular_solve.html new file mode 100644 index 0000000000000000000000000000000000000000..b426ed5c99539f7127af4e4e1b76dde13b8b9b05 --- /dev/null +++ b/static/docs/reference/torch_triangular_solve.html @@ -0,0 +1,292 @@ + + + + + + + + +Triangular_solve — torch_triangular_solve • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Triangular_solve

    +
    + +
    torch_triangular_solve(
    +  self,
    +  A,
    +  upper = TRUE,
    +  transpose = FALSE,
    +  unitriangular = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + +
    self

(Tensor) multiple right-hand sides of size \((*, m, k)\) where \(*\) is zero or more batch dimensions (\(b\))

    A

    (Tensor) the input triangular coefficient matrix of size \((*, m, m)\) where \(*\) is zero or more batch dimensions

    upper

    (bool, optional) whether to solve the upper-triangular system of equations (default) or the lower-triangular system of equations. Default: TRUE.

    transpose

    (bool, optional) whether \(A\) should be transposed before being sent into the solver. Default: FALSE.

    unitriangular

    (bool, optional) whether \(A\) is unit triangular. If TRUE, the diagonal elements of \(A\) are assumed to be 1 and not referenced from \(A\). Default: FALSE.

    + +

triangular_solve(input, A, upper=TRUE, transpose=FALSE, unitriangular=FALSE) -> (Tensor, Tensor)

    + + + + +

    Solves a system of equations with a triangular coefficient matrix \(A\) +and multiple right-hand sides \(b\).

    +

    In particular, solves \(AX = b\) and assumes \(A\) is upper-triangular +with the default keyword arguments.

    +

torch_triangular_solve(b, A) can take in 2-D inputs b, A, or inputs that are batches of 2-D matrices. If the inputs are batches, then batched outputs X are returned.
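The returned solution can be checked against the original system; a minimal sketch, assuming a working torch installation:

```r
if (torch_is_installed()) {
  A = torch_randn(c(2, 2))$triu()
  b = torch_randn(c(2, 3))
  X = torch_triangular_solve(b, A)[[1]]
  # verify A X = b up to numerical tolerance
  torch_allclose(A$matmul(X), b)
}
```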

    + +

    Examples

    +
    if (torch_is_installed()) { + +A = torch_randn(c(2, 2))$triu() +A +b = torch_randn(c(2, 3)) +b +torch_triangular_solve(b, A) +} +
    #> [[1]] +#> torch_tensor +#> 5.9921 7.3633 6.5760 +#> 0.6631 3.9283 2.7029 +#> [ CPUFloatType{2,3} ] +#> +#> [[2]] +#> torch_tensor +#> 0.1910 0.0582 +#> 0.0000 0.2591 +#> [ CPUFloatType{2,2} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_tril.html b/static/docs/reference/torch_tril.html new file mode 100644 index 0000000000000000000000000000000000000000..9fac64c1bacc3359fae381ab6f74915941b045d4 --- /dev/null +++ b/static/docs/reference/torch_tril.html @@ -0,0 +1,274 @@ + + + + + + + + +Tril — torch_tril • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tril

    +
    + +
    torch_tril(self, diagonal = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    diagonal

    (int, optional) the diagonal to consider

    + +

    tril(input, diagonal=0, out=NULL) -> Tensor

    + + + + +

Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices input; the other elements of the result tensor out are set to 0.

    +

    The lower triangular part of the matrix is defined as the elements on and +below the diagonal.

    +

The argument diagonal controls which diagonal to consider. If diagonal = 0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a +torch_tril(a) +b = torch_randn(c(4, 6)) +b +torch_tril(b, diagonal=1) +torch_tril(b, diagonal=-1) +} +
    #> torch_tensor +#> 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 +#> -0.0705 0.0000 0.0000 0.0000 0.0000 0.0000 +#> 1.4173 0.3856 0.0000 0.0000 0.0000 0.0000 +#> 1.3653 -1.3079 -1.1473 0.0000 0.0000 0.0000 +#> [ CPUFloatType{4,6} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_tril_indices.html b/static/docs/reference/torch_tril_indices.html new file mode 100644 index 0000000000000000000000000000000000000000..3d08e21ecd8e1bce06a8d09612e8bb4e76036b08 --- /dev/null +++ b/static/docs/reference/torch_tril_indices.html @@ -0,0 +1,301 @@ + + + + + + + + +Tril_indices — torch_tril_indices • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Tril_indices

    +
    + +
    torch_tril_indices(
    +  row,
    +  col,
    +  offset = 0,
    +  dtype = torch_long(),
    +  device = "cpu",
    +  layout = torch_strided()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    row

    (int) number of rows in the 2-D matrix.

    col

    (int) number of columns in the 2-D matrix.

    offset

    (int) diagonal offset from the main diagonal. Default: if not provided, 0.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    layout

(torch.layout, optional) currently only torch_strided is supported.

    + +

    Note

    + + +
When running on CUDA, `row * col` must be less than \(2^{59}\) to prevent overflow during calculation.
    +
    + +

    tril_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor

    + + + + +

Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns.

    +

    The lower triangular part of the matrix is defined as the elements on and +below the diagonal.

    +

The argument offset controls which diagonal to consider. If offset = 0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +a = torch_tril_indices(3, 3) +a +a = torch_tril_indices(4, 3, -1) +a +a = torch_tril_indices(4, 3, 1) +a +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_triu.html b/static/docs/reference/torch_triu.html new file mode 100644 index 0000000000000000000000000000000000000000..2f3f9d69989df885e741df467ed4e275e8e1ee5f --- /dev/null +++ b/static/docs/reference/torch_triu.html @@ -0,0 +1,276 @@ + + + + + + + + +Triu — torch_triu • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Triu

    +
    + +
    torch_triu(self, diagonal = 0L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    diagonal

    (int, optional) the diagonal to consider

    + +

    triu(input, diagonal=0, out=NULL) -> Tensor

    + + + + +

Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input; the other elements of the result tensor out are set to 0.

    +

    The upper triangular part of the matrix is defined as the elements on and +above the diagonal.

    +

The argument diagonal controls which diagonal to consider. If diagonal = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(3, 3)) +a +torch_triu(a) +torch_triu(a, diagonal=1) +torch_triu(a, diagonal=-1) +b = torch_randn(c(4, 6)) +b +torch_triu(b, diagonal=1) +torch_triu(b, diagonal=-1) +} +
    #> torch_tensor +#> 0.8511 0.3136 -0.8565 -0.3131 0.8333 -1.6256 +#> 0.6987 -0.8917 2.5117 1.6975 -0.5125 1.6937 +#> 0.0000 0.6684 1.6408 0.0282 0.3932 -1.6401 +#> 0.0000 0.0000 -0.9555 0.2990 0.3913 -0.5259 +#> [ CPUFloatType{4,6} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_triu_indices.html b/static/docs/reference/torch_triu_indices.html new file mode 100644 index 0000000000000000000000000000000000000000..4f55b0f81a07ccd8acc7200cdec6eb1b4993f1e5 --- /dev/null +++ b/static/docs/reference/torch_triu_indices.html @@ -0,0 +1,301 @@ + + + + + + + + +Triu_indices — torch_triu_indices • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Triu_indices

    +
    + +
    torch_triu_indices(
    +  row,
    +  col,
    +  offset = 0,
    +  dtype = torch_long(),
    +  device = "cpu",
    +  layout = torch_strided()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    row

    (int) number of rows in the 2-D matrix.

    col

    (int) number of columns in the 2-D matrix.

    offset

    (int) diagonal offset from the main diagonal. Default: if not provided, 0.

    dtype

    (torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

    device

    (torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    layout

(torch.layout, optional) currently only torch_strided is supported.

    + +

    Note

    + + +
When running on CUDA, `row * col` must be less than \(2^{59}\) to prevent overflow during calculation.
    +
    + +

    triu_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor

    + + + + +

Returns the indices of the upper triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns.

    +

    The upper triangular part of the matrix is defined as the elements on and +above the diagonal.

    +

The argument offset controls which diagonal to consider. If offset = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

    + +

    Examples

    +
    if (torch_is_installed()) { +if (FALSE) { +a = torch_triu_indices(3, 3) +a +a = torch_triu_indices(4, 3, -1) +a +a = torch_triu_indices(4, 3, 1) +a +} +} +
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_true_divide.html b/static/docs/reference/torch_true_divide.html new file mode 100644 index 0000000000000000000000000000000000000000..0f6b9ad402b09d0f29423b6d57fb091bd904111e --- /dev/null +++ b/static/docs/reference/torch_true_divide.html @@ -0,0 +1,265 @@ + + + + + + + + +TRUE_divide — torch_true_divide • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    TRUE_divide

    +
    + +
    torch_true_divide(self, other)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the dividend

    other

    (Tensor or Scalar) the divisor

    + +

    true_divide(dividend, divisor) -> Tensor

    + + + + +

    Performs "true division" that always computes the division +in floating point. Analogous to division in Python 3 and equivalent to +torch_div except when both inputs have bool or integer scalar types, +in which case they are cast to the default (floating) scalar type before the division.

    +

    $$ + \mbox{out}_i = \frac{\mbox{dividend}_i}{\mbox{divisor}} +$$

    + +

    Examples

    +
    if (torch_is_installed()) { + +dividend = torch_tensor(c(5, 3), dtype=torch_int()) +divisor = torch_tensor(c(3, 2), dtype=torch_int()) +torch_true_divide(dividend, divisor) +torch_true_divide(dividend, 2) +} +
    #> torch_tensor +#> 2.5000 +#> 1.5000 +#> [ CPUFloatType{2} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_trunc.html b/static/docs/reference/torch_trunc.html new file mode 100644 index 0000000000000000000000000000000000000000..bf9f628f516910690e5f9ca0d6d7073356a94563 --- /dev/null +++ b/static/docs/reference/torch_trunc.html @@ -0,0 +1,257 @@ + + + + + + + + +Trunc — torch_trunc • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Trunc

    +
    + +
    torch_trunc(self)
    + +

    Arguments

    + + + + + + +
    self

    (Tensor) the input tensor.

    + +

    trunc(input, out=NULL) -> Tensor

    + + + + +

    Returns a new tensor with the truncated integer values of +the elements of input.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(4)) +a +torch_trunc(a) +} +
    #> torch_tensor +#> -0 +#> -0 +#> -1 +#> 1 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_unbind.html b/static/docs/reference/torch_unbind.html new file mode 100644 index 0000000000000000000000000000000000000000..21f4b662f8f221760556747fa0b5699c63933660 --- /dev/null +++ b/static/docs/reference/torch_unbind.html @@ -0,0 +1,274 @@ + + + + + + + + +Unbind — torch_unbind • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Unbind

    +
    + +
    torch_unbind(self, dim = 1L)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the tensor to unbind

    dim

    (int) dimension to remove

    + +

    unbind(input, dim=0) -> seq

    + + + + +

    Removes a tensor dimension.

    +

Returns a list of all slices along the given dimension, with that dimension removed.

    + +

    Examples

    +
    if (torch_is_installed()) { + +torch_unbind(torch_tensor(matrix(1:9, ncol = 3, byrow=TRUE))) +} +
    #> [[1]] +#> torch_tensor +#> 1 +#> 2 +#> 3 +#> [ CPULongType{3} ] +#> +#> [[2]] +#> torch_tensor +#> 4 +#> 5 +#> 6 +#> [ CPULongType{3} ] +#> +#> [[3]] +#> torch_tensor +#> 7 +#> 8 +#> 9 +#> [ CPULongType{3} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_unique_consecutive.html b/static/docs/reference/torch_unique_consecutive.html new file mode 100644 index 0000000000000000000000000000000000000000..ea5529bb9bb217237f6b32e0868c6a7c0235c7ff --- /dev/null +++ b/static/docs/reference/torch_unique_consecutive.html @@ -0,0 +1,294 @@ + + + + + + + + +Unique_consecutive — torch_unique_consecutive • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Unique_consecutive

    +
    + +
    torch_unique_consecutive(
    +  self,
    +  return_inverse = FALSE,
    +  return_counts = FALSE,
    +  dim = NULL
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor

    return_inverse

    (bool) Whether to also return the indices for where elements in the original input ended up in the returned unique list.

    return_counts

    (bool) Whether to also return the counts for each unique element.

    dim

    (int) the dimension to apply unique. If NULL, the unique of the flattened input is returned. default: NULL

    + +

unique_consecutive(input, return_inverse=FALSE, return_counts=FALSE, dim=NULL) -> Tensor

    + + + + +

    Eliminates all but the first element from every consecutive group of equivalent elements.

Note: this function is different from torch_unique in that it only eliminates consecutive duplicate values. Its semantics are similar to std::unique in C++.
    +
    + + +

    Examples

    +
    if (torch_is_installed()) { +x = torch_tensor(c(1, 1, 2, 2, 3, 1, 1, 2)) +output = torch_unique_consecutive(x) +output +torch_unique_consecutive(x, return_inverse=TRUE) +torch_unique_consecutive(x, return_counts=TRUE) +} +
    #> [[1]] +#> torch_tensor +#> 1 +#> 2 +#> 3 +#> 1 +#> 2 +#> [ CPUFloatType{5} ] +#> +#> [[2]] +#> torch_tensor +#> [ CPULongType{0} ] +#> +#> [[3]] +#> torch_tensor +#> 2 +#> 2 +#> 1 +#> 2 +#> 1 +#> [ CPULongType{5} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_unsqueeze.html b/static/docs/reference/torch_unsqueeze.html new file mode 100644 index 0000000000000000000000000000000000000000..c051eef6a5d7413c08b46bb779772a0cec3e509d --- /dev/null +++ b/static/docs/reference/torch_unsqueeze.html @@ -0,0 +1,265 @@ + + + + + + + + +Unsqueeze — torch_unsqueeze • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Unsqueeze

    +
    + +
    torch_unsqueeze(self, dim)
    + +

    Arguments

    + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int) the index at which to insert the singleton dimension

    + +

    unsqueeze(input, dim) -> Tensor

    + + + + +

    Returns a new tensor with a dimension of size one inserted at the +specified position.

    +

    The returned tensor shares the same underlying data with this tensor.

    +

    A dim value within the range [-input.dim() - 1, input.dim() + 1) +can be used. Negative dim will correspond to unsqueeze +applied at dim = dim + input.dim() + 1.
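A hypothetical illustration of the negative-dim behavior described above (assuming a working torch installation and that negative indexing counts from the end, as in the formula):

```r
if (torch_is_installed()) {
  x = torch_tensor(c(1, 2, 3, 4))
  # negative dim counts from the end, inserting the new axis last
  torch_unsqueeze(x, -1)
}
```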

    + +

    Examples

    +
    if (torch_is_installed()) { + +x = torch_tensor(c(1, 2, 3, 4)) +torch_unsqueeze(x, 1) +torch_unsqueeze(x, 2) +} +
    #> torch_tensor +#> 1 +#> 2 +#> 3 +#> 4 +#> [ CPUFloatType{4,1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_var.html b/static/docs/reference/torch_var.html new file mode 100644 index 0000000000000000000000000000000000000000..56b75c8e401757b18c1c4ecda5ba39f0616da83c --- /dev/null +++ b/static/docs/reference/torch_var.html @@ -0,0 +1,288 @@ + + + + + + + + +Var — torch_var • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Var

    +
    + +
    torch_var(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    var(input, unbiased=TRUE) -> Tensor

    + + + + +

    Returns the variance of all elements in the input tensor.

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.

    +

var(input, dim, keepdim=FALSE, unbiased=TRUE, out=NULL) -> Tensor

    + + + + +

    Returns the variance of each row of the input tensor in the given +dimension dim.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.
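The difference between the biased and unbiased estimators can be seen on a small tensor (a sketch, assuming a working torch installation):

```r
if (torch_is_installed()) {
  a = torch_tensor(c(1, 2, 3, 4))
  torch_var(a)                    # divides by n - 1 (Bessel's correction)
  torch_var(a, unbiased = FALSE)  # divides by n
}
```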

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_var(a) + + +a = torch_randn(c(4, 4)) +a +torch_var(a, 1) +} +
    #> torch_tensor +#> 0.3456 +#> 0.0521 +#> 1.0536 +#> 0.3194 +#> [ CPUFloatType{4} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_var_mean.html b/static/docs/reference/torch_var_mean.html new file mode 100644 index 0000000000000000000000000000000000000000..ddf616f79f04dd13d58b709117248c63460c49d0 --- /dev/null +++ b/static/docs/reference/torch_var_mean.html @@ -0,0 +1,298 @@ + + + + + + + + +Var_mean — torch_var_mean • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Var_mean

    +
    + +
    torch_var_mean(self, dim, unbiased = TRUE, keepdim = FALSE)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + +
    self

    (Tensor) the input tensor.

    dim

    (int or tuple of ints) the dimension or dimensions to reduce.

    unbiased

    (bool) whether to use the unbiased estimation or not

    keepdim

    (bool) whether the output tensor has dim retained or not.

    + +

    var_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

    + + + + +

    Returns the variance and mean of all elements in the input tensor.

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.

    +

var_mean(input, dim, keepdim=FALSE, unbiased=TRUE) -> (Tensor, Tensor)

    + + + + +

    Returns the variance and mean of each row of the input tensor in the given +dimension dim.

    +

    If keepdim is TRUE, the output tensor is of the same size +as input except in the dimension(s) dim where it is of size 1. +Otherwise, dim is squeezed (see torch_squeeze), resulting in the +output tensor having 1 (or len(dim)) fewer dimension(s).

    +

    If unbiased is FALSE, then the variance will be calculated via the +biased estimator. Otherwise, Bessel's correction will be used.

    + +

    Examples

    +
    if (torch_is_installed()) { + +a = torch_randn(c(1, 3)) +a +torch_var_mean(a) + + +a = torch_randn(c(4, 4)) +a +torch_var_mean(a, 1) +} +
    #> [[1]] +#> torch_tensor +#> 0.2877 +#> 1.0334 +#> 0.7203 +#> 1.7788 +#> [ CPUFloatType{4} ] +#> +#> [[2]] +#> torch_tensor +#> 0.1609 +#> -0.4014 +#> 0.6819 +#> 0.3604 +#> [ CPUFloatType{4} ] +#>
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_where.html b/static/docs/reference/torch_where.html new file mode 100644 index 0000000000000000000000000000000000000000..62993b0de7f4fb0aae7b73e105ba797b21f701eb --- /dev/null +++ b/static/docs/reference/torch_where.html @@ -0,0 +1,287 @@ + + + + + + + + +Where — torch_where • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Where

    +
    + +
    torch_where(condition, self, other)
    + +

    Arguments

    + + + + + + + + + + + + + + +
    condition

    (BoolTensor) When TRUE (nonzero), yield x, otherwise yield y

    self

    (Tensor) values selected at indices where condition is TRUE

    other

    (Tensor) values selected at indices where condition is FALSE

    + +

    Note

    + + +
The tensors `condition`, `x`, `y` must be broadcastable.
    +
    + +

    See also torch_nonzero().

    +

where(condition, self, other) -> Tensor

    + + + + +

Return a tensor of elements selected from either self or other, depending on condition.

    +

    The operation is defined as:

    +

$$
\mbox{out}_i = \left\{ \begin{array}{ll}
  \mbox{self}_i & \mbox{if } \mbox{condition}_i \\
  \mbox{other}_i & \mbox{otherwise} \\
\end{array} \right.
$$

    +

    where(condition) -> tuple of LongTensor

    + + + + +

torch_where(condition) is identical to torch_nonzero(condition, as_tuple=TRUE).

    + +

    Examples

    +
if (torch_is_installed()) {

if (FALSE) {
x = torch_randn(c(3, 2))
y = torch_ones(c(3, 2))
x
torch_where(x > 0, x, y)
}

}
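Because the shipped example is wrapped in `if (FALSE)` and never evaluates, here is a minimal runnable sketch (values are illustrative; assumes torch is installed):

```r
library(torch)

x <- torch_randn(c(3, 2))
y <- torch_ones(c(3, 2))

# Keep the positive entries of x, replace the rest with 1
torch_where(x > 0, x, y)

# The arguments broadcast, so `other` can be a 1-element tensor
torch_where(x > 0, x, torch_tensor(0))
```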
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_zeros.html b/static/docs/reference/torch_zeros.html new file mode 100644 index 0000000000000000000000000000000000000000..1a2884308c9e35f0f70a58afef1cd098107d14d5 --- /dev/null +++ b/static/docs/reference/torch_zeros.html @@ -0,0 +1,284 @@ + + + + + + + + +Zeros — torch_zeros • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Zeros

    +
    + +
    torch_zeros(
    +  ...,
    +  names = NULL,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ...

a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list.

    names

    optional dimension names

    dtype

(torch_dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

    layout

(torch_layout, optional) the desired layout of returned Tensor. Default: torch_strided.

    device

(torch_device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    + +

zeros(*size, out=NULL, dtype=NULL, layout=torch_strided, device=NULL, requires_grad=FALSE) -> Tensor

    + + + + +

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.

    + +

    Examples

    +
if (torch_is_installed()) {

torch_zeros(c(2, 3))
torch_zeros(c(5))
}
#> torch_tensor
#> 0
#> 0
#> 0
#> 0
#> 0
#> [ CPUFloatType{5} ]
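A few additional call patterns, sketched under the assumption of a standard torch installation (the specific dtype chosen here is purely illustrative):

```r
library(torch)

# Shape as separate integers or as a single vector
torch_zeros(2, 3)
torch_zeros(c(2, 3))

# Override the default floating-point dtype
torch_zeros(2, 3, dtype = torch_int64())

# Record operations for autograd from the start
torch_zeros(2, 2, requires_grad = TRUE)
```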
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/torch_zeros_like.html b/static/docs/reference/torch_zeros_like.html new file mode 100644 index 0000000000000000000000000000000000000000..01f0335253ce5c3695859b4f82958fc40cf1a8a7 --- /dev/null +++ b/static/docs/reference/torch_zeros_like.html @@ -0,0 +1,289 @@ + + + + + + + + +Zeros_like — torch_zeros_like • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Zeros_like

    +
    + +
    torch_zeros_like(
    +  input,
    +  dtype = NULL,
    +  layout = torch_strided(),
    +  device = NULL,
    +  requires_grad = FALSE,
    +  memory_format = torch_preserve_format()
    +)
    + +

    Arguments

    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    (Tensor) the size of input will determine size of the output tensor.

    dtype

(torch_dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

    layout

(torch_layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

    device

(torch_device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

    requires_grad

    (bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

    memory_format

(torch_memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

    + +

zeros_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=FALSE, memory_format=torch_preserve_format) -> Tensor

    + + + + +

Returns a tensor filled with the scalar value 0, with the same size as input. torch_zeros_like(input) is equivalent to torch_zeros(input$size(), dtype=input$dtype, layout=input$layout, device=input$device).

    +

    Warning

    + + + +

As of 0.4, this function does not support an out keyword. As an alternative, the old torch_zeros_like(input, out=output) is equivalent to torch_zeros(input$size(), out=output).

    + +

    Examples

    +
if (torch_is_installed()) {

input = torch_empty(c(2, 3))
torch_zeros_like(input)
}
#> torch_tensor
#> 0 0 0
#> 0 0 0
#> [ CPUFloatType{2,3} ]
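To illustrate which properties are inherited from `input` and which can be overridden, here is a short sketch (not part of the generated page; assumes torch is installed):

```r
library(torch)

input <- torch_randn(c(2, 3))

# Inherits shape, dtype, and device from input
z <- torch_zeros_like(input)

# Same shape, but with an overridden dtype
z_bool <- torch_zeros_like(input, dtype = torch_bool())

z$dtype       # same as input$dtype (float by default)
z_bool$dtype  # boolean
```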
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/with_enable_grad.html b/static/docs/reference/with_enable_grad.html new file mode 100644 index 0000000000000000000000000000000000000000..04143facb36989544b9b02295a5d9d94189efd2f --- /dev/null +++ b/static/docs/reference/with_enable_grad.html @@ -0,0 +1,259 @@ + + + + + + + + +Enable grad — with_enable_grad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

Context manager that enables gradient calculation, if it has been disabled via with_no_grad.

    +
    + +
    with_enable_grad(code)
    + +

    Arguments

    + + + + + + +
    code

    code to be executed with gradient recording.

    + +

    Details

    + +

This context manager is thread local; it will not affect computation in other threads.

    + +

    Examples

    +
if (torch_is_installed()) {

x <- torch_tensor(1, requires_grad=TRUE)
with_no_grad({
  with_enable_grad({
    y = x * 2
  })
})
y$backward()
x$grad

}
#> torch_tensor
#> 2
#> [ CPUFloatType{1} ]
    +
    + +
    + + +
    + + +
    +

    Site built with pkgdown 1.6.1.

    +
    + +
    +
    + + + + + + + + diff --git a/static/docs/reference/with_no_grad.html b/static/docs/reference/with_no_grad.html new file mode 100644 index 0000000000000000000000000000000000000000..85d16d558682a86c107cccf56e34f048f4d8e092 --- /dev/null +++ b/static/docs/reference/with_no_grad.html @@ -0,0 +1,249 @@ + + + + + + + + +Temporarily modify gradient recording. — with_no_grad • torch + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + + +
    + +
    +
    + + +
    +

    Temporarily modify gradient recording.

    +
    + +
    with_no_grad(code)
    + +

    Arguments

    + + + + + + +
    code

    code to be executed with no gradient recording.

    + + +

    Examples

    +
if (torch_is_installed()) {
x <- torch_tensor(runif(5), requires_grad = TRUE)
with_no_grad({
  x$sub_(torch_tensor(as.numeric(1:5)))
})
x
x$grad

}
#> torch_tensor
#> [ Tensor (undefined) ]
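A common use of with_no_grad is a manual parameter update that should stay out of the autograd graph. The following is a sketch only; the loss and learning rate are made up for illustration:

```r
library(torch)

w <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)
loss <- (w * w)$sum()  # a toy scalar loss
loss$backward()

lr <- 0.1
with_no_grad({
  w$sub_(lr * w$grad)  # in-place update, not recorded by autograd
  w$grad$zero_()       # clear accumulated gradients for the next step
})
w
```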
    +
    + +
    + + + +
    + + + + + + + +