I’m at a client that keeps users’ “My Documents” folders on a share drive, so that when you log on to any computer, your “My Documents” is available.

The question is: how do you get that directory/path name?

Why? You might want to write a C# program to access one of the files there, or you might want the path to set up a shortcut in Total Commander.

There may be better methods, but this is what I did.

I opened PowerShell ISE and typed “Get-Location” as the only line of my script. I then did a “File Save As”, navigated to the “My Documents” directory, and saved it.

I then pressed F5 to run it. It actually failed, but the error message at least gave me the path name, as shown below:

PS C:\WINDOWS\system32> \\abcServerName\Users$\johndoe\My Documents\GetLocation.ps1
File \\abcServerName\Users$\johndoe\My Documents\GetLocation.ps1 cannot be loaded
because running scripts is disabled on this system. For more information, see
about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170.
    + CategoryInfo          : SecurityError: (:) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : UnauthorizedAccess
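If you’d rather ask the system directly than fish the path out of an error message, you can do it from Python. This is only a sketch: on Windows it calls the shell32 function SHGetFolderPathW with CSIDL_PERSONAL, which resolves a redirected “My Documents” (including a UNC share path); on other platforms it falls back to a plain home-directory guess.

```python
import os
import sys

def get_documents_path():
    """Return the user's Documents folder, following any folder redirection."""
    if sys.platform == "win32":
        import ctypes
        buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
        CSIDL_PERSONAL = 5       # "My Documents"
        SHGFP_TYPE_CURRENT = 0   # current (possibly redirected) location
        ctypes.windll.shell32.SHGetFolderPathW(
            None, CSIDL_PERSONAL, None, SHGFP_TYPE_CURRENT, buf)
        return buf.value
    # Non-Windows fallback: just guess under the home directory.
    return os.path.join(os.path.expanduser("~"), "Documents")

print(get_documents_path())
```

On the client’s machines this should print the same \\abcServerName\... path that showed up in the error above.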


PyTorch is a machine learning package for Python. This code sample tests whether PyTorch has access to your Graphics Processing Unit (GPU) via “CUDA”.

from __future__ import print_function
import torch

x = torch.rand(5, 3)
print(x)

if torch.cuda.is_available():
    print("Cuda is available")
    device_id = torch.cuda.current_device()
    gpu_properties = torch.cuda.get_device_properties(device_id)
    print("Found %d GPUs available. Using GPU %d (%s) of compute capability %d.%d with "
          "%.1fGB total memory.\n" %
          (torch.cuda.device_count(),
           device_id,
           gpu_properties.name,
           gpu_properties.major,
           gpu_properties.minor,
           gpu_properties.total_memory / 1e9))
else:
    print("Cuda is not available")

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-accelerated applications, the sequential part of the workload runs on the CPU – which is optimized for single-threaded performance – while the compute-intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions in the form of a few basic keywords.
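Once you know CUDA is available, the usual PyTorch idiom is to pick a device once and move your tensors (and models) to it; the same code then runs on the CPU when no GPU is present. A minimal sketch:

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors must live on the chosen device before you operate on them.
x = torch.rand(5, 3, device=device)
y = torch.rand(3, 4, device=device)

# This matrix multiply runs on the GPU when one is present.
z = x @ y
print(z.shape, z.device)
```

The point of the `torch.device(...)` line is portability: the rest of the script never mentions CUDA again, so it works unchanged on a machine without a GPU.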