
GPU Support

Adding GPU support to processing blocks.


It is possible to use GPUs in your processing block if the algorithm being implemented benefits from and/or requires it.

Any deep learning algorithm can benefit from existing free or proprietary frameworks/software.

Popular free software frameworks, such as TensorFlow, themselves provide support for using GPUs, including the one provided by our platform: the NVIDIA Tesla K80 GPU.

To take advantage of GPUs, you need to do two things:

  1. Make sure the UP42Manifest.json file specifies the machine type gpu_nvidia_tesla_k80.

  2. Make sure that the CUDA libraries are included in your custom block such that inside the Docker container you have:

# CUDA libraries path.
/usr/local/nvidia/lib64
# CUDA debug utilities path.
/usr/local/nvidia/bin
# The LD_LIBRARY_PATH environment variable should include:
/usr/local/nvidia/lib64
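
As a quick sanity check, you can run something along these lines inside the running container to confirm the libraries are where the platform expects them (a minimal sketch; the paths are the ones listed above):

# List the CUDA libraries and debug utilities the platform provides.
ls /usr/local/nvidia/lib64
ls /usr/local/nvidia/bin
# Confirm that LD_LIBRARY_PATH includes the CUDA libraries path.
echo "$LD_LIBRARY_PATH"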

If your Docker image relies on an official public Docker image, e.g., the TensorFlow Docker images, then you don't need to worry about this, because it is usually already taken care of.

If, on the other hand, you use a custom/private framework and/or Docker image to provide the CUDA libraries, you must make sure the above paths and libraries are properly set up inside your Docker container.
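
For illustration, a minimal Dockerfile sketch for such a custom image could look like the one below. The base image tag is only an assumption, picked as an example of a public CUDA runtime image; adjust it to the CUDA version your framework needs:

# Assumed base image: a public NVIDIA CUDA runtime image (illustrative tag).
FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04
# Make sure the dynamic linker can find the CUDA libraries at the path above.
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}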

The Sentinel 2 Super-resolution block by UP42 provides a good example of a block that relies on the official TensorFlow images to provide the CUDA libraries in the proper way. If you look at the Dockerfile, the relevant line is:

# Use one of the official Tensorflow Docker images as base.
FROM tensorflow/tensorflow:latest-gpu-py3

In this case it relies on the official public TensorFlow Docker image for Python 3.
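
Once the image is built, you can check from inside the container that TensorFlow actually sees the GPU. A minimal check, assuming a TensorFlow version where tf.test.is_gpu_available is still present (it is in 1.x and is deprecated in 2.x):

# Ask TensorFlow whether a GPU device is available.
python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"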

Note also that in the UP42Manifest.json file the type field of the machine object specifies a GPU-enabled machine type:

"machine": {
     "type": "gpu_nvidia_tesla_k80"
 }