
Super-resolution Pléiades/SPOT

Quadruples imagery resolution of Pléiades or SPOT.


Description

A processing block that increases the number of pixels by a factor of 16 (4× along each axis) for all existing spectral bands, using a trained Convolutional Neural Network.

See this block on the marketplace.

How it works

The generated image doesn't contain more information than was recorded at the original resolution. Quality improvements are instead measured using the SSIM (Structural Similarity Index) metric, which compares the structural similarity of the super-resolved output against reference imagery, rather than claiming an increase in information content.
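To make the SSIM metric concrete, here is a minimal sketch using global image statistics (the standard definition applies the same formula over a sliding window; this is an illustration only, not the block's implementation):

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM using global image statistics.

    The standard SSIM averages this quantity over local windows;
    computing it once over the whole image keeps the sketch short.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

a = np.random.default_rng(0).uniform(0, 255, (64, 64))
print(global_ssim(a, a))  # identical images score 1.0
```

A score of 1.0 means the two images are structurally identical; the closer the super-resolved result scores against a true high-resolution reference, the better the reconstruction.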

Image resolution of the processed images will be quadrupled, but an algorithmically derived image can never have the same information content as an image that was originally recorded at that resolution. The main use case for this block is as a preprocessing step for object detection algorithms (ships, cars, planes, etc.), as the images become crisper and contour outlines better defined.
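The relationship between the quadrupled resolution and the 16× pixel count is simple arithmetic; a short sketch (the block runs server-side, so these numbers are only illustrative):

```python
# Super-resolution multiplies each spatial axis by 4,
# so the total pixel count grows by 4 * 4 = 16.
def superresolved_shape(height, width, factor=4):
    """Return the output raster dimensions for a given upscaling factor."""
    return height * factor, width * factor

# Hypothetical Pléiades tile: 1000 x 1000 pixels.
h, w = superresolved_shape(1000, 1000)
print(h, w)                      # 4000 4000
print((h * w) / (1000 * 1000))   # 16.0 times as many pixels
```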

Parameter overview

model (string, required)
The model used to super-resolve the image:
  • SRCNN
  • AESR
  • RedNet
The default value is SRCNN.
AESR and RedNet take much longer to super-resolve the image than SRCNN because they are deeper model architectures.

Example

An example using SPOT 6/7 Reflectance (Download) as a data source, returning the super-resolved result using the AESR model.

{
  "oneatlas-spot-fullscene:1": {
    "ids": null,
    "bbox": [13.405215963721279, 52.48480326228838, 13.4388092905283, 52.505278605259086],
    "time": null,
    "limit": 1,
    "order_ids": null,
    "time_series": null
  },
  "superresolution:1": {
    "model": "AESR"
  }
}

Capabilities

Input

{
  "raster": {
    "up42_standard": {
      "bands": ["red", "green", "blue", "nir"],
      "dtype": "uint16",
      "format": "GTiff",
      "sensor": {
        "or": ["Pleiades", "SPOT"]
      }
    }
  }
}

Output

For more information, see the block capabilities specification.
