Getting started

Get familiar with the analytics platform.


Projects

A project stores workflows and their corresponding job runs. It also contains the project credentials needed to generate an access token, which allows you to make API calls or use the Python SDK.

A project also contains the settings for the threshold limits. These limits are the maximum values applied during API, Python SDK, or console operations: the size of your area of interest, the number of returned images, and the number of concurrent jobs.
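To make the three limits concrete, here is a minimal sketch of a limit check in Python. The field names and default values are assumptions for illustration, not the platform's actual settings schema:

```python
from dataclasses import dataclass

@dataclass
class ProjectLimits:
    """Hypothetical model of a project's threshold limits."""
    max_aoi_size_km2: float = 10_000.0  # maximum area of interest
    max_images: int = 20                # maximum images returned per job
    max_concurrent_jobs: int = 10       # maximum jobs running at once

    def check(self, aoi_size_km2: float, images: int, running_jobs: int) -> list:
        """Return a list of limit violations (empty if the request is allowed)."""
        errors = []
        if aoi_size_km2 > self.max_aoi_size_km2:
            errors.append("area of interest exceeds the project limit")
        if images > self.max_images:
            errors.append("too many images requested")
        if running_jobs >= self.max_concurrent_jobs:
            errors.append("concurrent job limit reached")
        return errors

limits = ProjectLimits()
violations = limits.check(aoi_size_km2=150.0, images=5, running_jobs=2)
```

An empty `violations` list means the operation stays within all three thresholds; a non-empty list names each exceeded limit.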


Workflows

A workflow is a Directed Acyclic Graph (DAG) of data and processing blocks that defines the order in which the operation associated with each block is performed. A workflow cannot be empty: it always starts with exactly one data block, which can be followed by one or more processing blocks.
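The workflow rules above can be sketched as a small validation function. The block names and the `kind` field are illustrative, not the platform's actual block schema:

```python
def validate_workflow(blocks: list) -> None:
    """Raise ValueError if the chain of blocks is not a valid workflow."""
    if not blocks:
        raise ValueError("a workflow cannot be empty")
    if blocks[0]["kind"] != "data":
        raise ValueError("a workflow must start with a data block")
    for block in blocks[1:]:
        if block["kind"] != "processing":
            raise ValueError("only processing blocks may follow the data block")

# A valid chain: one data block followed by processing blocks.
workflow = [
    {"name": "optical-streaming", "kind": "data"},
    {"name": "ndvi", "kind": "processing"},
]
validate_workflow(workflow)  # passes without raising
```

Because the graph is acyclic and starts from a single data source, each block can safely consume the output of the block before it.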


Jobs

A job is a unique instance of a workflow that runs to generate the outputs of each block, as defined by the job parameters. A workflow can have one or more job runs. For instance, to retrieve data for multiple areas of interest or dates, you can configure the job parameters of a workflow multiple times, triggering a new job run with each configuration.
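The one-workflow, many-jobs relationship can be sketched as follows. The workflow name and parameter keys are hypothetical examples, not the platform's real identifiers:

```python
import itertools

_job_ids = itertools.count(1)  # each job run gets its own unique id

def run_job(workflow_name: str, params: dict) -> dict:
    """Create a new job instance of the workflow with the given job parameters."""
    return {"id": next(_job_ids), "workflow": workflow_name, "params": params}

# Two job runs of the same workflow, each with its own AOI and date range.
jobs = [
    run_job("ndvi-workflow", {"aoi": "berlin", "dates": "2021-01/2021-02"}),
    run_job("ndvi-workflow", {"aoi": "paris", "dates": "2021-03/2021-04"}),
]
```

Each call produces a distinct job: the workflow definition is shared, while the parameters and outputs belong to the individual run.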


Blocks

A block is a unit of a workflow and acts as an operator for data retrieval or for a processing algorithm. Blocks are divided into two types:

  1. Data block: an operator that downloads or streams a data source. The data block is always the first operator of a workflow and can be followed by one or more processing blocks.
  2. Processing block: an operator that processes the output of the preceding data or processing block. The order of blocks is defined by their block capabilities, which specify the parameter values required to combine blocks: spectral or spatial resolution, file format, bit depth, etc. A data or processing block can be used in multiple workflows.
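Capability matching can be illustrated with a small compatibility check. The capability keys (`format`, `bit_depth`, `resolution_m`) are examples chosen for this sketch, not the platform's actual capability schema:

```python
def compatible(output_caps: dict, input_caps: dict) -> bool:
    """A downstream block fits if every capability it requires is provided upstream."""
    return all(output_caps.get(key) == value for key, value in input_caps.items())

# Capabilities an upstream data block produces.
data_block_out = {"format": "GTiff", "bit_depth": 16, "resolution_m": 10}

# What two candidate processing blocks require as input.
ndvi_in = {"format": "GTiff", "bit_depth": 16}
sharpen_in = {"format": "JPEG"}

can_append_ndvi = compatible(data_block_out, ndvi_in)        # requirements met
can_append_sharpen = compatible(data_block_out, sharpen_in)  # format mismatch
```

A block that requires nothing (an empty input capability set) is trivially compatible, while any mismatched parameter value rules the combination out, which is what constrains the ordering of blocks in a workflow.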