UP42 is a geospatial platform that centralizes access to a wide variety of geospatial datasets acquired by aircraft, satellite sensors, and stratospheric balloons, as well as advanced algorithms that facilitate the extraction of useful insights such as ship identification, cloud masking, zonal statistics, vegetation health, change detection, and many more.
UP42 also gives you the tools necessary to explore the platform and automate processes using a powerful API and Python SDK that you can easily integrate into your application.
If you intend to purchase geospatial datasets from our archived collections, we recommend exploring the data platform. After viewing the available datasets, you can select a specific dataset that suits your needs and perform a search over your area of interest. The search functionality is available through the API and the console interface.
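A search over an area of interest typically combines a bounding box, an acquisition date range, and quality filters such as maximum cloud cover. The sketch below models such a search in plain Python; the parameter names (`bbox`, `datetime`, `max_cloud_cover`) and the scene records are illustrative assumptions, not the platform's exact payload.

```python
from datetime import date

def build_search_params(bbox, start, end, max_cloud_cover=20):
    """Assemble a hypothetical archive-search payload (illustrative field names)."""
    return {
        "bbox": bbox,                        # [west, south, east, north] in WGS84
        "datetime": f"{start.isoformat()}/{end.isoformat()}",
        "max_cloud_cover": max_cloud_cover,  # percent
    }

def matches(scene, params):
    """Check a sample scene record against the search parameters."""
    west, south, east, north = params["bbox"]
    start, end = params["datetime"].split("/")
    return (
        west <= scene["lon"] <= east
        and south <= scene["lat"] <= north
        and start <= scene["acquired"] <= end   # ISO dates compare lexicographically
        and scene["cloud_cover"] <= params["max_cloud_cover"]
    )

params = build_search_params([13.0, 52.3, 13.7, 52.7], date(2023, 5, 1), date(2023, 5, 31))
scenes = [
    {"id": "scene-a", "lon": 13.4, "lat": 52.5, "acquired": "2023-05-10", "cloud_cover": 8},
    {"id": "scene-b", "lon": 2.35, "lat": 48.85, "acquired": "2023-05-12", "cloud_cover": 5},
]
hits = [s["id"] for s in scenes if matches(s, params)]
```

In the real platform the same parameters would be sent to the search endpoint via the API or SDK rather than filtered locally.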
The diagram below explains the basic structure of the UP42 data platform. First, the user creates a project in their personal workspace. This project contains the project credentials required to authenticate with the API or Python SDK. The UP42 data platform lets users order geospatial datasets; each order has its own metadata, an order ID, and an asset ID. After you place an order, your storage is populated with the order and its status. Once the order succeeds, the asset becomes available for download.
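The ordering flow above can be sketched as a small model: an order is placed, tracked by status, and exposes an asset for download once fulfilled. The class, status names, and fields are illustrative assumptions, not the platform's actual API.

```python
# Minimal sketch of the ordering lifecycle (all names illustrative).
import uuid

class Storage:
    def __init__(self):
        self.orders = {}

    def place_order(self, dataset_id):
        """Placing an order creates an order ID and an initial status."""
        order_id = str(uuid.uuid4())
        self.orders[order_id] = {"dataset": dataset_id, "status": "PLACED", "asset_id": None}
        return order_id

    def fulfill(self, order_id):
        """Called once the provider delivers the data: attach an asset ID."""
        order = self.orders[order_id]
        order["status"] = "FULFILLED"
        order["asset_id"] = str(uuid.uuid4())

    def downloadable_assets(self):
        """Only assets of fulfilled orders are available for download."""
        return [o["asset_id"] for o in self.orders.values() if o["status"] == "FULFILLED"]

storage = Storage()
oid = storage.place_order("sample-archive-scene")
assert storage.downloadable_assets() == []   # nothing to download yet
storage.fulfill(oid)
```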
For more information: Data platform.
If you intend to purchase fresh geospatial datasets, we recommend tasking a sensor that captures images over your area of interest and delivers the datasets in near real-time. After viewing the available taskable datasets, you can select a specific dataset that suits your needs and place a tasking order. The search functionality for taskable datasets is available through the API and the console interface.
The diagram below explains the basic structure of the UP42 tasking platform. First, the user creates a project in their personal workspace. This project contains the project credentials that are necessary for authenticating on the API or Python SDK. The user needs to get access to a selected taskable collection and then send a tasking request that contains the acquisition parameters. Based on these parameters, the UP42 operations team creates a feasibility study that evaluates the difficulty of the tasking operation. Depending on the acquisition parameters and the difficulty level of the tasking order, a final quotation is issued with the total amount necessary to purchase the geospatial datasets that will be captured by the tasking operation. Upon payment, the tasking operation is activated and the image acquisition starts. Every successful image acquisition triggers the delivery of an image to the user's storage.
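The tasking lifecycle described above can be sketched as a simple linear state machine: request, feasibility study, quotation, payment, then acquisition. The state names are illustrative, not the platform's exact status values.

```python
# Sketch of the tasking lifecycle as a state machine (state names illustrative).
TRANSITIONS = {
    "REQUESTED": "FEASIBILITY_STUDY",
    "FEASIBILITY_STUDY": "QUOTATION_ISSUED",
    "QUOTATION_ISSUED": "PAYMENT_CONFIRMED",
    "PAYMENT_CONFIRMED": "ACQUISITION_ACTIVE",
}

def advance(state):
    """Move a tasking order to its next stage; acquisition is the final stage."""
    return TRANSITIONS.get(state, state)

state = "REQUESTED"
history = [state]
while state != "ACQUISITION_ACTIVE":
    state = advance(state)
    history.append(state)
```

Once in the final stage, each successful acquisition would deliver an image to storage, as in the ordering flow.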
For more information: Tasking.
If you intend to process geospatial datasets and build custom processing blocks, we recommend exploring the processing platform. This section explains how to create projects, build workflows with data and processing blocks, create your own custom blocks and run jobs.
Projects are intended for storing workflows and their corresponding job runs. A project contains the project credentials which are necessary to generate an access token. This token allows you to make API calls or use the Python SDK.
A project also contains the settings for the threshold limits. These limits are the maximum values that apply to API, Python SDK, and console operations: the size of your area of interest, the number of returned images, and the number of concurrent jobs.
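A request that exceeds any of these limits would be rejected before it runs. The check below is a sketch of that enforcement; the limit names and values are illustrative assumptions, not the platform's defaults.

```python
# Hypothetical threshold limits for a project (names and values illustrative).
LIMITS = {
    "max_aoi_size_km2": 10_000,
    "max_images": 20,
    "max_concurrent_jobs": 5,
}

def check_limits(aoi_size_km2, num_images, running_jobs, limits=LIMITS):
    """Return the list of violated limits; an empty list means the request is allowed."""
    violations = []
    if aoi_size_km2 > limits["max_aoi_size_km2"]:
        violations.append("aoi_size")
    if num_images > limits["max_images"]:
        violations.append("num_images")
    if running_jobs >= limits["max_concurrent_jobs"]:
        violations.append("concurrency")
    return violations
```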
For more information: Projects.
A workflow is a Directed Acyclic Graph (DAG) of data and processing blocks. The workflow defines the order in which the operation associated with each block is performed. A workflow always starts with a data block, followed by one or more processing blocks. A workflow cannot be empty: it must contain at least one data block.
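The workflow rules above translate into a simple validation: the task list must be non-empty, must start with a data block, and any further tasks must be processing blocks. The function and block names below are made up for illustration.

```python
# Sketch of workflow validation under the rules stated above.
def validate_workflow(tasks):
    """tasks: ordered list of (name, kind) tuples, kind in {"data", "processing"}."""
    if not tasks:
        return False                      # a workflow cannot be empty
    if tasks[0][1] != "data":
        return False                      # it must start with a data block
    # every subsequent task must be a processing block
    return all(kind == "processing" for _, kind in tasks[1:])

valid = validate_workflow([("sentinel-2", "data"), ("ndvi", "processing")])
empty = validate_workflow([])
no_data_first = validate_workflow([("sharpening", "processing")])
```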
For more information: Workflows.
A block is a unit of the workflow and acts as an operator for data retrieval or processing algorithms. Blocks are divided into two types:
- Data block: operator for downloading/streaming a data source. The data block is always the first operator of a workflow and it can be followed by one or more processing blocks.
- Processing block: operator for processing the previous data source or processing block output.
The order of the blocks is defined by the block capabilities, which specify the parameters that determine whether two blocks can be combined: spectral or spatial resolution, file format, bit depth, etc. A data or processing block can be used in multiple workflows.
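A capability check can be sketched as matching the upstream block's output capabilities against the downstream block's input requirements. The capability keys and block names below are illustrative assumptions.

```python
# Sketch of a block-capability compatibility check (keys and values illustrative).
def compatible(output_caps, input_caps):
    """True if every capability the downstream block requires is met upstream."""
    return all(output_caps.get(key) == value for key, value in input_caps.items())

# Hypothetical capabilities of an optical data block and two processing blocks.
data_block_out = {"format": "GeoTIFF", "bands": "multispectral", "bit_depth": 16}
ndvi_in = {"format": "GeoTIFF", "bands": "multispectral"}
sar_despeckle_in = {"format": "GeoTIFF", "bands": "SAR"}
```

Here an NDVI block could follow the optical data block, while a SAR-only block could not.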
For more information: Blocks.
A job is a unique run of a workflow that generates the outputs of each block, as defined by the job parameters. A workflow can have one or more job runs: for instance, you can configure the job parameters multiple times, each configuration triggering a new job run, to retrieve data from multiple areas of interest or dates.
For more information: Jobs.
The diagram below explains the basic structure of the UP42 processing platform. First, the user creates a project in the personal workspace. This project contains one or more workflows that are created with data and/or processing blocks. Each of these blocks is added as an individual workflow task. After the workflow is successfully created, it can be run as one or more jobs. Each job contains job tasks that are associated with the data and/or processing block that was previously added to the workflow. Each job task generates an individual output.
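The whole structure described above can be sketched end to end: a project holds a workflow built from blocks, each block becomes a workflow task, and running the workflow creates job tasks that each produce an output. All class, block, and file names are illustrative, not the SDK's actual interface.

```python
# End-to-end sketch of project -> workflow -> job tasks (all names illustrative).
class Project:
    def __init__(self, name):
        self.name = name
        self.workflows = {}

    def create_workflow(self, name, blocks):
        """Each block in the list becomes an individual workflow task."""
        self.workflows[name] = list(blocks)

    def run_job(self, workflow_name):
        """Each job task mirrors one workflow task and yields one output."""
        tasks = self.workflows[workflow_name]
        return [{"task": block, "output": f"{block}-result.tif"} for block in tasks]

project = Project("demo-project")
project.create_workflow("ndvi", ["sentinel-2-data", "ndvi-processing"])
job_tasks = project.run_job("ndvi")
```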