
API Walkthrough

Guiding you through the capabilities of the REST API.


Introduction

The API Walkthrough builds on the tutorial Your First API Request and guides you through using the REST API to perform common actions: searching the catalog for images, getting quicklooks, and creating and modifying workflows, jobs, and tasks.

As an optional complement to this walkthrough, you may also consult the REST API reference.

Looking for data

Just as it's possible to search for Pléiades and SPOT imagery through the Catalog and the SDK, you can search for imagery through the API. The API lets you search both the immediately available online archive and the long-term archive.

The relevant endpoint for performing a catalog search is as shown:

search_url=https://api.up42.com/catalog/stac/search

In order to search for a specific image, you will need to specify certain criteria in the form of a request body. The relevant criteria include:

  • AOI: Define your AOI coordinates using intersects, contains, or a bounding box (bbox)
  • limit: Set the maximum number of images the search query should return
  • cloudCoverage: Specify the cloud coverage for the whole scene as a percentage
  • processingLevel: Select which archive you want to retrieve your data from. ALBUM refers to the long-term archive (cold storage), while SENSOR refers to the immediately available archive (warm storage)
  • dataBlock: Specify which data block you want to search for
  • datetime: Limit your results to a specific time frame

A sample catalog search request body is provided here:

{
  "intersects": {
      "type": "Polygon",
      "coordinates": [
        [
          [
            10.007905,
            53.598798
          ],
          [
            10.008206,
            53.593783
          ],
          [
            10.021521,
            53.593808
          ],
          [
            10.020876,
            53.599078
          ],
          [
            10.007905,
            53.598798
          ]
        ]
      ]
    },
  "limit": 100,
  "query": {
    "cloudCoverage": {
      "lte": 20
    },
    "processingLevel": {
      "IN": [
        "ALBUM",
        "SENSOR"
      ]
    },
    "dataBlock": {
      "in": [
        "oneatlas-pleiades-fullscene",
        "oneatlas-pleiades-aoiclipped"
      ]
    }
  },
  "datetime": "2016-01-01T00:00:00.000Z/.."
}

Save the contents of the request body in a file such as search_params.json (here using macOS's pbpaste to paste from the clipboard):

pbpaste > search_params.json

Issue a POST request using the above endpoint and the request body and save the results of that search in a separate file called search_results.json:

curl -L -s -X POST $search_url \
     -H 'Content-Type: application/json' \
     -H "Authorization: Bearer $PTOKEN" \
     -d @search_params.json | jq '.' > search_results.json

If you're curious as to how many images were returned from your search, you can issue the following command to check the length of the features array:

jq '.features | length' search_results.json

The search_results.json file contains valuable information on the properties of each image returned by your search. You can use this information to filter your search results for images that match your needs, for example selecting only immediately available images via the productionStatus tag. Below are some useful image property attributes in the search response body that you may want to consider:

  • id: Image ID or the unique identifier in the OneAtlas catalog for a specific image
  • incidenceAngle: This angle corresponds to the angle between the ground normal and look direction from the satellite.
  • platform: Corresponds to either the Pléiades satellites (PHR1A or PHR1B) or the SPOT satellites (SPOT6 or SPOT7)
  • productionStatus: ARCHIVED refers to images that originate from the long-term archive (cold storage) and require 24 hours to be delivered to the user Storage on the console, whereas IN_CLOUD refers to images that are immediately available (warm storage)
  • processingLevel: Similar to productionStatus, SENSOR and ALBUM are the two processing levels returned based on image availability. SENSOR refers to images that are immediately available, while ALBUM refers to images from the long-term archive
  • productType: bundle or mono. bundle refers to the spectral band combination which delivers both panchromatic (0.5m) + multispectral (2m) bands (delivered together but separately as individual files), whereas mono refers to the acquisition mode, which means that only one image is returned for the AOI
  • resolution: Another field for identifying which satellite was used. 0.5 for Pléiades and 1.5 for SPOT.
  • sourceIdentifier: Scene ID for a specific image
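For a quick overview of these properties across all returned features, you can flatten them with jq. This is a sketch using a small sample file with the same structure and property paths as search_results.json; with real results, point jq at search_results.json instead.

```shell
# Build a minimal sample with the same property layout as search_results.json.
cat > sample_results.json <<'EOF'
{"features": [
  {"properties": {"id": "aaa", "cloudCoverage": 5,
    "providerProperties": {"productionStatus": "IN_CLOUD", "sourceIdentifier": "DS_PHR1A_1"}}},
  {"properties": {"id": "bbb", "cloudCoverage": 12,
    "providerProperties": {"productionStatus": "ARCHIVED", "sourceIdentifier": "DS_PHR1B_2"}}}
]}
EOF

# Print scene ID, cloud coverage and availability, one CSV line per feature.
jq -r '.features[].properties as $p
       | [$p.providerProperties.sourceIdentifier,
          ($p.cloudCoverage | tostring),
          $p.providerProperties.productionStatus]
       | join(",")' sample_results.json
```

This prints one line per feature, e.g. DS_PHR1A_1,5,IN_CLOUD, which is convenient for eyeballing results before filtering.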

In order to isolate just the images that come from the long-term archive and save their scene IDs and image IDs in a comma-separated values (CSV) file, issue the following command:

jq -r '.features[].properties as $p | if $p.providerProperties.productionStatus=="ARCHIVED" then $p.providerProperties.sourceIdentifier + "," + $p.id else empty end' search_results.json > archived_images.csv

Knowing where each property sits in the search response body structure (under .features[].properties) will help you adapt the above command to filter results on other parameters.

Getting quicklooks

If you would like to inspect an image returned from your catalog search before committing to a purchase, you can obtain a quicklook. Quicklooks are lower-resolution previews that give a good indication of how much cloud cover there is over the entire scene and how much of your AOI the image covers.

Let's retrieve a quicklook for one image using its image ID, which we specify here as $image_id.

First, create a quicklook url variable by using an image ID within the following endpoint:

quicklook_url=https://api.up42.com/catalog/oneatlas/image/$image_id/quicklook

Issue a curl request and retrieve a quicklook:

curl -L $quicklook_url \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Accept: image/webp; q=0.9, image/png; q=0.8, image/jpeg; q=0.7' \
     -o quicklook_oneatlas_$image_id.jpg 

View the quicklook:

   open quicklook_oneatlas_$image_id.jpg

If you would like to obtain quicklooks for multiple images, use the following commands to read through the archived_images.csv file, extract the image IDs for each line and download the quicklooks:

while read i 
do
  image_id=$(echo "$i" | awk -F ',' 'NR == 1 {print $2}') 
  quicklook_url=https://api.up42.com/catalog/oneatlas/image/$image_id/quicklook 
  curl -L -H "Authorization: Bearer $PTOKEN" -H 'Accept: image/webp; q=0.9, image/png; q=0.8, image/jpeg; q=0.7' -o quicklook_oneatlas_$image_id.jpg $quicklook_url 
done < archived_images.csv  

Open all the quicklooks at once to compare:

open quicklook*.jpg

Order Estimation

Before committing to a purchase, you can first get an estimate of the order price. We highly recommend getting a price estimate before placing an order, so you have peace of mind knowing how much an image will cost.

For order price estimation you will need your workspace ID. You can get it by copying the alphanumeric value after "workspace=" from the URL in your Console:

workspace_id=$(pbpaste)

To proceed with the order estimation, add your workspace ID to the following endpoint:

estimate_order_url=https://api.up42.com/workspaces/$workspace_id/orders/estimate 

Provide an order request body (saved, for example, as order_request.json) with the ID of the image you would like and your AOI coordinates:

{
  "dataProviderName": "oneatlas",
  "orderParams": {
    "id": "3c6bbc84-2509-4b76-8345-9fbc52330974",
    "aoi": {
      "type": "Polygon",
      "coordinates": [
        [
          [
            10.007905,
            53.598798
          ],
          [
            10.008206,
            53.593783
          ],
          [
            10.021521,
            53.593808
          ],
          [
            10.020876,
            53.599078
          ],
          [
            10.007905,
            53.598798
          ]
        ]
      ]
    }
  }
}
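Instead of writing the request body by hand, you can assemble order_request.json with jq. This is a sketch using the example image ID and AOI from above; --slurpfile reads the AOI file as JSON.

```shell
# Write the example AOI polygon to a file.
cat > aoi.geojson <<'EOF'
{"type": "Polygon",
 "coordinates": [[[10.007905, 53.598798], [10.008206, 53.593783],
                  [10.021521, 53.593808], [10.020876, 53.599078],
                  [10.007905, 53.598798]]]}
EOF

# Assemble the order request body around the image ID and the AOI.
image_id="3c6bbc84-2509-4b76-8345-9fbc52330974"
jq -n --arg id "$image_id" --slurpfile aoi aoi.geojson \
   '{dataProviderName: "oneatlas", orderParams: {id: $id, aoi: $aoi[0]}}' \
   > order_request.json
```

Generating the body this way keeps the image ID and AOI in one place when scripting multiple estimates.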

Issue the following curl request to get a price estimation for this order:

curl -L -s -X POST $estimate_order_url \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \
     -d @order_request.json | jq '.'

Work with jobs

Get the job logs

To get the log of a running job, you first need to identify the task that is running. For that, query the tasks endpoint of the job created above:

# Job tasks endpoint.
URL_JOB_TASKS_INFO="https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks"

curl -s -L $URL_JOB_TASKS_INFO \
     -H "Authorization: Bearer $PTOKEN" \
     | jq '.' > jobs_job_tasks-$JOB.json

Now we extract the task ID from the previously saved file.

TASK=$(cat jobs_job_tasks-$JOB.json | jq -j '.data[] as $task | if $task.status == "RUNNING" then $task.id else "" end')

It returns:

echo $TASK

79512809-fcd7-41d4-9701-cf38c3355ab3

RUNNING_TASK_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks/$TASK"

curl -s -L "$RUNNING_TASK_URL/logs" \
     -H "Authorization: Bearer $PTOKEN" > task_log-$TASK.txt

This command saves the log to the file task_log-$TASK.txt.

Get the job results

Once the job is completed, you can query the API to get the results. There are 3 types of results:

  1. A GeoJSON file containing metadata and vector data relative to the job output. The specific content of this file depends on the workflow, i.e., on the blocks being used.
  2. The output directory delivered as a gzipped tarball.
  3. A set of low-resolution RGB images (quicklooks). These are only available as task-specific results, not as job results.

The support for quicklooks is a block specific feature, and it will vary from block to block. In most cases it will depend on upstream APIs supporting it.

Get the results: GeoJSON

OUTPUT_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/outputs"

curl -s -L -H "Authorization: Bearer $PTOKEN" "$OUTPUT_URL/data-json" | jq '.' > output-$JOB.json

The output is saved to output-$JOB.json.

Get the results: tarball

To get the resulting tarball, you need to first get the signed URL to be able to download it.

DOWNLOAD_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/downloads"

TARBALL_URL=$(curl -s -L -H "Authorization: Bearer $PTOKEN" "$DOWNLOAD_URL/results" | jq -j '.data.url')

curl -s -L "$TARBALL_URL" \
     -H "Authorization: Bearer $PTOKEN" \
     -o output-$JOB.tar.gz

Inspect the retrieved tarball:

tar ztvf output-$JOB.tar.gz

drwxrwxrwx  0 root   root        0 Sep 16 19:40 output
-rw-r--r--  0 root   root  5515635 Sep 16 19:40 output/56f3c47a-92a8-4e89-a005-ff1bbd567ac9_ndvi.tif
-rw-r--r--  0 root   root   399659 Sep 16 19:40 output/data.json

The tarball contains both the GeoJSON file and the output as a GeoTIFF file. In this case the GeoTIFF file name is constructed from the first task ID and part of the block name. See below for an explanation of what tasks are.

Create and run a named job

By default, when a job is created it can only be identified by its ID. The ID is unique, which is essential to avoid ambiguity in machine-to-machine interactions, but you may want to name a job to make it easier to identify, without needing to map job IDs to human-recognizable names.

To name a job, pass the name as an argument in the URL query string. Be aware that being part of a URL means certain characters need to be encoded; a space, for example, can be encoded as a + sign.
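If you prefer not to encode names by hand, jq's @uri filter percent-encodes any string (a space becomes %20, which is equally valid in a query string):

```shell
# Percent-encode an arbitrary job name for use in a URL query string.
JOB_NAME='Just a named job example'
ENCODED_NAME=$(jq -rn --arg n "$JOB_NAME" '$n | @uri')
echo "$ENCODED_NAME"
# Prints: Just%20a%20named%20job%20example
```

The encoded value can then be used directly in the name query-string argument.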

# Job name with spaces: + represents space
JOB_NAME='Just+a+named+job+example'

# The URL to post a named job. Note the query string argument: name.
URL_POST_NAMED_JOB="https://api.up42.com/projects/$PROJ/workflows/$WORKFLOW/jobs?name=$JOB_NAME"

curl -s -L -X POST $URL_POST_NAMED_JOB \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d @job_params_$PROJ.json | jq '.' > named_job_create_response.json

We can now extract the name from the created file:

cat named_job_create_response.json | jq -r '.data.name'

Printing:

Just a named job example

By default, when using the UI, the job is named after the workflow. On the API, if you create and run a job without explicitly setting a name, the name is null.

Rename a job

It might happen that you either want to name a job that initially had no explicitly set name or that you want to rename a job you named yourself.

To do that you issue a PUT request to the specific job URL.

# Job ID corresponding to the job to be renamed.
RENAME_JOB_ID=e3ed4856-dd2e-477f-a957-1886cd4c9c52

curl -s -L -X PUT "$URL_POST_JOB/$RENAME_JOB_ID" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d '{"name": "My newly renamed job"}' | jq '.' > renamed_job_response.json

You can rename any job that is either running or has been run.

Re-run a job

There are occasions where you just want to re-run a job. For example, it might happen that the job failed due to an upstream API that the job relied upon failing. In this case you want to re-run the job so that it succeeds and you get the expected output. This means keeping the same job parameters and creating and running the job. The API provides a way to do that without having to create and run a job explicitly.

Let us re-run the job we renamed above.

curl -s -L -X POST "$URL_POST_JOB/$RENAME_JOB_ID?name=Rerun+My+newly+created+job+again" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     | jq '.' > response_rerun_job.json

Cancel a job

You can cancel a job once it is launched and while it is running. For that we are going to use a named job.

# Job name with spaces: + represents space.
JOB_NAME='Job+to+be+canceled'

# The URL to post a named job. Note the query string argument: name.
URL_POST_NAMED_JOB="https://api.up42.com/projects/$PROJ/workflows/$WORKFLOW/jobs?name=$JOB_NAME"

curl -s -L -X POST $URL_POST_NAMED_JOB \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d @job_params_$PROJ.json | jq '.' > job2cancel_create_response.json

We can now get the job status as exemplified above.

JOB2CANCEL=$(cat job2cancel_create_response.json | jq -j '.data.id')

Echoing the created shell variable:

echo $JOB2CANCEL

f47729b1-c727-4048-9db1-5697d49dc77e

Now we get the current job status:

# Job to cancel URL.
URL_JOB2CANCEL_INFO="https://api.up42.com/projects/$PROJ/jobs/$JOB2CANCEL"

curl -s -L "$URL_JOB2CANCEL_INFO" \
     -H "Authorization: Bearer $PTOKEN" \
     | jq -r '.data.status'

It returns:

RUNNING

To cancel the job issue the request:

curl -si -L -X POST -H "Authorization: Bearer $PTOKEN" "$URL_JOB2CANCEL_INFO/cancel"

HTTP/2 204
date: Fri, 27 Sep 2019 18:26:54 GMT
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
cache-control: no-cache, no-store, max-age=0, must-revalidate
pragma: no-cache
expires: 0
x-frame-options: SAMEORIGIN
referrer-policy: same-origin
x-powered-by: Rocket Fuel
access-control-allow-credentials: true
access-control-allow-methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS
access-control-allow-headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization
access-control-expose-headers: Content-Disposition
strict-transport-security: max-age=31536000; includeSubDomains; preload

The HTTP status 204 No Content means that the request was successful but no data is returned.

Querying again for the job status.

curl -s -L -H "Authorization: Bearer $PTOKEN" "$URL_JOB2CANCEL_INFO" | jq -r '.data.status'

It returns:

CANCELLED

Work with workflows

The workflow API allows you to manipulate workflows. You can do all CRUD operations on workflows.

Get all workflows

Issue a GET request to the workflows endpoint ($URL_WORKFLOWS) and save the response to a file such as workflows-$PROJ.json. The contents of the output file look like this:

{
  "error": null,
  "data": [
    {
      "id": "cfadb63c-aeaa-43d2-b931-e138ed25bdc4",
      "name": "Demo Workflow",
      "description": "An example workflow that demonstrates how to produce NDVI with 0.5 m resolution using pan-sharpened Pléiades data.",
      "createdAt": "2019-12-18T14:08:19.022Z",
      "updatedAt": "2019-12-18T14:08:19.221Z",
      "totalProcessingTime": 766
    }
  ]
}

In this case there is 1 workflow. You can verify this by issuing the following command:

cat workflows-5a21eaff-cdaa-48ab-bedf-5454116d16ff.json | jq '.data | length'

which gives 1. Now let us look at the first workflow for this project:

cat workflows-5a21eaff-cdaa-48ab-bedf-5454116d16ff.json | jq '.data[0]'

{
  "id": "cfadb63c-aeaa-43d2-b931-e138ed25bdc4",
  "name": "Demo Workflow",
  "description": "An example workflow that demonstrates how to produce NDVI with 0.5 m resolution using pan-sharpened Pléiades data.",
  "createdAt": "2019-12-18T14:08:19.022Z",
  "updatedAt": "2019-12-18T14:08:19.221Z",
  "totalProcessingTime": 766
}

Extracting the workflow ID:

WORKFLOW=$(cat workflows-5a21eaff-cdaa-48ab-bedf-5454116d16ff.json | jq -j '.data[0].id')

returns:

echo $WORKFLOW

21415975-390f-4215-becb-8d46aaf5156c

Get a specific workflow

Now reusing the WORKFLOW variable from above to obtain the details for a particular workflow.

curl -s -L "$URL_WORKFLOWS/$WORKFLOW/tasks" \
     -H "Authorization: Bearer $PTOKEN" \
     | jq '.' > workflow-$WORKFLOW.json

The returned workflow tasks are saved to workflow-$WORKFLOW.json.

Create a workflow

You can think of workflow creation as being an operation consisting of two steps:

  1. Create the workflow resource via a POST request.
  2. Populate that resource with tasks via further POST requests to the tasks endpoint.

POST request: creating the resource

To create a new workflow we need to provide a JSON request body.

{
  "id": null,
  "name": "Create a new Pléiades + Pansharpening + NDVI workflow",
  "description": "Just trying out workflow creation",
  "projectId": "5a21eaff-cdaa-48ab-bedf-5454116d16ff",
  "tasks": []
}

As you can see, we have the following fields:

  • id: the workflow ID. It is null because the ID will be assigned in the response once the resource is created.
  • name: the name you want to give to the workflow.
  • description: the workflow description.
  • projectId: the project ID we defined above.
  • tasks: the tasks in this workflow. The workflow is created without tasks, so this is an empty array.
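The same request body can be generated with jq, so the values never need hand-editing. A sketch using the example name, description, and project ID from above:

```shell
# Generate create_new_workflow.json from the individual field values.
jq -n --arg name "Create a new Pléiades + Pansharpening + NDVI workflow" \
      --arg desc "Just trying out workflow creation" \
      --arg proj "5a21eaff-cdaa-48ab-bedf-5454116d16ff" \
      '{id: null, name: $name, description: $desc, projectId: $proj, tasks: []}' \
      > create_new_workflow.json
```

Passing values with --arg avoids quoting problems when names contain spaces or special characters.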

Issuing the request:

curl -s -L -X POST $URL_WORKFLOWS \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d @create_new_workflow.json | jq '.' > workflow-created-response.json

And this is the response body.

{
  "error": null,
  "data": {
     "id": "39275f92-f4e1-4696-a668-f01cdd84bfb6",
     "name": "Create a new Pléiades + Pansharpening + NDVI workflow",
     "description": "Just trying out workflow creation",
     "createdAt": "2019-10-08T09:50:00.054Z",
     "updatedAt": "2019-10-08T09:50:00.054Z",
     "totalProcessingTime": 0
  }
}

The resource has been created with the ID 39275f92-f4e1-4696-a668-f01cdd84bfb6.

The ID is the last component of the URL when creating tasks, since it refers to a specific resource: the just created workflow.

It is useful to store it in a variable:

NEW_WORKFLOW=$(cat workflow-created-response.json | jq -j '.data.id')

To confirm the value:

echo $NEW_WORKFLOW

39275f92-f4e1-4696-a668-f01cdd84bfb6

Now, using the ID, you can populate the workflow with tasks. Tasks are created one by one; since the workflow has two tasks, there are two separate POST requests.

Preamble to creating the workflow tasks: getting the block IDs

First you need to create the request body for the POST request. The body needs the block ID that uniquely identifies a particular block, so the first step is to extract the block IDs. In this case we re-use the previously obtained file workflow-21415975-390f-4215-becb-8d46aaf5156c.json.

cat workflow-21415975-390f-4215-becb-8d46aaf5156c.json | jq -r '.data[] | .blockName + ": " + .block.id'

oneatlas-pleiades-fullscene: ee7c108d-47dc-4555-97ef-c77d62d6ac08
pansharpen: d058a536-e771-4a22-8df6-441ac5a425c4
ndvi: 1184ee5a-32a3-4659-a35a-d79efda79d1b

We see then that we have the following:

Block Name                    Block ID
OneAtlas Pléiades Fullscene   ee7c108d-47dc-4555-97ef-c77d62d6ac08
Pan-sharpening                d058a536-e771-4a22-8df6-441ac5a425c4
NDVI                          1184ee5a-32a3-4659-a35a-d79efda79d1b

Create three variables with the block IDs.

TASK1_BLOCK_ID=$(cat workflow-21415975-390f-4215-becb-8d46aaf5156c.json | jq -r '.data[0].block.id')
TASK2_BLOCK_ID=$(cat workflow-21415975-390f-4215-becb-8d46aaf5156c.json | jq -r '.data[1].block.id')
TASK3_BLOCK_ID=$(cat workflow-21415975-390f-4215-becb-8d46aaf5156c.json | jq -r '.data[2].block.id')

echo $TASK1_BLOCK_ID $TASK2_BLOCK_ID $TASK3_BLOCK_ID

e0b133ae-7b9c-435c-99ac-c4527cc8d9cf 3f5f4490-9e58-490f-80e0-9a464355d5ce 1184ee5a-32a3-4659-a35a-d79efda79d1b

Now we can proceed to create the first task for this workflow.

Creating the first task: data block addition

Adding the data block: Pléiades Download. Let us start with an empty blockId field and make use of jq to set the blockId programmatically. This is the file named empty_task1_workflow-39275f92-f4e1-4696-a668-f01cdd84bfb6.json with the contents.

[
  {
    "name": "First task Pléiades Download data block",
    "parentName": null,
    "blockId": null
  }
]

cat empty_task1_workflow-$NEW_WORKFLOW.json | jq ". | .[0].blockId |= \"$TASK1_BLOCK_ID\"" > create_task1_workflow-$NEW_WORKFLOW.json

It gives us the file create_task1_workflow-39275f92-f4e1-4696-a668-f01cdd84bfb6.json with the contents.

[
  {
    "name": "First task Pléiades Download data block",
    "parentName": null,
    "blockId": "e0b133ae-7b9c-435c-99ac-c4527cc8d9cf"
  }
]

where we have the first task's specific fields:

  • name: the task name.
  • parentName: the name of the parent task, i.e., the task that precedes the current task. Since this is the first task, it is null.
  • blockId: the block ID as obtained above.
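Interpolating a shell variable into the jq program, as done above, works but is fragile with quoting. The same substitution can be done more robustly with jq's --arg; a self-contained sketch with the example values (the short file names here are hypothetical):

```shell
# The task template with an empty blockId, as in the walkthrough.
cat > empty_task1.json <<'EOF'
[{"name": "First task Pléiades Download data block",
  "parentName": null,
  "blockId": null}]
EOF

# Inject the block ID via --arg instead of shell interpolation.
TASK1_BLOCK_ID="e0b133ae-7b9c-435c-99ac-c4527cc8d9cf"
jq --arg bid "$TASK1_BLOCK_ID" '.[0].blockId = $bid' empty_task1.json > create_task1.json
```

With --arg, the block ID is passed as data rather than spliced into the program text, so IDs containing quotes or backslashes cannot break the filter.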

The above JSON payload is in the file create_task1_workflow-$NEW_WORKFLOW.json, where NEW_WORKFLOW is the workflow ID obtained above: 39275f92-f4e1-4696-a668-f01cdd84bfb6. Now issuing the request:

curl -s -L -X POST "$URL_WORKFLOWS/$NEW_WORKFLOW/tasks" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d @create_task1_workflow-$NEW_WORKFLOW.json | jq '.' > workflow_task1_created-$NEW_WORKFLOW.json

saves the response body to workflow_task1_created-$NEW_WORKFLOW.json.

The workflow has now the first task in place.

Creating the second task: processing block addition

Adding the processing block: Pan-sharpening. We are going to rely again on jq to make sure the values set in the request body are correct.

The new block needs to be added to the task list (a JSON array). We start with the following JSON.

[
   {
     "name": "First task Pléiades Download data block",
     "parentName": null,
     "blockId": "e0b133ae-7b9c-435c-99ac-c4527cc8d9cf"
   },
   {
     "name": "oneatlas-pleiades-fullscene",
     "parentName": null,
     "blockId": null
   }
]

Now we set the values of the second task object based on the first:

cat empty_task2_workflow-$NEW_WORKFLOW.json | jq '. | .[0] as $bn | .[1].parentName |= $bn.name' | jq ". | .[1].blockId |= \"$TASK2_BLOCK_ID\"" > create_task2_workflow-$NEW_WORKFLOW.json

This generates the JSON:

[
   {
      "name": "First task Pléiades Download data block",
      "parentName": null,
      "blockId": "e0b133ae-7b9c-435c-99ac-c4527cc8d9cf"
   },
   {
      "name": "oneatlas-pleiades-fullscene",
      "parentName": "First task Pléiades Download data block",
      "blockId": "3f5f4490-9e58-490f-80e0-9a464355d5ce"
   }
]

The task list now has two entries, the second being the Pan-sharpening block. Notice that parentName is set to the first task in the workflow, First task Pléiades Download data block, and blockId is set to the Pan-sharpening block's ID.

To add the second block the API call is:

curl -s -L -X POST "$URL_WORKFLOWS/$NEW_WORKFLOW/tasks" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d @create_task2_workflow-$NEW_WORKFLOW.json | jq '.' > workflow_task2_created-$NEW_WORKFLOW.json

which saves the response body to workflow_task2_created-$NEW_WORKFLOW.json.

Now querying the workflow endpoint:

curl -s -L "$URL_WORKFLOWS/$NEW_WORKFLOW/tasks" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     | jq '.' > workflow-$NEW_WORKFLOW.json

Comparing this output with the response from creating the second task, you can verify that they are nearly identical, apart from minor details such as createdAt, updatedAt, displayId, id, and the ordering of the fields in the JSON.

Update a workflow

To update a workflow, overwrite it by sending a POST request to the workflow tasks endpoint. As an example, we will replace the Pléiades Download data block with the SPOT 6/7 Download data block. For that we have the following payload, enumerating all the tasks:

[
  {
    "name": "First task SPOT 6/7 Download data block",
    "parentName": null,
    "blockId": "0f15e07f-efcc-4598-939b-18aade349c57"
  },
  {
    "name": "pansharpen",
    "parentName": "First task SPOT 6/7 Download data block",
    "blockId": "3f5f4490-9e58-490f-80e0-9a464355d5ce"
  }
]

We obtained the SPOT block's blockId by invoking the following call:

curl -sL https://api.up42.com/marketplace/blocks | jq -r --arg bn 'SPOT.*clipped' '.data[] as $b | $b.name | if test($bn; "ing") then $b.id else empty end'

0f15e07f-efcc-4598-939b-18aade349c57

This calls the marketplace API to get all the marketplace available blocks. Using this you can build fully machine-to-machine (m2m) workflows.

curl -s -L -X POST "$URL_WORKFLOWS/$NEW_WORKFLOW/tasks" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \ 
     -d @update_workflow-$NEW_WORKFLOW.json | jq '.' > workflow_updated-$NEW_WORKFLOW.json

The response is saved to workflow_updated-$NEW_WORKFLOW.json.

Delete a workflow

To delete a workflow, we need to get the workflow ID of the workflow to be deleted. From the file workflows-5a21eaff-cdaa-48ab-bedf-5454116d16ff.json that we obtained before, we can see that there is a workflow that is called Create a new Pléiades + Pansharpening + NDVI workflow.

# Get the workflow ID of the workflow to be deleted.
DEL_WORKFLOW=$(cat workflows-$PROJ.json | jq -j '.data[] as $wf | if $wf.name == "Create a new Pléiades + Pansharpening + NDVI workflow" then $wf.id else "" end')

echo $DEL_WORKFLOW

c5085052-509b-4cba-951a-8e6a18aee9bb

To delete this workflow the request is:

curl -si -L -X DELETE "$URL_WORKFLOWS/$DEL_WORKFLOW" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json'

And the response:

HTTP/2 204
date: Wed, 09 Sep 2019 17:55:34 GMT
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
cache-control: no-cache, no-store, max-age=0, must-revalidate
pragma: no-cache
expires: 0
x-frame-options: SAMEORIGIN
referrer-policy: same-origin
x-powered-by: Rocket Fuel
access-control-allow-credentials: true
access-control-allow-methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS
access-control-allow-headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization
access-control-expose-headers: Content-Disposition
strict-transport-security: max-age=31536000; includeSubDomains; preload

The HTTP status 204 No Content means that the request was successful but no data is returned.

If we now try to access the deleted workflow we get:

curl -s -L "$URL_WORKFLOWS/$DEL_WORKFLOW" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: application/json' \
     | jq '.'

{
   "error": {
      "code": "RESOURCE_NOT_FOUND",
      "message": "Workflow not found for id c5085052-509b-4cba-951a-8e6a18aee9bb and projectId 5a21eaff-cdaa-48ab-bedf-5454116d16ff and userId 8cd5de7b-82e2-4625-b094-d5392f1cf780",
      "details": null
   },
  "data": null
}

Because the workflow was deleted, it no longer exists and hence the 404 Not Found.

Work with tasks

Similarly to job results, you can access each task's results and logs.

Get individual tasks results and logs

The job is composed of three tasks, with each corresponding to a block in the workflow: the first obtains the Pléiades Download data, the second runs the Pan-sharpening, and the last runs the NDVI Pléiades block. We can obtain the partial results, i.e., we can get the results from each task in the job.

The task results are again given as a GeoJSON file and/or a tarball as they are for a job result.

Iterate through the tasks in the job file:

cat jobs_job_tasks-$JOB.json | jq -r '.data[] | .id  + "_" + .name'

which outputs:

ee7c108d-47dc-4555-97ef-c77d62d6ac08_oneatlas-pleiades-fullscene:1
d058a536-e771-4a22-8df6-441ac5a425c4_pansharpen:1
1184ee5a-32a3-4659-a35a-d79efda79d1b_ndvi:1

The first part is the task ID and the second the task name, clearly identifying which workflow step each task corresponds to.
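The same jq pattern can drive a loop over all tasks, for example to fetch every task's log in one go. A self-contained sketch with sample data shaped like jobs_job_tasks-$JOB.json:

```shell
# Sample task list shaped like jobs_job_tasks-$JOB.json.
cat > sample_tasks.json <<'EOF'
{"data": [{"id": "task-1", "name": "fetch:1"},
          {"id": "task-2", "name": "ndvi:1"}]}
EOF

# Extract every task ID; each one could then be substituted into
# https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks/$TASK/logs
jq -r '.data[].id' sample_tasks.json
```

In a real session you would iterate over these IDs with a for loop, issuing one authenticated curl request per task.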

Create three shell variables, one for each task:

TASK1=$(cat jobs_job_tasks-$JOB.json | jq -j '.data[0] | .id')
TASK2=$(cat jobs_job_tasks-$JOB.json | jq -j '.data[1] | .id')
TASK3=$(cat jobs_job_tasks-$JOB.json | jq -j '.data[2] | .id')

TASK1_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks/$TASK1"
TASK2_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks/$TASK2"
TASK3_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks/$TASK3"

echo $TASK1 $TASK2 $TASK3

ee7c108d-47dc-4555-97ef-c77d62d6ac08 d058a536-e771-4a22-8df6-441ac5a425c4 1184ee5a-32a3-4659-a35a-d79efda79d1b

Now with the individual tasks IDs, let us proceed to get the respective results.

First task logs

The first task is the Pléiades acquisition. To get the first task log we issue the following API request:

curl -s -L "$TASK1_URL/logs" \
     -H "Authorization: Bearer $PTOKEN" \
     -H 'Content-Type: text/plain' > task_log-$TASK1.txt

The log is saved to task_log-$TASK1.txt.

First task results: GeoJSON

The output GeoJSON is:

TASK1_URL="https://api.up42.com/projects/$PROJ/jobs/$JOB/tasks/$TASK1"
curl -s -L -H "Authorization: Bearer $PTOKEN" "$TASK1_URL/outputs/data-json" | jq '.' > output_task-$TASK1.json

which saves the output GeoJSON to output_task-$TASK1.json.

First task results: tarball

Again we need to get the signed URL pointing to the first task tarball.

TASK1_TARBALL_URL=$(curl -s -L -H "Authorization: Bearer $PTOKEN" "$TASK1_URL/downloads/results" | jq -j '.data.url')

curl -s -L "$TASK1_TARBALL_URL" \
     -H "Authorization: Bearer $PTOKEN" \
     -o output_$TASK1.tar.gz

Inspecting the tarball:

tar ztvf output_$TASK1.tar.gz

drwxrwxrwx  0 root   root        0 Sep 16 19:21 output
-rw-r--r--  0 root   root 132209093 Sep 16 19:21 output/ee7c108d-47dc-4555-97ef-c77d62d6ac08.tif
-rw-r--r--  0 root   root     35363 Sep 16 19:21 output/data.json

You can find the resulting Pléiades image there.

First task results: quicklooks

First we need to get the list of available images.

curl -s -L -H "Authorization: Bearer $PTOKEN" "$TASK1_URL/outputs/quicklooks" | jq '.'  > quicklooks_list_$TASK1.json

This gives us the JSON:

{
   "error": null,
   "data": [
     "b8c9698b-0c42-47ac-b503-a956bf45b5f2.jpg"
   ]
}

Now we can iterate over the given JSON array data and get all the quicklook images, which in this case is only one. The filename is composed of the feature ID and the extension.

# Loop over all available quicklooks images and get them.

for i in $(cat quicklooks_list_$TASK1.json | jq -r '.data[]')
    do curl -s -L -O -H "Authorization: Bearer $PTOKEN" "$TASK1_URL/outputs/quicklooks/$i"
done

The final task of a workflow produces the same results as the job itself.