Create a job

POST https://api.playment.io/v1/projects/:project_id/jobs

This endpoint creates a job in the specified project.

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| project_id | string | ID of the project in which you want to create the job |

Headers

| Name | Type | Description |
| --- | --- | --- |
| x-api-key | string | API key for authentication |

Request Body

| Name | Type | Description |
| --- | --- | --- |
| batch_id | string | A batch is a way to organize multiple jobs under one batch_id. You can create new batches from the dashboard or by using the batch creation API. If batch_id is empty or the key is absent, the job is created in your project's Default batch. |
| work_flow_id | string | The ID of the workflow inside which you want to create the job. |
| data | object | Contains all the information and attachments required to label a job. The data object is defined below. |
| reference_id | string | The unique identifier of the job. |
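Putting the parameters, headers, and body together, a request to this endpoint can be sketched as follows using only the Python standard library. The project ID, API key, workflow ID, and frame URLs below are placeholders, not real values:

```python
import json
import urllib.request

# Placeholder credentials -- substitute your own values.
PROJECT_ID = "your-project-id"
API_KEY = "your-api-key"

payload = {
    "reference_id": "001",
    "work_flow_id": "2aae1234-acac-1234-eeff-12a22a237bbc",
    # "batch_id" omitted: the job lands in the project's Default batch.
    "data": {
        "video_data": {
            "frames": [
                {"frame_id": "frame_001", "src": "https://example.com/frames/001.jpg"},
            ]
        }
    },
}

request = urllib.request.Request(
    f"https://api.playment.io/v1/projects/{PROJECT_ID}/jobs",
    data=json.dumps(payload).encode("utf-8"),
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(request) as response:
#     body = json.loads(response.read())
```

On success, the parsed response body carries `success: true` along with the new `job_id`, as shown in the example response below.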

Example response:

```json
{
  "data": {
    "job_id": "3f3e8675-ca69-46d7-aa34-96f90fcbb732",
    "reference_id": "001",
    "work_flow_id": "2aae1234-acac-1234-eeff-12a22a237bbc"
  },
  "success": true
}
```

Payload

Payload Definition

| Key | Type | Description |
| --- | --- | --- |
| data.video_data.frames | array | An array of objects, each with two properties: frame_id (unique identifier for the frame) and src (URL of the image). |
| data.reference_data | object | An optional object containing an additional list of images that can be used as a reference while annotating the primary image. data.reference_data.vector.images is an array of objects, each with two properties: frame_id, a unique identifier used to map the reference image(s) to the main image, and data, an array whose elements each contain a label (name of the image) and an image_url (URL of the image). |
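Combining the keys above, a minimal data object might be assembled like this. The frame IDs, labels, and URLs are illustrative only:

```python
import json

# Illustrative frame IDs and URLs -- not real assets.
data = {
    "video_data": {
        "frames": [
            {"frame_id": "f1", "src": "https://example.com/frames/f1.jpg"},
            {"frame_id": "f2", "src": "https://example.com/frames/f2.jpg"},
        ]
    },
    # Optional: reference images, mapped to primary frames via frame_id.
    "reference_data": {
        "vector": {
            "images": [
                {
                    "frame_id": "f1",
                    "data": [
                        {"label": "rear-camera",
                         "image_url": "https://example.com/ref/f1-rear.jpg"},
                    ],
                }
            ]
        }
    },
}

# Every reference image should map back to a primary frame via frame_id.
frame_ids = {f["frame_id"] for f in data["video_data"]["frames"]}
assert all(img["frame_id"] in frame_ids
           for img in data["reference_data"]["vector"]["images"])

print(json.dumps(data, indent=2))
```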

Code Example

Creating jobs with pre-labeled data

If your data has already been labeled, whether by an ML model or by human labelers, you can create jobs with those labels pre-filled. To do this, send the annotation data in the data.maker_response key of the payload. The annotation data must follow our annotation format.

Here's an example:

The data.maker_response.video_2d.data.annotations list contains objects, where each object is a tracker. A tracker tracks an object across frames. The frames key in the tracker object maps each annotation object in the tracker to the frame_id it belongs to.

You can check the structure for various annotation_object below:
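As a rough sketch of the tracker structure described above: each tracker's frames key maps a frame_id to the annotation object for that frame. The field names inside each annotation object here ("type", "coordinates", "label") are assumptions for illustration; consult the annotation format reference for the authoritative schema.

```python
# Hedged sketch of a pre-labeled payload fragment (field names inside
# each annotation object are illustrative assumptions).
maker_response = {
    "video_2d": {
        "data": {
            "annotations": [
                {
                    # One tracker follows one object across frames.
                    "label": "car",
                    "frames": {
                        # frame_id -> annotation object for that frame,
                        # with coordinates normalised to [0.0, 1.0].
                        "f1": {"type": "rectangle",
                               "coordinates": [{"x": 0.10, "y": 0.20},
                                               {"x": 0.30, "y": 0.45}]},
                        "f2": {"type": "rectangle",
                               "coordinates": [{"x": 0.12, "y": 0.21},
                                               {"x": 0.32, "y": 0.46}]},
                    },
                }
            ]
        }
    }
}
```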

In our annotation output, the x and y coordinates are normalised to ensure consistency across different image sizes. Normalisation is crucial for accurately representing object positions relative to the image dimensions.

X and Y Coordinates:

  • X Coordinate:

    • Normalised x coordinates (X_norm) are calculated using the formula: X_norm = X_raw / image_width

    • The result ranges from 0.0 to 1.0, where 0.0 (the origin) corresponds to the leftmost edge of the image and 1.0 corresponds to the rightmost edge.

  • Y Coordinate:

    • Normalised y coordinates (Y_norm) are calculated using the formula: Y_norm = Y_raw / image_height

    • The result ranges from 0.0 to 1.0, where 0.0 (the origin) corresponds to the topmost edge of the image and 1.0 corresponds to the bottommost edge.
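The normalisation above can be expressed as a small helper (the function name is ours, for illustration):

```python
def normalise(x_raw, y_raw, image_width, image_height):
    """Map raw pixel coordinates to the normalised [0.0, 1.0] range."""
    return x_raw / image_width, y_raw / image_height

# A point at (320, 240) in a 640x480 image lands at the centre:
x_norm, y_norm = normalise(320, 240, 640, 480)
# x_norm == 0.5, y_norm == 0.5
```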
