Create a job

You can send your data to us through either of the following channels:

  1. API (recommended)

  2. Sharing a cloud storage bucket

The data can be a collection of frames or a collection of sequences. A unit of work is either a single frame or a single sequence, depending on your requirement. The API must be called once for each unit of work.

The following JSON template gives an idea of the payload the API expects.

The payload is organised as follows:

  1. Each request is made up of frames and sensor metadata

  2. Each frame can contain data from multiple sensors

  3. For each sensor, the following two types of data are expected:

    1. A URL to the media, which must be accessible by our servers

    2. The pose of the sensor for that frame. Please ensure that the sensor poses and the point cloud files are expressed in the same coordinate frame of reference.

  4. In sensor_meta, each sensor has to be defined, and the intrinsics of the camera sensors are expected.

Note: The following values are just placeholders; actual values are expected in the request.

Create a job

POST https://api.playment.io/v1/projects/:project_id/jobs

This endpoint allows you to create a job.

Path Parameters

  project_id (string): ID of the project in which you want to create the job

Headers

  x-api-key (string): API key for authentication

Request Body

  batch_id (string): A batch is a way to organize multiple jobs under one batch_id. You can create new batches from the dashboard or by using the batch creation API. If batch_id is left empty or the key is not present, the job is created in the Default batch in your project.

  work_flow_id (string): The ID of the workflow inside which you want to create the job

  data (object): The data object contains all the information and attachments required to label a job. The data object is defined below.

  reference_id (string): The unique identifier of the job

A successful request returns the identifiers of the created job:

{
  "data": {
    "job_id": "3f3e8675-ca69-46d7-aa34-96f90fcbb732",
    "reference_id": "001",
    "work_flow_id": "2aae1234-acac-1234-eeff-12a22a237bbc"
  },
  "success": true
}

Payload

{  
   "reference_id":"001",
   "data":{
     "sensor_data": {
      "frames": [
        {
          "frame_id": "frame001",
          "ego_pose" : {
            "position": {
              "x": 0,
              "y": 0,
              "z": 0
            },
            "heading": {
              "w": 1,
              "x": 0,
              "y": 0,
              "z": 0
            }
          },
          "sensors" : [
            {
              "sensor_id": "lidar",
              "data_url": "https://s3.amazonaws.com/example-bucket/lidar/frame001.pcd",
              "sensor_pose": {
                "position": {
                  "x": 0.1,
                  "y": 0.05,
                  "z": 0.4
                },
                "heading": {
                  "w": 0.847,
                  "x": -0.002,
                  "y": -0.504,
                  "z": 0.168
                }
              }
            },
            {
              "sensor_id": "cam-1",
              "data_url": "https://s3.amazonaws.com/example-bucket/cam-1/frame001.png",
              "sensor_pose": {
                "position": {
                  "x": 0.01,
                  "y": 0.1,
                  "z": 0.1
                },
                "heading": {
                  "w": 0.002,
                  "x": 0.847,
                  "y": -0.168,
                  "z": 0.504
                }
              }
            }
          ]
        },
        {
          "frame_id": "frame002",
          "ego_pose" : {
                "position": {
                  "x": 0.1,
                  "y": 0.05,
                  "z": 0.4
                },
                "heading": {
                  "w": 0.847,
                  "x": -0.002,
                  "y": -0.504,
                  "z": 0.168
                }
              },
          "sensors" : [
            {
              "sensor_id": "lidar",
              "data_url": "https://s3.amazonaws.com/example-bucket/lidar/frame002.pcd",
              "sensor_pose": {
                "position": {
                  "x": 0,
                  "y": 0,
                  "z": 0
                },
                "heading": {
                  "w": 1,
                  "x": 0,
                  "y": 0,
                  "z": 0
                }
              }
            },
            {
              "sensor_id": "cam-1",
              "data_url": "https://s3.amazonaws.com/example-bucket/cam-1/frame002.png",
              "sensor_pose": {
                "position": {
                  "x": 0.01,
                  "y": 0.1,
                  "z": 0.1
                },
                "heading": {
                  "w": 0.002,
                  "x": 0.847,
                  "y": -0.168,
                  "z": 0.504
                }
              }
            }
          ]
        }
      ],
      "sensor_meta" : [
        {
          "id": "lidar",
          "name": "lidar",
          "state": "editable",
          "modality": "lidar",
          "primary_view": true
        },
        {
          "id": "cam-1",
          "name": "cam-1",
          "state": "editable",
          "modality": "camera",
          "camera_model": "brown_conrady", 
          "primary_view": false,
          "intrinsics": {
            "cx": 600,
            "cy": 400,
            "fx": 1200,
            "fy": 800,
            "k1": 0,
            "k2": 0,
            "k3": 0,
            "k4": 0,
            "p1": 0,
            "p2": 0,
            "skew": 0,
            "scale_factor": 1
          }
        }
      ]
    }
   },
   "work_flow_id":"2aae1234-acac-1234-eeff-12a22a237bbc"
}
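
For reference, here is a minimal sketch of sending this payload with Python and the requests library. The project ID, API key, and payload file name are placeholders; substitute your own values.

import json
import requests

PROJECT_ID = "your-project-id"  # placeholder: your project's ID
API_KEY = "your-api-key"        # placeholder: your API key

# Load the payload shown above (placeholder file name)
with open("payload.json") as f:
    payload = json.load(f)

response = requests.post(
    f"https://api.playment.io/v1/projects/{PROJECT_ID}/jobs",
    headers={"x-api-key": API_KEY},
    json=payload,
)
response.raise_for_status()
print(response.json())  # e.g. {"data": {"job_id": ...}, "success": true}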

Payload definition

data.sensor_data (Object)

Contains the lists of frames and sensor metadata.

data.sensor_data.sensor_meta (Object)

Contains a list of all the sensors, each with metadata information:

  • id: ID of the sensor

  • name: name of the sensor

  • modality: lidar / camera

If the sensor is a camera, you can add the camera intrinsic values as well as the camera_model. These values are used along with the sensor_pose to create projections between sensors.

camera_model: one of brown_conrady or fisheye. If this key doesn't exist or is null, the tool will assume brown_conrady.

The intrinsics object contains the following keys:

  • cx: principal point x value

  • cy: principal point y value

  • fx: focal length along the x-axis

  • fy: focal length along the y-axis

  • k1, k2, k3, k4, k5, k6: radial distortion coefficients

  • p1, p2: tangential distortion coefficients

  • skew: camera skew coefficient

  • scale_factor: the factor by which the image has been downscaled (for example, scale_factor will be 2 if the original image is twice as large as the downscaled image)

If the camera_model is brown_conrady, the distortion coefficients should be one of the following combinations:

  • k1, k2, p1, p2

  • k1, k2, p1, p2, k3

  • k1, k2, p1, p2, k3, k4, k5, k6

If the camera_model is fisheye, the distortion coefficients should be the following combination:

  • k1, k2, k3, k4

The remaining coefficients can be omitted or assigned a value of 0.

data.sensor_data.frames (List)

List of frames, one per timestamp, in the order of annotation. Each frame has a frame_id, an ego_pose and sensors.

data.sensor_data.frames.[i].frame_id (String)

Unique identifier of the particular frame.

data.sensor_data.frames.[i].ego_pose (Object)

Contains the pose of a fixed point on the ego vehicle in the world frame of reference, given as a position (x, y, z) and an orientation quaternion (w, x, y, z).

If the pose of the ego vehicle is available in the world frame of reference, the tool can allow annotators to mark objects as stationary and toggle APC (aggregated point cloud) mode.

Usually, if a vehicle is equipped with an IMU or odometry sensor, it is possible to get the pose of the ego vehicle in the world frame of reference.

data.sensor_data.frames.[i].sensors (List)

List of all the sensors associated with this particular frame, each having:

  • sensor_id: ID of the sensor. This is a foreign key to the sensor ID defined in the sensor_meta of the sequence data.

  • data_url: A URL to the file containing the data captured from the sensor for this frame. To annotate lidar data, please share point clouds in ASCII-encoded PCD format.

  • sensor_pose: The pose of the respective sensor in a common frame of reference. If the ego_pose is available in the world frame of reference, specify the sensor_pose of each sensor in the same world frame of reference; in that case, the pose may change every frame as the vehicle moves. If the ego_pose is not available, all sensor_pose values can be specified with respect to a fixed point on the vehicle; in that case, the pose will not change between frames.
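
As an illustration of how these poses are used, here is a minimal sketch, assuming the common convention that a pose (position plus unit quaternion) maps a point from the sensor's own frame into the shared frame of reference (p_shared = R p_sensor + t). This is only a sketch of the math, not part of the API.

import numpy as np

def quat_to_rot(w: float, x: float, y: float, z: float) -> np.ndarray:
    """Rotation matrix corresponding to a unit quaternion (w, x, y, z)."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# sensor_pose of the "lidar" sensor in frame001 (values from the payload above)
R = quat_to_rot(0.847, -0.002, -0.504, 0.168)
t = np.array([0.1, 0.05, 0.4])

p_sensor = np.array([1.0, 2.0, 3.0])  # a point in the sensor's own frame
p_shared = R @ p_sensor + t           # the same point in the common frame
print(p_shared)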

Please share point clouds in ASCII-encoded PCD format.

If you are sharing the ego_pose and sensor_pose in the world frame of reference, then the points in the PCD files should also be in the world frame of reference.

PCD format specification: https://pcl.readthedocs.io/projects/tutorials/en/latest/pcd_file_format.html

# .PCD v0.7 - Point Cloud Data file format 
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 47286
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 47286
DATA ascii
5075.773 3756.887 107.923
5076.011 3756.876 107.865
5076.116 3756.826 107.844
5076.860 3756.975 107.648
5077.045 3756.954 107.605
5077.237 3756.937 107.559
5077.441 3756.924 107.511
5077.599 3756.902 107.474
5077.780 3756.885 107.432
5077.955 3756.862 107.391
...
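
For example, here is a minimal sketch of writing an Nx3 NumPy array of points to an ASCII PCD file with the header shown above; the point array and file name are placeholders.

import numpy as np

points = np.random.rand(1000, 3) * 100.0  # placeholder Nx3 array of points

with open("frame001.pcd", "w") as f:
    # Header fields follow the PCD v0.7 specification shown above
    f.write("# .PCD v0.7 - Point Cloud Data file format\n")
    f.write("VERSION 0.7\n")
    f.write("FIELDS x y z\n")
    f.write("SIZE 4 4 4\n")
    f.write("TYPE F F F\n")
    f.write("COUNT 1 1 1\n")
    f.write(f"WIDTH {len(points)}\n")
    f.write("HEIGHT 1\n")
    f.write("VIEWPOINT 0 0 0 1 0 0 0\n")
    f.write(f"POINTS {len(points)}\n")
    f.write("DATA ascii\n")
    for x, y, z in points:
        f.write(f"{x:.3f} {y:.3f} {z:.3f}\n")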

Creating jobs with pre-labeled data

If you have data that has previously been labeled by an ML model or by human labelers, you can create jobs with those labels already in place. To do this, send the annotation data in the data.maker_response key of the payload. The annotation data must be in our annotation format.

Here's an example:

{
  "reference_id": "001",
  "data": {
    "sensor_data": {
      "frames": [...],
      "sensor_meta": [...]
    },
    "maker_response": {
      "sensor_fusion_v2": {
        "data": {
          "frames": [
            {
              "_id": "0030"
            },
            {
              "_id": "0031"
            }
          ],
          "tracks": [
            {
              "_id": "5b88b0cc-3ba9-4197-a6a1-4b99e976025c",
              "color": "rgb(33, 196, 254)",
              "label": "Car"
            },
            {
              "_id": "afe4ebe9-5c2f-43fb-9d61-5c58a1c57d7d",
              "color": "rgb(63, 225, 250)",
              "label": "Bus"
            }
          ],
          "sensors": [
            {
              "sensor_id": "Lidar",
              "sensor_type": "LIDAR"
            },
            {
              "sensor_id": "front_left_camera",
              "sensor_type": "CAMERA"
            }
          ],
          "annotations": [...]
        }
      }
    }
  },
  "work_flow_id": "2aae1234-acac-1234-eeff-12a22a237bbc"
}

You can check the structure of an annotation object (an element of the annotations list) below:

{
  "_id": "b45e76d2-8295-4212-9134-a60640556b09",
  "type": "rectangle",
  "label": "Car",
  "source": "images",
  "frame_id": "0030",
  "track_id": "5b88b0cc-3ba9-4197-a6a1-4b99e976025c",
  "sensor_id": "side_right_camera",
  "attributes": {
    "pose": {
      "value": "standing"
    }
  },
  "coordinates": [
    {
      "x": 0.47711045883773673,
      "y": 0.11618729909271992
    },
    {
      "x": 0.7399349526734161,
      "y": 0.11618729909271992
    },
    {
      "x": 0.7399349526734162,
      "y": 0.5347698204628802
    },
    {
      "x": 0.47711045883773673,
      "y": 0.5347698204628804
    }
  ]
}
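
Putting the pieces together, here is a minimal sketch of attaching pre-labels to a job payload in Python. The IDs, labels, and coordinates are placeholder values copied from the examples above, and `payload` refers to the request body dict from the job-creation example earlier.

# Annotation object, mirroring the example structure above (placeholder values)
annotation = {
    "_id": "b45e76d2-8295-4212-9134-a60640556b09",
    "type": "rectangle",
    "label": "Car",
    "source": "images",
    "frame_id": "0030",
    "track_id": "5b88b0cc-3ba9-4197-a6a1-4b99e976025c",
    "sensor_id": "side_right_camera",
    "attributes": {"pose": {"value": "standing"}},
    "coordinates": [
        {"x": 0.477, "y": 0.116},
        {"x": 0.740, "y": 0.116},
        {"x": 0.740, "y": 0.535},
        {"x": 0.477, "y": 0.535},
    ],
}

# maker_response mirrors the structure shown earlier
maker_response = {
    "sensor_fusion_v2": {
        "data": {
            "frames": [{"_id": "0030"}, {"_id": "0031"}],
            "tracks": [{
                "_id": "5b88b0cc-3ba9-4197-a6a1-4b99e976025c",
                "color": "rgb(33, 196, 254)",
                "label": "Car",
            }],
            "sensors": [
                {"sensor_id": "Lidar", "sensor_type": "LIDAR"},
                {"sensor_id": "side_right_camera", "sensor_type": "CAMERA"},
            ],
            "annotations": [annotation],
        }
    }
}

# Attach before POSTing (see the request example above):
# payload["data"]["maker_response"] = maker_response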
