Create a job
You can send your data to us through either of the following channels:
- API (recommended)
- Sharing a cloud storage bucket
The data can be a collection of frames or a collection of sequences. A unit of work is either a single frame or a single sequence, depending on your requirement. The API must be called once for each unit of work.
The following JSON template gives an idea of the payload the API expects.
The payload is organised as follows:
- Each request is made up of frames and sensor metadata.
- Each frame can contain data from multiple sensors.
- For each sensor, the following two types of data are expected:
  - A URL to the media, which must be accessible by our servers.
  - The pose of the sensor for that frame. Please ensure that the sensor poses and the point cloud files use the same coordinate frame of reference.
- In the sensor metadata, each sensor has to be defined, and the intrinsics of each camera sensor are expected.
Note: The following values are just placeholders; actual values are expected in the request.
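The structure above can be sketched as a Python dict. The top-level keys (`data`, `frames`, `sensor_meta`, `frame_id`, `ego_pose`, `sensors`) follow this section; the field names inside each sensor entry (`sensor_id`, `data_url`, `sensor_pose`, `heading`, and the sensor-meta fields) are illustrative assumptions, not the authoritative schema — check the JSON template for the exact names.

```python
# Sketch of a job payload, assuming the key names described in this section.
# All values are placeholders; field names inside sensor entries are assumed.
payload = {
    "reference_id": "frame-000001",              # unique identifier of the job
    "batch_id": "batch-id-placeholder",
    "work_flow_id": "workflow-id-placeholder",
    "data": {
        "frames": [
            {
                "frame_id": "frame-000001",
                # Pose of a fixed point on the ego vehicle in the world frame:
                # position (x, y, z) and orientation quaternion (w, x, y, z).
                "ego_pose": {
                    "position": {"x": 0.0, "y": 0.0, "z": 0.0},
                    "heading": {"w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0},
                },
                "sensors": [
                    {
                        "sensor_id": "lidar",
                        # URL to the media, accessible by our servers
                        "data_url": "https://example.com/pointclouds/000001.pcd",
                        # Must share a coordinate frame with the point cloud
                        "sensor_pose": {
                            "position": {"x": 0.0, "y": 0.0, "z": 0.0},
                            "heading": {"w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0},
                        },
                    }
                ],
            }
        ],
        "sensor_meta": [
            {"id": "lidar", "name": "lidar", "modality": "lidar"}
        ],
    },
}
```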
Create a job
`POST https://api.playment.io/v1/projects/:project_id/jobs`
This endpoint allows you to create a job.
Path Parameters
Name | Type | Description |
---|---|---|
project_id | string | ID of the project in which you want to create the job |
Headers
Name | Type | Description |
---|---|---|
x-api-key | string | API key for authentication |
Request Body
Name | Type | Description |
---|---|---|
batch_id | string | A batch is a way to organize multiple jobs under one batch |
work_flow_id | string | The ID of the workflow inside which you want to create the job |
data | object | The payload containing frames and sensor metadata (see Payload definition below) |
reference_id | string | The unique identifier of the job |
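Putting the path parameter, header, and request body together, a request can be built with the standard library. The endpoint and headers come from the tables above; the `project_id`, API key, and body values are placeholders you must replace.

```python
import json
import urllib.request

# Placeholder IDs; substitute your real project, batch, and workflow IDs.
project_id = "your-project-id"
url = f"https://api.playment.io/v1/projects/{project_id}/jobs"

body = {
    "reference_id": "frame-000001",
    "batch_id": "your-batch-id",
    "work_flow_id": "your-workflow-id",
    "data": {"frames": [], "sensor_meta": []},  # see Payload definition below
}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "x-api-key": "your-api-key",  # authentication header from the table above
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to actually send the job
```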
Payload
Payload definition
Key | Description | Type |
---|---|---|
data | Contains lists of frames and sensor metadata | object |
sensor_meta | Contains a list of all the sensors, each with metadata. If the sensor is a camera, you can add the camera intrinsics. If this key doesn't exist or is null, the tool will assume … If the … If the … The remaining coefficients can be ignored or be assigned a value of 0. OpenCV reference: Camera calibration and 3D reconstruction | array |
frames | List of frames, each for a particular timestamp, in the order of annotation | array |
frame_id | Unique identifier of the particular frame | string |
ego_pose | Contains the pose of a fixed point on the ego vehicle in the world frame of reference, in the form of a position (x, y, z) and an orientation quaternion (w, x, y, z). If the pose of the ego vehicle is available in the world frame of reference, the tool can allow annotators to mark objects as stationary and toggle APC (aggregated point cloud) mode. Usually, if a vehicle is equipped with an IMU or odometry sensor, it is possible to get the pose of the ego vehicle in the world frame of reference. | object |
sensors | List of all the sensors associated with this particular frame | array |
Please share point clouds in ASCII-encoded PCD format. If you are sharing the ego_pose and sensor_pose in the world frame of reference, then the points in the PCD file should also be in the world frame of reference.
PCD format specification: https://pcl.readthedocs.io/projects/tutorials/en/latest/pcd_file_format.html
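Since the poses and the PCD points must share one coordinate frame, a common preprocessing step is to transform sensor-frame points into the world frame using a pose given as a position and a (w, x, y, z) quaternion, then write the result as an ASCII PCD. This is a minimal sketch assuming that pose layout; the `heading` field name is an assumption.

```python
def rotate(q, p):
    """Rotate point p = (x, y, z) by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    px, py, pz = p
    # q * (0, p) * q_conjugate, expanded into the rotation-matrix form
    rx = (1 - 2*(y*y + z*z))*px + 2*(x*y - w*z)*py + 2*(x*z + w*y)*pz
    ry = 2*(x*y + w*z)*px + (1 - 2*(x*x + z*z))*py + 2*(y*z - w*x)*pz
    rz = 2*(x*z - w*y)*px + 2*(y*z + w*x)*py + (1 - 2*(x*x + y*y))*pz
    return (rx, ry, rz)

def to_world(pose, points):
    """Apply rotation then translation: p_world = R(q) @ p + t."""
    t = pose["position"]
    q = (pose["heading"]["w"], pose["heading"]["x"],
         pose["heading"]["y"], pose["heading"]["z"])
    out = []
    for p in points:
        rx, ry, rz = rotate(q, p)
        out.append((rx + t["x"], ry + t["y"], rz + t["z"]))
    return out

def write_ascii_pcd(path, points):
    """Write (x, y, z) points as an ASCII-encoded PCD v0.7 file."""
    with open(path, "w") as f:
        f.write("# .PCD v0.7 - Point Cloud Data file format\n")
        f.write("VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\nCOUNT 1 1 1\n")
        f.write(f"WIDTH {len(points)}\nHEIGHT 1\nVIEWPOINT 0 0 0 1 0 0 0\n")
        f.write(f"POINTS {len(points)}\nDATA ascii\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

The same `to_world` step applies whether the pose is a sensor_pose (sensor frame to world) or an ego_pose composed with a sensor extrinsic.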
Creating jobs with pre-labeled data
If you have data that has already been labeled, whether by an ML model or by human labelers, you can create jobs with those labels pre-filled. To do this, send the annotation data in the data.maker_response key of the payload. The annotation data needs to be in our annotation format.
Here's an example:
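As a rough sketch of where pre-labels sit in the payload: they go under data.maker_response, as named in this section. The nesting inside maker_response shown here is an assumption for illustration; the real structure of each annotation_object must follow our annotation format.

```python
# Illustrative placement of pre-labels under data.maker_response.
# The inner structure is assumed; use the documented annotation format.
prelabeled_data = {
    "frames": [],        # as in the payload definition above
    "sensor_meta": [],
    "maker_response": {
        "annotations": [
            # one entry per annotation_object, in our annotation format
        ]
    },
}
```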
You can check the structure for the various annotation_object types below: