Create a job
POST https://api.playment.io/v1/projects/:project_id/jobs

This endpoint allows you to create a job.
Path Parameters

project_id (string): ID of the project in which you want to create the job.
Headers
x-api-key
string
API key for authentication
Request Body

batch_id (string): A batch is a way to organize multiple jobs under one batch_id. You can create new batches from the dashboard or by using the batch creation API. If batch_id is left empty or the key is not present, the job is created in the Default batch in your project.

work_flow_id (string): The ID of the workflow inside which you want to create the job.

data (object): The data object contains all the information and attachments required to label a job. The data object is defined below.

reference_id (string): The unique identifier of the job. A sample request body is sketched below.
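For orientation, here is a minimal sketch of a request body with these four keys. All values are hypothetical placeholders; the structure of data is detailed in the Payload section below.

```python
# Sketch of a job-creation request body (all values are placeholders).
job_body = {
    "reference_id": "frame-0001",      # unique identifier for this job
    "batch_id": "my-batch-id",         # optional; omit to use the Default batch
    "work_flow_id": "my-workflow-id",  # workflow to create the job in
    "data": {
        "sensor_data": {}              # described in the Payload section below
    },
}
```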
Payload

data.sensor_data (Object)
Contains a sensors list, an ego_pose object, and a sensor_meta list.
data.sensor_data.sensor_meta (List)
Contains a list of all the sensors, each with metadata such as:
- id: ID of the sensor
- name: name of the sensor
- modality: lidar / camera

If the sensor is a camera, you can add the camera intrinsics as well as the camera_model. These values are used along with the sensor_pose to create projections between sensors.
- camera_model: one of brown_conrady or fisheye. If this key doesn't exist or is null, the tool will assume brown_conrady.

The intrinsics object contains the following keys:
- cx: principal point x value
- cy: principal point y value
- fx: focal length in x-axis
- fy: focal length in y-axis
- k1, k2, k3, k4, k5, k6: radial distortion coefficients
- p1, p2: tangential distortion coefficients
- skew: camera skew coefficient
- scale_factor: the factor by which the image has been downscaled (for example, scale_factor will be 2 if the original image is twice as large as the downscaled image)

If the camera_model is brown_conrady, then the distortion coefficients should be one of the following combinations:
- k1, k2, p1, p2
- k1, k2, p1, p2, k3
- k1, k2, p1, p2, k3, k4, k5, k6

If the camera_model is fisheye, then the distortion coefficients should be the following combination:
- k1, k2, k3, k4

The remaining coefficients can be omitted or assigned a value of 0. A sample camera entry is sketched below.
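For illustration, here is a sketch of one camera entry in sensor_meta, assuming the intrinsics object nests directly inside the sensor entry and using the brown_conrady combination k1, k2, p1, p2; all numeric values are placeholders.

```python
# Hypothetical sensor_meta entry for a camera using the brown_conrady model.
camera_meta = {
    "id": "cam_front",
    "name": "Front camera",
    "modality": "camera",
    "camera_model": "brown_conrady",  # assumed brown_conrady if omitted or null
    "intrinsics": {                   # nesting assumed for illustration
        "cx": 960.0, "cy": 540.0,     # principal point
        "fx": 1200.0, "fy": 1200.0,   # focal lengths
        "k1": -0.10, "k2": 0.01,      # radial distortion
        "p1": 0.001, "p2": -0.002,    # tangential distortion
        "skew": 0.0,
        "scale_factor": 1.0,          # original image not downscaled
    },
}
```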
data.sensor_data.ego_pose (Object)
Contains the pose of a fixed point on the ego vehicle in the world frame of reference, in the form of a position (x, y, z) and an orientation quaternion (w, x, y, z).
If the pose of the ego vehicle is available in the world frame of reference, the tool can allow annotators to mark objects as stationary and to toggle APC (aggregated point cloud) mode.
Usually, if a vehicle is equipped with an IMU or odometry sensor, it is possible to get the pose of the ego vehicle in the world frame of reference.
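As a sketch, such an ego_pose object might look like the following; the position/orientation key names are an assumption for illustration, and the values are placeholders.

```python
# Hypothetical ego_pose: position plus orientation quaternion (w, x, y, z).
# Key names "position" and "orientation" are assumed for illustration.
ego_pose = {
    "position": {"x": 10.5, "y": -3.2, "z": 0.0},
    "orientation": {"w": 0.999, "x": 0.0, "y": 0.0, "z": 0.044},
}
```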
data.sensor_data.sensors (List)
List of all the sensors associated with this particular frame, each having:
- sensor_id: ID of the sensor. This is a foreign key to the sensor ID mentioned in the sensor_meta of the sequence data.
- data_url: a URL to the file containing the data captured from the sensor for this frame. To annotate lidar data, please share point clouds in ASCII-encoded PCD format.
- sensor_pose: the pose of the respective sensor in a common frame of reference.
  - If the ego_pose is available in the world frame of reference, you should specify the sensor_pose of individual sensors in the same world frame of reference. In such cases, the pose might change in every frame as the vehicle moves.
  - If the ego_pose is not available, then all sensor_pose values can be specified with respect to a fixed point on the vehicle. In such cases, the pose will not change between frames.

A sample entry is sketched below.
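Here is a sketch of one lidar entry in sensors; the URL is a placeholder, and the sensor_pose key names follow the same assumption as the ego_pose sketch above.

```python
# Hypothetical sensors entry for one lidar frame.
lidar_entry = {
    "sensor_id": "lidar_top",  # foreign key into sensor_meta
    "data_url": "https://example.com/frames/0001.pcd",  # ASCII-encoded PCD (placeholder URL)
    "sensor_pose": {  # pose key names assumed, as in the ego_pose sketch
        "position": {"x": 0.0, "y": 0.0, "z": 1.8},
        "orientation": {"w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0},
    },
}
```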
Please share point clouds in ASCII-encoded PCD format.
PCD format specification: https://pcl.readthedocs.io/projects/tutorials/en/latest/pcd_file_format.html
Visualizing intensity/reflectivity information
You can send additional data like intensity or reflectivity values for each point in the PCD file. This can help annotators segment reflective surfaces like lane markings.
You can refer to a sample PCD structure below.
Once you share the data in this format, annotators can view point-cloud colors based on the intensity.
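Here is a minimal sketch of such a PCD structure, following the ASCII format from the specification linked above, with an intensity field added per point (the point values are placeholders):

```
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 3
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 3
DATA ascii
1.20 0.51 0.13 0.83
4.75 -2.10 0.02 0.12
9.38 1.44 -0.07 0.95
```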
Helper Python script to create jobs
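As a stand-in for the attached script, here is a minimal sketch that creates a single job with the requests library. The endpoint and x-api-key header come from this page; all IDs and URLs are placeholders.

```python
import requests

API_KEY = "YOUR_API_KEY"        # x-api-key header value (placeholder)
PROJECT_ID = "YOUR_PROJECT_ID"  # placeholder
URL = f"https://api.playment.io/v1/projects/{PROJECT_ID}/jobs"

# Minimal single-frame lidar payload; all IDs and URLs are placeholders.
job = {
    "reference_id": "frame-0001",
    "work_flow_id": "YOUR_WORKFLOW_ID",
    "data": {
        "sensor_data": {
            "sensor_meta": [
                {"id": "lidar_top", "name": "Top lidar", "modality": "lidar"}
            ],
            "sensors": [
                {
                    "sensor_id": "lidar_top",
                    "data_url": "https://example.com/frames/0001.pcd",
                }
            ],
        }
    },
}

response = requests.post(URL, json=job, headers={"x-api-key": API_KEY})
response.raise_for_status()  # raise on 4xx/5xx errors
print(response.json())
```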