How to post an image with a segmentation mask in RLE format to a Label Studio project via the API?

I use MobileSAM to automatically generate masks, and I call the `image2annotation` function from label-studio-converter (label_studio_converter at master · HumanSignal/label-studio-converter · GitHub) to upload the image together with the MobileSAM mask. The steps are shown below:

This is the raw image:

Here is the image with the MobileSAM mask overlaid:

And here is how the image looks in Label Studio after the API post:

The uploaded mask looks wrong.

Sharing my code:

Use MobileSAM to generate masks on the given picture

```python
mask_generator_2 = SamAutomaticMaskGenerator(
    points_per_side=3,        # points sampled per side of the image (N per side forms an N x N grid)
    min_mask_region_area=100, # requires OpenCV to run post-processing
    output_mode='binary_mask' # can be 'binary_mask', 'uncompressed_rle', or 'coco_rle'; 'coco_rle' requires pycocotools
)
masks2 = mask_generator_2.generate(image_rgb)
```

Stack the SAM masks on the image

```python
def export_mask(anns, image_rgb):
    if len(anns) == 0:
        return image_rgb
    # anns = masks2
    sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)

    img = np.ones((sorted_anns[0]['segmentation'].shape[0], sorted_anns[0]['segmentation'].shape[1], 4))
    img[:, :, 3] = 0
    for ann in sorted_anns:
        m = ann['segmentation']
        color_mask = np.concatenate([np.random.random(3), [0.35]])
        img[m] = color_mask
    # Overlay the mask image on the original image
    combined_img = image_rgb.copy()
    mask = img[:, :, 3] > 0
    combined_img[mask] = combined_img[mask] * (1 - img[mask, 3, None]) + img[mask, :3] * img[mask, 3, None]
    return combined_img
    # return img

img = export_mask(masks2, image_rgb)
if img is not None:
    plt.imsave('output.png', img)
```

Convert the image with mask to RLE and JSON format

```python
from label_studio_converter.brush import image2annotation

result = image2annotation(
    path='output.png',
    label_name='Cat',
    from_name='tag',
    to_name='image',
    model_version='SamAutomaticMaskGenerator',
    score=masks2[0]['predicted_iou'],
)
```

Create the JSON payload

```python
def create_data_dict(image_path, annotations=None, predictions=None):
    if annotations is None:
        annotations = []
    if predictions is None:
        predictions = []

    data_dict = {
        "data": {
            "image": image_path
        },
        "annotations": annotations,
        "predictions": predictions
    }

    return data_dict
```

Post the image with mask to Label Studio

```python
import json
import requests

data_dict = create_data_dict(image_path='/data/local-files/?d=data/6.jpg', predictions=[result])
test = json.dumps(data_dict, indent=4)
with open('test.json', 'w') as file:
    file.write(test)

hostname = 'http://localhost:8083/'
api_token = {"Content-Type": "application/json", "Authorization": "Token 65ef79347799b2e7c3891e9b5a15f713c809e37b"}
project_id = 1
api_url = f'{hostname}api/projects/{project_id}/import'
response = requests.post(api_url, headers=api_token, data=json.dumps(data_dict, indent=4))
```

What version of Label Studio you're using: 1.12.0
How you installed Label Studio (for example, pip, brew, Docker, etc.).

Hey @Leo, thanks for the question.

  1. Check the Mask Generation and Export:
    Ensure that the mask generated by MobileSAM is correctly formatted and aligned with the original image. The mask should be in the same dimensions as the original image.
  2. Verify the RLE Conversion:
    The image2annotation function from label_studio_converter.brush should correctly convert the mask to RLE format. Ensure that the RLE data is correctly generated and matches the expected format.
  3. Label Studio Configuration:
    Ensure that your Label Studio project configuration matches the expected input format. The from_name and to_name attributes in your labeling configuration should match those in your prediction JSON.
  4. Debugging the JSON Payload:
    Can you share the JSON for the annotations you get?