I use MobileSAM to automatically generate masks and call the function "image2annotation" from brush.py (label-studio-converter/label_studio_converter/brush.py at master · HumanSignal/label-studio-converter · GitHub) in order to upload the image with the MobileSAM mask. The steps are shown below:
This is the raw image:
And the image with the MobileSAM mask:
And the image uploaded to Label Studio via the API POST:
We found the mask looks weird.
Sharing my code:
Use MobileSAM to generate masks on the given picture
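For context, here is a minimal sketch of how mobile_sam and image_rgb could be set up, following the MobileSAM README; the checkpoint path, device, and image file name below are assumptions:
import cv2
from mobile_sam import sam_model_registry, SamAutomaticMaskGenerator

# Load the MobileSAM (ViT-Tiny) weights; the checkpoint path is an assumption
mobile_sam = sam_model_registry["vit_t"](checkpoint="./weights/mobile_sam.pt")
mobile_sam.to(device="cuda")  # or "cpu"
mobile_sam.eval()

# Read the image with OpenCV and convert BGR -> RGB, since SAM expects RGB input
image_bgr = cv2.imread("6.jpg")  # file name is an assumption
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)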
mask_generator_2 = SamAutomaticMaskGenerator(
    model=mobile_sam,
    points_per_side=3,  # points per side of the sampling grid (e.g. 32 gives a 32 x 32 grid, 1024 points in total)
    pred_iou_thresh=1,
    stability_score_thresh=0.97,
    crop_n_layers=1,
    crop_n_points_downscale_factor=2,
    min_mask_region_area=100,  # requires OpenCV to run post-processing
    output_mode='binary_mask',  # can be 'binary_mask', 'uncompressed_rle', or 'coco_rle'; 'coco_rle' requires pycocotools
)
masks2 = mask_generator_2.generate(image_rgb)
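Each entry in masks2 is a dict produced by SamAutomaticMaskGenerator; the keys used in the next step ('segmentation', 'area', 'predicted_iou') can be checked like this (a quick sanity check, not part of the original script):
print(len(masks2), "masks generated")
first = masks2[0]
print(sorted(first.keys()))  # includes 'segmentation', 'area', 'bbox', 'predicted_iou', 'stability_score'
print(first['segmentation'].shape, first['segmentation'].dtype)  # boolean H x W array matching image_rgb
print(first['area'], first['predicted_iou'])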
Stack the SAM masks on the image
import numpy as np
import matplotlib.pyplot as plt

def export_mask(anns, image_rgb):
    if len(anns) == 0:
        return image_rgb
    # anns = masks2
    sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
    img = np.ones((sorted_anns[0]['segmentation'].shape[0], sorted_anns[0]['segmentation'].shape[1], 4))
    img[:, :, 3] = 0
    for ann in sorted_anns:
        m = ann['segmentation']
        color_mask = np.concatenate([np.random.random(3), [0.35]])
        img[m] = color_mask
    # Overlay the mask image on the original image
    combined_img = image_rgb.copy()
    mask = img[:, :, 3] > 0
    combined_img[mask] = combined_img[mask] * (1 - img[mask, 3, None]) + img[mask, :3] * img[mask, 3, None]
    return combined_img
    # return img

img = export_mask(masks2, image_rgb)
if img is not None:
    plt.imsave('output.png', img)
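For debugging, one way to check what actually ends up in output.png before it is converted is to reload it and look at its shape and value range (a sketch added here, not part of the original script; the exact channel count depends on how matplotlib writes the PNG):
from PIL import Image
import numpy as np

saved = np.array(Image.open('output.png'))
# Shape may be (H, W, 3) or (H, W, 4) depending on how matplotlib wrote the PNG
print(saved.shape, saved.dtype, saved.min(), saved.max())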
Convert the image with mask to RLE format and JSON format
from label_studio_converter.brush import image2annotation

result = image2annotation(path='output.png', label_name='Cat', from_name='tag', to_name='image',
                          model_version='SamAutomaticMaskGenerator', score=masks2[0]['predicted_iou'])
result
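Here result is the prediction dict that is later attached to the task under predictions; a quick way to look at what image2annotation produced (illustrative only):
# Inspect the prediction dict returned by image2annotation
print(type(result))
print(list(result.keys()))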
Create the JSON payload
def create_data_dict(image_path, annotations=None, predictions=None):
    if annotations is None:
        annotations = []
    if predictions is None:
        predictions = []
    data_dict = {
        "data": {
            "image": image_path
        },
        "annotations": annotations,
        "predictions": predictions
    }
    return data_dict
Post the image with mask to Label Studio
import json
import requests

data_dict = create_data_dict(image_path='/data/local-files/?d=data/6.jpg', predictions=[result])
test = json.dumps(data_dict, indent=4)
with open('test.json', 'w') as file:
    file.write(test)

hostname = 'http://localhost:8083/'
api_token = {"Content-Type": "application/json", "Authorization": "Token 65ef79347799b2e7c3891e9b5a15f713c809e37b"}
project_id = 1
api_url = f'{hostname}api/projects/{project_id}/import'

response = requests.post(url=api_url, headers=api_token, data=json.dumps(data_dict, indent=4))
print(response)
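To confirm whether the import succeeded, it can also help to print the status code and response body returned by the import endpoint (a small debugging addition, not in the original script):
# Show the HTTP status and the body returned by the import endpoint
print(response.status_code)
print(response.text)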
What version of Label Studio you're using (for example, 1.10.0).
Version: 1.12.0
How you installed Label Studio (for example, pip, brew, Docker, etc.).
Docker