Label Studio ML backend - Problem with processing

Hello everyone. I will briefly describe the problem. An OpenCV-based model is enough for my task: I want to separate the object from the white background and turn the remaining contour into a polygon segmentation. But I ran into a problem. The algorithm itself works (I have a separate script in C++), but when I rewrite that same algorithm for the ML backend, the interface shows not a contour but just a line in the upper-left corner of the screen. I'm new to this and would really appreciate some help.
I will attach the model.py script below; maybe it will help. I would be grateful for examples and for any attention to my problem.

Here is my code:

Hello,

It seems that the issue you’re experiencing is due to how the polygon annotations are being generated and formatted for Label Studio. Specifically:

  1. Coordinate Scaling: Label Studio expects polygon coordinates as percentages between 0 and 100 relative to the original image dimensions. In your code, the coordinates might be between 0 and 1 or not properly scaled, causing the polygon to appear as a line in the upper-left corner (see the short sketch after this list).

  2. Annotation Format: The annotation dictionary must follow Label Studio’s expected format, including the correct type and value keys.

  3. Missing Labels: Each polygon annotation needs to include a label from your labeling configuration.

  4. Resizing Images: Resizing images can lead to discrepancies in coordinate calculations. It’s better to use the original image dimensions.
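
As a quick illustration of point 1, here is what the percentage conversion looks like for a single made-up pixel coordinate (a minimal sketch, not tied to your model.py):

width, height = 1920, 1080   # original image size in pixels
x_px, y_px = 480, 270        # one contour point in pixel coordinates

x_pct = x_px / width * 100   # 25.0 -- the range Label Studio expects
y_pct = y_px / height * 100  # 25.0
print([x_pct, y_pct])        # [25.0, 25.0]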

Here’s how you can modify your code to address these issues:

1. Use Original Image Dimensions

Update your process_image method to use the original image dimensions:

def process_image(self, image_path: str) -> Tuple[np.ndarray, int, int]:
    """Generate binary mask for segmentation using OpenCV."""
    logger.debug(f"Loading image from path: {image_path}")
    image = cv2.imread(image_path)
    if image is None:
        raise ValueError(f"Failed to load image from {image_path}")

    height, width = image.shape[:2]
    
    # Convert image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    
    # Apply a binary threshold to separate the object from a white background
    _, binary_mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
    
    # Perform morphological operations to clean up the mask
    kernel = np.ones((5, 5), np.uint8)
    cleaned_mask = cv2.morphologyEx(binary_mask, cv2.MORPH_CLOSE, kernel)
    
    logger.debug("Binary mask generated successfully using OpenCV.")
    return cleaned_mask, width, height
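
If the fixed threshold of 240 ever proves brittle (for example, slightly off-white backgrounds), one optional alternative, not required for the fix itself, is to let Otsu's method pick the threshold automatically:

# Optional alternative to the fixed 240 threshold: Otsu chooses the value itself
_, binary_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)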

2. Correctly Scale Coordinates and Include Labels

Update generate_polygon_annotations to scale coordinates between 0 and 100 and include the necessary labels:

def generate_polygon_annotations(self, mask: np.ndarray, width: int, height: int) -> List[Dict]:
    """Convert mask to polygon annotations for Label Studio."""
    logger.debug(f"Generating annotations for image with size {width}x{height}")
    
    # Find contours in the mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    logger.debug(f"Found {len(contours)} contours in mask")

    annotations = []
    
    for contour in contours:
        if len(contour) < 3:
            continue  # Need at least 3 points for a valid polygon
        # Reshape contour to (n_points, 2)
        polygon = contour.squeeze()
        # Ensure the contour has the correct shape
        if len(polygon.shape) != 2 or polygon.shape[1] != 2:
            continue
        # Calculate percentage coordinates
        relative_points = [[(x / width) * 100, (y / height) * 100] for x, y in polygon]
        
        annotation = {
            "id": str(uuid4())[:8],
            "from_name": "label",
            "to_name": "image",
            "original_width": width,
            "original_height": height,
            "image_rotation": 0,
            "type": "polygonlabels",
            "value": {
                "points": relative_points,
                "polygonlabels": ["Object"]  # Replace 'Object' with your actual label
            }
        }
        annotations.append(annotation)
    return annotations
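
If the raw contours contain more points than you want to edit in Label Studio, you could optionally simplify them before scaling. This is an extra step, not part of your original model.py; the helper name simplify_contour is just for illustration:

import cv2
import numpy as np

def simplify_contour(contour: np.ndarray, frac: float = 0.01) -> np.ndarray:
    """Reduce the number of contour points with the Douglas-Peucker algorithm.

    `frac` is the tolerance as a fraction of the contour perimeter; larger
    values drop more points.
    """
    epsilon = frac * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, epsilon, True)

Inside the loop above you would then use polygon = simplify_contour(contour).squeeze() instead of contour.squeeze().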

3. Update the predict Method

Pass the image dimensions and use the updated methods:

def predict(self, tasks: List[Dict], **kwargs) -> ModelResponse:
    predictions = []
    
    for task in tasks:
        image_url = task['data'].get('image')
        if not image_url:
            logger.error("Image URL not found in task data")
            continue
        
        try:
            image_path = self.get_local_path(image_url, task_id=task['id'])
            logger.debug(f"Resolved image path: {image_path}")
            
            mask, width, height = self.process_image(image_path)
            annotations = self.generate_polygon_annotations(mask, width, height)
            
            if annotations:
                prediction = {
                    "model_version": self.get("model_version"),
                    "result": annotations
                }
                predictions.append(prediction)
            else:
                logger.warning(f"No annotations generated for task {task['id']}")

        except Exception as e:
            logger.error(f"Error processing task {task['id']}: {e}")
    
    return ModelResponse(predictions=predictions)
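
Optionally, a prediction can also carry an overall confidence. If you want that, the prediction dict built above may include a score key (optional; the polygons render without it):

prediction = {
    "model_version": self.get("model_version"),
    "score": 1.0,   # optional overall confidence for this prediction
    "result": annotations
}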

4. Ensure Labeling Configuration Matches

In Label Studio, your labeling configuration should be:

<View>
  <Image name="image" value="$image"/>
  <PolygonLabels name="label" toName="image">
    <Label value="Object" background="red"/>
  </PolygonLabels>
</View>
  • 'from_name' in your annotations should match the name of the <PolygonLabels> tag ("label").
  • 'to_name' should match the name of the <Image> tag ("image").
  • The label "Object" should be the actual label you intend to use.
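
If you prefer not to hardcode "Object", you can read the configured labels from the parsed labeling configuration instead. A minimal sketch, assuming the config above and the standard parsed_label_config attribute of LabelStudioMLBase:

# Inside your model class: pick the first configured polygon label.
label_config = self.parsed_label_config.get("label", {})
labels = label_config.get("labels", [])
object_label = labels[0] if labels else "Object"   # fall back to a fixed name

You could then use object_label in place of "Object" when building the annotation value.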

5. Double-Check Annotation Format

Each annotation should have:

  • id: A unique identifier.
  • from_name, to_name: Matching the names in your labeling configuration.
  • original_width, original_height: The actual dimensions of the image.
  • type: "polygonlabels".
  • value: Contains points in percentage (0-100) and polygonlabels.

Example Annotation

{
  "id": "a1b2c3d4",
  "from_name": "label",
  "to_name": "image",
  "original_width": 1920,
  "original_height": 1080,
  "image_rotation": 0,
  "type": "polygonlabels",
  "value": {
    "points": [[10.0, 20.0], [30.0, 40.0], ...],
    "polygonlabels": ["Object"]
  }
}

6. Avoid Resizing Images

By processing images at their original size, you ensure that the coordinates correspond correctly. Resizing can lead to mismatches between the image displayed in Label Studio and your calculated annotations.
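
If you ever do need to resize large images for speed, the safer approach is to undo the resize before computing percentages, so the points still refer to the original image. A minimal sketch with a hypothetical helper (to_percent_points is not part of your model.py):

def to_percent_points(contour, scale, orig_w, orig_h):
    """Map contour points found on a resized copy back to percentages of the
    ORIGINAL image size. `scale` is the resize factor used to shrink the image."""
    points = []
    for x, y in contour.squeeze():
        x_orig = x / scale   # undo the resize
        y_orig = y / scale
        points.append([x_orig / orig_w * 100, y_orig / orig_h * 100])
    return points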

7. Debugging Tips

  • Visualize the Mask and Contours: Before generating annotations, you can save and inspect the mask and the contours to ensure they’re as expected.

    cv2.imwrite("mask.png", mask)
    for i, contour in enumerate(contours):
        cv2.drawContours(image, [contour], -1, (0, 255, 0), 2)
    cv2.imwrite("contours.png", image)
    
  • Check the Points: Print out the relative_points to ensure they are within 0 to 100 (see the small check after this list).

  • Logs: Review your logs for any warnings or errors during processing.
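
For that point check, a quick temporary sanity check inside generate_polygon_annotations could look like this (a sketch; remove it once things work):

# Temporary sanity check: every polygon point must land inside 0..100.
for x, y in relative_points:
    if not (0 <= x <= 100 and 0 <= y <= 100):
        logger.warning(f"Point out of range: ({x}, {y})")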

8. Ensure Correct Data in Tasks

Make sure that your tasks have the 'image' key in the 'data' dictionary. For example:

{
  "id": 1,
  "data": {
    "image": "http://example.com/path/to/image.jpg"
  }
}

Conclusion

By making these changes, your model should generate polygon annotations that display correctly in Label Studio. Remember to replace "Object" with your actual label names.

Thank you very much; you have greatly advanced my understanding of this problem. I'm grateful to you.
