Dynamic choices in Search Page Ranking template + ml backend

Hey everybody. I want to connect the Search Page Ranking template to a backend search service, but nothing shows up. I have read the documentation, looked at the examples, and tried many different ways to generate the JSON, but the result is never displayed in the template. I'm trying to run code that phind.com generated for me; please point out my mistakes. Working with dynamic selection is not well covered.

<View>
  <View style="margin:5px;      width:575px;      border-radius:30px;      border:1px solid #dcdcdc;             height:45px;      width:500px;      font-size:16px;             display: flex;             justify-content: center;             padding: 8px;      outline: none;             background-image: url('https://htx-pub.s3.amazonaws.com/samples/google-search-magnifying-glass-icon-5.jpeg');             background-position: left center;             background-size: 24px;             background-repeat: no-repeat;             background-origin: content-box;                ">
    <Text name="text" value="$query"/>
  </View>
  <View className="dynamic_choices">
    <Choices name="dynamic_choices" toName="text" selection="checkbox" value="$options" layout="vertical" choice="multiple" allowNested="true"/>
  </View>
  <Style>
  .searchresultsarea {
    margin-left: 10px;
    font-family: 'Arial';
  }
  .searchresult {
    margin-left: 8px;
  }
  .searchresult h2 {
    font-size: 19px;
    line-height: 18px;
    font-weight: normal;
    color: rgb(29, 1, 189);
    margin-bottom: 0px;
    margin-top: 25px;
  }
  .searchresult a {
    font-size: 14px;
    line-height: 14px;
    color: green;
    margin-bottom: 0px;
  }
  .searchresult button {
    font-size: 10px;
    line-height: 14px;
    color: green;
    margin-bottom: 0px;
    padding: 0px;
    border-width: 0px;
    background-color: white;
  }
  </Style>
</View>
from typing import Dict, List, Optional

from label_studio_ml.model import LabelStudioMLBase


class Search(LabelStudioMLBase):

    def setup(self):
        self.set("model_version", f'{self.__class__.__name__}-v0.0.1')


    def predict(self, tasks: List[Dict], context: Optional[Dict] = None, **kwargs) -> List[Dict]:
        predictions = []
        
        for task in tasks:
            print(">> task = ", task)
            # Get the search query from the task
            query = task.get('data', {}).get('query', '')
            print(">> query = ", query)
            # The real search logic would go here;
            # static data is used for demonstration
            mock_results = self._generate_mock_results(query)
            
            prediction = {
                'task_id': task['id'],
                'predictions': [{
                    'result': mock_results,
                    'model_version': self.model_version,
                    'score': 1.0
                }]
            }
            predictions.append(prediction)
        
        return predictions

    def _generate_mock_results(self, query: str) -> List[Dict]:
        # In a real application this should integrate with a search engine
        return [
            {
                "html": "<div class='searchresultsarea'><div class='searchresult'><h2>Result 1</h2><a href='#'>Link 1</a></div></div>",
                "value": "result1"
            },
            {
                "html": "<div class='searchresultsarea'><div class='searchresult'><h2>Result 2</h2><a href='#'>Link 2</a></div></div>",
                "value": "result2"
            }
        ]

Hello, by design it's not possible to generate dynamic choices as part of annotations. They should be predefined in tasks, and the ML backend can't send them as predictions.

So, you have to import the initial tasks with the field "options":

{
  "query": "my text",
  "options": [ … ]
}

I’m trying to replicate this pattern, but with the backend part connected. Why is this not possible if the template is designed to accomplish this task?

Please check the Example data section there: the options field is part of the task, not of the annotation.

You will be able to use checkboxes to label the options as part of annotations, but the options themselves have to be predefined at the task import step.
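For example, a minimal tasks file for import could look like this (the queries and option values below are only placeholders to illustrate the structure, not real template data):

[
  {
    "query": "first query",
    "options": [
      { "value": "Result 1" },
      { "value": "Result 2", "html": "<div class='searchresult'><h2>Result 2</h2></div>" }
    ]
  },
  {
    "query": "second query",
    "options": [
      { "value": "Result A" },
      { "value": "Result B" }
    ]
  }
]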

I tried doing it this way, but it didn’t work

class Search(LabelStudioMLBase):
    def setup(self):
        # Set the model version
        self.set("model_version", f"{self.__class__.__name__}-v0.0.1")

    def _generate_mock_results(self, query):
        return [
            { "value": "Do or doughnut. There is no try.", "html": "<img src='https://labelstud.io/images/logo.png'>" },
            { "value": "Do or do not. There is no trial.", "html": "<h1>You can use hypertext here</h1>" },
            { "value": "Do or do not. There is no try." },
            { "value": "Duo do not. There is no try." }
        ]

    def predict(self, tasks: List[Dict], context: Optional[Dict] = None, **kwargs) -> List[Dict]:
        predictions = []
        
        for task in tasks:
            print(">> task = ", task)
            # Get the search query from the task
            query = task.get('data', {}).get('query', '')
            print(">> query = ", query)
            # The real search logic would go here;
            # static data is used for demonstration
            mock_results = self._generate_mock_results(query)

            prediction = {
                'task_id': task['id'],
                'predictions': [{"query": "my text",
                                 "options": mock_results}]}
            predictions.append(prediction)

        return predictions

You are trying to set options as part of predictions, but they must be inside tasks. This can't be done from the ML backend, because the ML backend cannot modify tasks. You can set the options at the import step only, or use PATCH api/tasks/<id> if your tasks have already been created.
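For the PATCH route, a rough sketch in Python might look like this (the URL, token, task id, and option values are placeholders for your own instance):

import requests

LABEL_STUDIO_URL = "http://localhost:8080"  # placeholder: your Label Studio URL
API_TOKEN = "YOUR_API_TOKEN"                # placeholder: your access token
TASK_ID = 123                               # placeholder: an existing task id

# Update the task data so that the "options" field holds the search results
payload = {
    "data": {
        "query": "my text",
        "options": [
            {"value": "Result 1"},
            {"value": "Result 2"},
        ],
    }
}

response = requests.patch(
    f"{LABEL_STUDIO_URL}/api/tasks/{TASK_ID}",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json=payload,
)
response.raise_for_status()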

If you already know the number of search results, e.g., 5, you can use this labeling configuration:

<View>
  <Header value="Text" />
  <Text name="text" value="$text" />

  <Header value="Search results" />
  
  <TextArea name="result1" toName="text"
            showSubmitButton="true" maxSubmissions="1" editable="true"
            required="true" />
  
  <TextArea name="result2" toName="text"
            showSubmitButton="true" maxSubmissions="1" editable="true"
            required="true" />

  <TextArea name="result3" toName="text"
            showSubmitButton="true" maxSubmissions="1" editable="true"
            required="true" />

  <TextArea name="result4" toName="text"
            showSubmitButton="true" maxSubmissions="1" editable="true"
            required="true" />

  <TextArea name="result5" toName="text"
            showSubmitButton="true" maxSubmissions="1" editable="true"
            required="true" />

</View>

When you use TextArea, you can populate it with texts on the ML backend side:

[
    {
        "value": {
            "text": [
                "test1"
            ]
        },
        "meta": {
            "lead_time": 1.539
        },
        "id": "RR-6BtPQXd",
        "from_name": "result1",
        "to_name": "text",
        "type": "textarea"
    },
    {
        "value": {
            "text": [
                "test2"
            ]
        },
        "meta": {
            "lead_time": 1.134
        },
        "id": "SA_9fJbgBU",
        "from_name": "result2",
        "to_name": "text",
        "type": "textarea"
    },
    {
        "value": {
            "text": [
                "test3"
            ]
        },
        "meta": {
            "lead_time": 1.369
        },
        "id": "4T0mT9aMaz",
        "from_name": "result3",
        "to_name": "text",
        "type": "textarea"
    },
    {
        "value": {
            "text": [
                "test4"
            ]
        },
        "meta": {
            "lead_time": 1.01
        },
        "id": "bdQV4UBtJk",
        "from_name": "result4",
        "to_name": "text",
        "type": "textarea"
    },
    {
        "value": {
            "text": [
                "test5"
            ]
        },
        "meta": {
            "lead_time": 0.978
        },
        "id": "ESsxCO7pcZ",
        "from_name": "result5",
        "to_name": "text",
        "type": "textarea"
    }
]

A possible ML backend might look like this:

class Search(LabelStudioMLBase):

    def setup(self):
        self.set("model_version", f'{self.__class__.__name__}-v0.0.1')


    def predict(self, tasks: List[Dict], context: Optional[Dict] = None, **kwargs) -> List[Dict]:
        predictions = []
        
        mock_results = [
          {
              "value": {
                  "text": [
                      "test1"
                  ]
              },
              "meta": {
                  "lead_time": 1.539
              },
              "id": "RR-6BtPQXd",
              "from_name": "result1",
              "to_name": "text",
              "type": "textarea"
          },
          {
              "value": {
                  "text": [
                      "test2"
                  ]
              },
              "meta": {
                  "lead_time": 1.134
              },
              "id": "SA_9fJbgBU",
              "from_name": "result2",
              "to_name": "text",
              "type": "textarea"
          },
          {
              "value": {
                  "text": [
                      "test3"
                  ]
              },
              "meta": {
                  "lead_time": 1.369
              },
              "id": "4T0mT9aMaz",
              "from_name": "result3",
              "to_name": "text",
              "type": "textarea"
          },
          {
              "value": {
                  "text": [
                      "test4"
                  ]
              },
              "meta": {
                  "lead_time": 1.01
              },
              "id": "bdQV4UBtJk",
              "from_name": "result4",
              "to_name": "text",
              "type": "textarea"
          },
          {
              "value": {
                  "text": [
                      "test5"
                  ]
              },
              "meta": {
                  "lead_time": 0.978
              },
              "id": "ESsxCO7pcZ",
              "from_name": "result5",
              "to_name": "text",
              "type": "textarea"
          }
        ]

        for task in tasks:
            prediction = {
                'task_id': task['id'],
                'predictions': [{
                    'result': mock_results,
                    'model_version': self.model_version,
                    'score': 1.0
                }]
            }
            predictions.append(prediction)
        
        return predictions

I realized that passing options in the prediction doesn't make sense, thanks. The thing is, I need to display a varying number of Choices and change the content of the displayed Choices per task. Is it possible to do this? I tried, but the result is not displayed, although I don't get any error. I assumed that dynamic Choices exist precisely to generate lists with a varying number of items.

class LangchainSearchAgent(LabelStudioMLBase):
    def setup(self):
        # Set the model version
        self.set("model_version", f"{self.__class__.__name__}-v0.0.1")

    def _search_results(self, query):
        return [
            { "value": "Do or doughnut. There is no try.", "html": "<img src='https://labelstud.io/images/logo.png'>" },
            { "value": "Do or do not. There is no trial.", "html": "<h1>You can use hypertext here</h1>" },
            { "value": "Do or do not. There is no try." },
            { "value": "Duo do not. There is no try." }
        ]

    def predict(self, tasks: List[Dict], context: Optional[Dict] = None, **kwargs) -> List[Dict]:
        # Get the search query from the task
        query = tasks[0].get('data', {}).get('query', '')
        print(">> query = ", query)
        # The real search logic would go here;
        # static data is used for demonstration
        search_results = self._search_results(query)

        prediction = {
            'task_id': tasks[0]['id'],
            "query": query,
            "options": search_results}
        return prediction

The results are not displayed.

I use Label Studio 1.14.0.