Troubleshooting API Calls to Retrieve ML Backend Predictions

Charlie Batchelor: Hi - I have a working ML Backend for my LS project, and I can retrieve predictions via the UI. However, if I make a POST request to try to achieve the same result, I get a bogus prediction with a null result tied to the task, and the task subsequently fails to load. The request I’m making looks like:

curl -v -X POST "http://localhost:8080/api/predictions" \
  -H 'Authorization: Token <token>' \
  -H 'Content-Type: application/json' \
  -d '{"task": "<task_id>", "model_version": "<version>"}'

And the response is:

{"id":703,"model_version":"1688068863","created_ago":"0 minutes","result":null,"score":null,"cluster":null,"neighbors":null,"mislabeling":0.0,"created_at":"2023-07-04T11:46:55.497033Z","updated_at":"2023-07-04T11:46:55.497052Z","task":6731}

Is my request badly formed? Any help with this would be much appreciated!

Chris Hoge (HumanSignal): I’m not sure if you’ve dug in deeper. I know there’s an issue with the ML backend where the model ID might be updated without your knowledge, which can cause problems: if the model prediction ID doesn’t match the one you requested, it returns a null result. This is something that’s on the roadmap to improve.

Charlie Batchelor: Hi @Chris Hoge (HumanSignal) - thanks for this! The solution turned out to be a little simpler than that. I marked my comment on it (helping another user) here.

Regarding the IDs, LS seems to assign a unique ID to each task across the whole LS instance, i.e. across all projects. The task IDs I sent were fine, and supplying the model version alongside them got me all the predictions I wanted via a Python script 🙂
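For anyone following along, a script like that might look roughly like the sketch below. This is only an illustration, not code from the thread: it assumes the stdlib urllib, that GET /api/predictions accepts a `task` query parameter to list a task's predictions, and placeholder host/token values you would replace with your own.

```python
import json
import urllib.request

def build_prediction_request(host, token, task_id):
    # Assumption: GET /api/predictions?task=<id> lists predictions for a task.
    url = f"{host}/api/predictions?task={task_id}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Token {token}"},
    )

def fetch_predictions(host, token, task_id):
    # Perform the request and decode the JSON body.
    req = build_prediction_request(host, token, task_id)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (hypothetical host, token, and task ID):
# predictions = fetch_predictions("http://localhost:8080", "<token>", 6731)
```

Note that this reads existing predictions rather than POSTing new ones, which sidesteps the null-result issue described above.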

Note: This post was generated by the Label Studio Archive Bot from a conversation in the Label Studio Slack, a gathering place for the Label Studio community. Someone in the community thought this was worth sharing!

If this post answered a question for you, hit the Like button - we use that to assess which posts to put into docs.
