1 parent bf1edac commit a976c42
docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
@@ -152,7 +152,10 @@ increases the speed of {infer} requests. The value of this setting must not
 exceed the number of available allocated processors per node.
 
 You can view the allocation status in {kib} or by using the
-{ref}/get-trained-models-stats.html[get trained model stats API].
+{ref}/get-trained-models-stats.html[get trained model stats API]. If you want to
+change the number of allocations, you can use the
+{ref}/update-trained-model-deployment.html[update trained model deployment API] after
+the allocation status is `started`.
 
 [discrete]
 [[ml-nlp-test-inference]]
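As a hedged sketch of the workflow this change documents (the model ID and allocation count below are placeholders, not part of this commit), the two APIs can be called from the Kibana Dev Tools console roughly like this:

[source,console]
----
// Check the deployment's allocation status via the get trained model stats API.
GET _ml/trained_models/my-model-id/_stats

// Once "allocation_status" reports "started", change the number of allocations
// with the update trained model deployment API.
POST _ml/trained_models/my-model-id/deployment/_update
{
  "number_of_allocations": 2
}
----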