1 parent dd7c125 commit 0e910ae
docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
@@ -208,7 +208,7 @@ increases the speed of {infer} requests. The value of this setting must not
 exceed the number of available allocated processors per node.
 
 You can view the allocation status in {kib} or by using the
-{ref}/get-trained-models-stats.html[get trained model stats API]. If you to
+{ref}/get-trained-models-stats.html[get trained model stats API]. If you want to
 change the number of allocations, you can use the
 {ref}/update-trained-model-deployment.html[update trained model stats API] after
 the allocation status is `started`.
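The passage touched by this commit describes checking a deployment's allocation status and then updating its allocation count. As a rough sketch of what those two requests might look like in {kib} Console syntax (the model ID `my-model` and the allocation count are illustrative placeholders, not values from this commit):

```
GET _ml/trained_models/my-model/_stats

POST _ml/trained_models/my-model/deployment/_update
{
  "number_of_allocations": 4
}
```

The stats response includes the deployment's `allocation_status`; per the docs text above, the update request should only be issued once that status is `started`.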