
Commit 460f789

szabosteve
authored
[DOCS] Adds autoscaling info to Anomaly detection at scale (elastic#1659)
Co-authored-by: Lisa Cawley <[email protected]>
Co-authored-by: David Roberts <[email protected]>
1 parent 9d75b29 commit 460f789

1 file changed

docs/en/stack/ml/anomaly-detection/anomaly-detection-scale.asciidoc

Lines changed: 15 additions & 2 deletions
@@ -24,7 +24,7 @@ require you to clone an existing job or create a new one.
 
 [discrete]
 [[node-sizing]]
-== 1. Consider node sizing and configuration
+== 1. Consider autoscaling, node sizing, and configuration
 
 An {anomaly-job} runs on a single node and requires sufficient resources to hold
 its model in memory. When a job is opened, it will be placed on the node with
@@ -54,6 +54,19 @@ Increasing the number of nodes will allow distribution of job processing as well
 as fault tolerance. If running many jobs, even small memory ones, then consider
 increasing the number of nodes in your environment.
 
+In {ecloud}, you can enable {ref}/xpack-autoscaling.html[autoscaling] so that
+the {ml} nodes in your cluster scale up or down based on current {ml}
+memory requirements. The {ecloud} infrastructure allows you to create
+{ml-jobs} up to the size that fits on the maximum node size that the
+cluster can scale to (usually somewhere between 58GB and 64GB) rather than what
+would fit in the current cluster. If you attempt to use autoscaling outside of
+{ecloud}, then set `xpack.ml.max_ml_node_size` to define the maximum possible
+size of a {ml} node. Creating {ml-jobs} with model memory limits larger than the
+maximum node size can support is not allowed, as autoscaling cannot add a node
+big enough to run the job. On a self-managed deployment, you can set
+`xpack.ml.max_model_memory_limit` according to the available resources of the
+{ml} node. This prevents you from creating jobs with model memory limits too
+high to open in your cluster.
 
 [discrete]
 [[dedicated-results-index]]
@@ -296,4 +309,4 @@ forecast needs more memory than the provided value, it spools to disk. Forecasts
 that spool to disk generally run slower. If you need to speed up forecasts,
 increase the available memory for the forecast. Forecasts that would take more
 than 500 MB to run won’t start because this is the maximum limit of disk space
-that a forecast is allowed to use.
+that a forecast is allowed to use.
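
The two settings introduced in this commit could be set together in `elasticsearch.yml`. A minimal sketch, with illustrative values only (the 58GB figure comes from the range quoted in the added paragraph; pick values that match your own hardware):

```yaml
# Sketch only: values are illustrative, not recommendations.

# Outside Elastic Cloud, autoscaling cannot discover the largest node your
# infrastructure can provide, so declare it explicitly:
xpack.ml.max_ml_node_size: 58gb

# On a self-managed deployment, reject jobs whose model memory limit is
# larger than your ML nodes could ever hold in memory:
xpack.ml.max_model_memory_limit: 10gb
```

With these in place, attempts to create an {anomaly-job} whose model memory limit exceeds the configured ceiling fail at job creation time rather than leaving a job that can never be opened.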
