This section describes the configuration options available when creating a new inference endpoint. Each section of the interface allows fine-grained control over how the model is deployed, accessed, and scaled.
In the top left you can:

The Hardware Configuration section allows you to choose the compute backend used to host the model. You can select from three major cloud providers:

- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform (GCP)

You must also choose an accelerator type:
Additionally, you can select the deployment region (e.g., East US) using the dropdown menu. Once the provider, accelerator, and region are chosen, a list of available instance types is displayed. Each instance tile includes:
Select a tile to choose that instance type for your deployment. Instances that are incompatible or unavailable in the selected region are grayed out and cannot be selected.
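Taken together, these hardware choices amount to a handful of fields. The sketch below is illustrative only: the field names and values (`vendor`, `nvidia-a10g`, etc.) are assumptions loosely modeled on how such a selection might be represented, not the exact API.

```python
# Illustrative hardware selection; field names and values are examples only.
hardware = {
    "vendor": "aws",                 # selected cloud provider
    "region": "us-east-1",           # deployment region from the dropdown
    "accelerator": "gpu",            # chosen accelerator type
    "instance_type": "nvidia-a10g",  # instance tile picked in the UI (example)
}

def is_selectable(tile: dict, region: str) -> bool:
    """A tile is grayed out when its instance type is unavailable in the region."""
    return region in tile.get("available_regions", [])
```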
This section determines who can access your deployed endpoint. Available options are:

- Public: accessible from the Internet without authentication
- Protected: accessible from the Internet, but only with a valid Hugging Face token
- Private: only accessible through an intra-region secured AWS PrivateLink connection
Additionally, if you deploy your Inference Endpoint on AWS, you can use AWS PrivateLink for an intra-region secured connection to your AWS VPC.

The Autoscaling section configures how many replicas of your model run and whether the system scales down to zero during periods of inactivity. For more information, we recommend reading the in-depth guide on autoscaling.
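To make the replica/scale-to-zero interaction concrete, here is a toy scaling rule. This is not the platform's actual algorithm; the per-replica capacity and the idle signal are invented for illustration.

```python
def desired_replicas(pending_requests: int,
                     min_replica: int = 0,
                     max_replica: int = 4,
                     idle: bool = False) -> int:
    """Toy autoscaling rule: scale to zero when idle (if min_replica allows),
    otherwise one replica per 10 pending requests, clamped to the bounds."""
    if idle and min_replica == 0:
        return 0                                    # scale-to-zero during inactivity
    per_replica = 10                                # assumed capacity per replica
    needed = max(1, -(-pending_requests // per_replica))  # ceiling division
    return max(min_replica, min(max_replica, needed))
```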

This section allows you to specify how the container hosting your model behaves. The available settings depend on the selected inference engine; for configuration details, please read the Inference Engine section.

Here you can edit the container arguments and container command.
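The command/arguments split mirrors the usual container-runtime convention (entrypoint plus flags). A minimal sketch; the executable name and flag below are examples, not required values:

```python
# Illustrative container overrides: the command is the executable to run,
# the args are the flags passed to it (both values are examples).
container = {
    "command": ["text-generation-launcher"],
    "args": ["--max-input-length", "4096"],
}

# The effective invocation is the concatenation of command and args.
invocation = container["command"] + container["args"]
```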

Environment variables can be provided to customize container behavior or to pass secrets. Each section lets you add multiple entries using the Add button.
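The distinction between plain variables and secrets can be sketched as follows; the variable names and values are examples, and from the container's point of view both kinds arrive as ordinary environment variables:

```python
# Example environment configuration: ordinary variables plus a secret entry.
env = {"LOG_LEVEL": "debug"}        # plain variable (example name/value)
secrets = {"API_KEY": "***"}        # secret entry (example)

# Inside the container, both kinds are merged into the process environment.
container_env = {**env, **secrets}
```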

You can label endpoints with tags (e.g., for-testing) to help organize and manage deployments across environments or teams. The dashboard lets you filter and sort endpoints by these tags. Tags are plain-text labels added via the Add button.
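The dashboard filtering described above boils down to a simple tag-membership test. A sketch with made-up endpoint names:

```python
# Hypothetical endpoint listing; names and tags are examples.
endpoints = [
    {"name": "llm-prod", "tags": ["production"]},
    {"name": "llm-test", "tags": ["for-testing"]},
]

def filter_by_tag(endpoints: list, tag: str) -> list:
    """Keep only the endpoints carrying the given tag."""
    return [e for e in endpoints if tag in e.get("tags", [])]
```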

This section determines from where your deployed endpoint can be accessed.
By default, your endpoint is accessible from the Internet and secured with TLS/SSL. Endpoints deployed on an AWS instance can use AWS PrivateLink to restrict access to a specific VPC.
To configure it you need to:
Optionally, you can enable PrivateLink Sharing, which allows multiple endpoints to share the same PrivateLink connection.

Advanced Settings offer more fine-grained control over the deployment.
