Releases: dstackai/dstack

0.18.34

09 Jan 11:28
2135175

Idle duration

If provisioned fleet instances aren’t being used, they remain idle and available for reuse for the configured idle duration. After this period, instances are automatically deleted. This behavior was previously configured via the termination_policy and termination_idle_time properties in run or fleet configurations.

With this update, these two properties are replaced with idle_duration, a simpler way to configure the same behavior. It can be set to a specific duration or to off for unlimited idle time.

type: dev-environment
name: vscode

python: "3.11"
ide: vscode

# Terminate instances idle for more than 1 hour
idle_duration: 1h

resources:
  gpu: 24GB
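As the deprecation note below mentions, idle_duration accepts an int|str|off value. A hypothetical normalizer for such values (purely illustrative, not dstack's actual implementation) might look like this:

```python
import re

# Suffix multipliers for duration strings; the supported suffixes here
# are an assumption for illustration, not dstack's documented set.
_MULTIPLIERS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_idle_duration(value):
    """Normalize an idle_duration value to seconds (None means "off")."""
    if value == "off":
        return None
    if isinstance(value, int):
        return value
    match = re.fullmatch(r"(\d+)([smhdw])", value)
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    return int(match.group(1)) * _MULTIPLIERS[match.group(2)]
```

Under this reading, "1h" normalizes to 3600 seconds, a bare integer is taken as seconds, and "off" disables automatic deletion entirely.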

Docker

Previously, dstack had limitations on Docker images for dev environments, tasks, and services. These have now been lifted, allowing images based on various Linux distributions like Alpine, Rocky Linux, and Fedora.

dstack now also supports Docker images with built-in OpenSSH servers, which previously caused issues.

Documentation

The documentation has been significantly improved:

  • Backend configuration has been moved from the Reference page to Concepts→Backends.
  • Major examples related to dev environments, tasks, and services have been relocated from the Reference page to their respective Concepts pages.

Deprecations

  • The termination_idle_time and termination_policy parameters in run configurations have been deprecated in favor of idle_duration.

What's changed

  • [dstack-shim] Implement Future API by @un-def in #2141
  • [API] Add API support to get runs by id by @r4victor in #2157
  • [TPU] Update TPU v5e runtime and update vllm-tpu example by @Bihan in #2155
  • [Internal] Skip docs-build on PRs from forks by @r4victor in #2159
  • [dstack-shim] Add API v2 compat support to ShimClient by @un-def in #2156
  • [Run configurations] Support Alpine and more RPM-based images by @un-def in #2151
  • [Internal] Omit id field in (API) Client.runs.get() method by @un-def in #2174
  • [dstack-shim] Remove API v1 by @un-def in #2160
  • [Volumes] Fix volume attachment with dstack backend by @un-def in #2175
  • Replace termination_policy and termination_idle_time with idle_duration: int|str|off by @peterschmidt85 in #2167
  • Allow running sshd in dstack runs by @jvstme in #2178
  • [Docs] Many docs improvements by @peterschmidt85 in #2171

Full changelog: 0.18.33...0.18.34

0.18.33

27 Dec 15:29
09cf464

This update fixes TPU v6e support and a potential gateway upgrade issue.

What's Changed

Full Changelog: 0.18.32...0.18.33

0.18.32

27 Dec 15:29
2df74b8

TPU

Trillium (v6e)

dstack adds support for the latest Trillium TPU (v6e), which became generally available in GCP on December 12th. The new TPU generation doubles the TPU memory and delivers higher peak performance, supporting larger workloads.

Resources

dstack now includes CPU, RAM, and TPU memory in Google Cloud TPU offers:

$ dstack apply --gpu tpu

 #  BACKEND  REGION        INSTANCE     RESOURCES                                           SPOT  PRICE   
 1  gcp      europe-west4  v5litepod-1  24xCPU, 48GB, 1xv5litepod-1 (16GB), 100.0GB (disk)  no    $1.56   
 2  gcp      europe-west4  v6e-1        44xCPU, 176GB, 1xv6e-1 (32GB), 100.0GB (disk)       no    $2.97   
 3  gcp      europe-west4  v2-8         96xCPU, 334GB, 1xv2-8 (64GB), 100.0GB (disk)        no    $4.95                                                   

Volumes

By default, TPU VMs come with a 100GB boot disk whose size cannot be changed. Now you can add more storage using Volumes.

Gateways

In this update, we've greatly refactored Gateways, improving their reliability and fixing several bugs.

Note

If you are running multiple replicas of the dstack server, ensure all replicas are upgraded promptly. Leaving some replicas on an older version may prevent them from creating or deleting services and could result in minor errors in their logs.

Warning

Ensure you update to 0.18.33, which includes critical hotfixes for important issues.

What's changed

Full changelog: 0.18.31...0.18.32

0.18.31

18 Dec 11:55
cc96bf4

GCP

Running VMs on behalf of a service account

Like all major clouds, GCP supports running a VM on behalf of a managed identity using a service account. Now you can assign a service account to a GCP VM with dstack by specifying the vm_service_account property in the GCP config:

type: gcp
project_id: myproject
vm_service_account: sa@myproject.iam.gserviceaccount.com
creds:
  type: default

Assigning a service account to a VM can be used to access GCP resources from within runs. Another use case is using firewall rules that rely on the service account as the target. Such rules are typical for Shared VPC setups when admins of a host project can create firewall rules for service projects based on their service accounts.

Volumes

Creating user home directory automatically

Following support for non-root users in Docker images, dstack improves handling of users' home directories. Most importantly, the HOME environment variable is set according to /etc/passwd, and the home directory is created automatically if it does not exist.

The update opens up new possibilities including the use of an empty volume for /home:

type: dev-environment
ide: vscode
image: ubuntu
user: ubuntu
volumes:
  - volume-aws:/home

AWS volumes with non-Nitro instances

dstack users previously reported AWS Volumes not working with some instance types. This is now fixed and tested for all instance types supported by dstack, including older Xen-based instances such as the P3 family.

Deprecations

  • The home_dir and setup parameters in run configurations have been deprecated. If you're using setup, move setup commands to the top of init.

What's changed

Full changelog: 0.18.30...0.18.31

0.18.30

12 Dec 11:09
8d82a35

AWS Capacity Reservations and Capacity Blocks

dstack now allows provisioning AWS instances using Capacity Reservations and Capacity Blocks. Given a CapacityReservationId, you can specify it in a fleet or a run configuration:

type: fleet
nodes: 1
name: my-cr-fleet
reservation: cr-0f45ab39cd64a1cee

The instance will use the reserved capacity, so as long as you have enough reserved capacity, provisioning is guaranteed to succeed.

Non-root users in Docker images

Previously, dstack always executed the workload as root, ignoring the user property set in the image. Now, dstack executes the workload with the default image user, and you can override it with a new user property:

type: task
image: nvcr.io/nim/meta/llama-3.1-8b-instruct
user: nim

The format of the user property is the same as Docker uses: username[:groupname], uid[:gid], and so on.
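Since the user property follows Docker's username[:groupname] / uid[:gid] shape, splitting it into its two parts is straightforward. The helper below is purely illustrative and not part of dstack:

```python
def parse_user(value: str):
    """Split a Docker-style user spec into (user, group);
    group is None when no ":" separator is present."""
    user, sep, group = value.partition(":")
    return user, (group if sep else None)
```

For example, "nim" yields just a user, while "1000:1000" yields a uid and a gid.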

Improved dstack apply and repos UX

Previously, dstack apply used the current directory as the repo that's made available within the run at /workflow. The directory had to be initialized with dstack init before running dstack apply.

Now you can pass --repo to dstack apply. It can be a path to a local directory or a remote Git repo URL. The specified repo will be available within the run at /workflow. You can also specify --no-repo if the run doesn't need any repo. With --repo or --no-repo specified, you don't need to run dstack init:

$ dstack apply -f task.dstack.yaml --repo .
$ dstack apply -f task.dstack.yaml --repo ../parent_dir
$ dstack apply -f task.dstack.yaml --repo https://github.com/dstackai/dstack.git
$ dstack apply -f task.dstack.yaml --no-repo

Specifying --repo explicitly can be useful when running dstack apply from scripts, pipelines, or CI. dstack init remains relevant when you work with dstack apply interactively and want to set up the repo once.

Lightweight pip install dstack

pip install dstack used to install all the dstack server dependencies. Now it installs only the CLI and Python API, which is optimal when you use a remote dstack server. Run pip install "dstack[server]" to install the server, or pip install "dstack[all]" to install the server with all supported backends.

Breaking changes

  • pip install dstack no longer installs the server dependencies. If you relied on it to install the server, ensure you use pip install "dstack[server]" or pip install "dstack[all]".

What's Changed

New Contributors

Full Changelog: 0.18.29...0.18.30

0.18.29

04 Dec 10:45
c10b1fe

Support internal_ip for SSH fleet clusters

It's now possible to specify instance IP addresses used for communication inside SSH fleet clusters using the internal_ip property:

type: fleet
name: my-ssh-fleet
placement: cluster
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/dstack/key.pem
  hosts:
    - hostname: "3.79.203.200"
      internal_ip: "172.17.0.1"
    - hostname: "18.184.67.100"
      internal_ip: "172.18.0.2"

If internal_ip is not specified, dstack automatically detects internal IPs by inspecting network interfaces. This works when all instances have IPs belonging to the same subnet and are reachable on those IPs. Specifying internal_ip explicitly enables networking configurations where instances are reachable on IPs that do not belong to the same subnet.
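The same-subnet condition that automatic detection relies on can be sketched with Python's ipaddress module. This is a rough illustration assuming a fixed /24 prefix, not dstack's actual detection code:

```python
import ipaddress

def same_subnet(ips, prefix=24):
    """Return True if all IPs fall into a single subnet of the
    given prefix length."""
    networks = {
        ipaddress.ip_interface(f"{ip}/{prefix}").network for ip in ips
    }
    return len(networks) == 1
```

For the example hosts above, 172.17.0.1 and 172.18.0.2 land in different /24 subnets, which is exactly the situation where explicit internal_ip values are needed.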

UX enhancements for dstack apply

The dstack apply command gets many improvements including more concise and consistent output and better error reporting. When applying run configurations, dstack apply now prints a table similar to the dstack ps output:

✗ dstack apply
 Project                main                                 
 User                   admin                                
 ...                                  

Submit a new run? [y/n]: y
 NAME           BACKEND          RESOURCES       PRICE     STATUS   SUBMITTED 
 spicy-tiger-1  gcp              2xCPU, 8GB,     $0.06701  running  14:52     
                (us-central1)    100.0GB (disk)                               

spicy-tiger-1 provisioning completed (running)

What's Changed

New Contributors

Full Changelog: 0.18.28...0.18.29

0.18.28

26 Nov 11:32
de0ff48

CLI improvements

  • Added the -R alias for --reuse in dstack apply
  • Shortened the model URL output
  • dstack apply and dstack attach no longer rely on external tools such as ps and grep on Unix-like systems or powershell on Windows. With this change, the dstack CLI client can now be used in minimal environments such as Docker containers, including the official dstackai/dstack image

What's Changed

Full Changelog: 0.18.27...0.18.28

0.18.27

22 Nov 09:42
d8b6ccb

UI/UX improvements

This release fixes a login issue in the control plane UI and introduces other UI/UX improvements.

What's Changed

Full Changelog: 0.18.26...0.18.27

0.18.26

20 Nov 16:02
b26d4ed

Git

Previously, when you called dstack init, Git credentials were reused between users of the same project and repository.

Starting with this release, to improve security, dstack no longer shares Git credentials across users.

Warning

If you submitted credentials earlier with dstack init, they will continue to work. However, it is recommended that each user call dstack init again to ensure they do not reuse credentials from other users.

Deleting legacy credentials

To ensure no credentials submitted earlier are shared across users, you can run the following SQL statement:

UPDATE repos SET creds = NULL;

UI

This update brings a few UI improvements:

  • Added Delete button to the Volumes page
  • Added Refresh button to all pages with lists: Runs, Models, Fleets, Volumes, Projects
  • Improved Code button on the model page

What's changed

  • Implement per-user repo creds storage by @un-def in #2004
  • [UI] Add Refresh button to all pages with lists by @olgenn in #2007
  • [UI] Include base URL and authentication token in the code snippets by @olgenn in #2006
  • [UI] The Code button improvements on the Model page by @olgenn in #2001
  • [UI] It's not possible to select and delete volumes by @olgenn in #2000
  • [UI] [Bug]: Services without model mapping are displayed in Models UI by @olgenn in #1993
  • Ensure sshd privsep dir in container is properly set up by @un-def in #2008
  • [Docs] Many minor improvements to docs and examples by @peterschmidt85 in #2013
  • [Docs] Services without a gateway by @jvstme in #2011
  • [Docs] Add deployment section with vLLM, TGI and NIM. Remove alignment handbook by @Bihan in #1990
  • [Docs] Updated Installation and Server deployment guides to include CloudFormation by @peterschmidt85
  • [Docs] Update services docs to reflect that gateway is now optional by @peterschmidt85 in #2005
  • [Examples] Add a CloudFormation template showing how to deploy dstack server to AWS by @peterschmidt85 in #1944
  • [Examples] Add Airflow example by @r4victor in #1991

Full changelog: 0.18.25...0.18.26

0.18.25

13 Nov 10:40
e8aebe8

Multiple volumes per mount point

It's now possible to specify a list of volumes for a mount point in run configurations:

...
volumes:
  - name: [my-aws-eu-west-1-volume, my-aws-us-east-1-volume]
    path: /volume_data

dstack will choose and mount one volume from the list. This can be used to increase GPU availability by specifying different volumes for different regions, which is desirable for use cases like caching. Previously, it was possible to specify only one volume per mount point, so if there was no compute capacity in the volume's region, provisioning would fail.
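One plausible way to think about the selection step is region matching against the provisioned instance. The sketch below is illustrative only; the volume dicts and their region field are assumptions for this example, not dstack's internal model:

```python
def pick_volume(volumes, instance_region):
    """Return the first volume whose region matches the instance's
    region, or None if no volume in the list is eligible."""
    for volume in volumes:
        if volume["region"] == instance_region:
            return volume
    return None
```

With volumes in eu-west-1 and us-east-1, an instance provisioned in us-east-1 would get the us-east-1 volume mounted at the configured path.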

DSTACK_NODES_IPS environment variable

A new DSTACK_NODES_IPS environment variable is now available for multi-node tasks. It contains the internal IP addresses of all nodes in the cluster, one per line, e.g. DSTACK_NODES_IPS="10.128.0.47\n10.128.0.48\n10.128.0.49". This enables cluster workloads that require knowing the IP addresses of all nodes.
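Inside a task, the variable can be consumed by splitting on newlines. A minimal Python sketch; the fallback value only stands in for running this snippet outside a dstack task:

```python
import os

# DSTACK_NODES_IPS contains one internal IP per line. The default
# mirrors the example value above and is used only when the variable
# is absent.
nodes_ips = os.environ.get(
    "DSTACK_NODES_IPS", "10.128.0.47\n10.128.0.48\n10.128.0.49"
).splitlines()

# A common pattern: treat the first node as the head node.
head, workers = nodes_ips[0], nodes_ips[1:]
```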

What's Changed

Full Changelog: 0.18.24...0.18.25