Releases: run-house/runhouse
v0.0.20
Highlights
Cluster Sharing
We’ve made it easier to share clusters across different environments and with other users. You can now share and load a cluster just as you would any other resource.
my_cluster = rh.cluster("rh-cluster", ips=[...], ...)
my_cluster.share(["user1@email.com", "username2"])
# load the box with
shared_cluster = rh.cluster("owner_username/rh-cluster")
Shared users will be able to seamlessly run shared apps on that cluster, or SSH directly onto the remote box. To enable this, we persist the SSH credentials for the cluster as a Runhouse Secret object, which can easily be reloaded when another user tries to connect.
Improved rh.Mapper
rh.Mapper was first introduced in runhouse v0.0.15 as an extension of functions/modules to handle mapping, replicating, and load balancing. This release includes further improvements and some bug fixes, plus a BC-breaking variable renaming (see section below).
def local_sum(arg1, arg2, arg3):
    return arg1 + arg2 + arg3

remote_fn = rh.function(local_sum).to(my_cluster)
mapper = rh.mapper(remote_fn, replicas=2)
mapper.map([1, 2], [1, 4], [2, 3])
# output: [4, 9]
Improvements
- Use hashed subtoken for cluster requests (#270)
- Simplify storage of SSH creds for more reliable cluster access across environments and users (#479)
- Remove sky storage dependency (#415)
- Replace subprocess check_call with run (#503)
- Serialize exceptions properly (#516)
- Improved Logging
Bug Fixes
- Mapper bug fixes (#539)
Deprecation
BC-Breaking
- rh.mapper factory function argument renaming: num_replicas -> replicas, replicas -> concurrency
Docs
See updated tutorials on Runhouse docs
- New quick start guides -- local, cloud, and Den versions
- Updated API tutorials -- clusters, functions & modules, envs, folders
Examples
See new Runhouse examples on GitHub or webpage
- Llama2 inference on AWS EC2
- Stable Diffusion XL 1.0 on AWS EC2
- Stable Diffusion XL 1.0 on AWS Inferentia
Other
- Remove paramiko as server connection type
v0.0.19
v0.0.18
Highlights
Runhouse Local Mode and rh.here
Previously, the Runhouse server was strictly designed for deploying apps to it remotely with my_module.to(my_cluster)
. Now, you can start the Runhouse server daemon directly and deploy to it locally, like a traditional web server. Access the local daemon's Cluster object in Python with rh.here
. rh.here
always refers to the locally running daemon, so you can use it within an existing Runhouse cluster as well.
Start your local Runhouse server:
$ runhouse restart
$ runhouse status
To send a module:
import runhouse as rh

def concat(a, b):
    return a + b

rh.function(concat).to(rh.here)
To try out your service:
curl -X "GET" 'http://localhost:32300/concat/call?a=run&b=house'
>>> {"data":"\"runhouse\"","error":null,"traceback":null,"output_type":"result_serialized","serialization":"json"}
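The data field in that response is itself JSON-serialized (note the serialization field), so a client unwraps it with a second json.loads. A minimal sketch, assuming the exact response body shown above:

```python
import json

# Response body as returned by the /concat/call endpoint above
body = '{"data":"\\"runhouse\\"","error":null,"traceback":null,"output_type":"result_serialized","serialization":"json"}'

resp = json.loads(body)          # first decode: the HTTP response envelope
result = json.loads(resp["data"])  # second decode: the JSON-serialized payload
print(result)  # -> runhouse
```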
This is also particularly useful for debugging. You can ssh onto your cluster, start a Python shell, and run methods like rh.here.call("my_module", "my_method")
to test or analyze your deployed module's behavior or contents quickly.
Replace nginx with Caddy
Caddy is now used as a reverse proxy for the Runhouse server launched on clusters, and it automatically generates and renews self-signed certificates, making it easy to secure your cluster with HTTPS right out of the box.
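For reference, a reverse proxy of this shape can be expressed in a few lines of Caddyfile. This is a hand-written illustration (assuming the default server port 32300 and Caddy's internal CA for self-signed certificates), not the config Runhouse generates:

```
localhost {
    tls internal
    reverse_proxy localhost:32300
}
```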
Improvements
- Improved logging to reduce log clutter and differentiate local and cluster logs (#436, #475)
- Support packages using setup.cfg (#456)
- Runhouse status updates (#462, #469)
Build
- Remove Sky dependency for SSH command runner
Bug Fixes
- Fix name to properly be updated in cluster when saved (#451, #477)
- Fix bug in sagemaker cluster factory (#459)
- Fix Cluster.from_name to properly load existing config in Den (#468)
- Fix CLI runhouse status for on-demand cluster (#478)
BC-Breaking
- reqs and setup_cmds removed from function .to (#373)
- Generator module now returns a generator rather than streamed results (#373)
Other
v0.0.17
Patch for v0.0.16
Remove deprecated runhouse.rns.Secrets
class, which is no longer being used and was causing an issue in importing runhouse.
v0.0.16
This release largely consists of updates to Runhouse infrastructure and build, with the addition of two new runhouse CLI commands (stop and status), basic ASGI support, and some bug fixes.
Note: We've removed some dependencies to better support local-only use cases of Runhouse. To use Runhouse with a cluster, please install with pip install "runhouse[sky]", and to use Runhouse data structures like tables, please install with pip install "runhouse[data]".
Improvements
- Change Module's endpoint from property to function (#367)
- Change ray to only be initialized in runhouse start or HTTPServer() (#369)
- Ray start to connect to existing cluster (#405)
New features
- Introduce Asgi module and support calling route functions directly (#370)
- Add runhouse stop command/function (#392)
- Add deleting env servlet functionality (#417)
- Add runhouse status command (#416)
Build
Bug Fixes
- Account for workdir or compute in env factory (#354)
- Get rh.here working (#338)
- Fix function's .notebook() functionality (#362)
- Only set rns_address upon save (#434)
BC-Breaking
- Replace num_entries with limit for resource history API (#399)
v0.0.15
Highlights
- Mapper, a built-in Runhouse module for mapping functions over a list of inputs across compute (#327)
- Python3.11 support (#279)
rh.Mapper Module
The Mapper extends Runhouse functions and Module methods to handle mapping, replicating, and load balancing. A Mapper is constructed by passing in a function (or a module and method name), along with the number of replicas to use, and optionally your own user-specified replicas. It creates replicas of the function and its envs, and round-robins calls across the replicas to run them in parallel.
def local_sum(arg1, arg2, arg3):
    return arg1 + arg2 + arg3

remote_fn = rh.function(local_sum).to(my_cluster)
mapper = rh.mapper(remote_fn, num_replicas=2)
mapper.map([1, 2], [1, 4], [2, 3])
# output: [4, 9]
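The round-robin dispatch itself can be pictured with a pure-Python sketch. This toy stands in for the remote replicas; the real Mapper replicates the function and its env on the cluster and calls the replicas in parallel:

```python
from itertools import cycle

def local_sum(arg1, arg2, arg3):
    return arg1 + arg2 + arg3

# Two stand-in "replicas" of the same function; a real Mapper would
# create these on the cluster rather than in the local process.
replicas = cycle([local_sum, local_sum])

def round_robin_map(*arg_lists):
    # zip(*arg_lists) pairs up the i-th element of each argument list,
    # and each call is dispatched to the next replica in the cycle.
    return [next(replicas)(*args) for args in zip(*arg_lists)]

print(round_robin_map([1, 2], [1, 4], [2, 3]))  # -> [4, 9]
```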
Improvements
Better multinode cluster support
- Sync runhouse to all nodes instead of just the head node (#278)
- Start Ray on both head and worker nodes (#305)
- Add back support for cluster IPs (#346)
Introduce cluster servlet for handling global cluster object store (#308)
Build
Bug Fixes
- Fix streaming with HTTP/HTTPS/Nginx (#261)
BC-Breaking
- Replace instance_count with num_instances for cluster class (#269)
Docs
v0.0.14
Highlights
- Secrets Revamp (#135)
- Facilitate saving, sending, and sharing of Secrets by treating Secrets as a Runhouse resource
- (Alpha) AWS Lambda Functions support (#139, #240, #244)
- Introduce AWS Lambda support for Runhouse functions
Secrets Revamp
The rh.Secrets
class is being deprecated in favor of converting secrets to a Runhouse resource type. As with other resources, the new Secret class supports saving, reloading, and sending secrets across clusters.
There are various built-in secret provider types for keeping track of compute providers (aws, azure, gcp, ...), API-key-based providers (openai, anthropic, ...), and SSH key pairs.
# non-provider secret, in-memory
my_secret = rh.secret(name="my_secret", values={"key1": "val1", "key2": "val2"})
my_secret.save()
reloaded_secret = rh.secret("my_secret")

# provider secret, in-memory or loaded from default location
aws_secret = rh.provider_secret("aws")  # loads from ~/.aws/credentials or from env vars
openai_secret = rh.provider_secret("openai", values={"api_key": "my_openai_key"})  # explicitly provided values
There are also various APIs for syncing secrets across your clusters and environments:
aws_secret.to(cluster, env)
cluster.sync_secrets(["aws", "gcp"], env)

env = rh.env(secrets=["aws", "openai"])
fn.to(cluster, env)
Please refer to the API tutorial for a more in-depth walkthrough of using Secrets, or the documentation for specific APIs and a full list of built-in providers.
(Alpha) Lambda Functions (AWS serverless)
Runhouse is extending functions to Amazon Web Services (AWS) Lambda compute. These functions are deployed directly on AWS serverless compute, with Lambda's infra and servers handled under the hood, making the Lambda onboarding process smoother and removing the need to translate code through Lambda-specific APIs.
Note: Lambda Functions are in Alpha and the APIs are subject to change. A more stable release along with examples will be published soon. In the meantime, you can find documentation here.
New Additions
- Add visibility to resource config, and enable public resources (#222)
- API for revoking access to shared secrets (#235)
Bug Fixes
- Proper tunnel caching (#191, #194): tunnels were not previously being cached correctly, and dead connections not accounted for
- Sagemaker cluster launch fix (#206): remove runhouse as a dependency from the launch script, as it has not yet been installed on the cluster
- Fix bug with loading runhouse files/folders through SSH fsspec (#225): custom SSH port was not being set in fsspec filesystem of runhouse files/folders
- Correctly launch multiple node clusters according to num_instances (#229): previously was not properly launching multiple nodes
Deprecations + BC-Breaking
- access_type deprecated and renamed to access_level for resource and sharing (#223, #224, #231)
- rh.Secrets class deprecated in favor of converting Secrets to a resource type (#135). Some old APIs are removed, and others are deprecated. Please refer to docs and tutorial for the new secrets flow.
Other
- README updates (#187)
- Various docs updates
v0.0.13
Highlights
- AWS Sagemaker Cluster (#105 #115 #166)
- facilitates easy access to existing or new AWS SageMaker compute
- HTTPS support (Alpha) (#114)
- adds option for starting up the Runhouse API server on the cluster with HTTPS
Sagemaker Cluster
Runhouse is integrating with Amazon Web Services (AWS) SageMaker to allow rapid onboarding onto SageMaker, usually within minutes, and to remove the need to translate code into SageMaker-specific APIs so it can still be used dynamically with other compute infra.
The SageMaker cluster follows the Runhouse cluster definition and usage, but uses SageMaker compute under the hood.
If you already use SageMaker with your AWS account, you should already be set to use Runhouse SageMaker support. For full SageMaker setup and dependencies, please refer to the docs.
Example 1: Launch a new SageMaker instance and keep it up indefinitely.
# Note: this will use Role ARN associated with the "sagemaker" profile defined in the local AWS config (e.g. `~/.aws/config`).
import runhouse as rh
c = rh.sagemaker_cluster(name='sm-cluster', profile="sagemaker").save()
Example 2: Running a training job with a provided Estimator
c = rh.sagemaker_cluster(
    name='sagemaker-cluster',
    estimator=PyTorch(
        entry_point='train.py',
        role='arn:aws:iam::123456789012:role/MySageMakerRole',
        source_dir='/Users/myuser/dev/sagemaker',
        framework_version='1.8.1',
        py_version='py36',
        instance_type='ml.p3.2xlarge',
    ),
).save()
Support HTTPS calls to clusters (Alpha)
Adds an option for starting up the Runhouse API server on the cluster with HTTPS, including optionally creating self-signed certs and proxying through Nginx. This makes it incredibly fast and easy to stand up a microservice with standard bearer token authentication (using a Runhouse token), allowing users to share Runhouse resources with collaborators, teams, customers, etc.
Supports several new server connection types, including tls and ssh. For more information on these types, please refer to docs.
BC Breaking
- The default Runhouse HTTP server port is now 32300 (#124)
Other
- Remove the paramiko dependency for password clusters (#131)
- Support running shell commands in env (#132)
Example code:
rh.env(
name="my_env",
reqs=["torch", "diffusers"],
setup_cmds=["source ~/.bashrc"]
)
- Support an optional host parameter for the runhouse start and runhouse restart commands, which now defaults to 0.0.0.0 (#110)
Example code:
runhouse restart --host 0.0.0.0
v0.0.12
Highlights
- In-memory resources, an update to existing remote resource implementations (#78)
- includes new rh.Module resource, and resulting performance and feature improvements
- Sagemaker Cluster (Alpha) (#89)
- facilitates easy access to existing or new SageMaker compute
In-memory Resources
As mentioned in the 0.0.11 Release Notes, we've redesigned how we handle remote resources, resulting in performance and feature improvements, as well as support for a new type of resource. Basic notes can be found below; a more comprehensive technical overview can be found in our 0.0.12 blog post.
rh.Module Resource
rh.Module represents a class that can be accessed and used remotely, including all its class methods and variables, with out-of-the-box support for capabilities like streaming logs/results, async, queuing, etc.
- rh.module() factory function for wrapping existing Python classes
- rh.Module class that can be subclassed to write natively Runhouse-compatible classes
In-Python Object Pinning
Storing large objects, such as models, in Python memory can reduce time spent loading objects from disk or sending them over.
- more stable object pinning in Python memory
- intuitive rh.here.get() and rh.here.put() APIs, where rh.here returns the cluster it is called from
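The pinning concept itself amounts to a process-local key/value store: objects put there stay in Python memory and are handed back by reference. A toy sketch of the idea (not the Runhouse object store implementation):

```python
# Process-local "pin" store: objects put here stay in Python memory,
# so repeated calls reuse them instead of reloading from disk.
_pins = {}

def put(key, obj):
    _pins[key] = obj

def get(key, default=None):
    return _pins.get(key, default)

model = {"weights": [0.1, 0.2]}  # stand-in for a large model object
put("my_model", model)
assert get("my_model") is model  # same in-memory object, no copy or reload
```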
Performance Improvements
- Reduced process overhead and latency by having each underlying Ray Actor live in its own process
Other resulting improvements
- Streaming support
- Increased logging support
- Async support
Sagemaker Cluster (Alpha)
Runhouse is integrating with SageMaker to make the SageMaker onboarding process smoother, removing the need to translate code through SageMaker-specific estimators or APIs. This will be described in more detail in the 0.0.13 release notes; check out the documentation in the meantime.
Build
- Remove s3fs dependency
- Upgrade to SkyPilot 0.0.4, to resolve Colab installation issues
BC Breaking
- .remote() now returns a remote object, rather than a string associated with the object/run. To get the contents of the result, use result.fetch()
v0.0.11
What's New
In-memory Resources (Alpha)
We revamped our underlying code implementation for handling remote code execution, resulting in improvements and added support for:
- True in-Python pinning
- Improved performance and decreased process overhead
- Increased support for streaming and logs
- Remote classes and class method calls (rh.Module resource)
These new features and updates will be explained in more detail in the following (0.0.12) release
Docs Site
Documentation is now supported and hosted directly on our website, at run.house/docs. Easily access documentation for any of our current and past releases.
Other
- Environment caching, skip env subpackage installations if existing environment is already detected
- ssh proxy tunnel support for BYO clusters (#85)
- troubleshooting and manual setup instructions for commonly encountered issues
- add runhouse start command
BC-Breaking
- rename runhouse restart_server command to runhouse restart