What is Ray?¶
Ray is a fast and simple framework for building and running distributed applications.
Ray accomplishes this mission by:
- Providing simple primitives for building and running distributed applications.
- Enabling end users to parallelize single-machine code, with little to zero code changes.
- Including a large ecosystem of applications, libraries, and tools on top of the core Ray to enable complex applications.
Ray Core provides the simple primitives for application building.
On top of Ray Core are several libraries for solving problems in machine learning, such as Tune for hyperparameter tuning and RLlib for reinforcement learning.
Ray also has a number of other community-contributed libraries.
Getting Started with Ray¶
Check out A Gentle Introduction to Ray to learn more about Ray and its ecosystem of libraries that enable things like distributed hyperparameter tuning, reinforcement learning, and distributed training.
Ray uses Tasks (functions) and Actors (classes) to allow you to parallelize your Python code:

```python
# First, run `pip install ray`.
import ray

ray.init()

# Tasks: decorate a function with @ray.remote to run it in parallel.
@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

# Actors: decorate a class with @ray.remote to get a stateful worker.
@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))  # [1, 1, 1, 1]
```
Ray is not only a framework for distributed applications but also an active community of developers, researchers, and folks who love machine learning. Here's a list of tips for getting involved with the Ray community:
Join our community Slack to discuss Ray! The community is extremely active in helping people succeed in building their Ray applications.
Star and follow us on GitHub.
Join our Meetup Group to connect with others in the community!
Use the [ray] tag on StackOverflow to ask and answer questions about Ray usage.
Subscribe to firstname.lastname@example.org to join development discussions.
Follow us and spread the word on Twitter!
If you’re interested in contributing to Ray, visit our page on Getting Involved to read about the contribution process and see what you can work on!
Here are some talks, papers, and press coverage involving Ray and its libraries. Please raise an issue if any of the below links are broken, or if you’d like to add your own talk!
Blog and Press¶