Rust's ownership system prevents data races at compile time. But concurrent programming still requires understanding threads, synchronization, and communication patterns.
std::thread for spawning. Mutex for shared mutable state. mpsc channels for message passing. Arc for thread-safe shared ownership.
This page focuses on safe concurrent Rust patterns.
thread::spawn requires FnOnce + Send + 'static. A closure that borrows local data cannot satisfy 'static:
// WRONG: closure borrows v, but thread may outlive the function
// let v = vec![1, 2, 3];
// thread::spawn(|| {
// println!("{v:?}"); // error[E0373]
// });
// RIGHT: move ownership into the thread
let v = vec![1, 2, 3];
thread::spawn(move || {
println!("{v:?}"); // v moved into thread
}).join().unwrap();
If you need to borrow data across threads without moving it, use scoped threads. The scope guarantees all threads join before returning:
let v = vec![1, 2, 3];
let mut results = vec![];
thread::scope(|s| {
s.spawn(|| {
println!("{v:?}"); // borrows &v — no move needed
});
s.spawn(|| {
results.push(v.len()); // borrows &mut results — safe!
});
});
// all threads have joined here — v and results are still usable
println!("{results:?}");
thread::scope eliminates the need for Arc in many cases. Use it when all threads should finish before moving on.
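Scoped threads can also return values: each `s.spawn` yields a `ScopedJoinHandle`, and joining it inside the scope hands back the closure's result. A minimal sketch (the variable names are illustrative):

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4];
    // Borrow `data` immutably from two scoped threads and collect their results.
    let (sum, max) = thread::scope(|s| {
        let sum_handle = s.spawn(|| data.iter().sum::<i32>());
        let max_handle = s.spawn(|| *data.iter().max().unwrap());
        // Join inside the scope to retrieve each thread's return value.
        (sum_handle.join().unwrap(), max_handle.join().unwrap())
    });
    assert_eq!(sum, 10);
    assert_eq!(max, 4);
    println!("sum = {sum}, max = {max}");
}
```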
Moving a Mutex into thread::spawn transfers ownership on the first iteration. The second iteration tries to move it again:
// WRONG: Mutex moved into first thread, gone on second iteration
// let counter = Mutex::new(0);
// for _ in 0..10 {
// thread::spawn(move || {
// *counter.lock().unwrap() += 1; // error[E0382] on 2nd iter
// });
// }
// RIGHT: Wrap in Arc, clone before each spawn
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];
for _ in 0..10 {
let counter = Arc::clone(&counter);
let handle = thread::spawn(move || {
*counter.lock().unwrap() += 1;
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Result: {}", *counter.lock().unwrap());
Arc::clone creates a new reference-counted pointer (cheap — no data copy). Each thread gets its own Arc pointing to the same Mutex.
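You can watch the reference count change with `Arc::strong_count` — a small sketch to confirm that cloning only bumps a counter:

```rust
use std::sync::Arc;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    assert_eq!(Arc::strong_count(&data), 1);

    let clone = Arc::clone(&data); // bumps the atomic count; no data is copied
    assert_eq!(Arc::strong_count(&data), 2);
    // Both pointers refer to the same allocation.
    assert!(Arc::ptr_eq(&data, &clone));

    drop(clone); // count drops back to 1
    assert_eq!(Arc::strong_count(&data), 1);
}
```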
Rc uses non-atomic reference counting and does not implement Send. The compiler rejects it at thread boundaries:
// WRONG: Rc is not Send
// use std::rc::Rc;
// let data = Rc::new(vec![1, 2, 3]);
// thread::spawn(move || {
// println!("{data:?}"); // error[E0277]: Rc cannot be sent
// });
// RIGHT: Arc uses atomic reference counting (thread-safe)
use std::sync::Arc;
let data = Arc::new(vec![1, 2, 3]);
let data_clone = Arc::clone(&data);
thread::spawn(move || {
println!("{data_clone:?}");
}).join().unwrap();
Send and Sync
Send means ownership can be transferred between threads. Sync means references can be shared — formally, T is Sync if and only if &T is Send. Most standard types are both. Rc, Cell, and raw pointers are neither. The smart pointers guide covers Rc vs Arc and when to pair Arc with Mutex.
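One way to see these bounds in action is a generic function that mimics `thread::spawn`'s requirements. This is a sketch (the function name is made up for illustration): `Arc` satisfies the bound, `Rc` does not.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// A value can cross a thread boundary only if its type is Send + 'static.
fn send_to_thread<T: Send + std::fmt::Debug + 'static>(value: T) {
    thread::spawn(move || println!("{value:?}")).join().unwrap();
}

fn main() {
    send_to_thread(Arc::new(5)); // Arc<i32> is Send — compiles
    // send_to_thread(Rc::new(5)); // Rc<i32> is not Send — error[E0277]
    let _local = Rc::new(5); // Rc is fine within a single thread
}
```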
Holding a MutexGuard across long operations blocks all other threads waiting on that lock:
// WRONG: guard held during expensive work
// let data = Arc::new(Mutex::new(vec![]));
// let mut guard = data.lock().unwrap();
// let result = expensive_computation(); // other threads blocked!
// guard.push(result);
// RIGHT: lock briefly, release before slow work
let data = Arc::new(Mutex::new(vec![]));
let result = expensive_computation(); // no lock held
data.lock().unwrap().push(result); // lock, push, guard drops
// Or use a scope to make the drop point explicit:
{
let mut guard = data.lock().unwrap();
guard.push(another_result);
} // guard drops here
When a thread panics while holding a lock, the Mutex becomes poisoned. Future .lock() calls return Err:
// WRONG: .unwrap() crashes if any thread panicked
// let value = mutex.lock().unwrap();
// RIGHT: handle poisoning explicitly
let value = mutex.lock().unwrap_or_else(|poisoned| {
eprintln!("Mutex poisoned, recovering");
poisoned.into_inner() // get the data anyway
});
Decide your policy: propagate the panic (.unwrap() is fine if a panic means a bug), or recover with .into_inner() if the data might still be usable. The ownership and borrowing rules are exactly what prevent data races in Rust; the move keyword in thread closures transfers ownership cleanly.
For I/O-bound concurrency, async Rust with tokio::spawn offers a lighter-weight alternative to OS threads. With threads, prefer channels when you can model the problem as passing work or results:
use std::sync::mpsc;
let (tx, rx) = mpsc::channel();
thread::spawn(move || {
tx.send("hello").unwrap();
});
let msg = rx.recv().unwrap();
println!("{msg}");
Multiple Producers
Clone the sender for each thread. Drop the original so the receiver knows when all senders are done:
let (tx, rx) = mpsc::channel();
for i in 0..3 {
let tx = tx.clone();
thread::spawn(move || {
tx.send(i).unwrap();
});
}
drop(tx); // drop original so rx iterator terminates
for val in rx {
println!("{val}");
}
WRONG → RIGHT: Channel Hangs Forever
If any sender remains alive, rx.recv() blocks forever waiting for more messages:
// WRONG: original tx still alive — rx never terminates
// let (tx, rx) = mpsc::channel();
// let tx2 = tx.clone();
// thread::spawn(move || { tx2.send(1).unwrap(); });
// for val in rx { println!("{val}"); } // hangs!
// RIGHT: drop all senders when done
let (tx, rx) = mpsc::channel();
let tx2 = tx.clone();
thread::spawn(move || { tx2.send(1).unwrap(); });
drop(tx); // drop original — receiver terminates after tx2 drops
for val in rx { println!("{val}"); }
Bounded Channels: Backpressure with sync_channel
mpsc::channel() is unbounded — fast producers can exhaust memory. Use sync_channel(n) to cap the buffer:
let (tx, rx) = mpsc::sync_channel(10); // buffer holds 10 messages
thread::spawn(move || {
for i in 0..100 {
tx.send(i).unwrap(); // blocks when buffer is full
}
});
for val in rx {
println!("{val}");
}
When reads vastly outnumber writes, RwLock allows concurrent readers while writes get exclusive access:
use std::sync::{Arc, RwLock};
let config = Arc::new(RwLock::new(String::from("v1")));
// Multiple readers can proceed concurrently
let config_r = Arc::clone(&config);
thread::spawn(move || {
let val = config_r.read().unwrap();
println!("Config: {val}");
});
// Writer gets exclusive access
let config_w = Arc::clone(&config);
thread::spawn(move || {
let mut val = config_w.write().unwrap();
*val = String::from("v2");
});
Use RwLock for read-heavy workloads (config, caches). Use Mutex when reads and writes are roughly equal — it's simpler and has less overhead.
- The Rust Book: Threads — spawning, join handles, move closures
- The Rust Book: Message Passing — mpsc channels
- The Rust Book: Shared-State Concurrency — Mutex, Arc
- std::thread::scope — scoped threads (borrow without move)
- std::sync::mpsc — channel API, dropping senders
When to Use Threads, Mutex, Channels & Arc
- Use threads for CPU-bound parallel work.
- Use channels for communication between threads (message passing).
- Use Mutex when multiple threads need to mutate shared state.
- Use Arc to share ownership across threads.
- Use RwLock when reads are much more frequent than writes.
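The list above can be combined in one toy example: worker threads update a shared Arc<Mutex<_>> counter and report per-thread results over a channel (the numbers here are arbitrary):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter (Arc + Mutex) plus a channel for per-thread results.
    let counter = Arc::new(Mutex::new(0));
    let (tx, rx) = mpsc::channel();

    for i in 0..4 {
        let counter = Arc::clone(&counter);
        let tx = tx.clone();
        thread::spawn(move || {
            *counter.lock().unwrap() += i; // mutate shared state briefly
            tx.send(i * i).unwrap(); // report a result via the channel
        });
    }
    drop(tx); // drop the original sender so the receiver terminates

    let squares: i32 = rx.iter().sum();
    assert_eq!(squares, 0 + 1 + 4 + 9);
    assert_eq!(*counter.lock().unwrap(), 0 + 1 + 2 + 3);
}
```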
Check Your Understanding
How does Rust prevent data races?
Through the type system: types must implement Send to be transferred between threads and Sync to be shared. Mutex ensures exclusive access. Arc provides thread-safe reference counting. The compiler rejects unsafe sharing at compile time.
Common Pitfalls
- Symptom: `error[E0277]: Rc<T> cannot be sent between threads safely`. Why: `Rc` uses non-atomic reference counting and does not implement `Send`. Fix: Replace `Rc` with `Arc` for any data shared across threads.
- Symptom: `error[E0373]: closure may outlive the current function`. Why: The thread closure borrows a local variable, but the thread could outlive the function. Fix: Add `move` before the closure to transfer ownership into the thread.
- Symptom: Lock contention slows your program to a crawl. Why: You hold the `MutexGuard` across long operations, blocking all other threads. Fix: Lock, copy or modify the data, then drop the guard immediately — `let val = { *lock.lock().unwrap() };`.
- Symptom: Your program hangs with no error. Why: Two threads each hold one Mutex and wait for the other — a classic deadlock. Fix: Always acquire multiple Mutexes in a consistent order, or restructure to use channels instead.
- Symptom: Main exits before threads finish, output is missing. Why: `thread::spawn` returns a `JoinHandle` you ignored, so the main thread doesn't wait. Fix: Collect handles and call `.join().unwrap()` on each before main returns.
FAQ
What is the difference between Rc and Arc?
Rc uses non-atomic reference counting (faster, single-threaded only). Arc uses atomic reference counting (thread-safe but slower). Use Rc in single-threaded code, Arc across threads.
What happens if a thread panics while holding a Mutex?
The Mutex becomes "poisoned." Future calls to .lock() return Err. You can recover with .lock().unwrap_or_else(|e| e.into_inner()) but the data may be inconsistent.
When should I use channels vs shared state?
Channels are simpler and avoid lock contention. Use them when you can model the problem as message passing. Use shared state (Mutex) when multiple threads need random access to the same data.
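The two styles can also be mixed. A std mpsc channel has a single consumer, so a pool of workers has to share the Receiver behind Arc<Mutex<_>> — a sketch of that pattern (worker count and job values are arbitrary):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    // mpsc allows only one consumer, so workers share the Receiver via Arc<Mutex<_>>.
    let rx = Arc::new(Mutex::new(rx));

    let mut handles = vec![];
    for _ in 0..2 {
        let rx = Arc::clone(&rx);
        handles.push(thread::spawn(move || {
            let mut processed = 0;
            loop {
                // Lock the receiver, pull one job; the guard drops at the
                // end of the statement, before the job is processed.
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(n) => processed += n,
                    Err(_) => break, // all senders dropped — no more jobs
                }
            }
            processed
        }));
    }

    for job in 1..=10 {
        tx.send(job).unwrap();
    }
    drop(tx); // signal workers to stop

    let total: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(total, 55); // every job was processed exactly once
}
```

In practice a thread-pool crate such as rayon usually replaces this hand-rolled pattern.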
What are Send and Sync traits?
Send means ownership can be transferred between threads. Sync means references can be shared between threads. Most types are both. Rc is neither. Raw pointers are neither.
How do I wait for multiple threads to complete?
Collect JoinHandles and call .join() on each, or use thread pools (rayon crate) for higher-level abstractions.
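A minimal sketch of the collect-then-join pattern mentioned above:

```rust
use std::thread;

fn main() {
    // Spawn several threads, collecting the JoinHandles eagerly.
    let handles: Vec<_> = (0..4)
        .map(|i| thread::spawn(move || i * 2))
        .collect();

    // Join each handle in spawn order; .join() returns Err only if
    // the thread panicked.
    let results: Vec<i32> = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect();

    assert_eq!(results, vec![0, 2, 4, 6]);
}
```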
Syntax Quick Reference
let h = thread::spawn(|| 42);
let answer = h.join().unwrap();

let data = vec![1, 2, 3];
let handle = thread::spawn(move || {
println!("{data:?}");
});
handle.join().unwrap();

let m = Mutex::new(0);
let mut num = m.lock().unwrap();

let data = Arc::new(Mutex::new(0));
let data = Arc::clone(&data);

let (tx, rx) = mpsc::channel();
tx.send(value).unwrap();
let msg = rx.recv().unwrap();