
Rust Concurrency Practice: Threads, Mutex, Channels & Arc

Practice Rust concurrency including thread spawning, Mutex for shared state, channels for message passing, and Arc for thread-safe sharing.


On this page
  1. Compiler Error E0373: Closure May Outlive the Current Function
  2. thread::scope: Borrow Without move
  3. Compiler Error E0382: Use of Moved Value in a Spawn Loop
  4. Compiler Error E0277: Rc Cannot Be Sent Between Threads
  5. Send and Sync
  6. WRONG → RIGHT: Mutex Guard Held Too Long
  7. WRONG → RIGHT: Mutex Poisoning
  8. Channels: Message Passing Between Threads
  9. Multiple Producers
  10. WRONG → RIGHT: Channel Hangs Forever
  11. Bounded Channels: Backpressure with sync_channel
  12. RwLock: Many Readers, One Writer
  13. Further Reading

Rust's ownership system prevents data races at compile time. But concurrent programming still requires understanding threads, synchronization, and communication patterns.

std::thread for spawning. Mutex for shared mutable state. mpsc channels for message passing. Arc for thread-safe shared ownership.

This page focuses on safe concurrent Rust patterns.

Related Rust Topics
  • Rust Ownership & Borrowing: Move Semantics & Borrow Rules
  • Rust Smart Pointers: Box, Rc, RefCell & Arc
  • Rust Closures: Fn, FnMut, FnOnce & Captures
  • Rust Async: async/await, Futures & Tokio Basics

Compiler Error E0373: Closure May Outlive the Current Function

thread::spawn requires FnOnce + Send + 'static. A closure that borrows local data cannot satisfy 'static:

// WRONG: closure borrows v, but thread may outlive the function
// let v = vec![1, 2, 3];
// thread::spawn(|| {
//     println!("{v:?}");  // error[E0373]
// });

// RIGHT: move ownership into the thread
let v = vec![1, 2, 3];
thread::spawn(move || {
    println!("{v:?}");  // v moved into thread
}).join().unwrap();


thread::scope: Borrow Without move

If you need to borrow data across threads without moving it, use scoped threads. The scope guarantees all threads join before returning:

let v = vec![1, 2, 3];
let mut results = vec![];

thread::scope(|s| {
    s.spawn(|| {
        println!("{v:?}");  // borrows &v — no move needed
    });
    s.spawn(|| {
        results.push(v.len());  // borrows &mut results — safe!
    });
});
// all threads have joined here — v and results are still usable
println!("{results:?}");

thread::scope eliminates the need for Arc in many cases. Use it when all threads should finish before moving on.

Compiler Error E0382: Use of Moved Value in a Spawn Loop

Moving a Mutex into thread::spawn transfers ownership on the first iteration. The second iteration tries to move it again:

// WRONG: Mutex moved into first thread, gone on second iteration
// let counter = Mutex::new(0);
// for _ in 0..10 {
//     thread::spawn(move || {
//         *counter.lock().unwrap() += 1;  // error[E0382] on 2nd iter
//     });
// }

// RIGHT: Wrap in Arc, clone before each spawn
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];
for _ in 0..10 {
    let counter = Arc::clone(&counter);
    let handle = thread::spawn(move || {
        *counter.lock().unwrap() += 1;
    });
    handles.push(handle);
}
for handle in handles {
    handle.join().unwrap();
}
println!("Result: {}", *counter.lock().unwrap());

Arc::clone creates a new reference-counted pointer (cheap — no data copy). Each thread gets its own Arc pointing to the same Mutex.

Compiler Error E0277: Rc Cannot Be Sent Between Threads

Rc uses non-atomic reference counting and does not implement Send. The compiler rejects it at thread boundaries:

// WRONG: Rc is not Send
// use std::rc::Rc;
// let data = Rc::new(vec![1, 2, 3]);
// thread::spawn(move || {
//     println!("{data:?}");  // error[E0277]: Rc cannot be sent
// });

// RIGHT: Arc uses atomic reference counting (thread-safe)
use std::sync::Arc;
let data = Arc::new(vec![1, 2, 3]);
let data_clone = Arc::clone(&data);
thread::spawn(move || {
    println!("{data_clone:?}");
}).join().unwrap();

Send and Sync

Send means ownership can be transferred between threads. Sync means references can be shared — formally, T is Sync if and only if &T is Send. Most standard types are both. Rc and raw pointers are neither; Cell and RefCell are Send but not Sync. The smart pointers guide covers Rc vs Arc and when to pair Arc with Mutex.

WRONG → RIGHT: Mutex Guard Held Too Long

Holding a MutexGuard across long operations blocks all other threads waiting on that lock:

// WRONG: guard held during expensive work
// let data = Arc::new(Mutex::new(vec![]));
// let mut guard = data.lock().unwrap();
// let result = expensive_computation();  // other threads blocked!
// guard.push(result);

// RIGHT: lock briefly, release before slow work
let data = Arc::new(Mutex::new(vec![]));
let result = expensive_computation();    // no lock held
data.lock().unwrap().push(result);       // lock, push, guard drops

// Or use a scope to make the drop point explicit:
{
    let mut guard = data.lock().unwrap();
    guard.push(another_result);
}  // guard drops here

WRONG → RIGHT: Mutex Poisoning

When a thread panics while holding a lock, the Mutex becomes poisoned. Future .lock() calls return Err:

// WRONG: .unwrap() crashes if any thread panicked
// let value = mutex.lock().unwrap();

// RIGHT: handle poisoning explicitly
let value = mutex.lock().unwrap_or_else(|poisoned| {
    eprintln!("Mutex poisoned, recovering");
    poisoned.into_inner()  // get the data anyway
});

Decide your policy: propagate the panic (.unwrap() is fine if a panic means a bug), or recover with .into_inner() if the data might still be usable. The ownership and borrowing rules are exactly what prevent data races in Rust: the move keyword in thread closures transfers ownership cleanly.

Channels: Message Passing Between Threads

Channels are simpler than shared state and avoid lock contention. For I/O-bound concurrency, async Rust with tokio::spawn offers a lighter-weight alternative to OS threads. Use channels when you can model the problem as passing work or results:

use std::sync::mpsc;

let (tx, rx) = mpsc::channel();

thread::spawn(move || {
    tx.send("hello").unwrap();
});

let msg = rx.recv().unwrap();
println!("{msg}");

Multiple Producers

Clone the sender for each thread. Drop the original so the receiver knows when all senders are done:

let (tx, rx) = mpsc::channel();

for i in 0..3 {
    let tx = tx.clone();
    thread::spawn(move || {
        tx.send(i).unwrap();
    });
}
drop(tx);  // drop original so rx iterator terminates

for val in rx {
    println!("{val}");
}

WRONG → RIGHT: Channel Hangs Forever

If any sender remains alive, rx.recv() blocks forever waiting for more messages:

// WRONG: original tx still alive — rx never terminates
// let (tx, rx) = mpsc::channel();
// let tx2 = tx.clone();
// thread::spawn(move || { tx2.send(1).unwrap(); });
// for val in rx { println!("{val}"); }  // hangs!

// RIGHT: drop all senders when done
let (tx, rx) = mpsc::channel();
let tx2 = tx.clone();
thread::spawn(move || { tx2.send(1).unwrap(); });
drop(tx);  // drop original — receiver terminates after tx2 drops
for val in rx { println!("{val}"); }

Bounded Channels: Backpressure with sync_channel

mpsc::channel() is unbounded — fast producers can exhaust memory. Use sync_channel(n) to cap the buffer:

let (tx, rx) = mpsc::sync_channel(10);  // buffer holds 10 messages

thread::spawn(move || {
    for i in 0..100 {
        tx.send(i).unwrap();  // blocks when buffer is full
    }
});

for val in rx {
    println!("{val}");
}

RwLock: Many Readers, One Writer

When reads vastly outnumber writes, RwLock allows concurrent readers while writes get exclusive access:

use std::sync::{Arc, RwLock};

let config = Arc::new(RwLock::new(String::from("v1")));

// Multiple readers can proceed concurrently
let config_r = Arc::clone(&config);
let reader = thread::spawn(move || {
    let val = config_r.read().unwrap();
    println!("Config: {val}");
});

// Writer gets exclusive access
let config_w = Arc::clone(&config);
let writer = thread::spawn(move || {
    let mut val = config_w.write().unwrap();
    *val = String::from("v2");
});

reader.join().unwrap();
writer.join().unwrap();

Use RwLock for read-heavy workloads (config, caches). Use Mutex when reads and writes are roughly equal — it's simpler and has less overhead.

Further Reading

  • The Rust Book: Threads — spawning, join handles, move closures
  • The Rust Book: Message Passing — mpsc channels
  • The Rust Book: Shared-State Concurrency — Mutex, Arc
  • std::thread::scope — scoped threads (borrow without move)
  • std::sync::mpsc — channel API, dropping senders

When to Use Rust Concurrency: Threads, Mutex, Channels & Arc

  • Use threads for CPU-bound parallel work.
  • Use channels for communication between threads (message passing).
  • Use Mutex when multiple threads need to mutate shared state.
  • Use Arc to share ownership across threads.
  • Use RwLock when reads are much more frequent than writes.

Check Your Understanding: Rust Concurrency: Threads, Mutex, Channels & Arc

Prompt

How does Rust prevent data races?

What a strong answer looks like

Through the type system: types must implement Send to be transferred between threads and Sync to be shared. Mutex ensures exclusive access. Arc provides thread-safe reference counting. The compiler rejects unsafe sharing at compile time.

What You'll Practice: Rust Concurrency: Threads, Mutex, Channels & Arc

  • Spawn threads with thread::spawn
  • Use move closures for thread ownership
  • Wait for threads with JoinHandle::join()
  • Protect shared data with Mutex
  • Share ownership across threads with Arc
  • Create channels with mpsc::channel()
  • Send and receive messages between threads
  • Handle Mutex poisoning

Common Rust Concurrency: Threads, Mutex, Channels & Arc Pitfalls

  • Symptom: `error[E0277]: Rc<T> cannot be sent between threads safely`. Why: `Rc` uses non-atomic reference counting and does not implement `Send`. Fix: Replace `Rc` with `Arc` for any data shared across threads.
  • Symptom: `error[E0373]: closure may outlive the current function`. Why: The thread closure borrows a local variable, but the thread could outlive the function. Fix: Add `move` before the closure to transfer ownership into the thread.
  • Symptom: Lock contention slows your program to a crawl. Why: You hold the `MutexGuard` across long operations, blocking all other threads. Fix: Lock, copy or modify the data, then drop the guard immediately — `let val = { *lock.lock().unwrap() };`.
  • Symptom: Your program hangs with no error. Why: Two threads each hold one Mutex and wait for the other — a classic deadlock. Fix: Always acquire multiple Mutexes in a consistent order, or restructure to use channels instead.
  • Symptom: Main exits before threads finish, output is missing. Why: `thread::spawn` returns a `JoinHandle` you ignored, so the main thread doesn't wait. Fix: Collect handles and call `.join().unwrap()` on each before main returns.

Rust Concurrency: Threads, Mutex, Channels & Arc FAQ

What is the difference between Rc and Arc?

Rc uses non-atomic reference counting (faster, single-threaded only). Arc uses atomic reference counting (thread-safe but slower). Use Rc in single-threaded code, Arc across threads.

What happens if a thread panics while holding a Mutex?

The Mutex becomes "poisoned." Future calls to .lock() return Err. You can recover with .lock().unwrap_or_else(|e| e.into_inner()) but the data may be inconsistent.

When should I use channels vs shared state?

Channels are simpler and avoid lock contention. Use them when you can model the problem as message passing. Use shared state (Mutex) when multiple threads need random access to the same data.

What are Send and Sync traits?

Send means ownership can be transferred between threads. Sync means references can be shared between threads. Most types are both. Rc is neither. Raw pointers are neither.

How do I wait for multiple threads to complete?

Collect JoinHandles and call .join() on each, or use thread pools (rayon crate) for higher-level abstractions.

Rust Concurrency: Threads, Mutex, Channels & Arc Syntax Quick Reference

Spawn thread
let h = thread::spawn(|| 42);
let answer = h.join().unwrap();
Move closure
let data = vec![1, 2, 3];
thread::spawn(move || {
    println!("{data:?}");
});
Join thread
handle.join().unwrap();
Create Mutex
let m = Mutex::new(0);
Lock Mutex
let mut num = m.lock().unwrap();
Arc + Mutex
let data = Arc::new(Mutex::new(0));
Clone Arc
let data = Arc::clone(&data);
Create channel
let (tx, rx) = mpsc::channel();
Send message
tx.send(value).unwrap();
Receive message
let msg = rx.recv().unwrap();

Rust Concurrency: Threads, Mutex, Channels & Arc Sample Exercises

Example 1 (Difficulty: 2/5)

Fill in the method to propagate any panic from the spawned thread. Answer: unwrap

Example 2 (Difficulty: 2/5)

Fill in the function to create a new thread. Answer: spawn

Example 3 (Difficulty: 2/5)

Fill in the method to wait for the thread to finish. Answer: join



© 2026 Syntax Cache
