Introduction
This comprehensive guide covers advanced Rust interview questions focusing on concurrency, async programming, macros, unsafe Rust, and performance optimization. These questions are designed for mid-level to senior Rust developer positions.
Table of Contents
- Concurrency and Parallelism
- Async Programming
- Macros and Metaprogramming
- Unsafe Rust
- Performance Optimization
- Advanced Trait Patterns
- System Programming
- WebAssembly
- Embedded Rust
- Interview Tips and Scenarios
1. Concurrency and Parallelism
Q1: Explain the difference between threads, async tasks, and processes in Rust.
Answer:
use std::thread;
use std::time::Duration;
use tokio::task;
// 1. OS Threads - 1:1 threading model
fn thread_example() {
let handle = thread::spawn(|| {
for i in 1..5 {
println!("Thread: {}", i);
thread::sleep(Duration::from_millis(100));
}
});
for i in 1..5 {
println!("Main: {}", i);
thread::sleep(Duration::from_millis(100));
}
handle.join().unwrap();
}
// 2. Async Tasks - lightweight futures, cooperatively scheduled on an M:N runtime (e.g. tokio)
async fn async_example() {
let task1 = task::spawn(async {
for i in 1..5 {
println!("Task 1: {}", i);
tokio::time::sleep(Duration::from_millis(100)).await;
}
});
let task2 = task::spawn(async {
for i in 1..5 {
println!("Task 2: {}", i);
tokio::time::sleep(Duration::from_millis(50)).await;
}
});
let _ = tokio::join!(task1, task2);
}
// 3. Processes - Separate OS processes
use std::process::Command;
fn process_example() {
let output = Command::new("ls")
.arg("-l")
.output()
.expect("Failed to execute command");
println!("Output: {}", String::from_utf8_lossy(&output.stdout));
}
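Worth adding alongside the three models above: since Rust 1.63, `std::thread::scope` gives OS threads that may borrow from the parent stack without `Arc`, because the scope guarantees every spawned thread is joined before it returns. A minimal sketch:

```rust
use std::thread;

// Scoped threads may borrow local data: the scope joins every spawned
// thread before returning, so the borrows cannot outlive `data`.
fn parallel_sum(data: &[i64]) -> i64 {
    let mid = data.len() / 2;
    let (left, right) = data.split_at(mid);
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<i64>());
        let r = s.spawn(|| right.iter().sum::<i64>());
        l.join().unwrap() + r.join().unwrap()
    })
}
```

This avoids the `'static` bound that `thread::spawn` imposes on its closure.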
Q2: How do you prevent data races in concurrent Rust code?
Answer:
use std::sync::{Arc, Mutex, RwLock, atomic::{AtomicUsize, Ordering}};
use std::thread;
// 1. Using Mutex for exclusive access
fn mutex_example() {
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];
for _ in 0..10 {
let counter = Arc::clone(&counter);
handles.push(thread::spawn(move || {
let mut num = counter.lock().unwrap();
*num += 1;
}));
}
for handle in handles {
handle.join().unwrap();
}
println!("Result: {}", *counter.lock().unwrap());
}
// 2. Using RwLock for read/write separation
fn rwlock_example() {
let data = Arc::new(RwLock::new(vec![1, 2, 3]));
let mut handles = vec![];
// Readers
for _ in 0..3 {
let data = Arc::clone(&data);
handles.push(thread::spawn(move || {
let read = data.read().unwrap();
println!("Read: {:?}", *read);
}));
}
// Writer
let data = Arc::clone(&data);
handles.push(thread::spawn(move || {
let mut write = data.write().unwrap();
write.push(4);
println!("Wrote: {:?}", *write);
}));
for handle in handles {
handle.join().unwrap();
}
}
// 3. Using atomic types for simple operations
fn atomic_example() {
let counter = Arc::new(AtomicUsize::new(0));
let mut handles = vec![];
for _ in 0..10 {
let counter = Arc::clone(&counter);
handles.push(thread::spawn(move || {
counter.fetch_add(1, Ordering::SeqCst);
}));
}
for handle in handles {
handle.join().unwrap();
}
println!("Result: {}", counter.load(Ordering::SeqCst));
}
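A fourth option, not shown above, is to avoid shared state entirely: message passing with `std::sync::mpsc` moves ownership of each value to a single receiver, so no lock is needed at all. A small sketch:

```rust
use std::sync::mpsc;
use std::thread;

// Each producer gets a clone of the Sender; values are moved through the
// channel, so only the receiving thread ever owns them.
fn channel_sum(n_producers: u64, per_producer: u64) -> u64 {
    let (tx, rx) = mpsc::channel();
    for _ in 0..n_producers {
        let tx = tx.clone();
        thread::spawn(move || {
            for v in 0..per_producer {
                tx.send(v).unwrap();
            }
        });
    }
    drop(tx); // close our copy so rx.iter() ends once producers finish
    rx.iter().sum()
}
```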
Q3: Explain the difference between Send and Sync traits.
Answer:
use std::thread;
use std::rc::Rc;
use std::sync::{Arc, Mutex};
// Send: Type can be transferred across threads
// Sync: Type can be safely referenced across threads
struct MyStruct {
data: i32,
}
// Most types are automatically Send + Sync
// 1. Types that are !Send
struct NotSend {
data: Rc<i32>, // Rc is !Send
}
// 2. Types that are !Sync
struct NotSync {
data: std::cell::RefCell<i32>, // RefCell is !Sync
}
// 3. Custom implementation (rarely needed)
unsafe impl Send for MyStruct {} // Only if you're sure it's safe
unsafe impl Sync for MyStruct {} // Only if you're sure it's safe
fn main() {
// Send example
let data = Arc::new(5); // Arc is Send
let handle = thread::spawn(move || {
println!("Data: {}", data);
});
handle.join().unwrap();
// Sync example
let data = Arc::new(Mutex::new(5)); // Mutex is Sync
let data_clone = Arc::clone(&data);
let handle = thread::spawn(move || {
let mut val = data_clone.lock().unwrap();
*val += 1;
});
handle.join().unwrap();
println!("Value: {}", *data.lock().unwrap());
}
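A trick worth mentioning in this answer: Send/Sync are checked at compile time, so you can write empty bounded functions as static assertions. Calling one only compiles if the type satisfies the bound; a sketch:

```rust
use std::sync::{Arc, Mutex};

// Empty bodies: merely calling these is a compile-time proof that the
// type parameter satisfies the bound.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn static_checks() {
    assert_send::<Arc<Mutex<i32>>>();
    assert_sync::<Arc<Mutex<i32>>>();
    assert_send::<Vec<String>>();
    // assert_send::<std::rc::Rc<i32>>(); // would not compile: Rc is !Send
}
```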
Q4: Implement a thread-safe work queue with multiple producers and consumers.
Answer:
use std::sync::{Arc, Mutex, Condvar};
use std::collections::VecDeque;
use std::thread;
use std::time::Duration;
struct WorkQueue<T> {
queue: Mutex<VecDeque<T>>,
not_empty: Condvar,
}
impl<T> WorkQueue<T> {
fn new() -> Self {
WorkQueue {
queue: Mutex::new(VecDeque::new()),
not_empty: Condvar::new(),
}
}
fn push(&self, item: T) {
let mut queue = self.queue.lock().unwrap();
queue.push_back(item);
self.not_empty.notify_one(); // Wake up one waiting consumer
}
fn pop(&self) -> T {
let mut queue = self.queue.lock().unwrap();
// Wait until queue is not empty
while queue.is_empty() {
queue = self.not_empty.wait(queue).unwrap();
}
queue.pop_front().unwrap()
}
fn try_pop(&self) -> Option<T> {
let mut queue = self.queue.lock().unwrap();
queue.pop_front()
}
fn len(&self) -> usize {
self.queue.lock().unwrap().len()
}
}
// Multiple producers and consumers example
fn main() {
let queue = Arc::new(WorkQueue::new());
let mut handles = vec![];
// Producers
for i in 0..3 {
let queue = Arc::clone(&queue);
handles.push(thread::spawn(move || {
for j in 0..5 {
queue.push(format!("Producer {}-{}", i, j));
thread::sleep(Duration::from_millis(10));
}
}));
}
// Consumers: 15 items total (3 producers x 5), so 3 consumers pop 5 each.
// An exit check like `queue.len() < 2` would be racy: a consumer could
// block forever in pop() after the others drain the queue.
for i in 0..3 {
let queue = Arc::clone(&queue);
handles.push(thread::spawn(move || {
for _ in 0..5 {
let item = queue.pop();
println!("Consumer {} got: {}", i, item);
}
}));
}
for handle in handles {
handle.join().unwrap();
}
}
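One thing the queue above cannot do is shut down: a consumer blocked in `pop()` waits forever once producers stop. A sketch of a `close()` mechanism, using a closed flag plus `notify_all` so `pop` returns `Option<T>` (name `ClosableQueue` is ours):

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

struct ClosableQueue<T> {
    inner: Mutex<(VecDeque<T>, bool)>, // (items, closed)
    cond: Condvar,
}

impl<T> ClosableQueue<T> {
    fn new() -> Self {
        ClosableQueue { inner: Mutex::new((VecDeque::new(), false)), cond: Condvar::new() }
    }
    fn push(&self, item: T) {
        let mut g = self.inner.lock().unwrap();
        g.0.push_back(item);
        self.cond.notify_one();
    }
    // Wakes every blocked consumer so they can observe the closed flag.
    fn close(&self) {
        self.inner.lock().unwrap().1 = true;
        self.cond.notify_all();
    }
    // None means "closed and drained" - the consumer's exit signal.
    fn pop(&self) -> Option<T> {
        let mut g = self.inner.lock().unwrap();
        loop {
            if let Some(item) = g.0.pop_front() {
                return Some(item);
            }
            if g.1 {
                return None;
            }
            g = self.cond.wait(g).unwrap();
        }
    }
}
```

Consumers then loop `while let Some(item) = queue.pop()` and exit cleanly once producers call `close()`.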
2. Async Programming
Q5: Explain the async/await model in Rust. How is it different from other languages?
Answer:
use tokio::time::{sleep, Duration};
use futures::future::join_all;
// Rust's async/await is a zero-cost language feature: the compiler turns an
// async fn into a state machine, and futures do nothing until polled. The
// language ships no built-in runtime, so practical code uses tokio or async-std
// 1. Basic async function
async fn fetch_data(id: u32) -> String {
println!("Fetching data {}", id);
sleep(Duration::from_millis(100)).await;
format!("Data for {}", id)
}
// 2. Concurrent execution
async fn concurrent_fetches() {
let fetches: Vec<_> = (1..=5)
.map(|id| fetch_data(id))
.collect();
let results = join_all(fetches).await;
println!("Results: {:?}", results);
}
// 3. Async with select!
use tokio::select;
async fn select_example() {
let task1 = fetch_data(1);
let task2 = fetch_data(2);
select! {
result = task1 => println!("Task 1 completed: {}", result),
result = task2 => println!("Task 2 completed: {}", result),
}
}
// 4. Understanding Future
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
struct MyFuture {
state: u32,
}
impl Future for MyFuture {
type Output = u32;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
if self.state == 5 {
Poll::Ready(self.state)
} else {
self.state += 1;
// Returning Pending without arranging a wake-up would stall the task
// forever; wake_by_ref asks the executor to poll us again promptly
cx.waker().wake_by_ref();
Poll::Pending
}
}
}
#[tokio::main]
async fn main() {
concurrent_fetches().await;
select_example().await;
// Using custom future
let my_future = MyFuture { state: 0 };
let result = my_future.await;
println!("Custom future result: {}", result);
}
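The poll model can also be demonstrated with no runtime at all: a future is just a state machine you drive by hand. This sketch builds a no-op `Waker` from `RawWaker` (all std) and busy-polls a counting future to completion:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct CountDown(u32);

impl Future for CountDown {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.0 == 0 {
            Poll::Ready(0)
        } else {
            self.0 -= 1;
            Poll::Pending // a real future would register the waker here
        }
    }
}

// A waker whose wake() does nothing - enough for a busy-polling executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Busy-polls until Ready, returning the output and the number of polls.
fn block_on_counting<F: Future>(fut: F) -> (F::Output, u32) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return (out, polls);
        }
    }
}
```

Real executors only re-poll after the waker fires, which is what makes them efficient compared to this busy loop.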
Q6: What are the differences between tokio and async-std?
Answer:
// tokio (more mature, production-ready)
use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
async fn tokio_server() -> Result<(), Box<dyn std::error::Error>> {
let listener = TcpListener::bind("127.0.0.1:8080").await?;
loop {
let (mut socket, addr) = listener.accept().await?;
println!("Connection from: {}", addr);
tokio::spawn(async move {
let mut buf = [0; 1024];
let n = socket.read(&mut buf).await.unwrap();
socket.write_all(&buf[0..n]).await.unwrap();
});
}
}
// async-std (simpler, closer to std)
use async_std::net::TcpListener as AsyncTcpListener; // aliased: tokio's TcpListener is imported above
use async_std::prelude::*;
async fn async_std_server() -> Result<(), Box<dyn std::error::Error>> {
let listener = AsyncTcpListener::bind("127.0.0.1:8081").await?;
while let Ok((stream, addr)) = listener.accept().await {
println!("Connection from: {}", addr);
async_std::task::spawn(async move {
// Handle connection
});
}
Ok(())
}
// Key differences:
// 1. Runtime: both default to multi-threaded work-stealing schedulers,
//    but tokio's is more configurable (current-thread vs multi-thread)
// 2. API: async-std mirrors std's module layout, tokio has its own patterns
// 3. Ecosystem: tokio has a much larger ecosystem (hyper, axum, tonic)
// 4. Maintenance: tokio is actively developed; async-std is no longer
//    actively maintained, so tokio is the usual production choice
Q7: Explain cancellation safety in async Rust.
Answer:
use tokio::time::{sleep, Duration};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
// Cancellation-safe operations can be dropped without causing issues
// 1. Non-cancellation-safe operation
async fn unsafe_operation() -> String {
// This might leave state inconsistent if cancelled
let mut data = vec![];
for i in 0..100 {
data.push(i);
sleep(Duration::from_millis(10)).await; // Cancellation point
}
format!("Data length: {}", data.len())
}
// 2. Making it cancellation-safe with a guard
struct OperationGuard {
data: Vec<i32>,
completed: Arc<AtomicBool>,
}
impl OperationGuard {
fn new(completed: Arc<AtomicBool>) -> Self {
OperationGuard {
data: Vec::with_capacity(100),
completed,
}
}
async fn run(&mut self) -> String {
for i in 0..100 {
if self.completed.load(Ordering::SeqCst) {
return "Cancelled".to_string();
}
self.data.push(i);
sleep(Duration::from_millis(10)).await;
}
format!("Data length: {}", self.data.len())
}
}
// 3. Using tokio's cancellation-safe primitives
use tokio::sync::oneshot;
async fn safe_with_channel() -> Result<String, &'static str> {
let (tx, rx) = oneshot::channel();
tokio::spawn(async move {
// Do work
sleep(Duration::from_millis(100)).await;
tx.send("Done").unwrap();
});
// This select is cancellation-safe
tokio::select! {
result = rx => Ok(result.unwrap()),
_ = sleep(Duration::from_millis(50)) => {
Err("Timeout")
}
}
}
// 4. Cooperative cancellation via a shared flag (spawn plus a flag -
// not true structured concurrency, but a common pattern)
async fn structured_operation() -> String {
let completed = Arc::new(AtomicBool::new(false));
let mut guard = OperationGuard::new(Arc::clone(&completed));
let handle = tokio::spawn(async move {
guard.run().await
});
// Let it run for a while then cancel
sleep(Duration::from_millis(500)).await;
completed.store(true, Ordering::SeqCst);
handle.await.unwrap()
}
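Dropping is the cancellation mechanism in async Rust, so `Drop` impls are the reliable place for cleanup: they run whether a future completes or is dropped mid-await. A std-only sketch of the guard idea (names are ours):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Restores an invariant in Drop, so cleanup happens even if the owning
// future is cancelled (dropped) at an .await point.
struct CleanupGuard {
    done: Arc<AtomicBool>,
}

impl Drop for CleanupGuard {
    fn drop(&mut self) {
        self.done.store(true, Ordering::SeqCst);
    }
}

fn simulated_cancel() -> bool {
    let done = Arc::new(AtomicBool::new(false));
    {
        let _guard = CleanupGuard { done: Arc::clone(&done) };
        // ... work in progress; imagine the future is dropped here ...
    } // guard dropped: cleanup runs regardless of how we got here
    done.load(Ordering::SeqCst)
}
```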
3. Macros and Metaprogramming
Q8: Explain the difference between declarative and procedural macros.
Answer:
// 1. Declarative macros (macro_rules!)
macro_rules! vec_of_strings {
($($x:expr),*) => {
vec![$($x.to_string()),*]
};
}
// 2. Procedural macros (more powerful, custom syntax)
// These are defined in separate crates
// #[derive] macro example
use serde::{Serialize, Deserialize};
#[derive(Debug, Serialize, Deserialize)]
struct User {
name: String,
age: u8,
}
// Attribute-like macro
// #[route(GET, "/path")]
fn route_handler() {}
// Function-like macro
// sql!(SELECT * FROM users WHERE id = 1);
// Example of custom derive macro (conceptual)
// In a separate crate:
// use proc_macro::TokenStream;
//
// #[proc_macro_derive(MyTrait)]
// pub fn my_trait_derive(input: TokenStream) -> TokenStream {
// // Parse input and generate implementation
// }
// Usage:
trait Hello {
fn hello(&self) -> String;
}
// Macro to implement Hello for structs
macro_rules! impl_hello {
($type:ty) => {
impl Hello for $type {
fn hello(&self) -> String {
format!("Hello from {}", stringify!($type))
}
}
};
}
struct Person {
name: String,
}
impl_hello!(Person);
fn main() {
let strings = vec_of_strings!("a", "b", "c");
println!("{:?}", strings);
let person = Person { name: "Alice".to_string() };
println!("{}", person.hello());
}
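One more `macro_rules!` pattern interviews often probe is recursive repetition, e.g. a variadic max. A sketch (the name `max_of!` is ours):

```rust
// Recursive macro_rules!: the base case is a single expression; the
// recursive case peels off the head and compares it with the max of the
// tail.
macro_rules! max_of {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {
        {
            let head = $x;
            let tail = max_of!($($rest),+);
            if head > tail { head } else { tail }
        }
    };
}

fn demo() -> i32 {
    max_of!(3, 9, 4, 1)
}
```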
Q9: Write a macro that implements a DSL for simple HTTP routes.
Answer:
use std::collections::HashMap;
// Macro to define routes
macro_rules! routes {
($($method:ident $path:literal => $handler:expr),* $(,)?) => {
{
let mut router = Router::new();
$(
router.add_route(stringify!($method), $path, Box::new($handler));
)*
router
}
};
}
type Handler = Box<dyn Fn(&str) -> String>;
struct Router {
routes: HashMap<String, HashMap<String, Handler>>,
}
impl Router {
fn new() -> Self {
Router {
routes: HashMap::new(),
}
}
fn add_route(&mut self, method: &str, path: &str, handler: Handler) {
self.routes
.entry(method.to_string())
.or_insert_with(HashMap::new)
.insert(path.to_string(), handler);
}
fn handle(&self, method: &str, path: &str, body: &str) -> Option<String> {
self.routes
.get(method)
.and_then(|routes| routes.get(path))
.map(|handler| handler(body))
}
}
// Usage
fn main() {
let router = routes! {
GET "/" => |_| "Hello, World!".to_string(),
POST "/users" => |body| format!("Creating user: {}", body),
GET "/users/42" => |_| "Getting user 42".to_string(), // exact-match only: ":id" patterns are not parsed by this router
};
// Test the router
if let Some(response) = router.handle("GET", "/", "") {
println!("GET /: {}", response);
}
if let Some(response) = router.handle("POST", "/users", "{\"name\":\"Alice\"}") {
println!("POST /users: {}", response);
}
if let Some(response) = router.handle("GET", "/users/42", "") {
println!("GET /users/42: {}", response);
}
}
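The router above matches paths exactly, so a pattern like `"/users/:id"` would never fire. A sketch of segment-wise matching that extracts `:param` captures (standalone helper, not wired into the macro):

```rust
// Splits pattern and path into '/'-separated segments; ":name" segments
// match any value and capture it. Returns the captured params on success.
fn match_path<'a>(pattern: &str, path: &'a str) -> Option<Vec<(String, &'a str)>> {
    let pat: Vec<&str> = pattern.trim_matches('/').split('/').collect();
    let seg: Vec<&str> = path.trim_matches('/').split('/').collect();
    if pat.len() != seg.len() {
        return None;
    }
    let mut params = Vec::new();
    for (p, s) in pat.iter().zip(seg.iter()) {
        if let Some(name) = p.strip_prefix(':') {
            params.push((name.to_string(), *s));
        } else if p != s {
            return None;
        }
    }
    Some(params)
}
```

A fuller router would run this against each registered pattern and hand the params to the handler.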
Q10: Explain hygiene in Rust macros.
Answer:
// Macro hygiene prevents name clashes between macro and caller scope
// 1. Unhygienic (C preprocessor style) would cause issues
// Rust macros are hygienic
macro_rules! create_var {
($name:ident, $value:expr) => {
let $name = $value;
println!("Inside macro: {}", $name);
};
}
fn hygiene_example() {
let x = 10;
// This won't conflict with outer x
create_var!(x, 5);
println!("Outside macro: {}", x); // Still prints 10
}
// 2. Using $crate for paths that survive re-export
#[macro_export]
macro_rules! log_error {
($msg:expr) => {
// $crate resolves to the defining crate, so crate-local helpers keep
// working wherever the macro is invoked; std items are spelled ::std::
::std::eprintln!("Error: {}", $msg);
};
}
// 3. Hygiene with multiple scopes
macro_rules! with_temp {
($body:block) => {
{
let temp = 42;
$body
}
};
}
fn multi_scope() {
let temp = 100;
// Hygiene: the macro's internal `temp = 42` is invisible to the caller's
// block, so `temp` inside $body still refers to the outer binding
let result = with_temp!({
println!("Inside: {}", temp); // Prints 100, not 42
temp
});
println!("Outside: {}, result: {}", temp, result); // Prints 100, 100
}
// 4. Breaking hygiene when needed
macro_rules! capture_ident {
($name:ident) => {
// This will capture identifier from caller
let $name = 5;
};
}
fn capture_example() {
// This would shadow outer variable intentionally
capture_ident!(x); // Declares x in this scope
println!("x = {}", x); // Prints 5
}
fn main() {
hygiene_example();
multi_scope();
capture_example();
}
4. Unsafe Rust
Q11: When would you need to use unsafe Rust?
Answer:
// Unsafe Rust is needed for:
// 1. Dereferencing raw pointers
// 2. Calling unsafe functions
// 3. Implementing unsafe traits
// 4. Accessing/modifying mutable statics
// 1. FFI (Foreign Function Interface)
extern "C" {
fn abs(input: i32) -> i32;
fn malloc(size: usize) -> *mut u8;
fn free(ptr: *mut u8);
}
fn ffi_example() {
unsafe {
println!("Absolute value of -3: {}", abs(-3));
let ptr = malloc(1024);
if !ptr.is_null() {
// Use the memory
*ptr = 42;
println!("First byte: {}", *ptr);
free(ptr);
}
}
}
// 2. Performance optimizations
fn fast_copy(src: &[u8], dst: &mut [u8]) {
assert!(src.len() <= dst.len());
unsafe {
// Bypass bounds checking for speed
std::ptr::copy_nonoverlapping(
src.as_ptr(),
dst.as_mut_ptr(),
src.len()
);
}
}
// 3. Implementing data structures
struct MyVec<T> {
ptr: *mut T,
len: usize,
capacity: usize,
}
impl<T> MyVec<T> {
fn new() -> Self {
MyVec {
ptr: std::ptr::null_mut(),
len: 0,
capacity: 0,
}
}
fn push(&mut self, value: T) {
if self.len == self.capacity {
self.grow();
}
unsafe {
std::ptr::write(self.ptr.add(self.len), value);
self.len += 1;
}
}
fn grow(&mut self) {
let new_capacity = (self.capacity * 2).max(1);
let new_ptr = unsafe {
let layout = std::alloc::Layout::array::<T>(new_capacity).unwrap();
if self.capacity == 0 {
std::alloc::alloc(layout) as *mut T
} else {
let old_layout = std::alloc::Layout::array::<T>(self.capacity).unwrap();
std::alloc::realloc(
self.ptr as *mut u8,
old_layout,
new_capacity * std::mem::size_of::<T>()
) as *mut T
}
};
if new_ptr.is_null() {
std::alloc::handle_alloc_error(std::alloc::Layout::array::<T>(new_capacity).unwrap());
}
self.ptr = new_ptr;
self.capacity = new_capacity;
}
}
impl<T> Drop for MyVec<T> {
fn drop(&mut self) {
if self.capacity > 0 {
unsafe {
// Drop all elements
for i in 0..self.len {
std::ptr::drop_in_place(self.ptr.add(i));
}
// Deallocate memory
let layout = std::alloc::Layout::array::<T>(self.capacity).unwrap();
std::alloc::dealloc(self.ptr as *mut u8, layout);
}
}
}
}
Q12: Explain the rules for using unsafe code.
Answer:
// The 5 unsafe superpowers:
// 1. Dereference a raw pointer
// 2. Call an unsafe function
// 3. Implement an unsafe trait
// 4. Access/modify a mutable static
// 5. Access fields of unions
// Rule 1: Raw pointers must be valid
fn raw_pointer_rules() {
let mut x = 10;
let ptr = &mut x as *mut i32;
unsafe {
// Pointer must be:
// - Non-null
// - Properly aligned
// - Dereferenceable
// - Not aliasing in prohibited ways
*ptr = 20;
}
// WRONG: Dangling pointer
// let dangling;
// {
// let y = 5;
// dangling = &y as *const i32;
// }
// unsafe { println!("{}", *dangling); } // Undefined behavior!
}
// Rule 2: Unsafe functions must document safety requirements
/// # Safety
/// `ptr` must be:
/// - Non-null
/// - Properly aligned for T
/// - Point to valid memory for the entire lifetime 'a
unsafe fn dangerous<'a, T>(ptr: *const T) -> &'a T {
&*ptr
}
// Rule 3: Unsafe traits must document invariants
/// # Safety
/// Implementer must ensure that the type is actually `Send`
unsafe trait MySend: Send { }
// Rule 4: Mutable statics require unsafe
static mut COUNTER: u32 = 0;
fn mutable_static() {
unsafe {
COUNTER += 1;
println!("Counter: {}", COUNTER);
}
}
// Rule 5: Union access requires unsafe
union IntOrFloat {
i: i32,
f: f32,
}
fn union_example() {
let u = IntOrFloat { i: 42 };
unsafe {
println!("As int: {}", u.i);
// Reading the other field reinterprets the bits (like a transmute).
// This is defined here because every bit pattern is a valid f32, but
// for types with invalid bit patterns (e.g. bool) it would be UB
println!("As float: {}", u.f);
}
}
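The union read above is effectively a transmute. The safe way to do the same bit reinterpretation is `f32::from_bits`/`to_bits`; a sketch comparing the two:

```rust
union IntOrFloat {
    i: u32,
    f: f32,
}

// Reading the inactive field reinterprets the bits - defined for u32/f32
// because every bit pattern is valid for both types.
fn via_union(bits: u32) -> f32 {
    let u = IntOrFloat { i: bits };
    unsafe { u.f }
}

// The safe equivalent provided by std.
fn via_from_bits(bits: u32) -> f32 {
    f32::from_bits(bits)
}
```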
Q13: Implement a safe abstraction over unsafe code.
Answer:
use std::ptr::NonNull;
use std::marker::PhantomData;
// Safe wrapper around raw pointer
struct MyBox<T> {
ptr: NonNull<T>,
_marker: PhantomData<T>,
}
impl<T> MyBox<T> {
fn new(value: T) -> Self {
// Allocate memory
let layout = std::alloc::Layout::new::<T>();
let ptr = unsafe {
let ptr = std::alloc::alloc(layout) as *mut T;
if ptr.is_null() {
std::alloc::handle_alloc_error(layout);
}
ptr.write(value);
NonNull::new_unchecked(ptr)
};
MyBox {
ptr,
_marker: PhantomData,
}
}
fn as_ref(&self) -> &T {
unsafe { self.ptr.as_ref() }
}
fn as_mut(&mut self) -> &mut T {
unsafe { self.ptr.as_mut() }
}
}
impl<T> Drop for MyBox<T> {
fn drop(&mut self) {
unsafe {
// Drop the value
std::ptr::drop_in_place(self.ptr.as_ptr());
// Free the memory
let layout = std::alloc::Layout::new::<T>();
std::alloc::dealloc(self.ptr.as_ptr() as *mut u8, layout);
}
}
}
// Safe wrapper for a simple lock
struct SpinLock<T> {
locked: std::sync::atomic::AtomicBool,
data: std::cell::UnsafeCell<T>,
}
impl<T> SpinLock<T> {
fn new(data: T) -> Self {
SpinLock {
locked: std::sync::atomic::AtomicBool::new(false),
data: std::cell::UnsafeCell::new(data),
}
}
fn lock(&self) -> SpinLockGuard<'_, T> {
while self.locked.swap(true, std::sync::atomic::Ordering::Acquire) {
std::hint::spin_loop();
}
SpinLockGuard {
lock: self,
}
}
}
// UnsafeCell makes SpinLock<T> !Sync by default; sharing it by reference
// across threads (as main does below) requires asserting safety manually:
unsafe impl<T: Send> Sync for SpinLock<T> {}
struct SpinLockGuard<'a, T> {
lock: &'a SpinLock<T>,
}
impl<'a, T> Drop for SpinLockGuard<'a, T> {
fn drop(&mut self) {
self.lock.locked.store(false, std::sync::atomic::Ordering::Release);
}
}
impl<'a, T> std::ops::Deref for SpinLockGuard<'a, T> {
type Target = T;
fn deref(&self) -> &T {
unsafe { &*self.lock.data.get() }
}
}
impl<'a, T> std::ops::DerefMut for SpinLockGuard<'a, T> {
fn deref_mut(&mut self) -> &mut T {
unsafe { &mut *self.lock.data.get() }
}
}
fn main() {
// Using safe MyBox
let mut boxed = MyBox::new(42);
println!("Value: {}", boxed.as_ref());
*boxed.as_mut() = 100;
println!("New value: {}", boxed.as_ref());
// Using safe SpinLock
let lock = SpinLock::new(5);
std::thread::scope(|s| {
s.spawn(|| {
let mut guard = lock.lock();
*guard += 1;
});
s.spawn(|| {
let mut guard = lock.lock();
*guard += 2;
});
});
println!("Final value: {}", *lock.lock());
}
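A smaller instance of the same principle: keep the unsafe raw-pointer round-trip behind an API that cannot be misused. This sketch hands a `Box` across a raw-pointer boundary, as FFI code often must (the `Handle` type is ours):

```rust
// Box::into_raw transfers ownership to a raw pointer; Box::from_raw takes
// it back exactly once. The wrapper enforces "exactly once" by consuming
// the handle by value.
struct Handle(*mut i32);

fn make_handle(v: i32) -> Handle {
    Handle(Box::into_raw(Box::new(v)))
}

fn consume_handle(h: Handle) -> i32 {
    // Safe because the pointer came from Box::into_raw and Handle is
    // moved in, so from_raw can run at most once per allocation.
    unsafe { *Box::from_raw(h.0) }
}
```

Because `consume_handle` takes `Handle` by value, a double-free is a compile error (use after move) rather than undefined behavior.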
5. Performance Optimization
Q14: How do you profile and optimize Rust code?
Answer:
use std::time::Instant;
use std::collections::HashMap;
// 1. Benchmarking with std::time
fn time_function<F, T>(f: F) -> (T, std::time::Duration)
where
F: FnOnce() -> T,
{
let start = Instant::now();
let result = f();
let duration = start.elapsed();
(result, duration)
}
// 2. Using criterion for benchmarks
// In Cargo.toml:
// [dev-dependencies]
// criterion = "0.5"
//
// [[bench]]
// name = "my_benchmark"
// harness = false
// 3. Common optimizations
struct Optimizer {
data: Vec<i32>,
}
impl Optimizer {
// Bad: Repeated allocations
fn bad_sum_of_squares(&self) -> i32 {
let mut squares = Vec::new();
for &x in &self.data {
squares.push(x * x);
}
squares.iter().sum()
}
// Good: No intermediate allocation
fn good_sum_of_squares(&self) -> i32 {
self.data.iter().map(|&x| x * x).sum()
}
// Bad: manual implementation with a bounds check on every index
// (the naive `high = len - 1 ... mid - 1` version also underflows on an
// empty slice or when mid == 0; a half-open range avoids that)
fn bad_binary_search(&self, target: i32) -> Option<usize> {
let mut low = 0;
let mut high = self.data.len(); // half-open: [low, high)
while low < high {
let mid = low + (high - low) / 2;
if self.data[mid] == target {
return Some(mid);
} else if self.data[mid] < target {
low = mid + 1;
} else {
high = mid;
}
}
None
}
// Good: Using slice pattern to avoid bounds checks
fn good_binary_search(&self, target: i32) -> Option<usize> {
self.data.binary_search(&target).ok()
}
}
// 4. Cache-friendly data structures
#[repr(C)]
struct CacheOptimized {
a: i32,
b: i32,
c: i32,
d: i32,
}
// 5. Using const generics for compile-time optimization
fn process_array<const N: usize>(arr: [i32; N]) -> i32 {
let mut sum = 0;
for i in 0..N {
sum += arr[i];
}
sum
}
// 6. Avoiding clones
#[derive(Clone)]
struct ExpensiveData {
data: Vec<u8>,
}
fn process_data(data: &ExpensiveData) { // Use reference
println!("Processing {} bytes", data.data.len());
}
fn main() {
// Benchmarking
let data = vec![1, 2, 3, 4, 5];
let optimizer = Optimizer { data };
let (result, time) = time_function(|| optimizer.good_sum_of_squares());
println!("Sum: {}, took {:?}", result, time);
// Memory layout optimization
println!("Size of CacheOptimized: {} bytes",
std::mem::size_of::<CacheOptimized>());
println!("Alignment: {} bytes",
std::mem::align_of::<CacheOptimized>());
// Const generic
let arr = [1, 2, 3, 4, 5];
println!("Sum: {}", process_array(arr));
// Avoid clones
let expensive = ExpensiveData { data: vec![0; 1000] };
process_data(&expensive); // Borrow instead of clone
println!("Still have data: {} bytes", expensive.data.len());
}
Q15: Explain zero-cost abstractions in Rust.
Answer:
// Zero-cost abstractions mean high-level code compiles to
// equivalent low-level code with no runtime overhead
// 1. Iterators vs manual loops
fn iterator_example(data: &[i32]) -> i32 {
// High-level iterator chain
data.iter()
.filter(|&&x| x % 2 == 0)
.map(|&x| x * x)
.sum()
}
fn manual_loop_example(data: &[i32]) -> i32 {
// Equivalent manual loop (same performance)
let mut sum = 0;
for &x in data {
if x % 2 == 0 {
sum += x * x;
}
}
sum
}
// 2. Generic functions
fn generic_min<T: Ord>(a: T, b: T) -> T {
if a < b { a } else { b }
}
// Compiles to specialized version for each type
// 3. Closures
fn closure_example() {
let add_one = |x: i32| x + 1;
let result = add_one(5);
// Closure is inlined, same as writing: 5 + 1
}
// 4. RAII (Resource Acquisition Is Initialization)
struct Guard {
data: String,
}
impl Drop for Guard {
fn drop(&mut self) {
println!("Cleaning up: {}", self.data);
}
}
// Drop is called automatically, zero overhead compared to manual cleanup
// 5. Pattern matching
fn pattern_match(x: Option<i32>) -> i32 {
match x {
Some(v) => v,
None => 0,
}
}
// Compiles to efficient conditional check
// 6. Zero-sized types
struct Empty; // Takes no space
struct Wrapper<T>(T); // Same size as T
fn zero_sized_example() {
println!("Size of Empty: {} bytes", std::mem::size_of::<Empty>());
println!("Size of Wrapper<Empty>: {} bytes", std::mem::size_of::<Wrapper<Empty>>());
}
// 7. Enum niche optimization (illustrative; actually defining our own
// Option here would clash with the prelude, so it is shown commented out)
// enum Option<T> {
//     Some(T),
//     None,
// }
// Option<&T> uses the null-pointer niche - same size as &T, zero overhead
fn main() {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// These two compile to nearly identical assembly
let iter_sum = iterator_example(&data);
let loop_sum = manual_loop_example(&data);
println!("Iterator: {}, Manual: {}", iter_sum, loop_sum);
// Generic instantiation
println!("Min i32: {}", generic_min(5, 3));
println!("Min f64: {}", generic_min(5.0, 3.0));
zero_sized_example();
// Option optimization
let x: Option<&i32> = Some(&5);
println!("Size of Option<&i32>: {} bytes", std::mem::size_of_val(&x));
// Same size as a raw pointer!
}
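The niche optimization claimed above is directly checkable with `size_of`: references, `Box`, and `NonZero` integers all carry a forbidden bit pattern the compiler reuses to represent `None`. A sketch:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

// Types with a forbidden bit pattern (null, zero) give Option a free
// representation for None - no extra discriminant byte is needed.
fn niche_sizes() -> (usize, usize, usize) {
    (
        size_of::<Option<&i32>>(),
        size_of::<Option<Box<i32>>>(),
        size_of::<Option<NonZeroU32>>(),
    )
}
```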
6. Advanced Trait Patterns
Q16: Explain the orphan rule and how to work around it.
Answer:
use std::fmt;
// The orphan rule: You can't implement a foreign trait for a foreign type
// i.e., at least one of trait or type must be local to your crate
// Problem: Can't do this (in your crate)
// impl Display for Vec<String> { }
// Solutions:
// 1. Newtype pattern (wrapper)
struct MyVec(Vec<String>);
impl fmt::Display for MyVec {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{:?}", self.0)
}
}
// 2. Using a local trait (extension trait)
trait StringExt {
fn repeat_twice(&self) -> String;
}
impl StringExt for String {
fn repeat_twice(&self) -> String {
format!("{}{}", self, self)
}
}
// 3. Using blanket implementations with constraints
trait Printable {
fn print(&self);
}
// Blanket implementation for any type that implements Display
impl<T: fmt::Display> Printable for T {
fn print(&self) {
println!("{}", self);
}
}
// 4. Workaround for specific cases with marker traits
pub trait MyMarker {}
impl MyMarker for String {}
// Can implement foreign trait for types marked with MyMarker
// but not directly for foreign types
// 5. Using deref coercion (for smart pointers)
struct Wrapper<T>(T);
impl<T> std::ops::Deref for Wrapper<T> {
type Target = T;
fn deref(&self) -> &T {
&self.0
}
}
// Now Wrapper<String> can be used where &str is expected
// 6. Using associated types
trait Container {
type Item;
fn get(&self) -> &Self::Item;
}
impl<T> Container for Vec<T> {
type Item = T;
fn get(&self) -> &T {
&self[0]
}
}
fn main() {
// Newtype pattern
let my_vec = MyVec(vec!["hello".to_string(), "world".to_string()]);
println!("{}", my_vec);
// Extension trait
let s = "hello".to_string();
println!("{}", s.repeat_twice());
// Blanket implementation
42.print();
"hello".print();
// Deref coercion
let wrapped = Wrapper("hello".to_string());
takes_str(&wrapped); // Works due to deref coercion
}
fn takes_str(s: &str) {
println!("Got: {}", s);
}
Q17: Implement the Iterator trait for a custom type.
Answer:
// 1. Simple iterator
struct Counter {
count: u32,
max: u32,
}
impl Counter {
fn new(max: u32) -> Self {
Counter { count: 0, max }
}
}
impl Iterator for Counter {
type Item = u32;
fn next(&mut self) -> Option<Self::Item> {
self.count += 1;
if self.count <= self.max {
Some(self.count)
} else {
None
}
}
}
// 2. Infinite iterator
struct Fibonacci {
current: u64,
next: u64,
}
impl Fibonacci {
fn new() -> Self {
Fibonacci { current: 0, next: 1 }
}
}
impl Iterator for Fibonacci {
type Item = u64;
fn next(&mut self) -> Option<Self::Item> {
let new_next = self.current + self.next;
self.current = self.next;
self.next = new_next;
Some(self.current)
}
}
// 3. Double-ended iterator
struct Range {
start: i32,
end: i32,
}
impl Iterator for Range {
type Item = i32;
fn next(&mut self) -> Option<Self::Item> {
if self.start <= self.end {
let result = self.start;
self.start += 1;
Some(result)
} else {
None
}
}
}
impl DoubleEndedIterator for Range {
fn next_back(&mut self) -> Option<Self::Item> {
if self.start <= self.end {
let result = self.end;
self.end -= 1;
Some(result)
} else {
None
}
}
}
// 4. Generic iterator with lifetime
struct Iter<'a, T> {
slice: &'a [T],
index: usize,
}
impl<'a, T> Iterator for Iter<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<Self::Item> {
if self.index < self.slice.len() {
let result = &self.slice[self.index];
self.index += 1;
Some(result)
} else {
None
}
}
}
// 5. Iterator that returns references to its own data
struct Buffer {
data: Vec<String>,
index: usize,
}
impl Buffer {
fn new(data: Vec<String>) -> Self {
Buffer { data, index: 0 }
}
}
impl Iterator for Buffer {
type Item = String; // Returns owned data
fn next(&mut self) -> Option<Self::Item> {
if self.index < self.data.len() {
let result = self.data[self.index].clone();
self.index += 1;
Some(result)
} else {
None
}
}
}
// 6. Using iterator adapters
fn iterator_adapters() {
let counter = Counter::new(10);
let result: Vec<u32> = counter
.filter(|&x| x % 2 == 0)
.map(|x| x * x)
.collect();
println!("Squares of evens: {:?}", result);
}
fn main() {
// Basic counter
let mut counter = Counter::new(5);
while let Some(x) = counter.next() {
println!("Counter: {}", x);
}
// Fibonacci
let fib: Vec<u64> = Fibonacci::new().take(10).collect();
println!("Fibonacci: {:?}", fib);
// Double-ended iterator
let mut range = Range { start: 1, end: 5 };
println!("From front: {:?}", range.next());
println!("From back: {:?}", range.next_back());
// Generic iterator
let data = vec![1, 2, 3, 4];
let iter = Iter {
slice: &data,
index: 0,
};
for &x in iter {
println!("Value: {}", x);
}
iterator_adapters();
}
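One related point interviews often probe: implementing `IntoIterator`, which is what a `for` loop actually desugars to (the examples above only implement `Iterator`). A sketch for a by-value iterator over a small custom type (names are ours):

```rust
struct Pair {
    a: i32,
    b: i32,
}

struct PairIter {
    items: [i32; 2],
    index: usize,
}

impl Iterator for PairIter {
    type Item = i32;
    fn next(&mut self) -> Option<i32> {
        let v = self.items.get(self.index).copied();
        self.index += 1;
        v
    }
}

// IntoIterator is the trait `for x in pair` calls under the hood.
impl IntoIterator for Pair {
    type Item = i32;
    type IntoIter = PairIter;
    fn into_iter(self) -> PairIter {
        PairIter { items: [self.a, self.b], index: 0 }
    }
}
```

Collections typically provide three flavors: `IntoIterator` for `T` (by value), `&T` (shared refs), and `&mut T` (mutable refs).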
7. System Programming
Q18: How do you implement a memory allocator in Rust?
Answer:
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};
// 1. Custom allocator that counts allocations
struct CountingAllocator;
unsafe impl GlobalAlloc for CountingAllocator {
unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
// Count allocation
ALLOC_COUNT.fetch_add(1, Ordering::SeqCst);
ALLOC_BYTES.fetch_add(layout.size(), Ordering::SeqCst);
// Delegate to system allocator
System.alloc(layout)
}
unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
// Count deallocation
DEALLOC_COUNT.fetch_add(1, Ordering::SeqCst);
System.dealloc(ptr, layout)
}
}
static ALLOC_COUNT: AtomicUsize = AtomicUsize::new(0);
static DEALLOC_COUNT: AtomicUsize = AtomicUsize::new(0);
static ALLOC_BYTES: AtomicUsize = AtomicUsize::new(0);
#[global_allocator]
static ALLOCATOR: CountingAllocator = CountingAllocator;
// 2. Simple bump allocator
struct BumpAllocator {
heap_start: usize,
heap_end: usize,
next: AtomicUsize,
}
impl BumpAllocator {
const fn new(heap_start: usize, heap_end: usize) -> Self {
BumpAllocator {
heap_start,
heap_end,
next: AtomicUsize::new(heap_start),
}
}
}
unsafe impl GlobalAlloc for BumpAllocator {
unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
let align = layout.align();
let size = layout.size();
// A plain load-then-store would race between threads; reserve the
// region with a compare-exchange loop instead
let mut current = self.next.load(Ordering::Relaxed);
loop {
let aligned = (current + align - 1) & !(align - 1);
if aligned + size > self.heap_end {
return std::ptr::null_mut(); // Out of memory
}
match self.next.compare_exchange_weak(
current,
aligned + size,
Ordering::SeqCst,
Ordering::Relaxed,
) {
Ok(_) => return aligned as *mut u8,
Err(actual) => current = actual,
}
}
}
unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
// Bump allocator never deallocates
}
}
// 3. Pool allocator for fixed-size objects
struct PoolAllocator {
chunks: Vec<*mut u8>,
free_list: Vec<*mut u8>,
chunk_size: usize,
object_size: usize,
}
impl PoolAllocator {
fn new(object_size: usize, chunk_size: usize) -> Self {
PoolAllocator {
chunks: Vec::new(),
free_list: Vec::new(),
chunk_size,
object_size,
}
}
fn allocate(&mut self) -> *mut u8 {
if let Some(ptr) = self.free_list.pop() {
return ptr;
}
// Allocate new chunk
let layout = Layout::array::<u8>(self.chunk_size).unwrap();
let chunk = unsafe { std::alloc::alloc(layout) };
if chunk.is_null() {
return std::ptr::null_mut();
}
self.chunks.push(chunk);
// Split chunk into objects
let objects = self.chunk_size / self.object_size;
for i in 0..objects {
let ptr = unsafe { chunk.add(i * self.object_size) };
self.free_list.push(ptr);
}
self.free_list.pop().unwrap()
}
fn deallocate(&mut self, ptr: *mut u8) {
self.free_list.push(ptr);
}
}
impl Drop for PoolAllocator {
fn drop(&mut self) {
let layout = Layout::array::<u8>(self.chunk_size).unwrap();
for &chunk in &self.chunks {
unsafe {
std::alloc::dealloc(chunk, layout);
}
}
}
}
fn main() {
// Test counting allocator
let v = vec![1, 2, 3, 4, 5];
println!("Allocations: {}", ALLOC_COUNT.load(Ordering::SeqCst));
println!("Bytes allocated: {}", ALLOC_BYTES.load(Ordering::SeqCst));
drop(v);
println!("Deallocations: {}", DEALLOC_COUNT.load(Ordering::SeqCst));
// Test pool allocator
let mut pool = PoolAllocator::new(32, 1024);
let ptr1 = pool.allocate();
let ptr2 = pool.allocate();
println!("Pool allocated: {:p}, {:p}", ptr1, ptr2);
pool.deallocate(ptr1);
let ptr3 = pool.allocate(); // Reuses ptr1
println!("Reused: {:p}", ptr3);
}
Q19: Explain how to interface with C code in Rust.
Answer:
use std::ffi::{CString, CStr};
use std::os::raw::{c_char, c_int, c_void};
// 1. Declaring external C functions
extern "C" {
fn puts(s: *const c_char) -> c_int;
fn malloc(size: usize) -> *mut c_void;
fn free(ptr: *mut c_void);
fn strlen(s: *const c_char) -> usize;
}
// 2. Creating C-compatible types
#[repr(C)]
struct Point {
x: c_int,
y: c_int,
}
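A property worth stating in an interview: `#[repr(C)]` fixes the field order and layout to match the C ABI, so sizes and alignment line up with the C definition. A quick sanity check:

```rust
use std::mem::{align_of, size_of};
use std::os::raw::c_int;

#[repr(C)]
struct Point {
    x: c_int,
    y: c_int,
}

fn main() {
    // Two c_ints of equal alignment: no padding, so the size is exactly 2x
    assert_eq!(size_of::<Point>(), 2 * size_of::<c_int>());
    assert_eq!(align_of::<Point>(), align_of::<c_int>());
}
```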
// 3. Exporting Rust functions to C
#[no_mangle]
pub extern "C" fn rust_add(a: c_int, b: c_int) -> c_int {
a + b
}
#[no_mangle]
pub extern "C" fn rust_create_point(x: c_int, y: c_int) -> *mut Point {
let point = Box::new(Point { x, y });
Box::into_raw(point)
}
#[no_mangle]
pub extern "C" fn rust_destroy_point(point: *mut Point) {
if !point.is_null() {
unsafe {
drop(Box::from_raw(point));
}
}
}
// 4. Working with C strings
fn c_string_example() {
// Rust -> C
let rust_string = "Hello from Rust";
let c_string = CString::new(rust_string).unwrap();
unsafe {
puts(c_string.as_ptr());
}
// C -> Rust
let c_str_ptr = c_string.as_ptr();
let rust_str = unsafe {
CStr::from_ptr(c_str_ptr).to_str().unwrap()
};
println!("Converted back: {}", rust_str);
// Using strlen
unsafe {
let len = strlen(c_str_ptr);
println!("String length: {}", len);
}
}
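A detail that trips people up: `CString::new` appends the NUL terminator itself and therefore rejects input containing an interior NUL byte, since C strings cannot represent one:

```rust
use std::ffi::CString;

fn main() {
    // A trailing NUL is added automatically; interior NULs are an error
    assert!(CString::new("hello").is_ok());
    assert!(CString::new("he\0llo").is_err());
    // as_bytes_with_nul exposes the terminator
    let c = CString::new("hi").unwrap();
    assert_eq!(c.as_bytes_with_nul(), b"hi\0");
}
```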
// 5. Callbacks between Rust and C
type Callback = extern "C" fn(c_int) -> c_int;
extern "C" fn rust_callback(x: c_int) -> c_int {
println!("Rust callback called with: {}", x);
x * 2
}
extern "C" {
fn register_callback(cb: Callback);
fn trigger_callback(value: c_int) -> c_int;
}
// 6. Struct with callbacks
#[repr(C)]
struct CallbackHandler {
data: *mut c_void,
callback: extern "C" fn(*mut c_void, c_int) -> c_int,
}
extern "C" fn c_callback_handler(data: *mut c_void, value: c_int) -> c_int {
// Recover the handler that was smuggled through the C side as `void*`,
// then dispatch to the stored callback
let handler = unsafe { &*(data as *const CallbackHandler) };
(handler.callback)(handler.data, value)
}
// 7. FFI safety wrapper
struct SafeWrapper {
point: *mut Point,
}
impl SafeWrapper {
fn new(x: i32, y: i32) -> Self {
let point = unsafe {
let ptr = malloc(std::mem::size_of::<Point>()) as *mut Point;
// Writing through a null pointer is UB, so check malloc's result first
assert!(!ptr.is_null(), "malloc failed");
(*ptr).x = x;
(*ptr).y = y;
ptr
};
SafeWrapper { point }
}
fn get(&self) -> (i32, i32) {
unsafe {
((*self.point).x, (*self.point).y)
}
}
}
impl Drop for SafeWrapper {
fn drop(&mut self) {
unsafe {
free(self.point as *mut c_void);
}
}
}
fn main() {
// Basic FFI
c_string_example();
// Using safe wrapper
let wrapper = SafeWrapper::new(10, 20);
println!("Point from C: {:?}", wrapper.get());
// Callback example (requires linking a C library that defines
// register_callback / trigger_callback)
// unsafe {
//     register_callback(rust_callback);
//     let result = trigger_callback(42);
//     println!("Callback result: {}", result);
// }
// External C function (if linking with C library)
// unsafe {
// puts(CString::new("Calling C puts!").unwrap().as_ptr());
// }
}
8. WebAssembly
Q20: How do you compile Rust to WebAssembly and interact with JavaScript?
Answer:
// 1. Basic WASM function
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub fn add(a: i32, b: i32) -> i32 {
a + b
}
// 2. Working with strings
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
format!("Hello, {}!", name)
}
// 3. Returning complex objects
#[wasm_bindgen]
pub struct Point {
x: i32,
y: i32,
}
#[wasm_bindgen]
impl Point {
#[wasm_bindgen(constructor)]
pub fn new(x: i32, y: i32) -> Point {
Point { x, y }
}
#[wasm_bindgen(getter)]
pub fn x(&self) -> i32 {
self.x
}
#[wasm_bindgen(setter)]
pub fn set_x(&mut self, x: i32) {
self.x = x;
}
pub fn distance(&self, other: &Point) -> f64 {
let dx = (self.x - other.x) as f64;
let dy = (self.y - other.y) as f64;
(dx * dx + dy * dy).sqrt()
}
}
// 4. Working with arrays
#[wasm_bindgen]
pub fn sum_array(arr: &[i32]) -> i32 {
arr.iter().sum()
}
// 5. Returning arrays
#[wasm_bindgen]
pub fn create_array(len: usize) -> Vec<i32> {
(0..len as i32).collect()
}
// 6. Async functions
#[wasm_bindgen]
pub async fn fetch_data(url: String) -> Result<String, JsValue> {
let response = reqwest::get(&url)
.await
.map_err(|e| JsValue::from_str(&e.to_string()))?;
let text = response
.text()
.await
.map_err(|e| JsValue::from_str(&e.to_string()))?;
Ok(text)
}
// 7. Calling JavaScript from Rust
#[wasm_bindgen]
extern "C" {
#[wasm_bindgen(js_namespace = console)]
fn log(s: &str);
#[wasm_bindgen(js_namespace = Math)]
fn random() -> f64;
type Date;
#[wasm_bindgen(static_method_of = Date)]
fn now() -> f64;
#[wasm_bindgen(js_namespace = document)]
fn getElementById(id: &str) -> Option<Element>;
type Element;
// `setter` makes this assign the innerHTML property rather than call it
#[wasm_bindgen(method, setter, js_name = innerHTML)]
fn set_inner_html(this: &Element, html: &str);
}
// 8. DOM manipulation
#[wasm_bindgen]
pub fn update_dom() -> Result<(), JsValue> {
if let Some(element) = getElementById("app") {
let timestamp = Date::now();
element.set_inner_html(&format!(
"Hello from Rust! Timestamp: {}",
timestamp
));
}
Ok(())
}
// 9. Working with closures
use wasm_bindgen::closure::Closure;
use wasm_bindgen::JsCast; // for dyn_ref / dyn_into / unchecked_ref
#[wasm_bindgen]
pub fn setup_click_handler() -> Result<(), JsValue> {
let window = web_sys::window().expect("no global window");
let document = window.document().expect("no document");
let button = document.get_element_by_id("my-button")
.expect("no button");
let closure = Closure::wrap(Box::new(move || {
log("Button clicked!");
}) as Box<dyn FnMut()>);
button
.dyn_ref::<web_sys::HtmlElement>()
.expect("not an element")
.set_onclick(Some(closure.as_ref().unchecked_ref()));
closure.forget(); // Prevent closure from being dropped
Ok(())
}
// 10. WebAssembly linear memory
#[wasm_bindgen]
pub fn memory_operations(ptr: *mut u8, len: usize) -> u32 {
unsafe {
let slice = std::slice::from_raw_parts(ptr, len);
// Widen to u32 so the sum cannot overflow a single byte
slice.iter().map(|&b| b as u32).sum()
}
}
// 11. Working with web_sys
use web_sys::{CanvasRenderingContext2d, HtmlCanvasElement};
#[wasm_bindgen]
pub fn draw_canvas() -> Result<(), JsValue> {
let document = web_sys::window().unwrap().document().unwrap();
let canvas = document.get_element_by_id("canvas")
.unwrap()
.dyn_into::<HtmlCanvasElement>()?;
let context = canvas
.get_context("2d")?
.unwrap()
.dyn_into::<CanvasRenderingContext2d>()?;
context.set_fill_style(&JsValue::from_str("red"));
context.fill_rect(10.0, 10.0, 100.0, 100.0);
Ok(())
}
// 12. Performance-sensitive code
#[wasm_bindgen]
pub fn process_pixels(data: &mut [u8]) {
for chunk in data.chunks_exact_mut(4) {
// Simple grayscale filter
let gray = (chunk[0] as u16 + chunk[1] as u16 + chunk[2] as u16) / 3;
chunk[0] = gray as u8;
chunk[1] = gray as u8;
chunk[2] = gray as u8;
}
}
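Because `process_pixels` is plain Rust with no JS dependencies, it can be unit-tested natively before compiling to wasm. A quick check of the averaging logic on a single RGBA pixel (values are arbitrary):

```rust
fn process_pixels(data: &mut [u8]) {
    for chunk in data.chunks_exact_mut(4) {
        // Simple grayscale filter: average R, G, B; leave alpha untouched
        let gray = (chunk[0] as u16 + chunk[1] as u16 + chunk[2] as u16) / 3;
        chunk[0] = gray as u8;
        chunk[1] = gray as u8;
        chunk[2] = gray as u8;
    }
}

fn main() {
    let mut px = [30u8, 60, 90, 255];
    process_pixels(&mut px);
    // (30 + 60 + 90) / 3 == 60; alpha byte is unchanged
    assert_eq!(px, [60, 60, 60, 255]);
}
```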
// In JavaScript, you would use:
// import * as wasm from './pkg/your_crate.js';
//
// async function init() {
// await wasm.default();
//
// // Call Rust functions
// console.log(wasm.add(5, 3));
// console.log(wasm.greet("World"));
//
// // Create Rust objects
// let point = new wasm.Point(10, 20);
// console.log(point.distance(new wasm.Point(5, 5)));
//
// // Work with arrays (wasm-bindgen copies typed arrays across the
// // boundary; a typed array's byteOffset is NOT a wasm memory pointer)
// let arr = new Int32Array([1, 2, 3, 4]);
// console.log(wasm.sum_array(arr));
// }
9. Embedded Rust
Q21: How do you write Rust for embedded systems?
Answer:
#![no_std] // No standard library
#![no_main] // No main function
use core::panic::PanicInfo;
use cortex_m_rt::entry;
use cortex_m_semihosting::hprintln;
// 1. No_std environment
// No heap allocation, no threads, no file I/O
// 2. Panic handler for no_std
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
loop {}
}
// 3. Entry point
#[entry]
fn main() -> ! {
hprintln!("Hello, embedded world!").unwrap();
// Main loop
loop {
// Do work
}
}
// 4. GPIO manipulation
use embedded_hal::digital::v2::{InputPin, OutputPin, ToggleableOutputPin};
struct Led<P: OutputPin> {
pin: P,
}
impl<P: OutputPin> Led<P> {
fn new(pin: P) -> Self {
Led { pin }
}
fn on(&mut self) {
self.pin.set_high().ok();
}
fn off(&mut self) {
self.pin.set_low().ok();
}
}
// `toggle` lives on a separate trait in embedded-hal 0.2
impl<P: OutputPin + ToggleableOutputPin> Led<P> {
fn toggle(&mut self) {
self.pin.toggle().ok();
}
}
// 5. Timer usage
use embedded_hal::blocking::delay::{DelayMs, DelayUs};
fn blink_led<D, P>(delay: &mut D, led: &mut Led<P>)
where
D: DelayMs<u32>,
P: OutputPin,
{
loop {
led.on();
delay.delay_ms(500u32);
led.off();
delay.delay_ms(500u32);
}
}
// 6. Reading sensors
use embedded_hal::adc::{Channel, OneShot};
struct TemperatureSensor<Adc, Pin> {
adc: Adc,
pin: Pin,
}
impl<Adc, Pin> TemperatureSensor<Adc, Pin> {
fn new(adc: Adc, pin: Pin) -> Self {
TemperatureSensor { adc, pin }
}
// `OneShot` is parameterized over a marker type for the ADC peripheral and
// the sample word type, so those become method-level generics here
fn read_temperature<ADC, Word>(
&mut self,
) -> Result<f32, <Adc as OneShot<ADC, Word, Pin>>::Error>
where
Adc: OneShot<ADC, Word, Pin>,
Pin: Channel<ADC>,
Word: Into<u32>,
{
// `read` returns an `nb::Result`; block until the conversion completes
let raw: u32 = nb::block!(self.adc.read(&mut self.pin))?.into();
// Convert the raw 12-bit sample to a temperature (sensor-specific scaling)
let voltage = raw as f32 * 3.3 / 4096.0;
Ok((voltage - 0.5) * 100.0)
}
}
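The conversion itself is plain arithmetic and easy to verify on the host. Assuming a 12-bit ADC, a 3.3 V reference, and a 500 mV offset / 10 mV-per-degree transfer function (the scaling the code above implies, as on MCP9700-style sensors):

```rust
fn main() {
    // A raw reading of 1241 out of 4096 corresponds to roughly 1.0 V
    let raw: u32 = 1241;
    let voltage = raw as f32 * 3.3 / 4096.0;
    let temperature = (voltage - 0.5) * 100.0;
    assert!((voltage - 1.0).abs() < 0.01);
    assert!((temperature - 50.0).abs() < 1.0);
}
```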
// 7. Serial communication
use embedded_hal::serial::{Read, Write};
fn echo_serial<S>(serial: &mut S) -> Result<(), <S as Write<u8>>::Error>
where
S: Read<u8> + Write<u8>,
{
// Both traits define an `Error` type, so the return type must name the
// trait explicitly. `read`/`write` are non-blocking (`nb::Result`); echo
// until a read would block or fails.
while let Ok(byte) = serial.read() {
nb::block!(serial.write(byte))?;
nb::block!(serial.flush())?;
}
Ok(())
}
// 8. Interrupt handling
use cortex_m::interrupt::{self, Mutex};
use cortex_m::asm;
use core::cell::RefCell;
// `GPIO` stands in for a concrete output-pin type from your HAL
static SHARED: Mutex<RefCell<Option<Led<GPIO>>>> = Mutex::new(RefCell::new(None));
fn interrupt_handler() {
interrupt::free(|cs| {
if let Some(led) = SHARED.borrow(cs).borrow_mut().as_mut() {
led.toggle();
}
});
}
// 9. Memory-mapped registers
use volatile_register::{RW, RO};
#[repr(C)]
struct UartRegisters {
data: RW<u32>, // Data register
status: RO<u32>, // Status register
control: RW<u32>, // Control register
}
const UART_BASE: usize = 0x4000_1000;
fn uart_example() {
let uart = unsafe { &*(UART_BASE as *const UartRegisters) };
// Read status
let status = uart.status.read();
// Write control
unsafe { uart.control.write(0x01) };
}
// 10. Fixed-point arithmetic (no FPU)
struct Fixed(i32);
impl Fixed {
const SCALE: i32 = 1000;
fn new(value: i32) -> Self {
Fixed(value * Self::SCALE)
}
fn from_raw(raw: i32) -> Self {
Fixed(raw)
}
fn as_i32(&self) -> i32 {
self.0 / Self::SCALE
}
fn as_f32(&self) -> f32 {
self.0 as f32 / Self::SCALE as f32
}
}
impl core::ops::Add for Fixed {
type Output = Self;
fn add(self, other: Self) -> Self {
Fixed(self.0 + other.0)
}
}
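Addition works on the raw values directly, but multiplication doubles the scale factor, so the product must be divided back down once; widening to `i64` avoids overflowing the intermediate. A self-contained sketch:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Fixed(i32);

impl Fixed {
    const SCALE: i32 = 1000;
    fn new(value: i32) -> Self {
        Fixed(value * Self::SCALE)
    }
}

impl core::ops::Mul for Fixed {
    type Output = Self;
    fn mul(self, other: Self) -> Self {
        // (a*S) * (b*S) = a*b*S^2, so divide by S once after multiplying
        Fixed(((self.0 as i64 * other.0 as i64) / Self::SCALE as i64) as i32)
    }
}

fn main() {
    assert_eq!(Fixed::new(2) * Fixed::new(3), Fixed(6_000));  // 6.0
    assert_eq!(Fixed(2_500) * Fixed::new(4), Fixed(10_000)); // 2.5 * 4 = 10.0
}
```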
// 11. Device-specific configuration
#[repr(u8)]
enum ClockSource {
HSI = 0,
HSE = 1,
PLL = 2,
}
struct SystemConfig {
clock_source: ClockSource,
frequency: u32,
}
// 12. Real-time constraints
struct Deadline {
deadline_us: u32,
}
impl Deadline {
fn new(deadline_us: u32) -> Self {
Deadline { deadline_us }
}
fn is_met(&self, current_time_us: u32) -> bool {
current_time_us <= self.deadline_us
}
fn remaining_us(&self, current_time_us: u32) -> i32 {
self.deadline_us as i32 - current_time_us as i32
}
}
// 13. DMA transfers
struct DmaTransfer<T> {
source: *const T,
destination: *mut T,
count: usize,
}
impl<T> DmaTransfer<T> {
fn start(&self) {
// Configure DMA controller
// Start transfer
}
fn wait(&self) {
// Wait for transfer complete interrupt
}
}
10. Interview Tips and Scenarios
Q22: Common interview scenarios and how to approach them
Answer:
// Scenario 1: Design a thread-safe cache
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};
struct Cache<K, V> {
inner: Arc<RwLock<HashMap<K, CacheEntry<V>>>>,
ttl: Duration,
}
struct CacheEntry<V> {
value: V,
expires: Instant,
}
impl<K: Eq + std::hash::Hash + Clone, V: Clone> Cache<K, V> {
fn new(ttl: Duration) -> Self {
Cache {
inner: Arc::new(RwLock::new(HashMap::new())),
ttl,
}
}
fn get(&self, key: &K) -> Option<V> {
let inner = self.inner.read().unwrap();
if let Some(entry) = inner.get(key) {
if Instant::now() < entry.expires {
return Some(entry.value.clone());
}
}
None
}
fn set(&self, key: K, value: V) {
let mut inner = self.inner.write().unwrap();
inner.insert(key, CacheEntry {
value,
expires: Instant::now() + self.ttl,
});
}
fn cleanup(&self) {
let mut inner = self.inner.write().unwrap();
inner.retain(|_, entry| Instant::now() < entry.expires);
}
}
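The cache's expiry logic reduces to comparing `Instant::now()` against a stored deadline, which can be checked in isolation (the 50 ms TTL here is arbitrary):

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let ttl = Duration::from_millis(50);
    let expires = Instant::now() + ttl;
    // A freshly inserted entry is still live
    assert!(Instant::now() < expires);
    thread::sleep(Duration::from_millis(80));
    // After the TTL has elapsed, the entry reads as expired
    assert!(Instant::now() >= expires);
}
```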
// Scenario 2: Implement a simple actor system
use std::sync::mpsc;
trait Actor {
type Message;
fn handle(&mut self, msg: Self::Message);
}
struct ActorHandle<Msg> {
sender: mpsc::Sender<Msg>,
}
impl<Msg> ActorHandle<Msg> {
fn send(&self, msg: Msg) -> Result<(), mpsc::SendError<Msg>> {
self.sender.send(msg)
}
}
fn spawn_actor<A, Msg>(mut actor: A) -> ActorHandle<Msg>
where
A: Actor<Message = Msg> + Send + 'static,
Msg: Send + 'static,
{
let (tx, rx) = mpsc::channel();
std::thread::spawn(move || {
for msg in rx {
actor.handle(msg);
}
});
ActorHandle { sender: tx }
}
// Example actor
struct CounterActor {
count: i32,
}
impl Actor for CounterActor {
type Message = CounterMessage;
fn handle(&mut self, msg: Self::Message) {
match msg {
CounterMessage::Increment => {
self.count += 1;
println!("Count: {}", self.count);
}
CounterMessage::Get => {
println!("Current count: {}", self.count);
}
}
}
}
enum CounterMessage {
Increment,
Get,
}
// Scenario 3: Implement a simple state machine
#[derive(Debug, PartialEq)]
enum ConnectionState {
Disconnected,
Connecting,
Connected,
Error,
}
struct Connection {
state: ConnectionState,
retries: u8,
}
impl Connection {
fn new() -> Self {
Connection {
state: ConnectionState::Disconnected,
retries: 0,
}
}
fn connect(&mut self) -> Result<(), &'static str> {
match self.state {
ConnectionState::Disconnected => {
self.state = ConnectionState::Connecting;
self.retries = 0;
Ok(())
}
_ => Err("Invalid state for connect"),
}
}
fn handle_event(&mut self, event: ConnectionEvent) {
match (&self.state, event) {
(ConnectionState::Connecting, ConnectionEvent::Connected) => {
self.state = ConnectionState::Connected;
self.retries = 0;
}
(ConnectionState::Connecting, ConnectionEvent::Failed) => {
if self.retries < 3 {
self.retries += 1;
} else {
self.state = ConnectionState::Error;
}
}
(ConnectionState::Connected, ConnectionEvent::Disconnected) => {
self.state = ConnectionState::Disconnected;
}
_ => {}
}
}
}
enum ConnectionEvent {
Connected,
Disconnected,
Failed,
}
// Scenario 4: Memory-efficient data structure
struct BitSet {
data: Vec<u64>,
size: usize,
}
impl BitSet {
fn new(size: usize) -> Self {
let num_words = (size + 63) / 64;
BitSet {
data: vec![0; num_words],
size,
}
}
fn set(&mut self, index: usize) {
if index < self.size {
let word = index / 64;
let bit = index % 64;
self.data[word] |= 1 << bit;
}
}
fn clear(&mut self, index: usize) {
if index < self.size {
let word = index / 64;
let bit = index % 64;
self.data[word] &= !(1 << bit);
}
}
fn contains(&self, index: usize) -> bool {
if index < self.size {
let word = index / 64;
let bit = index % 64;
(self.data[word] & (1 << bit)) != 0
} else {
false
}
}
fn iter(&self) -> BitSetIter {
BitSetIter {
bitset: self,
current: 0,
}
}
}
struct BitSetIter<'a> {
bitset: &'a BitSet,
current: usize,
}
impl<'a> Iterator for BitSetIter<'a> {
type Item = usize;
fn next(&mut self) -> Option<Self::Item> {
while self.current < self.bitset.size {
if self.bitset.contains(self.current) {
let result = self.current;
self.current += 1;
return Some(result);
}
self.current += 1;
}
None
}
}
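The core trick in the bit set is the word/bit decomposition: `index / 64` selects the `u64` word and `index % 64` selects the bit within it. In isolation:

```rust
fn main() {
    let mut data = vec![0u64; 2];
    let index = 70;
    // 70 / 64 == 1 (second word), 70 % 64 == 6 (seventh bit)
    data[index / 64] |= 1u64 << (index % 64);
    assert_eq!(data[0], 0);
    assert_eq!(data[1], 1 << 6);
    // Clearing uses the complement mask
    data[index / 64] &= !(1u64 << (index % 64));
    assert_eq!(data[1], 0);
}
```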
// Scenario 5: Interview problem - Implement a rate limiter
use std::collections::VecDeque;
use std::time::{Duration, Instant};
struct RateLimiter {
max_requests: usize,
window: Duration,
requests: VecDeque<Instant>,
}
impl RateLimiter {
fn new(max_requests: usize, window: Duration) -> Self {
RateLimiter {
max_requests,
window,
requests: VecDeque::with_capacity(max_requests + 1),
}
}
fn allow(&mut self) -> bool {
let now = Instant::now();
// Remove old requests
while let Some(&time) = self.requests.front() {
if now - time > self.window {
self.requests.pop_front();
} else {
break;
}
}
// Check if under limit
if self.requests.len() < self.max_requests {
self.requests.push_back(now);
true
} else {
false
}
}
fn remaining(&self) -> usize {
// Note: expired entries are only evicted in `allow`, so this may undercount
self.max_requests - self.requests.len()
}
fn reset(&mut self) {
self.requests.clear();
}
}
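A quick behavioral check of the sliding-window idea, using a 100 ms window and a limit of 2 (numbers chosen purely for the demo):

```rust
use std::collections::VecDeque;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let window = Duration::from_millis(100);
    let max_requests = 2;
    let mut requests: VecDeque<Instant> = VecDeque::new();
    let mut allow = |requests: &mut VecDeque<Instant>| {
        let now = Instant::now();
        // Evict timestamps that have slid out of the window
        while requests.front().map_or(false, |&t| now - t > window) {
            requests.pop_front();
        }
        if requests.len() < max_requests {
            requests.push_back(now);
            true
        } else {
            false
        }
    };
    assert!(allow(&mut requests));
    assert!(allow(&mut requests));
    assert!(!allow(&mut requests)); // third request inside the window: rejected
    thread::sleep(Duration::from_millis(150));
    assert!(allow(&mut requests)); // old requests have aged out of the window
}
```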
Q23: Problem-Solving Approach
Answer:
// When faced with a coding problem in an interview:
// 1. Understand the problem
// 2. Ask clarifying questions
// 3. Discuss edge cases
// 4. Outline your approach
// 5. Write code
// 6. Test with examples
// Example: Implement a function that finds the longest palindrome substring
fn longest_palindrome(s: &str) -> &str {
if s.is_empty() {
return "";
}
let bytes = s.as_bytes();
let mut start = 0;
let mut max_len = 1;
// Helper to expand around center
fn expand(bytes: &[u8], mut left: i32, mut right: i32) -> (usize, usize) {
while left >= 0 && right < bytes.len() as i32 && bytes[left as usize] == bytes[right as usize] {
left -= 1;
right += 1;
}
((left + 1) as usize, (right - left - 1) as usize)
}
for i in 0..bytes.len() {
// Odd length palindromes
let (odd_start, odd_len) = expand(bytes, i as i32 - 1, i as i32 + 1);
if odd_len > max_len {
max_len = odd_len;
start = odd_start;
}
// Even length palindromes
if i + 1 < bytes.len() {
let (even_start, even_len) = expand(bytes, i as i32, i as i32 + 1);
if even_len > max_len {
max_len = even_len;
start = even_start;
}
}
}
&s[start..start + max_len]
}
// Test cases
#[test]
fn test_longest_palindrome() {
assert_eq!(longest_palindrome("babad"), "bab");
assert_eq!(longest_palindrome("cbbd"), "bb");
assert_eq!(longest_palindrome("a"), "a");
assert_eq!(longest_palindrome("ac"), "a");
assert_eq!(longest_palindrome(""), "");
}
Q24: System Design with Rust
Answer:
// Design a simple key-value store
use std::collections::HashMap;
use std::fs::{File, OpenOptions};
use std::io::{BufReader, BufWriter, Read, Write};
use std::path::Path;
use serde::{Serialize, Deserialize};
#[derive(Debug, Serialize, Deserialize)]
struct Entry {
key: String,
value: Vec<u8>,
timestamp: u64,
}
struct KeyValueStore {
data: HashMap<String, Entry>,
wal: BufWriter<File>,
path: String,
}
impl KeyValueStore {
fn new(path: &str) -> Result<Self, std::io::Error> {
let mut store = KeyValueStore {
data: HashMap::new(),
wal: BufWriter::new(
OpenOptions::new()
.create(true)
.append(true)
.open(Path::new(path).with_extension("wal"))?
),
path: path.to_string(),
};
// Load from WAL on startup
store.load_from_wal()?;
Ok(store)
}
fn set(&mut self, key: String, value: Vec<u8>) -> Result<(), std::io::Error> {
let entry = Entry {
key: key.clone(),
value, // move instead of cloning; `value` is not used again
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs(),
};
// Write to WAL first
let serialized = bincode::serialize(&entry).unwrap();
let len = serialized.len() as u32;
self.wal.write_all(&len.to_le_bytes())?;
self.wal.write_all(&serialized)?;
self.wal.flush()?;
// Update in-memory store
self.data.insert(key, entry);
Ok(())
}
fn get(&self, key: &str) -> Option<&Vec<u8>> {
self.data.get(key).map(|e| &e.value)
}
fn delete(&mut self, key: &str) -> Result<bool, std::io::Error> {
if self.data.remove(key).is_some() {
// Write tombstone to WAL
let entry = Entry {
key: key.to_string(),
value: Vec::new(),
timestamp: 0, // 0 timestamp indicates tombstone
};
let serialized = bincode::serialize(&entry).unwrap();
let len = serialized.len() as u32;
self.wal.write_all(&len.to_le_bytes())?;
self.wal.write_all(&serialized)?;
self.wal.flush()?;
Ok(true)
} else {
Ok(false)
}
}
fn load_from_wal(&mut self) -> Result<(), std::io::Error> {
let wal_path = Path::new(&self.path).with_extension("wal");
if !wal_path.exists() {
return Ok(());
}
let mut reader = BufReader::new(File::open(wal_path)?);
let mut buffer = Vec::new();
let mut len_arr = [0u8; 4];
// Read length-prefixed records until EOF. `fill_buf` can return fewer
// than 4 bytes at a buffer boundary, so read the prefix with `read_exact`.
loop {
match reader.read_exact(&mut len_arr) {
Ok(()) => {}
Err(e) if e.kind() == std::io::ErrorKind::UnexpectedEof => break,
Err(e) => return Err(e),
}
let len = u32::from_le_bytes(len_arr) as usize;
buffer.resize(len, 0);
reader.read_exact(&mut buffer)?;
if let Ok(entry) = bincode::deserialize::<Entry>(&buffer) {
if entry.timestamp == 0 {
// Tombstone
self.data.remove(&entry.key);
} else {
self.data.insert(entry.key.clone(), entry);
}
}
}
Ok(())
}
fn snapshot(&self) -> Result<(), std::io::Error> {
let snapshot_path = Path::new(&self.path).with_extension("snap");
let file = File::create(snapshot_path)?;
let mut writer = BufWriter::new(file);
for entry in self.data.values() {
let serialized = bincode::serialize(entry).unwrap();
let len = serialized.len() as u32;
writer.write_all(&len.to_le_bytes())?;
writer.write_all(&serialized)?;
}
writer.flush()?;
Ok(())
}
}
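The WAL's record format is a 4-byte little-endian length prefix followed by the serialized entry. The framing can be exercised without bincode:

```rust
fn main() {
    let payload = b"hello";
    // Encode: length prefix, then the payload bytes
    let mut frame = Vec::new();
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    frame.extend_from_slice(payload);
    // Decode: read the prefix, then slice out exactly that many bytes
    let len = u32::from_le_bytes([frame[0], frame[1], frame[2], frame[3]]) as usize;
    assert_eq!(len, 5);
    assert_eq!(&frame[4..4 + len], b"hello");
}
```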
Conclusion
This comprehensive guide covers the most commonly asked advanced Rust interview questions. Key areas to focus on:
- Concurrency: Understand Send/Sync, atomics, mutexes, channels
- Async: Know the async model, futures, tokio vs async-std
- Unsafe: Be able to explain when and how to use unsafe code
- Performance: Understand zero-cost abstractions, profiling, optimization
- System Programming: FFI, memory management, embedded systems
- WebAssembly: Know the basics of wasm-bindgen and web-sys
- Design Patterns: Be able to implement common patterns in Rust
Final Tips
- Practice coding on platforms like LeetCode using Rust
- Read open-source Rust code to learn idiomatic patterns
- Understand the borrow checker deeply - it's what makes Rust unique
- Be prepared to discuss tradeoffs - no solution is perfect
- Show your thought process during problem-solving
- Ask clarifying questions before diving into code
Good luck with your Rust interview!