Rust Interview Preparation: Commonly Asked Questions – Set 3

Table of Contents

  1. Ownership and Borrowing Deep Dive
  2. Lifetimes Advanced Concepts
  3. Concurrency and Parallelism
  4. Unsafe Rust
  5. Macros and Metaprogramming
  6. Performance Optimization
  7. Async Programming
  8. Common Coding Problems
  9. System Programming Concepts
  10. Best Practices and Patterns

1. Ownership and Borrowing Deep Dive

Q1: Explain the difference between Box<T>, Rc<T>, and Arc<T> with use cases.

Answer:

// Box<T> - Single ownership, heap allocation
struct TreeNode {
    value: i32,
    left: Option<Box<TreeNode>>,
    right: Option<Box<TreeNode>>,
}

// Rc<T> - Multiple ownership (single-threaded)
use std::rc::Rc;

struct GraphNode {
    value: i32,
    edges: Vec<Rc<GraphNode>>, // Multiple nodes can share ownership
}

// Arc<T> - Multiple ownership (thread-safe)
use std::sync::Arc;
use std::thread;

let data = Arc::new(vec![1, 2, 3]);
let mut handles = vec![];
for _ in 0..3 {
    let data = Arc::clone(&data);
    handles.push(thread::spawn(move || {
        println!("Data: {:?}", data);
    }));
}
for handle in handles {
    handle.join().unwrap(); // Wait for all threads to finish
}

Key Points:

  • Box<T>: Single ownership, compile-time known size, for recursive types
  • Rc<T>: Reference counting, non-thread-safe, for shared ownership in single thread
  • Arc<T>: Atomic reference counting, thread-safe, for shared ownership across threads
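To make the reference-counting behavior concrete, here is a small sketch using Rc::strong_count, plus Weak, which is the usual tool for breaking Rc reference cycles (the values are illustrative):

```rust
use std::rc::{Rc, Weak};

fn main() {
    let a = Rc::new(5);
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a); // Bumps the strong count, no deep copy
    assert_eq!(Rc::strong_count(&a), 2);

    // Weak references do not keep the value alive - useful for
    // back/parent pointers that would otherwise form a cycle.
    let w: Weak<i32> = Rc::downgrade(&a);
    assert_eq!(Rc::strong_count(&a), 2); // Weak doesn't affect the strong count
    assert!(w.upgrade().is_some());

    drop(b);
    drop(a);
    assert!(w.upgrade().is_none()); // Value has been freed
}
```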

Q2: What's the difference between Cell<T>, RefCell<T>, and Mutex<T>?

Answer:

use std::cell::{Cell, RefCell};
use std::sync::Mutex;

// Cell<T> - For Copy types, no runtime borrow checking
let cell = Cell::new(42);
cell.set(100);
let value = cell.get(); // Always returns a copy

// RefCell<T> - For non-Copy types, runtime borrow checking
let ref_cell = RefCell::new(String::from("hello"));

// Multiple immutable borrows allowed
let borrow1 = ref_cell.borrow();
let borrow2 = ref_cell.borrow();
// let mut borrow_mut = ref_cell.borrow_mut(); // Would panic at runtime
drop(borrow1);
drop(borrow2);

// Now a mutable borrow works
let mut borrow_mut = ref_cell.borrow_mut();
borrow_mut.push_str(" world");

// Mutex<T> - Thread-safe interior mutability
let mutex = Mutex::new(42);
{
    let mut guard = mutex.lock().unwrap();
    *guard += 10;
} // Mutex automatically unlocked when guard drops

Interview Tip: Emphasize that Cell has no runtime overhead but only works with Copy types, RefCell has runtime borrow checking, and Mutex provides thread safety.
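A detail worth mentioning alongside this: borrow_mut panics on a conflict, but RefCell also offers non-panicking try_borrow/try_borrow_mut variants that report the conflict as an error instead. A minimal sketch:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);

    let shared = cell.borrow(); // Outstanding shared borrow
    // try_borrow_mut returns Err instead of panicking
    assert!(cell.try_borrow_mut().is_err());

    drop(shared); // Release the shared borrow
    assert!(cell.try_borrow_mut().is_ok());
}
```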

Q3: How does Rust prevent data races at compile time?

Answer:

// This code won't compile - data race prevented
let mut data = vec![1, 2, 3];
let ref1 = &data;
let ref2 = &mut data; // ERROR: cannot borrow `data` as mutable because it is also borrowed as immutable

// This is safe - the compiler prevents race conditions
use std::sync::{Arc, Mutex};
use std::thread;

let data = Arc::new(Mutex::new(vec![1, 2, 3]));
let mut handles = vec![];
for _ in 0..3 {
    let data = Arc::clone(&data);
    handles.push(thread::spawn(move || {
        let mut data = data.lock().unwrap();
        data.push(1); // Safe: Mutex ensures mutual exclusion
    }));
}
for handle in handles {
    handle.join().unwrap();
}

Rust's Data Race Prevention:

  1. Ownership rules: Only one owner at a time
  2. Borrowing rules: Either one mutable reference or multiple immutable references
  3. Send/Sync traits: Types know if they're safe to send between threads
  4. Mutex/Atomics: Provide safe shared mutation
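Point 4 mentions atomics, which are not shown elsewhere in this set. A small sketch of lock-free shared mutation with AtomicUsize, a lighter alternative to Mutex for simple counters (the thread and iteration counts are arbitrary):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..4 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // Atomic read-modify-write: no lock, no data race
                c.fetch_add(1, Ordering::Relaxed);
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(counter.load(Ordering::Relaxed), 4000);
}
```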

2. Lifetimes Advanced Concepts

Q4: Explain lifetime elision rules with examples.

Answer:

// Lifetime elision rules (three rules applied automatically)

// Rule 1: Each input reference gets its own lifetime parameter.
// Rule 2: If there is exactly one input lifetime, the output gets it.
fn first_word(s: &str) -> &str {  // Desugared: fn first_word<'a>(s: &'a str) -> &'a str
    &s[0..1]
}

// With two inputs and no `self`, no elision rule applies
fn longest(x: &str, y: &str) -> &str {  // ERROR: missing lifetime specifier
    if x.len() > y.len() { x } else { y }
}
// Fix: fn longest<'a>(x: &'a str, y: &'a str) -> &'a str

// Rule 3: If one input is &self or &mut self, the output gets self's lifetime.
struct ImportantExcerpt<'a> {
    part: &'a str,
}

impl<'a> ImportantExcerpt<'a> {
    fn level(&self) -> i32 {  // Desugared: fn level<'b>(&'b self) -> i32
        3
    }

    fn announce_and_return_part(&self, announcement: &str) -> &str {
        // Desugared: fn announce_and_return_part<'b, 'c>(&'b self, announcement: &'c str) -> &'b str
        println!("{}", announcement);
        self.part
    }
}

Q5: What's the difference between 'static lifetime and &'static str?

Answer:

// 'static lifetime - lives for the entire program
const MESSAGE: &'static str = "Hello"; // Stored in read-only memory

// These are different:
fn example() {
    // This string literal has 'static lifetime
    let s1: &'static str = "I live forever";

    // This creates a String, then takes a reference
    let s2 = String::from("I'm on the heap");
    let s2_ref: &str = &s2; // Not 'static

    // This can't be 'static because `s2` will be deallocated
    // let s3: &'static str = &s2; // ERROR
}

// 'static bound on a generic type
fn print_if_static<T: 'static>(t: T) {
    // T must not contain any non-static references
}

// Common misconception: 'static doesn't mean "lives forever in memory";
// it means "can be safely treated as living for the entire program"
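To show that T: 'static bound in action, here is a small sketch (the variable names are illustrative): owned values satisfy it, borrows of locals do not.

```rust
use std::fmt::Debug;

fn print_if_static<T: Debug + 'static>(t: T) {
    println!("{:?}", t);
}

fn main() {
    let owned = String::from("owned");
    print_if_static(owned); // OK: an owned String contains no borrows

    let local = String::from("local");
    let r: &String = &local;
    // print_if_static(r); // ERROR: `r` borrows `local`, which is not 'static
    let _ = r;

    let s: &'static str = "literal";
    print_if_static(s); // OK: &'static str satisfies the bound
}
```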

Q6: How do lifetimes work with structs and impl blocks?

Answer:

struct Parser<'a> {
    input: &'a str,
    position: usize,
}

impl<'a> Parser<'a> {
    fn new(input: &'a str) -> Self {
        Parser { input, position: 0 }
    }

    fn next_word(&mut self) -> Option<&'a str> {
        // Returns a slice with the same lifetime as the input
        let bytes = self.input.as_bytes();
        // Skip leading whitespace so repeated calls advance through the input
        while self.position < bytes.len() && bytes[self.position].is_ascii_whitespace() {
            self.position += 1;
        }
        let start = self.position;
        while self.position < bytes.len() && !bytes[self.position].is_ascii_whitespace() {
            self.position += 1;
        }
        if start < self.position {
            Some(&self.input[start..self.position])
        } else {
            None
        }
    }
}

// Multiple lifetimes in a struct
struct Pair<'a, 'b> {
    first: &'a str,
    second: &'b str,
}

impl<'a, 'b> Pair<'a, 'b> {
    fn longest(&self) -> &str
    where
        'a: 'b, // 'a outlives 'b
    {
        if self.first.len() > self.second.len() {
            self.first
        } else {
            self.second
        }
    }
}

3. Concurrency and Parallelism

Q7: Explain Send and Sync traits with examples.

Answer:

use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

// Send: types whose ownership can be transferred to another thread
// Sync: types that can be shared (via &T) across threads safely

// Rc<T> is neither Send nor Sync
let rc = Rc::new(5);
// thread::spawn(move || { println!("{}", rc); }); // ERROR: Rc cannot be sent

// Arc<T> is both Send and Sync
let arc = Arc::new(5);
let arc_clone = Arc::clone(&arc);
thread::spawn(move || {
    println!("From thread: {}", arc_clone);
}).join().unwrap();

// Mutex<T> is Send + Sync (when T is Send)
let mutex = Arc::new(Mutex::new(0));
let mut handles = vec![];
for _ in 0..10 {
    let m = Arc::clone(&mutex);
    handles.push(thread::spawn(move || {
        let mut data = m.lock().unwrap();
        *data += 1;
    }));
}
for handle in handles {
    handle.join().unwrap();
}

// Custom types are automatically Send/Sync if all their fields are
struct ThreadSafeType {
    data: Arc<Mutex<Vec<i32>>>,
}

struct NotThreadSafe {
    data: Rc<Vec<i32>>, // Rc makes the whole struct neither Send nor Sync
}

Q8: What's the difference between thread::spawn and scope threads?

Answer:

use std::thread;

// thread::spawn - closure must own its data ('static bound)
fn spawn_example() {
    let data = vec![1, 2, 3];
    let handle = thread::spawn(move || { // Must take ownership
        println!("Data: {:?}", data);
    });
    handle.join().unwrap();
    // println!("{:?}", data); // ERROR: data moved
}

// Scoped threads (stable since Rust 1.63) - can borrow data
fn scope_example() {
    let data = vec![1, 2, 3];
    thread::scope(|s| {
        s.spawn(|| {
            // Can borrow data, no 'static required
            println!("From scope: {:?}", data);
        });
        s.spawn(|| {
            // Multiple threads can borrow the same data
            println!("Also borrowing: {:?}", data);
        });
    }); // All threads automatically joined here
    println!("Still have data: {:?}", data); // OK
}

// Real-world example: parallel processing
fn parallel_sum(data: &[i32]) -> i32 {
    let chunk_size = (data.len() / 4).max(1); // chunks() panics on a chunk size of 0
    thread::scope(|s| {
        let mut handles = vec![];
        for chunk in data.chunks(chunk_size) {
            // `move` copies the &[i32] slice reference into the closure
            handles.push(s.spawn(move || chunk.iter().sum::<i32>()));
        }
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

Q9: How do you implement a thread-safe singleton in Rust?

Answer:

use std::sync::{LazyLock, Mutex, Once, OnceLock};

// Method 1: Once (manual initialization; needs unsafe with a static mut -
// discouraged in modern Rust, prefer OnceLock in new code)
static INIT: Once = Once::new();
static mut CONFIG: Option<String> = None;

fn get_config() -> &'static String {
    unsafe {
        INIT.call_once(|| {
            CONFIG = Some("production".to_string());
        });
        CONFIG.as_ref().unwrap()
    }
}

// Method 2: OnceLock (stable since Rust 1.70)
static DATABASE: OnceLock<Mutex<Vec<String>>> = OnceLock::new();

fn get_database() -> &'static Mutex<Vec<String>> {
    DATABASE.get_or_init(|| Mutex::new(Vec::new()))
}

// Method 3: LazyLock (stable since Rust 1.80; older code used
// the lazy_static or once_cell crates)
static CACHE: LazyLock<Mutex<Vec<i32>>> = LazyLock::new(|| Mutex::new(vec![1, 2, 3]));

// Method 4: Arc-based shared state (not a true singleton, but often
// what the problem actually calls for)
use std::sync::Arc;

struct AppState {
    settings: Arc<Mutex<Vec<String>>>,
}

impl AppState {
    fn new() -> Self {
        Self {
            settings: Arc::new(Mutex::new(Vec::new())),
        }
    }

    fn clone(&self) -> Self {
        Self {
            settings: Arc::clone(&self.settings),
        }
    }
}

4. Unsafe Rust

Q10: When would you need to use unsafe Rust?

Answer:

// 1. Dereferencing raw pointers
fn unsafe_deref() {
    let x = 5;
    let raw = &x as *const i32;
    unsafe {
        println!("Raw pointer value: {}", *raw);
    }
}

// 2. Calling unsafe functions (FFI)
extern "C" {
    fn abs(input: i32) -> i32;
}

fn call_c_function() {
    unsafe {
        println!("Absolute value: {}", abs(-3));
    }
}

// 3. Accessing/modifying mutable static variables
static mut COUNTER: u32 = 0;

fn increment_counter() {
    unsafe {
        COUNTER += 1;
    }
}

// 4. Implementing unsafe traits
unsafe trait MyUnsafeTrait {
    // Must be implemented carefully
}

unsafe impl MyUnsafeTrait for i32 {}

// 5. Union field access
union MyUnion {
    i: i32,
    f: f32,
}

fn union_example() {
    let u = MyUnion { i: 42 };
    unsafe {
        println!("Int: {}", u.i);
    }
}

Q11: Explain the invariants that must be maintained when implementing unsafe code.

Answer:

use std::ptr;

// Invariant 1: Raw pointers must be valid before dereferencing
fn invalid_raw_pointer() {
    let ptr: *const i32 = ptr::null();
    unsafe {
        // println!("{}", *ptr); // UNDEFINED BEHAVIOR!
    }
}

// Invariant 2: No aliasing violations
fn aliasing_violation() {
    let mut data = 10;
    let r1 = &mut data as *mut i32;
    let r2 = &mut data as *mut i32; // Two mutable pointers to the same data
    unsafe {
        *r1 = 20;
        // Writing through both pointers while each is live risks UB
        // *r2 = 30;
    }
}

// Invariant 3: Proper alignment
#[repr(packed)]
struct Packed {
    x: u8,
    y: u32, // Not properly aligned; taking a reference to it is UB
}

// Safe abstraction example
struct MyVec<T> {
    ptr: *mut T,
    len: usize,
    capacity: usize,
}

impl<T> MyVec<T> {
    fn new() -> Self {
        Self {
            ptr: ptr::null_mut(),
            len: 0,
            capacity: 0,
        }
    }

    fn push(&mut self, value: T) {
        if self.len == self.capacity {
            self.grow(); // Allocation logic elided for brevity
        }
        unsafe {
            // We guarantee ptr is valid and not aliased
            ptr::write(self.ptr.add(self.len), value);
            self.len += 1;
        }
    }

    unsafe fn get_unchecked(&self, index: usize) -> &T {
        // Caller must ensure index is in bounds
        &*self.ptr.add(index)
    }
}

Q12: What's the purpose of MaybeUninit<T> and when would you use it?

Answer:

use std::mem::MaybeUninit;

// MaybeUninit represents possibly-uninitialized memory
fn manual_array_initialization() {
    // An uninitialized array of MaybeUninit is itself a valid value
    let mut arr: [MaybeUninit<i32>; 10] = unsafe { MaybeUninit::uninit().assume_init() };

    // Initialize every element
    for i in 0..10 {
        arr[i] = MaybeUninit::new(i as i32);
    }

    // Only now is it sound to treat the array as initialized
    let arr = unsafe { std::mem::transmute::<_, [i32; 10]>(arr) };
    println!("{:?}", arr);
}

// Partial initialization
struct Buffer {
    data: [MaybeUninit<u8>; 1024],
    len: usize,
}

impl Buffer {
    fn new() -> Self {
        Self {
            data: unsafe { MaybeUninit::uninit().assume_init() },
            len: 0,
        }
    }

    fn push(&mut self, byte: u8) {
        if self.len < 1024 {
            self.data[self.len] = MaybeUninit::new(byte);
            self.len += 1;
        }
    }

    fn as_slice(&self) -> &[u8] {
        unsafe {
            // Sound because data[..len] has been initialized by push()
            std::slice::from_raw_parts(self.data.as_ptr() as *const u8, self.len)
        }
    }
}

5. Macros and Metaprogramming

Q13: Explain the difference between declarative and procedural macros.

Answer:

// Declarative macros (macro_rules!) - pattern matching on tokens
macro_rules! vec_of_strings {
    ($($x:expr),*) => {
        vec![$(String::from($x)),*]
    };
}

// Usage
let v = vec_of_strings!["a", "b", "c"];

// Procedural macros - more powerful, operate on the token stream
// (must live in a separate crate with proc-macro = true in Cargo.toml)
use proc_macro::TokenStream;

// Derive macro
#[proc_macro_derive(MyTrait)]
pub fn my_trait_derive(input: TokenStream) -> TokenStream {
    // Parse the input and generate code
    input
}

// Attribute macro
#[proc_macro_attribute]
pub fn log_function(args: TokenStream, input: TokenStream) -> TokenStream {
    // Generate a wrapper with logging
    input
}

// Function-like macro
#[proc_macro]
pub fn sql(input: TokenStream) -> TokenStream {
    // Parse SQL-like syntax at compile time
    input
}

Q14: How do you write a macro that handles different types?

Answer:

// Macro that generates functions for different numeric types
macro_rules! create_add_function {
    ($name:ident, $type:ty) => {
        fn $name(a: $type, b: $type) -> $type {
            a + b
        }
    };
}

create_add_function!(add_i32, i32);
create_add_function!(add_f64, f64);

// Macro with repetition and recursion
macro_rules! sum {
    // Base case: single expression
    ($x:expr) => { $x };
    // Recursive case: add the first to the sum of the rest
    ($x:expr, $($y:expr),+) => {
        $x + sum!($($y),+)
    };
}

assert_eq!(sum!(1, 2, 3, 4), 10);

// Matching on token type: a literal arm vs. a general expression arm
macro_rules! as_string {
    ($x:literal) => { $x.to_string() };
    ($x:expr) => { format!("{:?}", $x) };
}

// Macro with multiple patterns - arms are tried top to bottom,
// so more specific patterns must come before more general ones
// (a catch-all repetition arm would shadow anything after it)
macro_rules! debug_print {
    // Print with a label
    ($label:expr, $value:expr) => {
        println!("{}: {:?}", $label, $value);
    };
    // Print any number of values
    ($($value:expr),+) => {
        $(println!("{:?}", $value);)+
    };
}

Q15: What's the difference between stringify! and format!?

Answer:

// stringify! - compile time, turns tokens into a string literal
macro_rules! make_function {
    ($name:ident) => {
        fn $name() {
            println!("Function {} called", stringify!($name));
        }
    };
}

make_function!(hello);
// Generates:
// fn hello() {
//     println!("Function {} called", "hello");
// }

// format! - runtime, creates a formatted String
fn runtime_format() {
    let name = "world";
    let greeting = format!("Hello, {}!", name); // Built at runtime
    println!("{}", greeting);
}

// Comparison
macro_rules! compare {
    ($x:expr) => {
        println!("stringify: {}", stringify!($x));
        println!("format: {}", format!("{:?}", $x));
    };
}

compare!(1 + 2);
// Output:
// stringify: 1 + 2
// format: 3

6. Performance Optimization

Q16: Explain zero-cost abstractions in Rust with examples.

Answer:

use std::fmt::Display;

// Zero-cost abstraction: iterators
fn sum_with_loop(v: &[i32]) -> i32 {
    let mut sum = 0;
    for i in 0..v.len() {
        sum += v[i];
    }
    sum
}

fn sum_with_iterator(v: &[i32]) -> i32 {
    v.iter().sum()
}
// Both typically compile to identical assembly

// Zero-cost abstraction: generics
fn identity<T>(x: T) -> T { x }
// Monomorphized into a separate function for each concrete type

// Zero-cost abstraction: enums
enum MyOption<T> {
    Some(T),
    None,
}
// Compiles to a tagged union - no overhead beyond what C would need

// Trait objects (dynamic dispatch) do have a runtime cost
fn dynamic_dispatch(x: &dyn Display) {
    println!("{}", x); // Virtual call through a vtable
}

// Static dispatch (generics) has zero runtime cost
fn static_dispatch<T: Display>(x: T) {
    println!("{}", x); // Direct call, monomorphized
}

Q17: How do you optimize Rust code for performance?

Answer:

use std::time::Instant;

// 1. Pre-allocate collections
fn inefficient_allocation() {
    let start = Instant::now();
    let mut v = Vec::new();
    for i in 0..100_000 {
        v.push(i); // May reallocate multiple times
    }
    println!("Inefficient: {:?}", start.elapsed());
}

fn efficient_allocation() {
    let start = Instant::now();
    let mut v = Vec::with_capacity(100_000);
    for i in 0..100_000 {
        v.push(i); // No reallocation
    }
    println!("Efficient: {:?}", start.elapsed());
}

// 2. Use iterators instead of manual indexing
fn manual_loop(v: &[i32]) -> i32 {
    let mut sum = 0;
    for i in 0..v.len() {
        sum += v[i]; // Bounds check on each iteration
    }
    sum
}

fn iterator_version(v: &[i32]) -> i32 {
    v.iter().sum() // No bounds checks, can be auto-vectorized
}

// 3. Cache-friendly data structures
struct Data; // Placeholder element type

// Bad: Vec of pointers - poor cache locality
struct Bad {
    data: Vec<Box<Data>>,
}

// Good: contiguous data - good cache locality
struct Good {
    data: Vec<Data>,
}

// 4. Use const where possible - computed at compile time
const FACTORS: [i32; 10] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// 5. Avoid unnecessary cloning
fn process_string_inefficient(s: String) {
    let cloned = s.clone(); // Unnecessary
    println!("{}", cloned);
}

fn process_string_efficient(s: &str) {
    println!("{}", s);
}

// 6. Small types and bit packing - but beware unaligned access
#[repr(packed)]
struct PackedData {
    flags: u8,  // 1 byte
    value: u32, // 4 bytes, unaligned - reads cost extra
}

Q18: What's the difference between #[inline], #[inline(always)], and #[inline(never)]?

Answer:

// #[inline] - hint to the compiler; it may be ignored
#[inline]
fn small_function(x: i32) -> i32 {
    x + 1
}

// #[inline(always)] - strongly request inlining
#[inline(always)]
fn very_small_function(x: i32) -> i32 {
    x * 2
}

// #[inline(never)] - prevent inlining
#[inline(never)]
fn large_function(x: i32) -> i32 {
    // Complex logic that shouldn't be inlined
    let mut result = 0;
    for i in 0..x {
        result += i;
    }
    result
}

// When to use each:
// - #[inline]: small functions that are called frequently
// - #[inline(always)]: extremely small functions (a few instructions)
// - #[inline(never)]: large functions, or to protect the instruction cache

// Example: generic functions often benefit from inlining
#[inline]
fn generic_add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
    a + b // Generic code benefits from inlining after monomorphization
}

7. Async Programming

Q19: Explain the difference between async/await and threads.

Answer:

use std::thread;
use tokio::time;

// Threads: OS-managed, preemptive, heavyweight
fn thread_example() {
    let handle = thread::spawn(|| {
        for i in 0..5 {
            println!("Thread: {}", i);
            thread::sleep(std::time::Duration::from_millis(100));
        }
    });
    for i in 0..3 {
        println!("Main: {}", i);
        thread::sleep(std::time::Duration::from_millis(150));
    }
    handle.join().unwrap();
}

// Async: language feature, cooperative, lightweight
async fn async_example() {
    let task1 = tokio::spawn(async {
        for i in 0..5 {
            println!("Task1: {}", i);
            time::sleep(time::Duration::from_millis(100)).await;
        }
    });
    let task2 = tokio::spawn(async {
        for i in 0..3 {
            println!("Task2: {}", i);
            time::sleep(time::Duration::from_millis(150)).await;
        }
    });
    task1.await.unwrap();
    task2.await.unwrap();
}

// Key differences:
// 1. Memory: threads ~MB of stack each, tasks ~KB
// 2. Context switching: threads via the OS, tasks at .await points
// 3. Scalability: threads limited, tasks can scale to millions
// 4. Blocking: threads can block, tasks must not block the executor

// When to use each:
// - Threads: CPU-bound work, need true parallelism
// - Async: I/O-bound work, many concurrent operations

Q20: How do you handle errors in async code?

Answer:

use tokio::fs::File;
use tokio::io::{self, AsyncReadExt};

// Basic error handling with ? in async functions
async fn read_file() -> io::Result<String> {
    let mut file = File::open("hello.txt").await?;
    let mut contents = String::new();
    file.read_to_string(&mut contents).await?;
    Ok(contents)
}

// Custom error type
#[derive(Debug)]
enum AppError {
    Io(io::Error),
    Parse(String),
    Timeout,
}

impl From<io::Error> for AppError {
    fn from(error: io::Error) -> Self {
        AppError::Io(error)
    }
}

async fn process_file() -> Result<String, AppError> {
    let contents = read_file().await.map_err(AppError::Io)?;
    if contents.is_empty() {
        return Err(AppError::Parse("Empty file".to_string()));
    }
    Ok(contents)
}

// Timeout handling
use tokio::time::{timeout, Duration};

async fn with_timeout() -> Result<String, AppError> {
    let result = timeout(Duration::from_secs(5), read_file()).await;
    match result {
        Ok(Ok(contents)) => Ok(contents),
        Ok(Err(e)) => Err(AppError::Io(e)),
        Err(_) => Err(AppError::Timeout),
    }
}

// Concurrent error handling (try_join_all is from the futures crate)
use futures::future::try_join_all;

async fn concurrent_tasks() -> Result<Vec<String>, AppError> {
    let tasks = vec![read_file(), read_file(), read_file()];
    // Either all succeed, or the first error is returned
    try_join_all(tasks).await.map_err(AppError::Io)
}

// Error handling with select!
use tokio::select;

async fn select_example() -> Result<(), AppError> {
    let task1 = with_timeout();
    let task2 = with_timeout();
    select! {
        res1 = task1 => {
            println!("Task1 finished first: {:?}", res1);
        }
        res2 = task2 => {
            println!("Task2 finished first: {:?}", res2);
        }
    }
    Ok(())
}

Q21: What's the difference between async fn and returning impl Future?

Answer:

use std::future::Future;
use std::pin::Pin;

// async fn - sugar for returning impl Future
async fn async_function() -> i32 {
    42
}

// Desugars to roughly:
fn async_function_desugared() -> impl Future<Output = i32> {
    async { 42 }
}

// Returning impl Future directly - more control
fn manual_future() -> impl Future<Output = i32> {
    std::future::ready(42)
}

// Complex case: conditional futures
// Each async block has its own anonymous type, so a plain if/else can't
// return both as one impl Future. One fix is Either (here via the
// futures crate's FutureExt helpers); the other is boxing.
use futures::future::FutureExt;

fn conditional_future(flag: bool) -> impl Future<Output = i32> {
    if flag {
        // return async { 42 }; // ERROR: different types in each branch
        async { 42 }.left_future()
    } else {
        async { 100 }.right_future()
    }
}

// Using Box for dynamic dispatch
fn dynamic_future(flag: bool) -> Pin<Box<dyn Future<Output = i32> + Send>> {
    if flag {
        Box::pin(async { 42 })
    } else {
        Box::pin(async { 100 })
    }
}

// Key differences:
// - async fn: simpler, always returns an opaque type
// - impl Future: can return different futures per branch (with Either/Box)
// - Box<dyn Future>: dynamic dispatch, useful for recursion and collections

8. Common Coding Problems

Q22: Implement a thread-safe producer-consumer queue.

Answer:

use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Queue<T> {
    data: Mutex<VecDeque<T>>,
    not_empty: Condvar,
}

impl<T> Queue<T> {
    fn new() -> Self {
        Queue {
            data: Mutex::new(VecDeque::new()),
            not_empty: Condvar::new(),
        }
    }

    fn push(&self, item: T) {
        let mut data = self.data.lock().unwrap();
        data.push_back(item);
        self.not_empty.notify_one();
    }

    fn pop(&self) -> T {
        let mut data = self.data.lock().unwrap();
        loop {
            if let Some(item) = data.pop_front() {
                return item;
            }
            // wait() releases the lock and re-acquires it when notified
            data = self.not_empty.wait(data).unwrap();
        }
    }

    fn try_pop(&self) -> Option<T> {
        let mut data = self.data.lock().unwrap();
        data.pop_front()
    }
}

fn main() {
    let queue = Arc::new(Queue::new());
    let mut handles = vec![];

    // Producers
    for i in 0..3 {
        let q = Arc::clone(&queue);
        handles.push(thread::spawn(move || {
            for j in 0..5 {
                q.push(format!("Producer {}-{}", i, j));
                thread::sleep(std::time::Duration::from_millis(10));
            }
        }));
    }

    // Consumers (3 producers x 5 items = 15 items; 2 consumers take 7 each)
    for _ in 0..2 {
        let q = Arc::clone(&queue);
        handles.push(thread::spawn(move || {
            for _ in 0..7 {
                let item = q.pop();
                println!("Consumed: {}", item);
            }
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}

Q23: Implement a thread pool from scratch.

Answer:

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct ThreadPool {
    workers: Vec<Worker>,
    sender: Option<mpsc::Sender<Job>>, // Option so Drop can take and drop it
}

struct Worker {
    id: usize,
    thread: Option<thread::JoinHandle<()>>,
}

impl Worker {
    fn new(id: usize, receiver: Arc<Mutex<mpsc::Receiver<Job>>>) -> Self {
        let thread = thread::spawn(move || loop {
            // Hold the lock only long enough to receive one job
            let job = receiver.lock().unwrap().recv();
            match job {
                Ok(job) => {
                    println!("Worker {} got a job", id);
                    job();
                }
                Err(_) => {
                    // Channel closed: the pool is shutting down
                    println!("Worker {} shutting down", id);
                    break;
                }
            }
        });
        Worker {
            id,
            thread: Some(thread),
        }
    }
}

impl ThreadPool {
    fn new(size: usize) -> Self {
        assert!(size > 0);
        let (sender, receiver) = mpsc::channel();
        let receiver = Arc::new(Mutex::new(receiver));
        let mut workers = Vec::with_capacity(size);
        for id in 0..size {
            workers.push(Worker::new(id, Arc::clone(&receiver)));
        }
        ThreadPool {
            workers,
            sender: Some(sender),
        }
    }

    fn execute<F>(&self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        let job = Box::new(f);
        self.sender.as_ref().unwrap().send(job).unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        drop(self.sender.take()); // Close the channel so recv() returns Err
        for worker in &mut self.workers {
            if let Some(thread) = worker.thread.take() {
                thread.join().unwrap();
            }
        }
    }
}

// Usage
fn main() {
    let pool = ThreadPool::new(4);
    for i in 0..10 {
        pool.execute(move || {
            println!("Task {} executed by thread", i);
            thread::sleep(std::time::Duration::from_millis(100));
        });
    }
    // The pool is dropped here, which joins all workers
}

Q24: Implement a custom iterator.

Answer:

struct Fibonacci {
    current: u64,
    next: u64,
    max: Option<u64>,
}

impl Fibonacci {
    fn new() -> Self {
        Fibonacci {
            current: 0,
            next: 1,
            max: None,
        }
    }

    fn with_max(max: u64) -> Self {
        Fibonacci {
            current: 0,
            next: 1,
            max: Some(max),
        }
    }
}

impl Iterator for Fibonacci {
    type Item = u64;

    fn next(&mut self) -> Option<Self::Item> {
        let current = self.current;
        if let Some(max) = self.max {
            if current > max {
                return None;
            }
        }
        self.current = self.next;
        self.next = current + self.next;
        Some(current)
    }
}

// Custom collection
struct Buffer<T> {
    data: Vec<T>,
    capacity: usize,
}

impl<T> Buffer<T> {
    fn new(capacity: usize) -> Self {
        Buffer {
            data: Vec::with_capacity(capacity),
            capacity,
        }
    }

    fn push(&mut self, item: T) -> Option<T> {
        if self.data.len() < self.capacity {
            self.data.push(item);
            None
        } else {
            Some(item) // Buffer full: hand the item back
        }
    }
}

// Implementing IntoIterator
impl<T> IntoIterator for Buffer<T> {
    type Item = T;
    type IntoIter = std::vec::IntoIter<T>;

    fn into_iter(self) -> Self::IntoIter {
        self.data.into_iter()
    }
}

// Usage
fn main() {
    let fib = Fibonacci::with_max(100);
    let fib_numbers: Vec<_> = fib.collect();
    println!("Fibonacci: {:?}", fib_numbers);

    // Using iterator adapters
    let fib_even: Vec<_> = Fibonacci::new()
        .filter(|&x| x % 2 == 0)
        .take(10)
        .collect();
    println!("Even Fibonacci: {:?}", fib_even);
}

9. System Programming Concepts

Q25: How do you interface with C code in Rust?

Answer:

// 1. Using extern blocks to declare C functions
extern "C" {
    fn strlen(s: *const u8) -> usize;
    fn puts(s: *const u8) -> i32;
}

fn call_c_functions() {
    let s = b"Hello, world!\0"; // C strings are NUL-terminated
    unsafe {
        let len = strlen(s.as_ptr());
        println!("Length: {}", len);
        puts(s.as_ptr());
    }
}

// 2. Exposing Rust functions to C
#[no_mangle]
pub extern "C" fn rust_function(x: i32, y: i32) -> i32 {
    x + y
}

// 3. Working with C structs
#[repr(C)]
struct Point {
    x: f64,
    y: f64,
}

extern "C" {
    fn distance(p1: *const Point, p2: *const Point) -> f64;
}

// 4. Callbacks from C into Rust
type Callback = extern "C" fn(i32) -> i32;

extern "C" {
    fn register_callback(cb: Callback); // Implemented on the C side
}

extern "C" fn my_callback(x: i32) -> i32 {
    x * 2
}

// 5. Build configuration (build.rs)
// println!("cargo:rustc-link-search=native=/path/to/libs");
// println!("cargo:rustc-link-lib=static=foo");

// 6. Using bindgen to generate bindings (in build.rs):
// bindgen::Builder::default()
//     .header("wrapper.h")
//     .generate()
//     .unwrap()
//     .write_to_file(out_path)
//     .unwrap();

Q26: Explain memory layout of Rust types.

Answer:

use std::mem;

#[repr(C)] // C-style layout (predictable field order)
struct CStruct {
    a: u8,  // 1 byte
    b: u32, // 4 bytes
    c: u16, // 2 bytes
}
// Layout: a(1) + padding(3) + b(4) + c(2) + padding(2) = 12 bytes

#[repr(packed)] // No padding
struct PackedStruct {
    a: u8,
    b: u32,
    c: u16,
}
// Layout: a(1) + b(4) + c(2) = 7 bytes (but fields may be unaligned!)

#[repr(align(16))] // 16-byte minimum alignment
struct AlignedStruct {
    data: [u8; 10],
}
// Layout: 10 bytes + 6 bytes padding, size rounded up to 16

// Enum memory layout
enum MyEnum {
    A,            // discriminant 0
    B(i32),       // discriminant 1 + i32 payload
    C { x: f64 }, // discriminant 2 + f64 payload
}
// Size = discriminant + max(variant payload size) + padding

fn print_layout<T>() {
    println!(
        "Size of {}: {} bytes",
        std::any::type_name::<T>(),
        mem::size_of::<T>()
    );
    println!("Alignment: {} bytes", mem::align_of::<T>());
}

fn main() {
    // Niche optimization: Option<&T> is pointer-sized because the
    // compiler uses the null pointer to represent None
    assert_eq!(mem::size_of::<&i32>(), mem::size_of::<Option<&i32>>());

    print_layout::<CStruct>();
    print_layout::<Option<&i32>>();
    print_layout::<Box<dyn Fn()>>(); // Fat pointer: data pointer + vtable pointer
}

10. Best Practices and Patterns

Q27: What are the most important Rust design patterns?

Answer:

// 1. Builder Pattern
struct Pizza {
    size: String,
    toppings: Vec<String>,
}

struct PizzaBuilder {
    size: String,
    toppings: Vec<String>,
}

impl PizzaBuilder {
    fn new() -> Self {
        PizzaBuilder {
            size: "medium".to_string(),
            toppings: vec![],
        }
    }

    fn size(mut self, size: &str) -> Self {
        self.size = size.to_string();
        self
    }

    fn add_topping(mut self, topping: &str) -> Self {
        self.toppings.push(topping.to_string());
        self
    }

    fn build(self) -> Pizza {
        Pizza {
            size: self.size,
            toppings: self.toppings,
        }
    }
}

// 2. RAII (Resource Acquisition Is Initialization)
struct DatabaseConnection {
    connected: bool,
}

impl DatabaseConnection {
    fn new() -> Self {
        println!("Connecting to database...");
        DatabaseConnection { connected: true }
    }
}

impl Drop for DatabaseConnection {
    fn drop(&mut self) {
        println!("Disconnecting from database...");
        self.connected = false;
    }
}

// 3. Newtype Pattern - distinct types prevent unit mix-ups
struct Meters(f64);
struct Seconds(f64);

fn speed(distance: Meters, time: Seconds) -> f64 {
    distance.0 / time.0
}

// 4. Typestate Pattern - invalid state transitions fail to compile
struct Empty;
struct Waiting;
struct Confirmed;

struct Order<State> {
    items: Vec<String>,
    state: std::marker::PhantomData<State>,
}

impl Order<Empty> {
    fn new() -> Self {
        Order {
            items: vec![],
            state: std::marker::PhantomData,
        }
    }

    fn add_item(self, item: String) -> Order<Waiting> {
        let mut items = self.items;
        items.push(item);
        Order {
            items,
            state: std::marker::PhantomData,
        }
    }
}

impl Order<Waiting> {
    fn confirm(self) -> Order<Confirmed> {
        Order {
            items: self.items,
            state: std::marker::PhantomData,
        }
    }
}

impl Order<Confirmed> {
    fn process(&self) {
        println!("Processing order: {:?}", self.items);
    }
}

// 5. Dependency Injection via traits
trait Logger {
    fn log(&self, message: &str);
}

struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, message: &str) {
        println!("{}", message);
    }
}

struct Service<T: Logger> {
    logger: T,
}

impl<T: Logger> Service<T> {
    fn new(logger: T) -> Self {
        Service { logger }
    }

    fn do_work(&self) {
        self.logger.log("Working...");
    }
}

Q28: How do you design error types in a library?

Answer:

use std::error::Error;
use std::fmt;

// 1. Define a custom error enum
#[derive(Debug)]
enum LibraryError {
    Network(String),
    Parse { kind: String, line: usize },
    Io(std::io::Error),
    Timeout,
    #[cfg(feature = "debug")]
    Internal(String),
}

// 2. Implement Display
impl fmt::Display for LibraryError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            LibraryError::Network(msg) => write!(f, "Network error: {}", msg),
            LibraryError::Parse { kind, line } => {
                write!(f, "Parse error at line {}: {}", line, kind)
            }
            LibraryError::Io(e) => write!(f, "IO error: {}", e),
            LibraryError::Timeout => write!(f, "Operation timed out"),
            #[cfg(feature = "debug")]
            LibraryError::Internal(msg) => write!(f, "Internal error: {}", msg),
        }
    }
}

// 3. Implement the Error trait
impl Error for LibraryError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            LibraryError::Io(e) => Some(e),
            _ => None,
        }
    }
}

// 4. Provide From conversions so ? works seamlessly
impl From<std::io::Error> for LibraryError {
    fn from(error: std::io::Error) -> Self {
        LibraryError::Io(error)
    }
}

impl From<std::num::ParseIntError> for LibraryError {
    fn from(error: std::num::ParseIntError) -> Self {
        LibraryError::Parse {
            kind: error.to_string(),
            line: 0,
        }
    }
}

// 5. Create a type alias for the library's Result
type Result<T> = std::result::Result<T, LibraryError>;

// 6. Usage inside the library
fn read_config(path: &str) -> Result<String> {
    let content = std::fs::read_to_string(path)?; // From converts automatically
    if content.is_empty() {
        return Err(LibraryError::Parse {
            kind: "empty config file".to_string(),
            line: 0,
        });
    }
    Ok(content)
}

// 7. For callers
fn main() {
    match read_config("config.txt") {
        Ok(data) => println!("Config: {}", data),
        Err(e) => {
            eprintln!("Error: {}", e);
            if let Some(source) = e.source() {
                eprintln!("Caused by: {}", source);
            }
        }
    }
}

Interview Tips Summary

Key Areas to Master:

  1. Ownership & Borrowing: Understand move semantics, borrowing rules, and when to use each
  2. Lifetimes: Know elision rules, struct lifetimes, and common patterns
  3. Concurrency: Send/Sync traits, thread safety, async/await
  4. Error Handling: Result, Option, custom error types
  5. Performance: Zero-cost abstractions, memory layout, optimization techniques
  6. Patterns: Builder, RAII, Newtype, Type State
  7. Unsafe: When and why, invariants to maintain
  8. Macros: Declarative vs procedural, common use cases
  9. Testing: Unit tests, integration tests, doc tests
  10. Ecosystem: Common crates, tooling, FFI
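Point 9 lists testing, which no question in this set covers. A minimal sketch of Rust's built-in test support (the function and crate names are illustrative): unit tests live in a #[cfg(test)] module, and examples in doc comments are compiled and run as doc tests by cargo test.

```rust
/// Adds two numbers.
///
/// Doc test (the crate name `my_crate` is a placeholder):
///
/// ```
/// assert_eq!(my_crate::add(2, 2), 4);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_positive_numbers() {
        assert_eq!(add(2, 3), 5);
    }

    #[test]
    fn works_with_negatives() {
        assert_eq!(add(-2, 2), 0);
    }
}
```

Integration tests go in a top-level tests/ directory and exercise the crate's public API only; run everything with `cargo test`.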

Common Interview Questions to Expect:

  1. "Explain how ownership works with an example"
  2. "What's the difference between String and &str?"
  3. "How do you share data between threads safely?"
  4. "What are zero-cost abstractions?"
  5. "How would you design a thread pool?"
  6. "Explain the difference between Rc and Arc"
  7. "When would you use unsafe code?"
  8. "How do you handle errors in a library?"
  9. "What's the difference between async fn and impl Future?"
  10. "How does Rust prevent data races?"
