Uber

Real Uber Interview Experience

5 Rounds: Coding, Frontend, Machine Coding, System Design, Behavioral

5 Rounds | 6 Weeks Prep | 80 LPA - 1 CR Package | ✓ Offer

By Vasanth Bhat

Staff Software Engineer @ Walmart Global Tech

Mentored 100+ frontend developers through successful interviews

Round 1: Data Structures & Algorithms (45 minutes)

Medium-Hard DSA problem. Focus: Clean code, optimal solution, communication.

📌 Problem: LRU Cache (LeetCode 146, Medium)

Platform: LeetCode | Difficulty: Medium | Time: 35-40 min

Problem Statement:
Design and implement an LRU (Least Recently Used) Cache class with the following operations:
LRUCache(capacity): Initialize cache with positive capacity
get(key): Return value of key if it exists, else return -1
put(key, value): Update value of key. If key doesn't exist, add it. If cache exceeds capacity, evict least recently used item.

Time Complexity: Both operations must be O(1)
📋 Constraints:
• 1 ≤ capacity ≤ 10⁴
• 0 ≤ key, value ≤ 10⁹
• At most 2 × 10⁴ calls to get and put
• Must achieve O(1) for both get and put - this is critical!
📝 Example:
LRUCache cache = new LRUCache(2);
cache.put(1, 1);     // cache is {1=1}
cache.put(2, 2);     // cache is {1=1, 2=2}
cache.get(1);        // returns 1, cache is {2=2, 1=1} (1 is now most recent)
cache.put(3, 3);     // capacity exceeded, evict key 2 (least recently used)
                     // cache is {1=1, 3=3}
cache.get(2);        // returns -1 (not found)
🔄 Why This Is Hard:
A naive approach (HashMap only) gives O(1) get, but put with eviction needs O(n) to find the least recently used entry.
Solution: Combine a HashMap with a doubly linked list. The list maintains recency order (most recent at the head, least recent at the tail), and the HashMap provides O(1) access to any node.
💡 Solution Approach:
class Node {
  constructor(key, value) {
    this.key = key;
    this.value = value;
    this.prev = null;
    this.next = null;
  }
}

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.cache = new Map();
    // Dummy nodes for easy manipulation
    this.head = new Node(0, 0);
    this.tail = new Node(0, 0);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  addToHead(node) {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }

  removeNode(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  moveToHead(node) {
    this.removeNode(node);
    this.addToHead(node);
  }

  get(key) {
    if (!this.cache.has(key)) return -1;
    const node = this.cache.get(key);
    this.moveToHead(node);
    return node.value;
  }

  put(key, value) {
    if (this.cache.has(key)) {
      const node = this.cache.get(key);
      node.value = value;
      this.moveToHead(node);
    } else {
      const newNode = new Node(key, value);
      this.cache.set(key, newNode);
      this.addToHead(newNode);
      
      if (this.cache.size > this.capacity) {
        const leastUsed = this.tail.prev;
        this.removeNode(leastUsed);
        this.cache.delete(leastUsed.key);
      }
    }
  }
}

// Time: O(1) for both get and put
// Space: O(capacity)
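The doubly-linked-list version above is what interviewers expect you to build. As a sanity check, the same behavior can be reproduced in far fewer lines by leaning on the fact that a JavaScript Map iterates in insertion order; `MapLRU` below is an illustrative name for this alternative, not part of the interview solution:

```javascript
// Compact LRU built on Map's insertion order (oldest entry iterates first).
class MapLRU {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key);
    this.map.delete(key);     // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // The first key in iteration order is the least recently used.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

// Reproducing the example trace from the problem statement:
const cache = new MapLRU(2);
cache.put(1, 1);
cache.put(2, 2);
console.log(cache.get(1)); // 1
cache.put(3, 3);           // evicts key 2
console.log(cache.get(2)); // -1
```

Both operations are O(1) amortized here too, but the explicit linked-list version demonstrates the data-structure reasoning the round is testing.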
Why Uber Asks This:
✓ Tests data structure knowledge (linked list + hash map)
✓ Shows you can combine data structures for an optimal solution
✓ Real-world use: caching, browser history, CDN edge servers
✓ Communication: can you explain the approach clearly?

💡 Interview Tips for Round 1:

  • Discuss approach before coding: Hash map + linked list, why both?
  • Explain trade-offs: O(1) time requires O(capacity) space
  • Handle edge cases: Capacity = 1, accessing same key repeatedly
  • Test with examples: Walk through your code step by step
  • Optimize early: Don't settle for O(n) when O(1) is possible

At Uber's scale, efficiency matters. What stands between you and this round is seeing patterns instantly: when hash maps work, when graphs shine, when sorting matters. Learn to see like they do →

Round 2: Frontend Deep Dive (60 minutes)

10+ questions covering React, JavaScript, performance, and browser concepts.

🔧 Question 1: Closures and Memory Leaks

Question: "A closure in your React component captured a large object. Will it cause a memory leak?"

Answer: Yes, if the closure outlives the component. Example:
// ❌ Memory leak
function MyComponent() {
  const largeArray = new Array(1000000).fill(0);
  
  window.myGlobalCallback = () => {
    console.log(largeArray); // Closure holds reference
  };
  
  return <div>Component</div>;
}
// largeArray never gets GC'd until callback is removed

// ✅ Fixed
function MyComponent() {
  const largeArray = new Array(1000000).fill(0);

  useEffect(() => {
    const callback = () => console.log(largeArray);
    window.myGlobalCallback = callback;

    return () => {
      delete window.myGlobalCallback; // Cleanup on unmount
    };
  }, []);

  return <div>Component</div>;
}
Key Point: Closures keep variables in memory. Clean up event listeners, timers, and global references in useEffect cleanup.

🔧 Question 2: Event Loop and Microtasks vs Macrotasks

Question: "What's the output order? Why?"
console.log('1');

setTimeout(() => console.log('2'), 0);

Promise.resolve()
  .then(() => console.log('3'))
  .then(() => console.log('4'));

console.log('5');
Output: 1, 5, 3, 4, 2

Why:
• Synchronous code runs first: 1, 5
• Microtasks (promise callbacks) run after sync code, before setTimeout: 3, 4
• Macrotasks (setTimeout) run last: 2

Why Uber Cares: Understanding this prevents race conditions, memory leaks, and performance issues.
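To internalize the two queues, it helps to add queueMicrotask to the mix; this is a quick sketch for practice, not part of the interview question:

```javascript
console.log('sync 1');

// Macrotask queue: runs after all microtasks drain.
setTimeout(() => console.log('macrotask'), 0);

// Same queue as promise .then callbacks.
queueMicrotask(() => console.log('microtask'));

console.log('sync 2');

// Order: sync 1, sync 2, microtask, macrotask
```

The rule of thumb: finish the current call stack, drain the entire microtask queue, then take one macrotask and repeat.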

🔧 Question 3: React.memo with Objects and Functions

Question: "My React.memo child still re-renders even though the parent didn't change. Why?"

Answer: Objects and functions are new on every parent render:
// ❌ Child re-renders every time
function Parent() {
  const config = { color: 'red' }; // New object each render
  const handleClick = () => console.log('clicked'); // New function
  
  return <Child config={config} onClick={handleClick} />;
}

const Child = React.memo(({ config, onClick }) => {
  // config and onClick are "new" every time
  // Props appear different to React, so re-render happens
  return <button onClick={onClick}>Click</button>;
});

// ✅ Fixed with useMemo and useCallback
function Parent() {
  const config = useMemo(() => ({ color: 'red' }), []);
  const handleClick = useCallback(() => console.log('clicked'), []);
  
  return <Child config={config} onClick={handleClick} />;
}
Key Point: React.memo compares props by reference, not value. Memoize objects and functions.

🔧 Question 4: Hoisting Behavior

Question: "What happens when you call a function before defining it?"

Answer:
// ✅ Function declarations are hoisted
console.log(sum(2, 3)); // Works! Output: 5
function sum(a, b) { return a + b; }

// ❌ Function expressions assigned to const are NOT usable before declaration
console.log(add(2, 3)); // ReferenceError: Cannot access 'add' before initialization
const add = (a, b) => a + b;

// ❌ let/const declarations have a temporal dead zone
console.log(x); // ReferenceError: Cannot access 'x' before initialization
let x = 5;

// ✅ var is hoisted (initialized to undefined)
console.log(y); // undefined
var y = 5;
Why It Matters in React: Affects callback order, hook dependencies, and debugging. Understand what's available when.

🔧 Question 5: WeakMap and WeakSet Use Cases

Question: "When would you use WeakMap instead of Map?"

Answer: WeakMap keys must be objects and don't prevent garbage collection:
// Use case: Store private data on objects
const privateData = new WeakMap();

class User {
  constructor(name) {
    this.name = name;
    privateData.set(this, { password: 'secret123' });
  }
  
  getPassword() {
    return privateData.get(this);
  }
}

let user = new User('Alice');
user.getPassword(); // { password: 'secret123' }

user = null; // When user is garbage collected,
// privateData entry is automatically removed too

// Real Uber use: Cache DOM nodes without preventing GC
const domCache = new WeakMap();

// Why not just use an object key?
// - Map would keep all objects in memory forever
// - WeakMap allows GC when object is no longer referenced
Key Benefit: Memory safety. Used in libraries for private properties and DOM caching.

🔧 Question 6: Virtual DOM and Diffing Algorithm

Question: "How does React decide which elements changed?"

Answer: React uses the `key` prop and element position:
// ❌ Bad: index as key (position-based)
{items.map((item, index) => (
  <div key={index}>{item.name}</div>
))}
// Problem: Reordering breaks component state

// ✅ Good: Unique keys
{items.map(item => (
  <div key={item.id}>{item.name}</div>
))}
// React matches elements by id, not position

// React compares:
// 1. Element type (div vs span = different)
// 2. Props (className, onClick changed = update)
// 3. Children (recursively)
Performance Impact: Bad keys = re-render 100 items instead of 1. Keys are critical for lists.

🔧 Question 7: Debounce vs Throttle in Search

Question: "User types in a search box. How do you optimize API calls?"

Answer: Debounce for search (wait for pause), throttle for scroll (consistent intervals):
// Debounce: Wait 300ms after user stops typing
const debounce = (fn, delay) => {
  let timeoutId;
  return (...args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), delay);
  };
};

// Throttle: Call at most once every 300ms
const throttle = (fn, delay) => {
  let lastRun = 0;
  return (...args) => {
    const now = Date.now();
    if (now - lastRun >= delay) {
      fn(...args);
      lastRun = now;
    }
  };
};

// In React:
const [searchTerm, setSearchTerm] = useState('');

const handleSearch = useMemo(
  () => debounce(async (term) => {
    const results = await fetch(`/search?q=${term}`);
    // Update results
  }, 300),
  []
);

return <input onChange={(e) => handleSearch(e.target.value)} />;
When to Use: Debounce for search/autocomplete, throttle for scroll/resize.

🔧 Question 8: Context vs Redux vs Prop Drilling

Question: "How do you decide between Context, Redux, and prop drilling?"

Answer:
Prop Drilling: for data passed 1-2 levels deep
• Pro: simple, explicit
• Con: verbose with many props

Context: for global data (theme, locale, user)
• Pro: built into React, good for rarely-changing UI state
• Con: all consumers re-render when the value changes

Redux: for complex app state
• Pro: predictable, devtools, time travel
• Con: boilerplate, overkill for small apps

Uber Approach: Zustand or Recoil for simplicity
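For intuition, the "lighter than Redux" option boils down to a tiny external store with getState/setState/subscribe. This is a framework-free sketch of the pattern behind Zustand-style libraries (`createStore` is an illustrative name, not any library's actual API):

```javascript
// Minimal external store: components subscribe and re-render on change.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };     // shallow merge, like setState
      listeners.forEach((listener) => listener(state));
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe function
    },
  };
}

// Usage:
const store = createStore({ count: 0, theme: 'dark' });
const unsubscribe = store.subscribe((s) => console.log('count is', s.count));
store.ride = undefined; // (no-op; state lives inside the closure)
store.setState({ count: 1 });
unsubscribe();
```

In React this plugs into `useSyncExternalStore(store.subscribe, store.getState)`, which is exactly the hook such libraries use under the hood.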

💡 What Uber Values in Frontend Engineers:

  • Deep JavaScript knowledge (not just React)
  • Performance awareness (memory, rendering, events)
  • Real production experience (debugging issues)
  • Thoughtful solutions (trade-offs, constraints)
  • Practical thinking (this actually happens in real code)

React isn't magic; it's philosophy. Understanding why Uber picks React, state patterns, and performance separates engineers who know the tool from those who master the craft. Master the craft with us →

Round 3: Machine Coding - Ride Booking Component (90 minutes)

Build a feature end-to-end with working UI and actual data processing.

βš™οΈ Build: Ride Booking Component with Live Demo

Hard | Time: 90 min

Requirements:
✓ User enters pickup and dropoff locations
✓ Shows estimated fare based on distance
✓ Select ride type (Uber X, Uber Premier)
✓ Apply promo code (valid codes: UBER10, SAVE20)
✓ Display price breakdown
✓ Handle loading and error states
✓ Form validation

Interactive Demo (with Mock Data):
📍 Try This Demo:

Enter any two locations (pre-filled with real SF locations), select a ride type, and click "Get Fare Estimate" to see the fare calculation in action. Use promo codes UBER10 or SAVE20 for discounts.

Or enter any custom locations: the demo calculates distances between them.
✨ Try: UBER10 (10% off) or SAVE20 (20% off)
Complete React Component Code:
import React, { useState } from 'react';

// Mock location database
const LOCATIONS = {
  'downtown sf': { lat: 37.7749, lng: -122.4194 },
  'airport terminal 2': { lat: 37.6213, lng: -122.3790 },
  'pier 39': { lat: 37.8087, lng: -122.4098 },
  'golden gate': { lat: 37.8199, lng: -122.4783 }
};

const RIDE_TYPES = {
  'UBER_X': { rate: 1.50, name: 'Uber X' },
  'UBER_PREMIER': { rate: 2.50, name: 'Uber Premier' }
};

const PROMO_CODES = {
  'UBER10': 0.10,
  'SAVE20': 0.20
};

const calculateDistance = (loc1, loc2) => {
  // Simplified: straight-line (Euclidean) distance on lat/lng deltas
  const dx = Math.abs(loc1.lat - loc2.lat);
  const dy = Math.abs(loc1.lng - loc2.lng);
  // 1 degree ≈ 111 km
  return Math.sqrt(dx * dx + dy * dy) * 111;
};

function RideBooking() {
  const [pickup, setPickup] = useState('Downtown SF');
  const [dropoff, setDropoff] = useState('Airport Terminal 2');
  const [rideType, setRideType] = useState('UBER_X');
  const [promoCode, setPromoCode] = useState('');
  const [fareData, setFareData] = useState(null);
  const [error, setError] = useState('');
  const [loading, setLoading] = useState(false);

  const handleEstimateFare = () => {
    // Validate inputs
    if (!pickup.trim() || !dropoff.trim()) {
      setError('Please enter both locations');
      return;
    }
    if (pickup.toLowerCase() === dropoff.toLowerCase()) {
      setError('Pickup and dropoff must be different');
      return;
    }

    setLoading(true);
    setError('');

    // Simulate API call
    setTimeout(() => {
      const pickupKey = pickup.toLowerCase();
      const dropoffKey = dropoff.toLowerCase();

      const loc1 = LOCATIONS[pickupKey] || {
        lat: 37.7749 + Math.random() * 0.5,
        lng: -122.4194 + Math.random() * 0.5
      };
      const loc2 = LOCATIONS[dropoffKey] || {
        lat: 37.7749 + Math.random() * 0.5,
        lng: -122.4194 + Math.random() * 0.5
      };

      const distance = calculateDistance(loc1, loc2).toFixed(1);
      const baseFare = 5;
      const rideRate = RIDE_TYPES[rideType].rate;
      const distanceCharge = (distance * rideRate).toFixed(2);
      const subtotal = (parseFloat(baseFare) + 
                       parseFloat(distanceCharge)).toFixed(2);

      let discount = 0;
      if (PROMO_CODES[promoCode.toUpperCase()]) {
        discount = (subtotal * 
                   PROMO_CODES[promoCode.toUpperCase()]).toFixed(2);
      }

      const finalPrice = 
        (parseFloat(subtotal) - parseFloat(discount)).toFixed(2);

      setFareData({
        distance,
        baseFare,
        distanceCharge,
        subtotal,
        discount,
        finalPrice,
        promoApplied: promoCode.toUpperCase()
      });

      setLoading(false);
    }, 800);
  };

  const handleBookRide = () => {
    if (!fareData) return;
    
    setLoading(true);
    // Simulate booking API
    setTimeout(() => {
      setError('');
      setFareData(null);
      setPickup('');
      setDropoff('');
      setPromoCode('');
      alert(`✓ Ride booked! Total: $${fareData.finalPrice}`);
      setLoading(false);
    }, 600);
  };

  return (
    <div>
      <input value={pickup} onChange={(e) => setPickup(e.target.value)} />
      <input value={dropoff} 
             onChange={(e) => setDropoff(e.target.value)} />
      <select value={rideType} 
              onChange={(e) => setRideType(e.target.value)}>
        {Object.entries(RIDE_TYPES).map(([key, val]) => (
          <option key={key} value={key}>
            {val.name} (${val.rate}/km)
          </option>
        ))}
      </select>
      <input value={promoCode} 
             onChange={(e) => setPromoCode(e.target.value)} 
             placeholder="Promo code" />
      <button onClick={handleEstimateFare} disabled={loading}>
        {loading ? 'Estimating...' : 'Get Estimate'}
      </button>

      {error && <div style={{color: 'red'}}>{error}</div>}
      
      {fareData && (
        <>
          <div>Distance: {fareData.distance} km</div>
          <div>Base: ${fareData.baseFare}</div>
          <div>Distance Charge: ${fareData.distanceCharge}</div>
          {fareData.discount > 0 && (
            <div>Discount: -${fareData.discount}</div>
          )}
          <div><strong>Total: ${fareData.finalPrice}</strong></div>
          <button onClick={handleBookRide} disabled={loading}>
            {loading ? 'Booking...' : 'Book Ride'}
          </button>
        </>
      )}
    </div>
  );
}

export default RideBooking;

💡 Machine Coding Best Practices:

  • Ask clarifying questions about edge cases first
  • Plan component structure before coding
  • Build MVP first, then add features
  • Handle errors and loading states explicitly
  • Write clean, well-named code with comments
  • Mock APIs realistically (delays, validation)

Building features is easy. Building the *right* features, the *right way*, at *Uber's scale*: that's the skill. Architecture decisions under pressure are what they're watching for. Learn to architect like they think →

Round 4: System Design (60 minutes)

Deep-dive into architecture, scaling, and trade-offs at Uber scale.

πŸ—οΈ Question 1: Session Management at Scale

Question: "Users stay logged in forever if not explicitly logging out. How do you prevent abandoned sessions from piling up?"

Deep Answer:
Problem: Millions of users, infinite sessions = infinite memory in database

Architecture:
Token Strategy:
• Access Token: JWT, 15-minute expiry, stateless (no DB lookup)
• Refresh Token: opaque, 30 days, stored in Redis (stateful)
• Why split? Access tokens stay fast (stateless); refresh tokens keep sessions revocable (server-side state)

Storage Layer (Redis):
Key: `refresh_token:{token_hash}`
Value: `{user_id, device_id, issued_at, expires_at}`
TTL: 30 days (auto-expire)

Inactive Session Cleanup:
• Track a `last_activity` timestamp
• Cron job: delete sessions inactive for 60+ days
• Lazy cleanup: when the user refreshes, check last_activity

Multi-Device Support:
Key: `refresh_token:{user_id}:{device_id}`
Allows different tokens per device
User can logout from "all other devices"

Security Measures:
• If refresh fails: force re-login
• Detect suspicious refreshes: multiple regions within 1 hour
• Token rotation: issue a new refresh token on each use

At Uber Scale (10M concurrent):
• Shard Redis by user_id hash
• Separate cluster for refresh tokens
• TTL auto-cleanup handles expiration

πŸ—οΈ Question 2: Real-Time Location Updates - 100K Drivers

Question: "Driver sends GPS location every 2 seconds. 100K drivers = 50K messages/sec. How do you handle this?"

Deep Architecture:
Data Flow:
Mobile App → WebSocket → Message Queue → Cache → Broadcast

1. Message Queue (Kafka):
• Receives 50K location updates/sec
• Decouples producer (driver) from consumer (server)
• Replayable: the stream can be replayed after a crash
• Partitioned by driver_id for parallelism

2. Stream Processor (Flink/Spark):
• Deduplicate: if a driver sends twice in 1 sec, keep the latest
• Validate: sanity-check locations (impossible speeds suggest spoofed GPS)
• Aggregate: store in Redis (sub-100ms latency)

3. Cache Layer (Redis):
Key: `driver_location:{driver_id}`
Value: `{lat, lng, timestamp, accuracy}`
TTL: 5 minutes (assume driver inactive if no update)

4. Broadcasting (WebSocket):
• Only broadcast to "interested" clients
• Interested = the customer for that ride, other drivers in the nearby zone
• Use geographic sharding: divide the city into zones
• A zone server only broadcasts to connected clients in that zone

5. Persistence:
• Write to DynamoDB (append-only table)
• TTL: 90 days (for analytics, disputes)
• Not in the real-time path (async batching)

Scaling to 1M Drivers:
• Shard WebSocket servers by city
• Shard the Redis cache by driver_id hash
• Use consistent hashing for zone assignment
• CDN edge for location distribution
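The geographic sharding step can be illustrated with a fixed lat/lng grid. Real systems use geohash or H3 cells plus consistent hashing, so `zoneId` and `shardFor` below are simplified stand-ins:

```javascript
// Snap a coordinate to a grid cell so every point in the same cell
// lands on the same broadcast zone. cellDeg = 0.01 ≈ 1.1 km cells.
function zoneId(lat, lng, cellDeg = 0.01) {
  const row = Math.floor(lat / cellDeg);
  const col = Math.floor(lng / cellDeg);
  return `${row}:${col}`;
}

// Map a zone to one of N zone servers with a simple stable string hash
// (a stand-in for consistent hashing, which limits reshuffling when N changes).
function shardFor(zone, numShards) {
  let h = 0;
  for (const ch of zone) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % numShards;
}

// Two nearby drivers fall in the same zone, hence the same server:
console.log(zoneId(37.7749, -122.4194) === zoneId(37.7751, -122.4191)); // true
```

A zone server then holds the WebSocket connections for its cells and fans each driver update out only to riders subscribed to that zone.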

πŸ—οΈ Question 3: Ride History Caching Strategy

Question: "Users check ride history constantly. You can't query database every time. Design the caching layer."

Multi-Tier Caching Strategy:
Tier 1: Client Cache (10ms)
• Store the last 20 rides in localStorage
• Instant load on app restart
• TTL: 24 hours
• Limited by storage (~1MB)

Tier 2: CDN Cache (20ms)
• Cache GET /api/rides per user
• Cloudflare/Akamai
• TTL: 5 minutes
• Cache-Key: user_id + "rides"

Tier 3: Redis Cache (10ms)
Key: `user_rides:{user_id}:{page}`
Value: JSON array of last 50 rides
TTL: 1 hour
Partitioned by user_id for parallelism

Tier 4: Database
• PostgreSQL with indexes on (user_id, created_at)
• Query only on cache miss
• Use pagination to avoid huge responses

Cache Invalidation:
When ride completes:
1. Update database
2. Invalidate Redis: DEL user_rides:{user_id}:*
3. Invalidate CDN: PURGE by cache-key
4. Send WebSocket to app: "History updated, refresh"
5. App clears localStorage, refetches

Performance Impact:
• No caching: ~300ms per request (DB query)
• Redis only: ~10ms per request
• Redis + client: ~0ms (instant from device)

Avoiding Cache Stampede:
If a key expires and 10K users request it at once:
• Use the cache-aside pattern with a mutex lock
• The first request takes the lock and queries the DB
• Others wait for the first response
• Prevents a thundering herd
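In Node, that mutex can simply be a shared in-flight promise per key: concurrent callers await the same DB query instead of issuing their own. A sketch, with an in-memory Map standing in for Redis and `loadFromDb` as a stand-in for the PostgreSQL query:

```javascript
const cache = new Map();     // key -> { value, expiresAt }  (Redis stand-in)
const inFlight = new Map();  // key -> Promise of the ongoing DB load

async function getRides(userId, loadFromDb, ttlMs = 60 * 60 * 1000) {
  const key = `user_rides:${userId}`;

  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit

  // Stampede guard: if someone is already loading this key, share their result.
  if (inFlight.has(key)) return inFlight.get(key);

  const promise = (async () => {
    try {
      const value = await loadFromDb(userId);
      cache.set(key, { value, expiresAt: Date.now() + ttlMs });
      return value;
    } finally {
      inFlight.delete(key); // always release the "lock"
    }
  })();

  inFlight.set(key, promise);
  return promise;
}
```

Across multiple Node processes this per-process guard isn't enough on its own; there the lock itself moves into Redis (e.g. SET NX with a short TTL), but the shape of the logic is the same.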

πŸ—οΈ Question 4: ETA (Estimated Time of Arrival) Calculation

Question: "How do you predict accurate ETAs? The app says the car arrives in 5 minutes but it actually takes 8? Bad UX."

ML-Based Architecture:
Problem with the Naive Approach:
ETA = distance / speed. This fails because:
• Speed changes every minute (traffic)
• The driver's known location is 2-5s old
• Weather, accidents, and time of day affect speed
• Result: ±40% error rate

Uber's Real Solution:
1. Feature Engineering:
• Start location (lat, lng)
• End location (lat, lng)
• Time of day (rush hour?)
• Day of week (Friday night parties?)
• Weather (rain = slower)
• Events (concert, game, holiday?)
• Historical: average time for this route at this time

2. Model Training:
• Collect 1B+ completed trips
• Train XGBoost / neural network
• Input: the features above
• Output: predicted travel time
• RMSE: ±2 minutes

3. Real-Time Adjustment:
Every 10 seconds after pickup:
• Get current location
• Get current traffic (real-time API)
• Recalculate remaining distance
• Run the ML model on the remaining segment
• If the new ETA differs by >1 min: update the app

4. Post-Trip Analysis:
• Store: predicted_eta, actual_duration, error
• Use for weekly model retraining
• Improve accuracy over time

5. Confidence Intervals:
• Don't show a point estimate (5 min)
• Show a range (4-6 min) with confidence
• More honest; users set expectations

Scaling Considerations:
• Can't run an expensive model on the fly for every request
• Pre-compute for popular routes
• Cache predictions for 10 minutes
• Only recompute if traffic changed significantly
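The "update only when the estimate moves by more than a minute" policy from step 3 can be sketched as a small stateful updater. `predictRemaining` and `pushUpdate` are illustrative stand-ins for the model call and the push to the rider's app:

```javascript
// Re-estimate on every tick, but only push an update to the app when
// the ETA shifts by more than the threshold (values are illustrative).
function makeEtaUpdater(predictRemaining, pushUpdate, thresholdMin = 1) {
  let lastShownEta = null; // what the rider currently sees
  return function tick(currentLocation, traffic) {
    const eta = predictRemaining(currentLocation, traffic);
    if (lastShownEta === null || Math.abs(eta - lastShownEta) > thresholdMin) {
      lastShownEta = eta;
      pushUpdate(eta); // e.g. WebSocket message to the rider's app
    }
    return lastShownEta;
  };
}
```

Damping updates this way avoids the ETA flickering between 5 and 6 minutes on every 10-second tick while still reflecting genuine traffic changes.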

💡 System Design Evaluation Criteria:

  • Correctness: Does your design solve the problem?
  • Scalability: Can it handle 10x current load?
  • Trade-offs: What are you sacrificing (cost, latency, complexity)?
  • Monitoring: How do you detect failures?
  • Failure modes: What happens if component X fails?
  • Real-world constraints: Cost, team size, time to launch

Round 5: Behavioral & Frontend Culture (45 minutes)

Questions focused on frontend engineering, technical decisions, and team collaboration.

🤝 Question 1: Performance Regression You Caused

Question: "Tell me about a time you unintentionally made the UI slower. How did you discover and fix it?"

STAR Response Example (Frontend-Focused):
Situation: Building a product listing page with 50 items. I added React.memo to optimize, but logged state changes in render.

Task: Ship the feature by Friday. Didn't have time for performance review.

Action:
1. User reported: "Page stutters when scrolling"
2. Opened DevTools → Performance tab
3. Found: each render triggered 50 console.log calls, pushing frame time to ~150ms
4. Root cause: I had placed `console.log(state)` in the component render
5. Fix: moved logging into useEffect, made it conditional
6. Also added: React.memo with a custom comparison function
7. Retest: 60fps smooth scrolling ✓

Result: Page went from janky to smooth. Shipped fix same day.
Learning: Always use DevTools before guessing. Console.log has hidden costs.
Why Uber Asks This: Shows you can identify problems methodically, use tools, and learn from mistakes.

🤝 Question 2: Disagreement Over React Architecture

Question: "Your team wanted to use Context for global state. You thought Redux would be better. What happened?"

STAR Response Example:
Situation: New frontend app. I proposed Redux, lead wanted Context API for simplicity.

Task: Make decision that doesn't delay project.

Action:
1. Listened to lead's concerns: "Redux is overkill, more boilerplate"
2. Acknowledged valid point: Context is simpler to start
3. Proposed middle ground: "Let's start with Context, benchmark performance"
4. Showed data: Chart of app complexity vs boilerplate cost
5. Agreement: Context for v1, migrate to Redux if needed

Result:
• Started with Context (faster delivery)
• Hit performance issues at 50 screens
• Migrated to Zustand (lighter than Redux)
• Both happy: pragmatic approach

Learning: Sometimes the "better" tech is worse for the team. Fit matters.

🤝 Question 3: Fixing Someone Else's Broken Component

Question: "You inherited a React component that's hard to read, has memory leaks, and poor performance. How do you handle it tactfully?"

STAR Response (Frontend-Focused):
Situation: Code review: teammate's 400-line component with nested ternaries, 20 useStates, no cleanup.

Task: Improve code without making teammate defensive.

Action:
1. Private message (not public PR comment)
2. Praised what worked: "Good separation of concerns in UI"
3. Mentioned concrete issue: "Memory leak in useEffect without cleanup"
4. Offered help: "Want to pair on refactoring this?"
5. Did refactor together:
- Split into 3 smaller components
- Extracted custom hook for state logic
- Added cleanup functions
6. Used commit message: "Refactor: Extract logic into custom hook"
(not "Fix mess" or "Improve bad code")

Result: Component reduced to 80 lines. No memory leaks. Teammate learned useCallback pattern.
Learning: Always assume good intent. Collaborate, don't criticize.

🤝 Question 4: Learning a New Frontend Technology

Question: "Product wants to use a new framework/library you don't know. How do you approach it?"

STAR Response:
Situation: Team wanted to use Next.js. I'd only used Create React App.

Task: Learn quickly without delaying the project.

Action:
1. Read official docs (Next.js fundamentals)
2. Built small proof-of-concept (Auth + API routes)
3. Identified gotchas: SSR vs CSR, data fetching patterns
4. Asked questions in pair programming with senior
5. Did prototype page before production code
6. Set up ESLint rules to catch common Next.js mistakes

Result: Shipped production app using Next.js. No major issues. Team learned together.
Learning: Smart learning = small projects first, ask questions, establish patterns.

🤝 Question 5: Why Uber? Why Now?

Question: "Why do you want to work at Uber specifically? What about Uber Frontend excites you?"

Good Answer (Personalize This):
"I'm passionate about building robust systems at scale. Three things excite me about Uber:

1. Frontend Complexity: Mapping, real-time updates, 100M+ concurrent users. Not your average CRUD app. Real technical challenges.

2. Performance Culture: Uber cares deeply about milliseconds. Every 100ms delay = fewer bookings. This pushes engineers to think differently.

3. Global Impact: My code directly affects drivers and riders worldwide. That matters to me more than company size/prestige.

Specifically, I'm interested in the [Rider App / Maps / Payments] team because..."
Pro Tips:
✓ Do research: recent tech blog posts, open-source projects
✓ Reference a specific team or technology
✓ Show genuine interest (not desperate)
✓ Ask about their challenges at the end

💡 Frontend-Focused Behavioral Patterns:

  • Share actual debugging stories (DevTools, performance issues)
  • Talk about trade-offs in architecture decisions
  • Mention performance wins (ms matter)
  • Show learning from mistakes (refactors, tech choices)
  • Discuss code quality and maintainability
  • Reference real projects and scale you've handled

🎯 Final Interview Checklist

  • βœ… Practice LRU Cache problem until you can code it in 30 min
  • βœ… Memorize 2-3 examples for each frontend question
  • βœ… Build a small project using React + system design thinking
  • βœ… Use DevTools to profile and optimize something
  • βœ… Research Uber's blog, engineering talks on YouTube
  • βœ… Prepare for each behavioral question (write it down)
  • βœ… Mock interview with someone who interviews
  • βœ… Get 8 hours sleep before interview
  • βœ… Test setup: camera, mic, internet 30 min before
  • βœ… Show enthusiasm, ask genuine questions back

Between you and Uber's system design round sit caching strategies, trade-off thinking, and failure scenarios. They'll probe. Most stumble. Learn to answer with confidence →

You Now Know What Uber Expects 👆

From LRU cache design to global system architecture: this is the complete Uber interview roadmap. But knowing what to expect and being able to deliver under pressure are two different things.

Our cohort has helped a few developers crack interviews at Uber, Microsoft, Atlassian, and Amazon. They didn't just learn the concepts; they learned to think like an Uber engineer.

What our cohort members get:

  • βœ… Structured 8-week curriculum covering all 4 rounds
  • βœ… 50+ mock interviews (real-time, recorded, feedback)
  • βœ… 1-on-1 mentoring with engineers from Uber, Microsoft, etc.
  • βœ… Private community (25 dedicated developers)
  • βœ… Lifetime access to recordings & materials
  • βœ… 100% money-back if you don't get offers

Cohort 2 starts June 2026. Limited to 25 seats. Early birds get exclusive pricing.

Ready to Crack Uber?

Join our cohort for structured prep, mock interviews, and personalized feedback.

Join Next Cohort

Other Interview Experiences

Microsoft Interview

4 rounds with system design

Atlassian Interview

Collaboration-focused rounds