
30 Uber Interview LeetCode Questions for 2026

April 30, 2026 · 9 min read

Prepare for Uber coding interviews with 30 LeetCode-style questions, difficulty expectations by level, and prep guidance for the OA, phone screen, onsite loop, and machine coding rounds.

Uber Interview LeetCode: 30 Most Asked Questions (2026)

If you're preparing for an Uber software engineering interview, the LeetCode grind is only part of the picture. Uber interview questions cluster around a handful of pattern families — graphs, DP, trees, arrays, and design-adjacent coding — and the specific problems that show up depend heavily on whether you're interviewing at the SWE II level or the L5A senior track. This guide covers confirmed questions from real Uber loops, the full interview process, what Uber actually evaluates beyond correctness, and a practice method that mirrors how their interviewers score you.

How Uber's interview process works in 2026

The loop typically runs about four weeks from recruiter screen to offer decision. Here's what each stage looks like.

Online assessment (CodeSignal)

Four questions, timed. Difficulty skews medium. Both correctness and runtime matter — brute-force solutions that pass test cases but time out won't clear the bar. Community consensus from candidates who've been through it recently: aim to solve at least three of the four for a strong approval signal.

Technical phone screen

One coding question, roughly one hour. Expect medium difficulty. Your code needs to compile and run — pseudocode doesn't get a pass here. The interviewer will have test cases ready.

Onsite / virtual onsite loop (4–5 rounds)

The onsite is where the real evaluation happens. Round types, roughly in order:

  • General coding (1–2 rounds): DSA-heavy. Graphs, DP, and tree problems dominate. Expect at least one hard-level question at senior levels.
  • Machine coding / specialization round (senior track): Design and implement a working mini-system in-session. Not pseudocode — testable, runnable code.
  • System design (senior track): Scale, tradeoffs, architecture choices. You'll need to justify every component.
  • Behavioral / leadership (all levels): Impact, ownership, conflict resolution. Metrics matter.
  • Bar raiser / TLM round (senior): Project depth, cross-functional judgment. This round is all-or-nothing.

Uber interview LeetCode questions by category

Questions cluster into five pattern families. Knowing the family matters more than memorizing one solution — Uber interviewers care about whether you can recognize the pattern and narrate your approach, not whether you've seen the exact problem before.

Graphs and shortest path

  • Grid minimum-cost path (0/1 weighted edges, Dijkstra variant) — appeared in L5A senior loops
  • Shortest Word Distance II — phone screen question from an ATG loop
  • Word Search II — onsite round, ATG loop
  • Social network graph/query design — ATG onsite, graph traversal with design elements

Uber's mapping and routing domain makes graph problems a recurring signal. If you're only going to deep-dive one category, this is it.
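
The exact grid prompt from those loops isn't public, but 0/1-weighted edges point at 0/1 BFS, a deque-based Dijkstra variant that replaces the heap with a double-ended queue. A minimal sketch, assuming grid[r][c] holds the 0-or-1 cost of stepping into a cell (the cost convention is an illustrative assumption):

```python
from collections import deque

def min_cost_path(grid):
    """0/1 BFS: shortest path on a grid whose edge weights are 0 or 1.
    Assumes grid[r][c] is the cost (0 or 1) of stepping into (r, c)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[0][0] = 0
    dq = deque([(0, 0)])
    while dq:
        r, c = dq.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                w = grid[nr][nc]
                if dist[r][c] + w < dist[nr][nc]:
                    dist[nr][nc] = dist[r][c] + w
                    # 0-cost moves go to the front, 1-cost to the back,
                    # so nodes come off the deque in distance order
                    if w == 0:
                        dq.appendleft((nr, nc))
                    else:
                        dq.append((nr, nc))
    return dist[rows - 1][cols - 1]
```

The deque trick keeps the runtime at O(V + E) versus O(E log V) for heap-based Dijkstra; narrating that tradeoff out loud is exactly the signal these coding rounds reward.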

Dynamic programming

  • 2D DP grid problems — confirmed in SWE II loops; one candidate reported a hard 2D DP as one of four OA questions
  • Next greater palindrome / mirror-and-carry palindrome logic — appeared in L5A coding rounds
  • Sliding Window Maximum — ATG onsite

DP questions skew hard at senior level. Candidates preparing for Uber consistently flag DP and graphs as the two areas that matter most.
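
Of the three, Sliding Window Maximum is the most self-contained, and despite appearing in DP lists it is usually solved with a monotonic deque. The standard O(n) approach:

```python
from collections import deque

def max_sliding_window(nums, k):
    """Maximum of each length-k window in O(n) total.
    The deque holds indices whose values are in decreasing order."""
    dq, result = deque(), []
    for i, x in enumerate(nums):
        # Drop the front index once it slides out of the window
        if dq and dq[0] <= i - k:
            dq.popleft()
        # Drop smaller values: they can never be a window maximum again
        while dq and nums[dq[-1]] <= x:
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result
```

For example, max_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3) returns [3, 3, 5, 5, 6, 7].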

Trees and binary search

  • Binary Tree Right Side View — SWE II phone screen
  • Lowest Common Ancestor of a BST — SWE II phone screen
  • Expiry counter using timestamp storage and binary search — L5A machine coding round

Tree questions appear more often in mid-level (SWE II) loops. At the senior level, they tend to show up as components inside larger design problems rather than standalone questions.
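
Both SWE II phone-screen questions above have compact canonical solutions. Binary Tree Right Side View, for instance, is a level-order traversal that records the last node seen on each level:

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def right_side_view(root):
    """The last node of each BFS level is the one visible from the right."""
    if not root:
        return []
    view, level = [], deque([root])
    while level:
        for _ in range(len(level)):
            node = level.popleft()
            if node.left:
                level.append(node.left)
            if node.right:
                level.append(node.right)
        view.append(node.val)  # last node popped on this level = rightmost
    return view
```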

Arrays and strings

  • Meeting Rooms II — ATG phone screen
  • Insert Delete GetRandom O(1) — onsite coding round
  • Easy string/array manipulation questions — SWE II OA (typically 2 of the 4 slots)
  • Meeting scheduler with recurring meetings and conflict counting — SWE II onsite, with follow-ups on edge cases

These are warm-up or phone-screen calibration questions, not the bar-setters. Get them right quickly and cleanly — the interviewer is watching your process as much as your answer.
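
Meeting Rooms II is worth having cold, since the meeting-scheduler follow-ups build directly on it. The standard heap solution sorts by start time and tracks active meetings by their end times:

```python
import heapq

def min_meeting_rooms(intervals):
    """Meeting Rooms II: the answer is the peak number of meetings
    running at once. A min-heap holds end times of active meetings."""
    ends = []
    for start, end in sorted(intervals):
        # Reuse the room that frees up earliest, if it's free by now
        if ends and ends[0] <= start:
            heapq.heappop(ends)
        heapq.heappush(ends, end)
    return len(ends)  # heap size only grows when a new room is needed
```

For example, min_meeting_rooms([[0, 30], [5, 10], [15, 20]]) returns 2.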

Design adjacent coding (machine coding)

  • Expiry counter / rate-limiter design — implement in-session with working code, L5A
  • Valid Sudoku + Sudoku Solver — SWE II onsite, HackerRank-style with test cases the interviewer runs live
  • Autocomplete system — ATG onsite, a trie/design hybrid that tests both data structure choice and implementation speed

Machine coding rounds expect working, testable code. Pseudocode won't pass. The interviewer will run your implementation against their test cases during the session.
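
The detailed spec for the L5A expiry-counter round isn't public, so here is a minimal sketch of the likely shape: append timestamps in arrival order (keeping the list sorted) and binary-search for the expiry cutoff on each count. The class name and interface are illustrative assumptions, not the actual prompt.

```python
import bisect
import time

class ExpiryCounter:
    """Counts events that occurred within the last `window_seconds`.
    Timestamps arrive in order, so the list stays sorted and the
    expiry cutoff is found by binary search in O(log n)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.timestamps = []

    def record(self, ts=None):
        self.timestamps.append(time.time() if ts is None else ts)

    def count(self, now=None):
        now = time.time() if now is None else now
        # First index whose timestamp is still inside the window
        cutoff = bisect.bisect_right(self.timestamps, now - self.window)
        self.timestamps = self.timestamps[cutoff:]  # drop expired entries
        return len(self.timestamps)
```

In the round itself, expect follow-ups on concurrency and memory growth; being ready to swap the list for a ring buffer or bucketed counters is the kind of tradeoff discussion machine coding rewards.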

A note on the "30" in the title: the questions above are sourced from confirmed Uber interview reports across multiple levels and time periods. Some — like the ATG questions — come from older loops (2019) and the specific round structure may have evolved, but the problem patterns remain consistent with what candidates report today. Where sources reference aggregate counts (e.g., "7–8 machine coding problems practiced") without naming every specific title, I've included the confirmed ones and left out the rest rather than guessing.

What Uber actually evaluates (beyond getting the answer)

Getting the right answer is necessary. It's not sufficient. Here's what interview reports consistently highlight as the real evaluation signals:

  • Correctness: Your code must compile and pass test cases. Interviewers run them live.
  • Production thinking: Can you discuss edge cases, error handling, and what happens at scale? A solution that works for n=10 but breaks at n=10⁶ is a fail.
  • Tradeoff narration: Explain why you chose one approach over another, out loud, as you go. Silent coding followed by "it works" is not what they're looking for.
  • Architecture depth (senior): In at least one confirmed L4 rejection, the candidate's coding was rated strong but the offer was denied because architecture depth was weak. Design rounds carry real weight.
  • Impact communication (behavioral): Bring metrics and project receipts. The TLM/bar raiser round probes impact, ownership, and cross-team judgment — vague claims about "leading a project" won't clear the bar.

Difficulty expectations by level

New grad / SWE II

  • OA spread: A realistic distribution is 2 easy-to-medium string/array questions, 1 medium tree question, and 1 hard 2D DP question, across 70 minutes total.
  • Phone screen: Medium difficulty, one question, runnable code. Binary Tree Right Side View and Lowest Common Ancestor of a BST are representative of the level.
  • Onsite: A mix of coding, broad system/OOP design (think: meeting scheduler, photo-sharing app), and behavioral. The design rounds at this level test product thinking and OOP structure more than distributed-systems scale.

Senior / L5A

  • Coding rounds: Hard graph and DP problems. Dijkstra variants, palindrome construction, sliding window — expect at least one question where the brute-force approach is clearly insufficient.
  • Machine coding: Design and implement a working mini-system. Candidates who received offers reported practicing 7–8 machine coding problems before their loop.
  • Prep volume signal: Multiple successful L5A candidates reported solving 200+ LeetCode questions, covering roughly 70% of a curated top-150 list, plus dedicated machine coding and system design practice.
  • OA threshold: Solve at least 3 of 4 CodeSignal questions for a strong approval signal.

How to practice Uber interview LeetCode questions the right way

Volume alone does not predict outcomes. Structured reps do. Here's a practice method drawn from what actually works:

  • Time-box each question: 30–35 minutes max before reviewing the solution. If you haven't made meaningful progress by then, study the approach and come back to it later.
  • Articulate the problem before writing a single line. State constraints, edge cases, and expected output out loud. This is exactly what Uber interviewers expect you to do.
  • Brainstorm Big-O before implementing. Uber interviewers expect you to narrate tradeoffs — "this is O(n²), which won't work for the input size, so let's use a hash map for O(n)" — before you start coding. A concrete version of that narration follows this list.
  • Test your own code with invented inputs before the interviewer runs theirs. Catching your own bugs is a strong positive signal.
  • Run full mock sessions, not just solo grinding. Interview-like reps — with a timer, a listener, and verbal narration — build the habits that matter under pressure. Solving problems in silence trains a different skill than performing in a live interview.
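
To make the Big-O narration concrete, here's the shape of that hash-map upgrade, using Two Sum as a generic stand-in (an illustration, not a confirmed Uber question):

```python
def two_sum(nums, target):
    """Brute force checks all pairs in O(n^2). A hash map of
    value -> index turns each complement lookup into O(1),
    bringing the whole pass down to O(n)."""
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []
```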

Verve AI's mock interview feature lets you run timed, realistic Uber-style coding sessions with AI feedback on your narration, tradeoffs, and code quality. The Interview Copilot is there during the real thing too — listening to the conversation and suggesting approaches in real time, so you're not relying on memory alone when the pressure hits. It's the practice-to-performance bridge that solo grinding doesn't cover.

System design and behavioral — what to prepare alongside LeetCode

LeetCode alone won't get you through an Uber senior loop. Here's what else showed up.

System design themes that appeared in Uber loops

  • Near-real-time heatmap aggregation: Geohash bucketing, Kafka input stream (~500K TPS), Flink processing, Redis + Postgres storage. This is a classic Uber-domain problem — know your scale math.
  • Ride-count distributed systems design: How do you count rides across a distributed fleet in near-real-time?
  • Social network graph/query design — appeared in the ATG onsite
  • Photo-sharing / Instagram-like design — SWE II onsite round

Every component choice needs justification. "I'd use Kafka" is not an answer. "I'd use Kafka because the input stream is ~500K events per second and we need durable, ordered delivery to downstream consumers" is.
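
Being ready with the arithmetic is half of "know your scale math." A back-of-envelope pass over the heatmap numbers, where the 500K TPS figure comes from the reported prompt but the event size and window length are illustrative assumptions:

```python
# Back-of-envelope for the heatmap design. 500K TPS is from the
# reported prompt; event size and window length are assumptions.
EVENTS_PER_SEC = 500_000   # Kafka input rate from the prompt
EVENT_BYTES = 100          # assumed: geohash + timestamp + metadata
WINDOW_SEC = 5             # assumed near-real-time aggregation window

ingress_mb_per_sec = EVENTS_PER_SEC * EVENT_BYTES / 1e6   # 50 MB/s raw
events_per_day = EVENTS_PER_SEC * 86_400                  # 43.2 billion
events_per_window = EVENTS_PER_SEC * WINDOW_SEC           # 2.5 million

print(f"{ingress_mb_per_sec:.0f} MB/s in, "
      f"{events_per_day / 1e9:.1f}B events/day, "
      f"{events_per_window / 1e6:.1f}M events per window")
```

Geohash bucketing then determines how many of those 2.5M per-window events collapse into per-cell counts before they reach Redis, which is the number that actually drives your storage choice.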

Behavioral prep in one sentence

Prepare 3–4 project stories with concrete metrics — the TLM/bar raiser round will probe impact, ownership, and cross-team judgment, and vague answers are the fastest way to a no-hire.

Wrapping up

Uber's coding interviews draw from a consistent set of pattern families: graphs and shortest path, DP, trees, arrays, and design-adjacent machine coding. The difficulty scales with level — SWE II candidates face a medium-heavy mix with one hard DP curveball, while L5A candidates should expect hard graph/DP problems plus a machine coding round that demands working, testable code. Beyond correctness, Uber evaluates tradeoff narration, production thinking, and architecture depth.

Practice the way Uber evaluates: timed, narrated, with real feedback. Verve AI's mock interviews are built for exactly that — structured reps that train the habits interviewers actually score on, not just the ability to solve problems in silence.
