
On Day 6, we used two pointers to exploit the structure of a sorted array. We knew the largest squares lived at the ends, so we filled our result from back to front by always picking the current maximum. The insight was spatial: the extremes tell you where to look.
Today's problem asks a completely different question. We are not measuring distances or squares. We are asking: which element dominates? And the insight that unlocks it is not spatial at all. It is about survival. If one element appears more than half the time, you cannot cancel it out no matter how hard every other element tries. That is the idea behind the Boyer-Moore Voting Algorithm, one of the most elegant and counterintuitive algorithms you will encounter in early DSA study.
Let us get into it.
Given an array nums of size n, return the majority element. The majority element is the element that appears more than ⌊n / 2⌋ times. You may assume that the majority element always exists in the array.
Example 1:

```
Input: nums = [3, 2, 3]
Output: 3
```

Example 2:

```
Input: nums = [2, 2, 1, 1, 1, 2, 2]
Output: 2
```

Constraints:

```
1 <= nums.length <= 5 × 10^4
-10^9 <= nums[i] <= 10^9
```

Here is the single observation that makes this problem solvable in O(1) space: if you pair every occurrence of the majority element with a different element and cancel them both out, the majority element still has occurrences left over.
Say the array has 7 elements and the majority element appears 4 times. That means the remaining 3 elements are not the majority. Even if every single non-majority element paired up with a majority occurrence, you would cancel at most 3 pairs and be left with 1 uncancelled majority element. It cannot be fully erased because it appears strictly more than half the time.
The naive approach, which most people reach for first, is to use a frequency map: count how many times each element appears and return the one whose count exceeds n / 2. That costs O(n) time and O(n) space because you need to store a count for every distinct element.
But the cancellation insight tells us we do not need to count everyone. We only need to track one candidate and one confidence score. If we lose confidence in the current candidate completely, we switch to the next element. Whatever survives to the end must be the majority. That is O(n) time and O(1) space.
The first instinct is almost always the hash map approach:
```javascript
// Naive approach: O(n) time, O(n) space
var majorityElement = function(nums) {
    const counts = {};
    for (const num of nums) {
        counts[num] = (counts[num] || 0) + 1;
        if (counts[num] > Math.floor(nums.length / 2)) return num;
    }
};
```

This works. It is correct. And in an interview, it is a perfectly reasonable starting point. But when the interviewer asks "can you do it in O(1) space?", this approach hits a wall. The counts object grows with the number of distinct elements, and there is no way to compress it without losing information.
The mental shift required is to stop thinking about frequency and start thinking about net votes. Instead of asking "how many times does each element appear?", ask "what happens when different elements cancel each other out?". A majority element, by definition, has more appearances than all other elements combined. So no matter what order they appear in, it cannot be fully outvoted. This reframing from counting to cancellation is the key.
We track exactly two variables throughout the entire algorithm.
majority holds the current candidate for the majority element. It gets replaced whenever our confidence hits zero. count holds our current confidence in the candidate. It starts at zero, meaning we have no candidate yet.
The rules on each step are as follows. If count is zero, we have no active candidate, so we adopt the current element as the new candidate and set count to 1. If the current element matches the active candidate, we increment count because this is another vote in favor of our candidate. If the current element does not match, we decrement count because this is one cancellation against our candidate. When count reaches zero again, our candidate has been fully cancelled and we are ready to adopt the next element we see.
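Expressed in isolation, the per-step rule is tiny. Here is a minimal sketch of just that rule (the helper name voteStep is mine, purely for illustration; the full implementation appears later in this post):

```javascript
// One voting step: returns the updated [majority, count] pair.
function voteStep(majority, count, element) {
    if (count === 0) return [element, 1];                    // adopt a new candidate
    if (element === majority) return [majority, count + 1];  // reinforce the candidate
    return [majority, count - 1];                            // cancel one vote
}
```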
Because the majority element is guaranteed to exist, the last candidate standing after the entire array is processed will always be the correct answer.
We will trace through nums = [2, 2, 1, 1, 1, 2, 2] completely.
Initial state:

```
nums     = [2, 2, 1, 1, 1, 2, 2]
indices  =  0  1  2  3  4  5  6
majority = 0   (placeholder, no candidate yet)
count    = 0
```

Iteration 1 (i = 0):

```
[2] 2 1 1 1 2 2
 ^
 i

nums[i] = 2
count == 0, so adopt new candidate.
majority = 2
count = 1
```

We had no candidate, so the first element becomes our default pick. Confidence is 1.
Iteration 2 (i = 1):

```
2 [2] 1 1 1 2 2
   ^
   i

nums[i] = 2
nums[i] == majority (2 == 2), so increment count.
majority = 2
count = 2
```

Another 2 reinforces our candidate. Confidence grows to 2.
Iteration 3 (i = 2):

```
2 2 [1] 1 1 2 2
     ^
     i

nums[i] = 1
nums[i] != majority (1 != 2), so decrement count.
majority = 2
count = 1
```

A 1 cancels one of our 2s. The candidate is still 2 but confidence drops to 1.
Iteration 4 (i = 3):

```
2 2 1 [1] 1 2 2
       ^
       i

nums[i] = 1
nums[i] != majority (1 != 2), so decrement count.
majority = 2
count = 0
```

Another cancellation. Confidence hits zero. Our candidate has been fully neutralised. We do not replace it yet; we wait for the next element.
Iteration 5 (i = 4):

```
2 2 1 1 [1] 2 2
         ^
         i

nums[i] = 1
count == 0, so adopt new candidate.
majority = 1
count = 1
```

With no active candidate, we adopt 1 as our new pick. Note that 2 is still the true majority element, but at this point in the scan we have no memory of that. The algorithm does not need to remember. It just needs to survive.
Iteration 6 (i = 5):

```
2 2 1 1 1 [2] 2
           ^
           i

nums[i] = 2
nums[i] != majority (2 != 1), so decrement count.
majority = 1
count = 0
```

A 2 cancels our candidate 1. Confidence drops to zero again.
Iteration 7 (i = 6):

```
2 2 1 1 1 2 [2]
             ^
             i

nums[i] = 2
count == 0, so adopt new candidate.
majority = 2
count = 1
```

With zero confidence, we adopt 2 as our candidate. This is the final element in the array. The loop ends, and we return majority = 2.

Final output: 2. Correct.
Bonus trace for Example 1: nums = [3, 2, 3]
At i = 0: count == 0, adopt 3. majority = 3, count = 1. At i = 1: 2 != 3, decrement. count = 0. At i = 2: count == 0, adopt 3. majority = 3, count = 1. Return 3. Correct.
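If you want to reproduce these traces on your own machine, here is a small instrumented version. It is just the algorithm with logging bolted on; the output format is my own, not part of the solution:

```javascript
// Instrumented Boyer-Moore: logs candidate and confidence after each element.
function traceMajority(nums) {
    let count = 0;
    let majority = 0;
    nums.forEach((num, i) => {
        if (count === 0) majority = num;    // adopt when confidence is zero
        count += num === majority ? 1 : -1; // reinforce or cancel
        console.log(`i=${i} num=${num} -> majority=${majority}, count=${count}`);
    });
    return majority;
}

traceMajority([2, 2, 1, 1, 1, 2, 2]); // ends with majority = 2, count = 1
```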
1. The majority element appears strictly more than n / 2 times. This is the mathematical guarantee that makes the algorithm work. If the majority element appeared exactly n / 2 times, it could theoretically be fully cancelled by the remaining n / 2 non-majority elements. The strict inequality is not a minor detail; it is the entire foundation of correctness.
2. Every cancellation removes one majority vote and one non-majority vote. When count decrements, it means we have seen one occurrence of our current candidate paired against one occurrence of a different element. These two effectively annihilate each other. Since there are more majority elements than non-majority elements in total, the majority cannot be completely annihilated across the whole array.
3. The final candidate is not necessarily the element that was a candidate the longest. As we saw in the dry run, the algorithm switched candidates twice before landing on the correct answer. The correctness guarantee does not come from persistence of the candidate but from the mathematical impossibility of the majority element being fully cancelled. Whatever survives the cancellation process must be the majority.
4. The algorithm makes no assumptions about the order of elements. The array could have all majority elements clustered at the front, at the back, or scattered throughout. The cancellation logic handles all orderings correctly because the net count for the majority element across the entire array is always positive, regardless of when those elements appear.
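You can convince yourself of the order-independence claim with a quick check, using the majorityElement implementation shown next (a spot check, of course, not a proof):

```javascript
// Same multiset, three different orderings: the survivor is always 2.
const orderings = [
    [2, 2, 2, 2, 1, 1, 1], // majority clustered at the front
    [1, 1, 1, 2, 2, 2, 2], // majority clustered at the back
    [2, 1, 2, 1, 2, 1, 2], // majority interleaved throughout
];
orderings.forEach(arr => console.log(majorityElement(arr))); // 2, 2, 2
```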
```javascript
var majorityElement = function(nums) {
    let count = 0;
    let majority = 0;
    for (let i = 0; i < nums.length; i++) {
        if (count === 0) {
            majority = nums[i]; // adopt a new candidate when confidence is zero
        }
        if (nums[i] === majority) {
            count++; // reinforce the candidate
        } else {
            count--; // cancel one vote
        }
    }
    return majority;
};
```

We initialize count to 0 and majority to 0. The value 0 is just a placeholder; it has no significance because it gets replaced the moment we enter the loop with count == 0.
The outer for loop visits every element exactly once. The first if (count === 0) block handles candidate adoption. Whenever confidence is zero, the current element becomes the new candidate immediately before we evaluate the next condition. This ordering is important: we set majority = nums[i] first, then the second if evaluates nums[i] === majority, which is now true, so count increments to 1. There is no separate path for the adoption case; adoption and the vote increment happen in sequence within the same iteration.
The second condition increments count if the current element matches our candidate and decrements it otherwise. After the loop completes, majority holds the element that survived all cancellations, which is guaranteed to be the majority element, and we return it.
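If the two-if sequencing feels fragile, an equivalent formulation writes the adoption vote out explicitly. This is a sketch of the same algorithm, just more verbose (the name majorityElementExplicit is mine):

```javascript
// Equivalent structure: adoption sets count to 1 directly.
var majorityElementExplicit = function(nums) {
    let count = 0;
    let majority = 0;
    for (const num of nums) {
        if (count === 0) {
            majority = num; // adopt the current element as the new candidate
            count = 1;      // the adopted element is its own first vote
        } else if (num === majority) {
            count++;        // reinforce
        } else {
            count--;        // cancel
        }
    }
    return majority;
};
```

Note that this is exactly the structure that goes wrong in Mistake 2 below if you forget the count = 1 line.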
| Metric | Complexity | Explanation |
|---|---|---|
| Time | O(n) | We make a single pass through the array. Every element is visited exactly once. |
| Space | O(1) | Only two variables are maintained regardless of the size of the input. |
Mistake 1: Worrying that the initial value of majority causes a bug.
Some implementations avoid initializing majority = 0 out of concern that 0 could be an actual element in the array. This concern is unfounded. The initial value of majority only matters when count > 0, but the very first iteration always has count == 0, which triggers an immediate reassignment before any vote is cast. The placeholder value is always overwritten before it influences any comparison.
Mistake 2: Splitting the adoption case into a branch that skips the vote.
A common rewrite handles adoption as a separate branch that skips the rest of the iteration:

```javascript
// Subtle error: skips the vote increment for the adopted candidate
if (count === 0) {
    majority = nums[i];
    continue; // bug unless you also set count = 1 manually
}
```

Using continue forces you to manually set count = 1. Forgetting that means every newly adopted candidate starts with zero confidence, which breaks the next comparison. The original two-if structure is cleaner because adoption and the vote increment flow naturally without manual bookkeeping.
Mistake 3: Trying to verify the result after the loop.
If the problem did not guarantee the existence of a majority element, we would need a second pass to count the candidate's actual occurrences and confirm it exceeds n / 2. Since the guarantee is given in the constraints here, that second pass is unnecessary and adds O(n) work for no reason.
Mistake 4: Treating count as a frequency.
count does not tell you how many times the majority element has appeared. It tells you the net lead of the candidate over its opposition at this moment in the scan. It can be 1 even if the majority element has appeared 5 times, if it also absorbed 4 cancellations. Treating count as a frequency is a conceptual error that causes bugs in variations of this problem like LeetCode 229.
Mistake 5: Applying Boyer-Moore when a majority is not guaranteed.
Boyer-Moore always produces a candidate, even when no majority exists. For the input [1, 2, 3], it would return 3, which is a wrong answer if no majority element actually exists. The algorithm is only correct under the guarantee stated in the constraints. In problems without that guarantee, a verification pass is mandatory.
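For those unguaranteed variants, a version with a verification pass might look like this (a sketch; returning null for "no majority" is my own convention):

```javascript
// Boyer-Moore plus verification, for inputs where a majority may not exist.
function majorityElementOrNull(nums) {
    let count = 0;
    let majority = 0;
    for (const num of nums) {
        if (count === 0) majority = num;
        count += num === majority ? 1 : -1;
    }
    // Second pass: confirm the candidate truly exceeds n / 2 occurrences.
    const occurrences = nums.filter(num => num === majority).length;
    return occurrences > Math.floor(nums.length / 2) ? majority : null;
}

majorityElementOrNull([1, 2, 3]); // null — the unverified candidate 3 is rejected
```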
Step 1: Start with the hash map and be transparent about its cost. Say: "My first approach is to count frequencies with a hash map and return the element whose count exceeds n / 2. That is O(n) time and O(n) space. But I think we can eliminate the extra space."
Step 2: Introduce the cancellation intuition before naming the algorithm. Say: "The key insight is that a majority element appears more than half the time. Even if every other element pairs up to cancel it, there will still be majority occurrences left over. So instead of tracking everyone, we track one candidate and see if it survives."
Step 3: Explain what count actually represents. Say: "count is not a frequency. It is a net confidence score. It goes up when we see the candidate and down when we see anything else. When it hits zero, the candidate has been neutralised and we adopt the next element we see."
Step 4: Trace one or two iterations before writing code. Walk through the first three elements of [2, 2, 1, 1, 1, 2, 2] to show how adoption, reinforcement, and cancellation work in practice. Interviewers appreciate seeing you simulate your own algorithm before committing to code.
Step 5: Name the algorithm and state complexity at the end. Say: "This is the Boyer-Moore Voting Algorithm. It runs in O(n) time with O(1) space because we maintain only two variables and make a single pass through the array."
LeetCode 229: Majority Element II extends this problem to find all elements that appear more than n / 3 times. The solution requires two candidates and two counts running simultaneously, a direct generalization of Boyer-Moore that is a very common follow-up interview question.
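Here is a sketch of that generalization, assuming the standard two-candidate extension; treat it as an outline rather than a polished LeetCode 229 solution:

```javascript
// Majority Element II sketch: find elements appearing more than n / 3 times.
// At most two such elements can exist, so two counters run in parallel.
function majorityElementsOverThird(nums) {
    let cand1 = null, count1 = 0;
    let cand2 = null, count2 = 0;
    for (const num of nums) {
        if (num === cand1) count1++;
        else if (num === cand2) count2++;
        else if (count1 === 0) { cand1 = num; count1 = 1; }
        else if (count2 === 0) { cand2 = num; count2 = 1; }
        else { count1--; count2--; } // num cancels one vote from each candidate
    }
    // Verification is mandatory here: surviving candidates are not guaranteed.
    const threshold = Math.floor(nums.length / 3);
    return [cand1, cand2].filter(
        c => c !== null && nums.filter(n => n === c).length > threshold
    );
}
```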
LeetCode 136: Single Number uses a different cancellation trick, XOR, to find the element that appears only once while every other element appears exactly twice. The mindset of using mathematical properties of the elements rather than storing counts is the same core instinct as Boyer-Moore.
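For a feel of how compact that cancellation is, the entire LeetCode 136 solution is one reduce:

```javascript
// Single Number: XOR cancels every value that appears twice, leaving the single one.
const singleNumber = nums => nums.reduce((acc, num) => acc ^ num, 0);

singleNumber([4, 1, 2, 1, 2]); // 4
```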
LeetCode 1150: Check If a Number Is Majority Element in a Sorted Array asks you to verify whether a given element is a majority in a sorted array. Binary search handles it efficiently, but the definition of majority is identical and understanding Boyer-Moore gives you a second approach to compare against.
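A sketch of the binary-search angle: locate the first occurrence of the target, then check whether the slot ⌊n / 2⌋ positions later still holds the target. The lowerBound helper is written out here since JavaScript has no built-in equivalent:

```javascript
// First index at which nums[i] >= target in a sorted array.
function lowerBound(nums, target) {
    let lo = 0, hi = nums.length;
    while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (nums[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

// LeetCode 1150 idea: target is a majority iff it still occupies the
// position floor(n / 2) steps after its first occurrence.
function isMajorityInSorted(nums, target) {
    const probe = lowerBound(nums, target) + Math.floor(nums.length / 2);
    return probe < nums.length && nums[probe] === target;
}

isMajorityInSorted([2, 4, 5, 5, 5, 5, 5, 6, 6], 5); // true
```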
LeetCode 912: Sort an Array is worth noting as a natural interview follow-up, because sorting yields an alternative solution to Majority Element: after sorting, the element at index ⌊n / 2⌋ must be the majority. Interviewers sometimes ask you to compare that O(n log n) approach against Boyer-Moore's single O(n) pass, making it a clean bridge to sorting-based problem discussions.
This problem is about elimination, not counting. The Boyer-Moore algorithm does not track how often anything appears. It tracks whether a candidate can survive being cancelled by everything else, which is a fundamentally different mental model from the hash map approach.
The guaranteed existence of a majority element is load-bearing. Without it, Boyer-Moore is incorrect. With it, the algorithm is provably correct. Always check the constraints before applying this algorithm to a new problem.
count measures net lead, not frequency. It represents how far ahead the current candidate is against its combined opposition at this point in the scan. This distinction matters deeply when you extend the algorithm to problems like Majority Element II.
Constant space solutions often require reframing the question. The jump from O(n) space to O(1) space here was not achieved by optimizing the hash map. It was achieved by asking a completely different question: not "how often does each element appear?" but "which element cannot be cancelled?". That kind of reframing is a skill worth developing consciously.
Day 7 is complete. We went from a problem that looks like it needs counting to a solution that does no counting at all. The Boyer-Moore Voting Algorithm is one of those rare algorithms that feels like cheating when you first see it, and then feels inevitable once you internalize the cancellation argument.
On Day 8, we continue building our array problem toolkit. The pattern recognition we are developing now, knowing when to reach for two pointers, hash maps, sliding windows, or voting, will pay dividends across every medium and hard problem we tackle later in the challenge.
If something clicked for you today, or if you are still wrestling with why Boyer-Moore works, drop a comment below. I read every one and the questions readers ask shape what I explain more deeply in future posts.
See you on Day 8. 🚀
This post is part of my 250 Days DSA Challenge, solving one problem every day with a focus on intuition, patterns, and interview-ready thinking.