Remove Duplicates from a JavaScript Array: Practical Dedup Patterns
Learn practical techniques for removing duplicates from JavaScript arrays, including Set, filter, and Map. This guide covers performance concerns, edge cases, and how to deduplicate complex data.

To remove duplicates from an array in JavaScript, you can convert the array to a Set, filter with index checks, or reduce with a tracking approach. The simplest and most reliable pattern is `const unique = [...new Set(arr)];`, which preserves insertion order for primitives. According to JavaScripting, this approach is a practical starting point for most projects.
Understanding the problem: removing duplicates from an array
De-duplicating arrays is a common data-processing task in JavaScript. In this article we focus on the concrete goal of removing repeated values from a list while preserving order whenever possible. This section introduces the simplest, native approach and why it works for primitive values.
```javascript
const nums = [1, 2, 2, 3, 3, 4, 4];
const unique = [...new Set(nums)];
console.log(unique); // [1, 2, 3, 4]
```

- The Set-based method is concise and fast for primitive data types because Sets enforce uniqueness by value. Spreading a Set back into an array yields a clean result without mutating the original.
```javascript
const letters = ['a', 'b', 'a', 'c', 'b'];
const uniqLetters = Array.from(new Set(letters));
console.log(uniqLetters); // ['a', 'b', 'c']
```
- Trade-offs: while this method is great for primitives, it does not deduplicate objects by value unless you transform them first. If you need a custom key, you'll move to a Map or reduce-based pattern.
- Variations: you can also write a micro-helper like `const dedup = arr => [...new Set(arr)];` for reuse across codebases.
The Set-based solution: why it’s popular and how to use it in practice
```javascript
// Fast primitive dedup with a small helper
function uniquePrimitives(arr) {
  return [...new Set(arr)];
}

console.log(uniquePrimitives([true, false, true, true])); // [true, false]
```

- Why this works: Sets maintain a single instance per value. For arrays of numbers or strings, this is typically all you need.
- When to avoid: if you require deep equality checks for objects, Set will not deduplicate by content without a transformation step.
- Alternatives: filter-based dedup keeps compatibility with older runtimes; however, it's generally slower for large arrays.
```javascript
// Filter-based dedup that preserves order
function uniqueByFilter(arr) {
  return arr.filter((v, i, a) => a.indexOf(v) === i);
}

console.log(uniqueByFilter([1, 2, 2, 3])); // [1, 2, 3]
```

- Performance note: `indexOf` scans the array on every iteration, giving O(n^2) time complexity in the worst case. Use this only when you're certain the array is small or readability is preferred over speed.
Edge cases and performance considerations
```javascript
// NaN and -0/0 handling in Sets
const mixed = [NaN, NaN, 0, -0, 0, '0', '0'];
console.log([...new Set(mixed)]); // [NaN, 0, '0']
```

- Semantics: JavaScript's Set uses SameValueZero semantics, which treats NaN as equal to NaN and 0 and -0 as the same value. This affects how duplicates are detected.
- Big arrays: for millions of items, consider memory usage. A Set needs extra space; if you only need a final list, a streaming approach can reduce peak memory in some workloads.
- Type safety: for heterogeneous data, you may want to normalize values before deduping (e.g., convert numbers to strings or clamp values).
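One way to sketch the streaming idea above is a generator that yields each value the first time it appears (the `dedupStream` name is ours, not a standard API). Note the `seen` Set still grows with the number of unique values; the saving is in not materializing intermediate arrays:

```javascript
// Generator-based dedup: yields each value the first time it is seen,
// so consumers can process items without buffering the full output.
function* dedupStream(iterable) {
  const seen = new Set();
  for (const value of iterable) {
    if (!seen.has(value)) {
      seen.add(value);
      yield value;
    }
  }
}

console.log([...dedupStream([1, 2, 2, 3, 3, 4])]); // [1, 2, 3, 4]
```

Because it accepts any iterable, the same helper works on arrays, generators, or chunked input sources.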
```javascript
// Reducer-based approach (less efficient but flexible)
function dedupeWithReduce(arr) {
  return arr.reduce((acc, cur) => {
    if (!acc.includes(cur)) acc.push(cur);
    return acc;
  }, []);
}

console.log(dedupeWithReduce([1, 2, 2, 3])); // [1, 2, 3]
```

- This reducer-based pattern illustrates the trade-off: readability vs. performance. It's useful when you want to extend the logic (e.g., track counts) during deduplication.
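As a sketch of that count-tracking extension (the `countingDedupe` name is ours), a Map can record occurrence counts while its keys double as the unique list:

```javascript
// Deduplicate while tracking how many times each value appeared.
// Map keys preserve first-seen insertion order.
function countingDedupe(arr) {
  const counts = new Map();
  for (const value of arr) {
    counts.set(value, (counts.get(value) ?? 0) + 1);
  }
  return { unique: [...counts.keys()], counts };
}

const { unique, counts } = countingDedupe([1, 2, 2, 3, 3, 3]);
console.log(unique);        // [1, 2, 3]
console.log(counts.get(3)); // 3
```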
Deduplicating arrays of objects: map-based patterns
```javascript
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Benu' },
  { id: 1, name: 'Ada' },
  { id: 3, name: 'Kai' }
];

// Deduplicate by key property (id) using Map
const dedupById = [...new Map(users.map(u => [u.id, u])).values()];
console.log(dedupById);
// [{ id: 1, name: 'Ada' }, { id: 2, name: 'Benu' }, { id: 3, name: 'Kai' }]
```

- Note: Map preserves the last-seen value for a given key. If you want the first occurrence, swap to a different pattern (e.g., reduce with a lookup Set and push logic).
- Nested data: for deeply nested structures, deduplicate by a composed key (e.g., `${obj.id}_${obj.locale}`) or by a dedicated hashing function.
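If the first occurrence should win instead of the last, a lookup Set inside filter does the job. A small sketch (the `dedupeFirstById` helper is hypothetical):

```javascript
const records = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Benu' },
  { id: 1, name: 'Ada (repeat)' }
];

// Keep the first object seen for each id; later duplicates are skipped.
// seen.add returns the Set itself (truthy), so kept items are recorded
// in the same expression that admits them.
function dedupeFirstById(arr) {
  const seen = new Set();
  return arr.filter(item => !seen.has(item.id) && seen.add(item.id));
}

console.log(dedupeFirstById(records).map(r => r.name)); // ['Ada', 'Benu']
```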
```javascript
// Deduplicate by a computed key using a helper
function dedupeByKey(arr, keyFn) {
  const seen = new Map();
  for (const item of arr) {
    const key = keyFn(item);
    if (!seen.has(key)) seen.set(key, item);
  }
  return Array.from(seen.values());
}

const products = [
  { sku: 'A1', price: 9.99 },
  { sku: 'A2', price: 12.5 },
  { sku: 'A1', price: 9.99 }
];

console.log(dedupeByKey(products, p => p.sku));
```

- This pattern is especially handy when deduplicating large arrays of objects where identity is determined by a specific field.
Patterns for real-world usage and best practices
```javascript
// Reusable dedup helper with an optional key function
function deduplicate(arr, keyFn) {
  if (!keyFn) return [...new Set(arr)];
  const seen = new Map();
  for (const item of arr) {
    const key = keyFn(item);
    if (!seen.has(key)) seen.set(key, item);
  }
  return Array.from(seen.values());
}

console.log(deduplicate([1, 2, 2, 3])); // [1, 2, 3]
console.log(deduplicate([{ id: 1 }, { id: 2 }, { id: 1 }], x => x.id)); // [{ id: 1 }, { id: 2 }]
```

- Prefer pure functions: avoid mutating the input array when possible to reduce surprises in larger codebases.
- Micro-optimizations: for primitive arrays, the Set approach is typically fastest and most readable. For object arrays, prefer Map by a key rather than trying to compare entire objects.
- Testing: write unit tests that cover NaN, -0, and edge cases with mixed types to ensure your dedup logic remains robust across environments.
Quick patterns recap and when to choose which method
- Primitive arrays: Set-based dedup (fast, concise) or filter-based as a readability-friendly alternative.
- Object arrays: dedupe by a key via Map, possibly combined with a small key function.
- Large datasets: profile memory usage; a streaming or chunked approach may be warranted for extremely large inputs.
- Edge cases: account for NaN handling, -0 vs 0, and optional type normalization before dedup.
Steps
Estimated time: 40-60 minutes
1. Decide the dedup strategy: compare Set-based vs Map/reduce approaches and select the one that fits your data shape (primitive vs object arrays) and performance needs. Tip: choose Set for primitives to maximize readability and speed.
2. Implement a small helper: create a reusable function that deduplicates via a key function if needed, so you can reuse it across modules. Tip: keep inputs immutable by returning a new array.
3. Add tests for edge cases: test with NaN, -0/0, and mixed types to verify your dedup logic behaves as expected across environments. Tip: include a test for objects with duplicate keys.
4. Profile performance: run benchmarks with large arrays to ensure you're using the most efficient pattern for your app. Tip: consider chunking or streaming if inputs are huge.
5. Integrate into the codebase: export the helper and document its behavior so other developers can use it confidently. Tip: add TypeScript types if you use TS for better safety.
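As a rough sketch of step 4, `console.time` can compare the Set-based and filter-based patterns (array size and value range here are arbitrary; timings vary by engine and input):

```javascript
// Compare Set-based vs indexOf/filter-based dedup on a larger input.
const big = Array.from({ length: 50_000 }, () => Math.floor(Math.random() * 1_000));

console.time('set');
const bySet = [...new Set(big)];
console.timeEnd('set');

console.time('filter');
const byFilter = big.filter((v, i, a) => a.indexOf(v) === i);
console.timeEnd('filter');

// Both produce the same unique values in first-seen order.
console.log(bySet.length === byFilter.length); // true
```

For serious benchmarking, prefer a dedicated harness that runs many iterations and discards warm-up runs.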
Prerequisites
Required
- Basic command line knowledge
Questions & Answers
What is the simplest way to remove duplicates from an array in JavaScript?
The simplest approach uses a Set: `const uniq = [...new Set(array)];`. This works well for primitive values and preserves insertion order. For objects, you typically deduplicate by a key using Map or a reducer.
Use a Set to deduplicate primitives; for objects, deduplicate by a key with Map.
How can I preserve the original order while removing duplicates?
Both the Set approach and Map-based patterns preserve the original insertion order for the unique elements. Spread the Set back into an array to maintain order as elements first appeared.
Use a Set and spread it back to an array to keep the first-seen order.
What should I do if I have an array of objects and I want to deduplicate by a property like id?
Deduplicate by a key property using Map or a keyed reducer, for example: `const dedup = [...new Map(arr.map(x => [x.id, x])).values()];`. This keeps the last occurrence by default.
If you have objects, deduplicate by a key with Map and keep the latest occurrence.
Is filtering with indexOf a good idea for large arrays?
indexOf inside a filter is O(n^2) in the worst case, which can be slow for large arrays. Use Set or Map-based approaches for better performance in real-world apps.
Using indexOf in a loop can be slow for big lists; prefer Set or Map-based dedup for speed.
What to Remember
- Deduplicate primitives quickly with Set
- Preserve order by spreading Set results
- Deduplicate objects by a key using Map
- Be mindful of NaN and -0 handling in Sets
- Test edge cases and measure performance