Copying JavaScript objects is not as simple as it seems. Understanding how objects and references behave during the copy is critical for web developers and can save hours of debugging time. It becomes increasingly important when you work on large stateful applications, such as those built with React or Vue.
Shallow copy and deep copy refer to how we create copies of objects in JavaScript and what data those copies actually contain. In this article, we will dig into the differences between these two approaches, explore their practical applications, and uncover the potential pitfalls that may arise when using them.
Key Points
- A shallow copy in JavaScript creates a new object that copies the properties of an existing object but keeps the same references to any nested values or objects. This means that modifications to nested objects in a shallow copy also affect the original object and any other shallow copies.
- A deep copy, on the other hand, creates an exact copy of an existing object, including all of its properties and any nested objects, rather than references to them. This makes deep copying useful when you need two separate objects that do not share references, ensuring that changes to one object do not affect the other.
- While deep copying provides the benefit of data integrity, it also has some disadvantages, such as performance impact, increased memory consumption, circular reference issues, function and special object handling, and implementation complexity. It is therefore important to evaluate whether a deep copy is actually required for each particular use case.
What is a "shallow" copy?
A shallow copy is a new object whose properties refer to the same values or objects as the original object. In JavaScript, this is usually achieved with methods such as Object.assign() or the spread syntax ({ ...originalObject }). A shallow copy only creates new references to existing values and objects; it does not clone nested objects, which remain shared between the original and the copy.
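The main example below uses the spread syntax; as a quick illustrative sketch (with a made-up object, not part of the article's main demo), Object.assign() produces the same kind of shallow copy:

const settings = { theme: 'dark', layout: { columns: 2 } };
const copy = Object.assign({}, settings); // equivalent to { ...settings }

console.log(copy.layout === settings.layout); // true — the nested object is shared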
Let's take a look at the following code example. The newly created object shallowCopyZoo is a copy of zoo created with the spread operator, which has some unexpected consequences.
let zoo = {
  name: "Amazing Zoo",
  location: "Melbourne, Australia",
  animals: [
    { species: "Lion", favoriteTreat: "?" },
    { species: "Panda", favoriteTreat: "?" },
  ],
};

let shallowCopyZoo = { ...zoo };
shallowCopyZoo.animals[0].favoriteTreat = "?";

console.log(zoo.animals[0].favoriteTreat); // the new value — the original object was changed too
But let's look at what exactly is in shallowCopyZoo. The properties name and location are primitive values (strings), so their values are copied. However, the animals property is an array of objects, so what is copied is a reference to that array, not the array itself.
You can test this quickly with the strict equality operator (===), if you don't believe me. Two references are only strictly equal when they point to the same object (see primitive data types vs. reference data types). Note that the animals properties of both objects are equal, but the objects themselves are not.
console.log(zoo.animals === shallowCopyZoo.animals); // true
console.log(zoo === shallowCopyZoo); // false
This can cause subtle problems in a code base and is especially difficult to track down in large applications. Modifying a nested object in a shallow copy also affects the original object and any other shallow copies, because they all share the same references.
Deep copy
A deep copy is a technique for creating a new object that is an exact copy of an existing object. This includes copying all of its properties and any nested objects, rather than references to them. Deep cloning is useful when you need two separate objects that do not share references, ensuring that changes to one object do not affect the other.
Programmers often use deep cloning when dealing with application state in complex applications. Creating a new state object without affecting the previous state is critical to maintaining application stability and correctly implementing undo/redo functionality.
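To make the undo scenario concrete, here is a minimal sketch of a history stack built on deep copies. The state shape, update(), and undo() helpers are illustrative assumptions, not code from the original article:

// Snapshot-based undo using deep copies of each previous state
function deepCopy(obj) {
  return JSON.parse(JSON.stringify(obj));
}

const history = [];
let state = { todos: ["write article"], filter: "all" };

function update(changes) {
  history.push(deepCopy(state));  // snapshot the previous state
  state = { ...state, ...changes };
}

function undo() {
  if (history.length > 0) {
    state = history.pop();        // restore the last snapshot
  }
}

update({ filter: "done" });
undo();
console.log(state.filter); // "all" — the snapshot was not affected by later changes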
How to use JSON.stringify() and JSON.parse() for deep copying
A popular, library-free way to deep copy is to use the built-in JSON.stringify() and JSON.parse() methods.
The JSON.parse(JSON.stringify()) approach is not perfect, however. For example, special data types such as Date are converted to strings, and undefined values are silently dropped. As with all of the options in this article, you should weigh it against your specific use case.
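A quick sketch of those limitations (the object shape here is just an illustrative assumption):

const original = {
  createdAt: new Date(), // a Date instance
  nickname: undefined,   // an undefined property
  score: 42,
};

const copy = JSON.parse(JSON.stringify(original));

console.log(typeof copy.createdAt); // "string" — no longer a Date object
console.log("nickname" in copy);    // false — undefined properties are dropped
console.log(copy.score);            // 42 — plain values survive intact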
In the following code, we use these methods to create a deepCopy function and deep clone an object. We then copy the playerProfile object and modify the copy without affecting the original. This demonstrates the value of deep copying for maintaining independent objects that do not share references.
const playerProfile = {
  name: 'Alice',
  level: 10,
  achievements: [
    { title: 'Fast Learner', emoji: '?' },
    { title: 'Treasure Hunter', emoji: '?' }
  ]
};

function deepCopy(obj) {
  return JSON.parse(JSON.stringify(obj));
}

const clonedProfile = deepCopy(playerProfile);
console.log(clonedProfile); // same output as playerProfile

// Modify the cloned profile without affecting the original
clonedProfile.achievements.push({ title: 'Marathon Runner', emoji: '?' });

console.log(playerProfile.achievements.length); // Output: 2
console.log(clonedProfile.achievements.length); // Output: 3
Libraries for deep copying
There are also a variety of third-party libraries that provide deep copy solutions.
- Lodash's cloneDeep() function handles circular references, functions, and special objects correctly (a short usage sketch follows this list).
- jQuery's extend() function, when called with deep = true, performs a deep copy.
- The immer library is built for React/Redux developers and provides convenient tools for producing modified copies of state objects.
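A minimal sketch of deep copying with Lodash, assuming Lodash is installed (the sample object is a made-up example):

// Assumes Lodash is available, e.g. via `npm install lodash`
const cloneDeep = require('lodash/cloneDeep');

const original = {
  name: 'Amazing Zoo',
  animals: [{ species: 'Lion' }],
};
original.self = original; // a circular reference, which the JSON approach cannot handle

const copy = cloneDeep(original);

console.log(copy.animals === original.animals); // false — nested objects are cloned
console.log(copy.self === copy);                // true — the cycle is preserved in the copy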
If for some reason you don't want to use the JSON methods or third-party libraries, you can also write a custom deep copy function in vanilla JavaScript. Such a function recursively iterates over an object's properties and creates a new object with the same properties and values, as sketched below.
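Here is a minimal sketch of such a recursive function. It only handles plain objects and arrays; Dates, Maps, Sets, functions, and circular references would need extra handling, as discussed in the next section:

function deepCopyRecursive(value) {
  // Primitives (and functions) are returned as-is
  if (value === null || typeof value !== 'object') {
    return value;
  }
  // Arrays are copied element by element
  if (Array.isArray(value)) {
    return value.map(deepCopyRecursive);
  }
  // Plain objects are copied property by property
  const copy = {};
  for (const key of Object.keys(value)) {
    copy[key] = deepCopyRecursive(value[key]);
  }
  return copy;
}

const original = { a: 1, nested: { b: [2, 3] } };
const clone = deepCopyRecursive(original);
console.log(clone.nested === original.nested); // false — the nested object was cloned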
Disadvantages of deep copy
While deep copying provides great benefits for data integrity, you should evaluate whether a deep copy is actually required for each specific use case. In some cases, a shallow copy or another technique for managing object references may be more appropriate, offering better performance and less complexity.
- Performance impact: Deep copying can be computationally expensive, especially when dealing with large or complex objects. Since the deep copy process iterates over every nested property, it can take significant time and negatively affect the performance of your application.
- Memory consumption: Creating a deep copy duplicates the entire object hierarchy, including all nested objects. This leads to increased memory usage, which can become a problem in memory-constrained environments or when processing large datasets.
- Circular references: Deep copying can cause problems when an object contains a circular reference (that is, when one of an object's properties refers, directly or indirectly, to the object itself). Circular references can cause infinite loops or stack overflow errors during deep copying, and handling them requires additional logic (see the sketch after this list).
- Function and special object handling: A deep copy may not handle functions or objects with special characteristics (for example, Date, RegExp, DOM elements) as expected. When deep copying an object containing a function, a reference to the function may be copied, but the function's closure and bound context will not be. Likewise, objects with special features may lose their unique properties and behavior when deep copied.
- Implementation complexity: Writing a custom deep copy function can be complicated, and built-in approaches like JSON.parse(JSON.stringify(obj)) have limitations of their own, such as not handling functions, circular references, or special objects correctly. While third-party libraries such as Lodash's _.cloneDeep() handle deep copying more robustly, adding an external dependency just for deep copying may not always be desirable.
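To make the circular reference problem concrete, here is a small sketch. The WeakMap-based clone function is an illustrative assumption, not the article's recommended implementation:

const node = { name: 'root' };
node.self = node; // circular reference

// The JSON approach throws on cycles:
// JSON.parse(JSON.stringify(node)); // TypeError: Converting circular structure to JSON

// One common workaround is to track already-copied objects in a WeakMap
function deepCopyWithCycles(value, seen = new WeakMap()) {
  if (value === null || typeof value !== 'object') return value;
  if (seen.has(value)) return seen.get(value); // reuse the copy we already made
  const copy = Array.isArray(value) ? [] : {};
  seen.set(value, copy);
  for (const key of Object.keys(value)) {
    copy[key] = deepCopyWithCycles(value[key], seen);
  }
  return copy;
}

const cloned = deepCopyWithCycles(node);
console.log(cloned.self === cloned); // true — the cycle is reproduced without an error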
Conclusion
Thank you for taking the time to read this article. Shallow and deep copying are more complex than a beginner might expect. While each approach has its pitfalls, taking the time to review and weigh these options will help ensure that your application and its data behave the way you intend.
FAQs about shallow and deep copying in JavaScript
What is the main difference between shallow and deep copying in JavaScript?
The main difference between shallow and deep copying is how they handle properties that are objects. In a shallow copy, the copied object shares the same references to nested objects as the original, so changes to nested objects are reflected in both the original and the copy. A deep copy, on the other hand, creates new instances of nested objects, meaning changes to nested objects in the copy do not affect the original.
How does the spread operator work in a shallow copy?
The spread operator (...) in JavaScript is commonly used for shallow copying. It copies all enumerable own properties of one object to another. However, it only copies first-level properties and references to nested objects, so changes to nested objects affect both the original object and the copy.
Can I use the JSON methods for deep copying?
Yes, you can use the JSON methods to perform a deep copy in JavaScript. Combining JSON.stringify() and JSON.parse() creates a deep copy of an object: JSON.stringify() converts the object to a string, and JSON.parse() parses that string back into a new object. However, this approach has limitations: it does not copy methods, and it is not suitable for special JavaScript objects such as Date, RegExp, Map, Set, and so on.
What are the limitations of shallow copying?
A shallow copy only duplicates first-level properties and copies references to nested objects. Therefore, if the original object contains nested objects, changes to those nested objects affect both the original and the copy, which can lead to unexpected results and bugs in your code.
How does the Object.assign() method work in shallow copying?
The Object.assign() method copies the values of all enumerable own properties from one or more source objects to a target object, and returns the target object. However, it performs a shallow copy, which means it only copies first-level properties and references to nested objects.
What is the best way to deep copy objects in JavaScript?
The best way to deep copy an object in JavaScript depends on the specific requirements of your code. If the object does not contain methods or special JavaScript objects, a combination of JSON.stringify() and JSON.parse() works well. For more complex objects, you may want to use a library like Lodash, which provides a deep cloning function.
Can I use the spread operator for deep copying?
No, the spread operator in JavaScript only performs a shallow copy. It copies first-level properties and references to nested objects. To perform a deep copy, you need to use another method or a library.
What is the performance impact of deep copying?
Deep copying can consume more resources than shallow copying, especially for large objects, because a deep copy creates new instances of all nested objects, which takes more memory and processing time.
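A rough, illustrative way to compare the two on your own data (the object below is an assumption, and timings will vary by engine and object size):

// Build a reasonably large nested object for the comparison
const big = { items: Array.from({ length: 100000 }, (_, i) => ({ id: i, tags: ['a', 'b'] })) };

console.time('shallow copy');
const shallow = { ...big };
console.timeEnd('shallow copy');

console.time('deep copy (JSON)');
const deep = JSON.parse(JSON.stringify(big));
console.timeEnd('deep copy (JSON)');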
How do I deal with circular references in deep copying?
Deep copy methods such as JSON.stringify() and JSON.parse() do not handle circular references and will throw an error. A circular reference occurs when an object's property refers back to the object itself. To handle circular references, you need to use a library that supports them, such as Lodash.
Why should I care about the difference between shallow copy and deep copy?
Understanding the difference between shallow and deep copying is critical to managing data in JavaScript, because it affects how your objects interact with each other. If you are not careful, shallow copying can lead to unexpected results and bugs, as changes to nested objects affect both the original and the copy. A deep copy, on the other hand, ensures that your copied object is completely independent of the original.