Concepts of JavaScript — Part 3


📄 Table of contents

• Arrays vs Objects
• Prototypes
• Event Loop
• Understanding the Critical Rendering Path
• Normal vs Asynchronous vs Deferred JS (NAD)
• Debounce and Throttle
• try-catch-finally
• Shallow Copy vs Deep Copy

Arrays vs Objects

Each item in the array is stored in consecutive blocks of memory and has an index, which makes retrieving data easy as long as you know the index of the item.

Objects store data in the form of key/value pairs {key: value}. JavaScript objects are implemented using hash tables under the hood.
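
A trivial sketch of the two access patterns (the sample data is made up for illustration):

// Array: items live at consecutive indexes, so lookup by index is direct.
const fruits = ['apple', 'banana', 'cherry'];
console.log(fruits[1]); // "banana"

// Object: values are stored against keys (a hash-table style lookup).
const person = { name: 'Asha', age: 30 };
console.log(person.age); // 30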

Question

What is the difference between Object and Map?

Solution

  1. Objects can have only strings and symbols as their keys, while a Map can have primitives, functions, or even objects as its keys.
  2. Keys in a Map are ordered by insertion, which is not something you can rely on for objects.
  3. The size of a Map can be determined easily using its size property, while for objects it has to be computed (e.g. with Object.keys(obj).length).
  4. A Map may perform better in scenarios involving frequent addition and removal of key/value pairs (see the sketch after this list).
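
A short sketch illustrating points 1 to 3 above (the keys and values here are arbitrary):

const objKey = { id: 1 };

// Map: any type can be a key, insertion order is preserved, and size is built in.
const map = new Map();
map.set(objKey, 'object as key');
map.set(42, 'number as key');
console.log(map.size);        // 2
console.log(map.get(objKey)); // "object as key"

// Object: keys are coerced to strings (or symbols), and size needs Object.keys().
const obj = {};
obj[objKey] = 'value';                // key becomes "[object Object]"
console.log(Object.keys(obj).length); // 1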

◉ PROTOTYPES

JS is a prototype-based language, and INHERITANCE in JS is based on “prototypes”. By default, every “FUNCTION” has a property called “PROTOTYPE”. This property is, by default, an (almost) empty object, and you can add properties and methods to it. Now, when you create objects from the function, they will inherit the properties and methods defined on the function’s prototype.

So, when a function is created in JavaScript, the JavaScript engine adds a prototype property to the function. This prototype property is an object (called the prototype object) that has a constructor property by default. The constructor property points back to the function on which the prototype object is a property. We can access the function’s prototype property using functionName.prototype.

The “GOD” constructor function has a prototype property which points to its prototype object. The prototype object has a constructor property which points back to the “GOD” constructor function. Let’s see an example below:
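
The original example was embedded as an image; below is a minimal reconstruction along the same lines (the greet method and property names are assumptions for illustration):

function God(name) {
  this.name = name;
}

// Every function gets a prototype object with a constructor property.
console.log(typeof God.prototype);              // "object"
console.log(God.prototype.constructor === God); // true

// Methods added to the prototype are inherited by all instances.
God.prototype.greet = function () {
  return 'Hello from ' + this.name;
};

const human = new God('human');
console.log(human.greet()); // "Hello from human"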


Creating an object using the constructor function

When an object is created (say, human) in JavaScript, the JavaScript engine adds a __proto__ property to the newly created object, which is called “dunder proto”.

dunder proto or __proto__ points to the prototype object of the constructor function.

human.__proto__ === God.prototype; // true

The prototype object of the constructor function is shared among all the objects created using that constructor function.

Since the prototype object is shared among all the objects created using the constructor function, its properties and methods are shared among them as well. If an object A assigns a new primitive value to one of these properties, it creates its own (shadowing) copy on the instance, so the other objects are not affected (see the sketch below).
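
A small sketch of that sharing behaviour, continuing with the hypothetical God constructor:

function God() {}
God.prototype.power = 100; // primitive value on the shared prototype

const a = new God();
const b = new God();

// Assigning through an instance does not touch the prototype; it creates
// an "own" (shadowing) property on that instance, so b is unaffected.
a.power = 50;
console.log(a.power);             // 50  (own property)
console.log(b.power);             // 100 (still read from the prototype)
console.log(God.prototype.power); // 100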

Must read @ https://medium.com/better-programming/prototypes-in-javascript-5bba2990e04b

◉ EVENT LOOP

The JS runtime can do only one thing at a time. The reason we can do things concurrently is that the browser is much more than just the runtime.

The JS runtime engine “V8” has a Heap (where memory allocations happen) and a Stack (LIFO). In addition to the JS runtime engine, we have the DOM, timers (setTimeout etc.) and XHR (AJAX) requests, which are all part of the Web APIs provided by the browser. However, these are NOT part of the JS runtime engine “V8”.

Chrome Browser Internals

The event loop’s job is to look at the stack and the task queue. If the stack is empty, it takes the first thing on the queue and pushes it onto the stack, which effectively runs it.
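
A classic sketch of that behaviour: even with a 0 ms delay, the setTimeout callback runs only after the stack is empty.

console.log('start');

setTimeout(() => {
  // Queued via the browser's timer Web API; the event loop pushes this
  // callback onto the stack only once the running code has finished.
  console.log('timeout callback');
}, 0);

console.log('end');

// Output: start, end, timeout callback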

◉ Understanding the Critical Rendering Path

source: https://bitsofco.de/understanding-the-critical-rendering-path/

When a browser receives the HTML response for a page from the server, there are a series of steps to be taken before pixels are drawn on the screen. The sequence the browser needs to run through for the initial paint of the page is called the “Critical Rendering Path”.

Knowledge of the CRP is incredibly useful for understanding how a site’s performance can be improved. There are 6 stages to the CRP -

  1. Constructing the DOM Tree
  2. Constructing the CSSOM Tree
  3. Running JavaScript
  4. Creating the Render Tree
  5. Generating the Layout
  6. Painting
Critical Rendering Path (CRP)

[1]. A good thing about HTML is that it can be parsed and rendered in parts. However, the same is NOT the case with CSS & JS (which block the rendering of the page).

In terms of HTML, the full document doesn’t have to be loaded for content to start appearing on the page.

[2]. CSS is considered a “render blocking resource”.

CSS’s render-blocking does NOT block DOM construction; it only blocks the content from displaying/rendering until the CSSOM is ready.

CSS can also be “script blocking”. This is because JavaScript files must wait until the CSSOM has been constructed before they can run.

Also, unlike HTML, CSS cannot be used in parts because of its inherent cascading nature.

[3]. JavaScript is considered a “parser blocking resource”. This means that the parsing of the HTML document itself is blocked by JavaScript.

[4]. The Render Tree is a combination of both the DOM and CSSOM. It is a tree that represents what will be eventually rendered on the page. This means that it only captures the visible content and will NOT include, for example, elements that have been hidden with CSS using display: none.

[5]. The viewport size is determined by the meta viewport tag provided in the document head or, if no tag is provided, the default viewport width of 980px is used.

<meta name="viewport" content="width=device-width, initial-scale=1">

[6]. Finally, in the Painting step, the visible content of the page can be converted to pixels to be displayed on the screen.

Render blocking vs Parser Blocking

CSS resources are different. When the parser sees a stylesheet to load, it issues the request to the server, and moves on. If there are other resources to load, these can all be fetched in parallel (subject to some HTTP restrictions). But only when the CSS resources are loaded and ready can the page be painted on the screen. That’s render blocking, and because the fetches happen in parallel, it’s a less serious slowdown.

  • Parser blocking is not quite as simple as that in some modern browsers. They have some ability to tentatively parse the following HTML in the hope that the script, when it loads and executes, doesn’t do anything to mess up the subsequent parsing, or, if it does, that the same resources still need to be loaded. But they may still have to back out that work if the script does something awkward.

Repaint vs Reflow

  • A repaint occurs when changes are made to an element’s skin that are visible but do not affect its layout.

Examples of this include outline, visibility, background, or color. According to Opera, repaint is expensive because the browser must verify the visibility of all other nodes in the DOM tree.

  • A reflow is even more critical to performance because it involves changes that affect the layout of a portion of the page (or the whole page).

Examples that cause reflows include: adding or removing content, explicitly or implicitly changing width, height, font-family, font-size and more.
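
A quick sketch of the difference (assuming the page has an element with id "box"):

const box = document.getElementById('box');

// Repaint only: the element's geometry does not change.
box.style.backgroundColor = 'tomato';
box.style.outline = '2px solid black';

// Reflow (followed by repaint): layout of the element, and possibly
// of its neighbours, has to be recalculated.
box.style.width = '300px';
box.style.fontSize = '20px';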

◉ Normal vs Asynchronous vs Deferred JS (NAD)

source: https://bitsofco.de/async-vs-defer/

The <script> element has two attributes, async and defer, that can give us more control over how and when external files are fetched and executed.

🔘 Normal Execution

The HTML parsing is paused for the script to be fetched and executed, thereby extending the amount of time it takes to get to first paint.

🔘 The async Attribute

<script async src="script.js"></script>
With async, the file can be downloaded while the HTML document is still parsing. Once it has been downloaded, parsing is paused for the script to be executed.

🔘 The defer Attribute

<script defer src="script.js"></script>
With defer, even if the file is fully downloaded long before the document is finished parsing, the script is not executed until parsing is complete.

Asynchronous, Deferred or Normal Execution?

So, when should we use asynchronous, deferred, or normal JavaScript execution? As always, it depends on the situation, and there are a few questions to consider.

Where is the <script> element located?

Asynchronous and deferred execution of scripts are more important when the <script> element is NOT located at the very end of the document. HTML documents are parsed in order, from the first opening <html> element to its close.

If an externally sourced JavaScript file is placed right before the closing </body> element, it becomes much less pertinent to use an async or defer attribute. Since the parser will have finished the vast majority of the document by that point, JavaScript files don't have much parsing left to block.

Is the script self-contained?

For script files that are not dependent on other files and/or do not have any dependencies themselves, the async attribute is particularly useful. Since we do not care exactly at which point the file is executed, asynchronous loading is the most suitable option.

De”B”ounce and Th”R”ottle

Debounce and throttle are two programming techniques that limit the rate at which a function can fire. They can save the day when it comes to performance.


When to use each

Debouncing and throttling are recommended for events that a user can fire more often than you need them to be handled.

Examples include window resizing, scrolling, repeated clicks on a button, and auto-suggest in a search box.

Use debounce when you want your function to postpone its next execution until after X milliseconds have elapsed since the last time it was invoked.

Use throttle when you need to ensure that events fire at given regular intervals.

Debouncing and throttling are not something provided by JavaScript itself. They’re just concepts we can implement using the setTimeout Web API. Some libraries like Underscore.js and Lodash provide these methods out of the box.

Both throttling and debouncing can be implemented with the help of the setTimeout function. So, let’s try to understand the setTimeout function.

setTimeout

setTimeout is a scheduling function in JavaScript that can be used to schedule the execution of any function. It is a web API provided by the browsers and used to execute a function after a specified time. Here’s the basic syntax:

var timerId = setTimeout(callbackFunction, timeToDelay)
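
For example, scheduling a callback and cancelling it again (the delay value is arbitrary):

// Schedule greet() to run after roughly 1000 ms.
const timerId = setTimeout(function greet() {
  console.log('Hello after 1 second');
}, 1000);

// The returned id can be used to cancel the scheduled call before it fires,
// which is exactly the trick debounce relies on.
clearTimeout(timerId);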

Implementing Debouncing in JavaScript

https://www.telerik.com/blogs/debouncing-and-throttling-in-javascript

Here’s the HTML for debounce example:

debounce.html

<html>
  <body>
    <label>Search</label>
    <!-- Renders an HTML input box -->
    <input type="text" id="search-box">
    <p>No of times event fired</p>
    <p id="show-api-call-count"></p>
    <p>No of times debounce executed the method</p>
    <p id="debounce-count"></p>
    <script src="debounce.js"></script>
  </body>
</html>

Here’s the JavaScript for debounce example:

debounce.js

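The original debounce.js was embedded as an image; the sketch below is reconstructed from the step-by-step walkthrough that follows (the element IDs come from debounce.html above, and the counter-updating body of makeAPICall is an assumption):

// debounce.js (reconstructed sketch)
let apiCallCount = 0;
let debounceCount = 0;
let timerId; // undefined until the first keystroke schedules a call

function makeAPICall() {
  // Stand-in for a real API call: count and display how often it actually runs.
  debounceCount++;
  document.getElementById('debounce-count').innerHTML = debounceCount;
}

function debounceFunction(func, delay) {
  // Cancel the previously scheduled call (if any), then schedule a fresh one.
  clearTimeout(timerId);
  timerId = setTimeout(func, delay);
}

document.getElementById('search-box').addEventListener('keyup', function () {
  // Count every raw keyup event, for comparison with the debounced count.
  apiCallCount++;
  document.getElementById('show-api-call-count').innerHTML = apiCallCount;

  debounceFunction(makeAPICall, 200);
});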

Let’s understand more about the debounceFunction:

The debounceFunction is used to limit the number of times any function is executed. It takes as input func, the function whose execution has to be limited, and delay, the time in milliseconds. If the user types very fast, the debounceFunction will allow the execution of func only once the user has stopped typing in the textbox.

Let’s examine the above code line by line:

  1. When the user types the first letter in the textbox, the event handler (the anonymous function) calls the debounceFunction with the makeAPICall function and 200 milliseconds as parameters.
  2. Inside the debounceFunction, timerId is undefined, as it has not been initialized so far. Hence, the clearTimeout function will do nothing.
  3. Next, we pass func (the makeAPICall function) as a callback to the setTimeout function, with delay (200 milliseconds) as the other parameter. This means that we want the makeAPICall function to be executed after 200 milliseconds. The setTimeout function returns an integer as its unique id, which is stored in timerId.
  4. Now, when the user types a second letter in the textbox, debounceFunction is called again. But this time timerId is not undefined; it stores the unique id of the previous setTimeout call. Hence, when the clearTimeout function is called with this timerId, it cancels the execution of the previously scheduled setTimeout callback.
  5. Hence, every makeAPICall execution scheduled by setTimeout while the user keeps typing in the textbox gets cancelled by the clearTimeout function. Only the makeAPICall function scheduled for the last letter typed in the textbox will execute, after the specified 200 milliseconds.

Thus, no matter how many letters the user types in the textbox, the debounceFunction will execute the makeAPICall method only one time after 200 milliseconds - after the user types the “last” letter. And that’s debouncing!

This is what Debouncing is!

In the debouncing technique, no matter how many times the user fires the event, the attached function will be executed only after the specified time once the user STOPS firing the event.

For instance, suppose a user clicks a button 5 times within 100 milliseconds. Debouncing will not let any of these clicks execute the attached function. Once the user has stopped clicking, if the debouncing time is 100 milliseconds, the attached function will be executed after 100 milliseconds. Thus, to the naked eye, debouncing behaves like grouping multiple events into one single event.

This is what throttling is!

Throttling is a technique in which, no matter how many times the user fires the event, the attached function will be EXECUTED ONLY ONCE in a given time interval.

For instance, when a user clicks on a button, a helloWorld function is executed which prints Hello, world on the console. Now, when throttling is applied with 1000 milliseconds to this helloWorld function, no matter how many times the user clicks on the button, Hello, world will be printed only once in 1000 milliseconds. Throttling ensures that the function executes at a regular interval.
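
A minimal, assumed implementation sketch using the same setTimeout building block:

function throttle(func, interval) {
  let isWaiting = false;
  return function (...args) {
    if (isWaiting) return;    // ignore calls that arrive inside the interval
    func.apply(this, args);   // run immediately on the leading edge
    isWaiting = true;
    setTimeout(() => {
      isWaiting = false;      // accept the next call once the interval is over
    }, interval);
  };
}

function helloWorld() {
  console.log('Hello, world');
}

// No matter how fast the (hypothetical) button is clicked,
// helloWorld runs at most once per 1000 ms.
const throttledHello = throttle(helloWorld, 1000);
// document.getElementById('my-button').addEventListener('click', throttledHello);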

◉ try-catch-finally

Reference: https://levelup.gitconnected.com/5-things-you-dont-know-about-try-catch-finally-in-javascript-5d661996d77c

The try statement lets you test a block of code for errors.

The catch statement lets you handle the error.

The throw statement lets you create custom errors.

The finally statement lets you execute code, after try and catch, regardless of the result.

try-catch-finally is used to handle runtime errors and prevent them from halting the execution of a program.

1. Return statement inside the try or catch block

If the finally block contains a return statement, it overrides the return value from the try or catch block. Control always reaches the finally block, and its return value is what the function ultimately returns.

function test() {
  try {
    return 10;
    throw "error"; // this is not executed, control goes to finally
  }
  catch {
    console.log("catch");
    return 1;
  }
  finally {
    console.log("finally");
    return 1000;
  }
}
console.log( test() ); // finally 1000

2. Variables declared inside a try block are NOT available in the catch or finally block

If we use let or const to declare a variable in the try block, it will not be available to catch or finally. This is because these variable declarations are block-scoped.

try {
  let a = 10;
  throw "a is block scoped";
}
catch (e) {
  console.log("Reached catch");
  console.log(a); // ReferenceError: a is not defined
}

But if we use var instead of let or const, then it will be available inside the catch because var is function scoped, and the declaration will be hoisted.

try {
  var a = 10;
  throw "a is function scoped";
}
catch (e) {
  console.log("Reached catch");
  console.log(a); // 10
}

3. Catch without error details

Earlier, the catch block always required an exception parameter, i.e. catch (e).

In ES2019, the argument for the catch block is optional.

try {
  // code with bug
}
catch {
  // catch without exception argument
}

4. try…catch will not work on setTimeout

If an exception happens in “scheduled” code, like setTimeout, then try..catch won’t catch it.

function callback() {
  // error code
}
function test() {
  try {
    setTimeout(callback, 1000);
  }
  catch (e) {
    console.log("not executed");
  }
}

To handle this, we need to add try…catch inside the setTimeout callback:

function callback() {
  try {
    // error code
  }
  catch {
    console.log("Error caught");
  }
}
function test() {
  setTimeout(callback, 1000);
}

5. Adding a global error handler

We can register a window.onerror event listener that will run in case of an uncaught error. This will not handle the error, only detect it; to actually handle errors you would still need your own try…catch.

window.onerror = function (e) {
  console.log("error handled", e);
};
function funcWithError() {
  a; // a is not defined
}
function test() {
  funcWithError();
  console.log("hi"); // this will not be executed
}
test();

◉ Shallow copy vs Deep Copy

In shallow copy, the original Object and cloned Object point to the same referenced Object or same memory location. Comparatively, in deep copy, original Object and cloned Object point to different memory locations.


In the above picture, we can see actual memory allocation for the original and cloned Object. Let’s consider p is the original Object and q is the cloned reference.

In a shallow copy, the original Object (p) and the copied reference (q) point to the same memory location (100). Changes made through the copied reference are reflected in the original and vice versa. That’s why both references modify the same data.

For any given Object, the first level of properties gets copied and the deeper (nested) levels of properties get referenced.

In a deep copy, the data of the original Object (p, at memory location 100) gets copied into the cloned reference (q) at a separate memory location (101). A brand new reference (q) gets created in memory. Modification of either Object’s data won’t affect the other.


NOTE: JSON.parse(JSON.stringify()) can also be used to create a NEW reference.

const array = ['a', 'b', 'c'];
const newArray = array;
console.log(newArray === array); // true
const newArray1 = [...array];
console.log(newArray1 === array); // false
const newArray2 = Object.assign({}, array);
console.log(newArray2 === array); // false
const newArray3 = Object.assign(array);
console.log(newArray3 === array); // true
const newArray4 = JSON.parse(JSON.stringify(array));
console.log(newArray4 === array); // false
Shallow Copy Example using Spread(…) Operator

In the above snapshot, student points to the original Object and newStudent is a new reference.

newStudent = { ...student }

The above statement copies all top-level values from student, but nested Objects are copied as references. Now, newStudent.address points to the same memory location that student.address points to, while the primitive properties get their own copies.

When we modify newStudent.address, the original student.address gets modified as well (see the sketch below).
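
The snapshot above was an image; here is an equivalent sketch (the shape of the student object is assumed for illustration):

const student = {
  name: 'Ravi',
  address: { city: 'Pune', pin: '411001' }
};

const newStudent = { ...student };

// Top-level primitives get their own copies...
newStudent.name = 'Asha';
console.log(student.name); // "Ravi" (unchanged)

// ...but nested objects are copied by reference.
newStudent.address.city = 'Mumbai';
console.log(student.address.city); // "Mumbai" (the original is modified too)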

Object.assign() is also used to achieve shallow copy.

Deep Copy Example using JSON.parse(JSON.stringify(object))

For a simple understanding, we can think of a shallow copy as looking at ourselves in the mirror (you are the Object, and the reflection in the mirror is just another reference).

Consider twin brothers/sisters. They are a perfect example of a deep copy. Twins are copies of each other, but in real life they are separate, just like the original and copied Objects in memory 😀.

Deep copy with custom function

It is pretty easy to write a recursive JavaScript function that will make a deep copy of nested objects or arrays. Here is an example:
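
A minimal recursive sketch (it handles plain objects and arrays only; Dates, Maps, functions, and circular references are left out):

function deepCopy(value) {
  // Primitives (and functions) are returned as-is.
  if (value === null || typeof value !== 'object') {
    return value;
  }

  // Recurse into arrays and plain objects.
  const copy = Array.isArray(value) ? [] : {};
  for (const key of Object.keys(value)) {
    copy[key] = deepCopy(value[key]);
  }
  return copy;
}

const original = { a: 1, nested: { b: [2, 3] } };
const clone = deepCopy(original);

clone.nested.b.push(4);
console.log(original.nested.b); // [2, 3] (unaffected)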

References:

• https://medium.com/better-programming/prototypes-in-javascript-5bba2990e04b
• https://bitsofco.de/understanding-the-critical-rendering-path/
• https://bitsofco.de/async-vs-defer/
• https://www.telerik.com/blogs/debouncing-and-throttling-in-javascript
• https://levelup.gitconnected.com/5-things-you-dont-know-about-try-catch-finally-in-javascript-5d661996d77c
