Give practical ways to write better JavaScript.

Posted by  Nishi Tiwari

Please give all the practical ways of writing better JavaScript, with suitable examples.

  1. Re: Give practical ways to write better JavaScript.

    Practical Ways to Write Better JavaScript

    There are lots of simple things we can do to improve our JavaScript. Some of these methods are as follows:

    Use TypeScript 

    The first thing we can do to improve our JavaScript is to stop writing plain JS. For the uninitiated, TypeScript (TS) is a compiled superset of JS that adds an optional typing system on top of the JS experience. For a long time, TypeScript support across the ecosystem was inconsistent enough to make recommending it uncomfortable, but that has changed.

    • TypeScript enforces type safety 

    Type safety describes a process in which a compiler verifies that all types are being used in a legal way throughout a piece of code. In other words, if we create a function foo that takes a number:

    function foo(someNum: number): number {
    
      return someNum + 5;
    }

    This foo function should always be called with a number:

    // good
    console.log(foo(2)); // prints "7"

    // no good
    console.log(foo("two")); // invalid TS code

    Adding types to our code carries some overhead, but the benefit is too large to ignore. Type safety provides an extra level of protection against common errors/bugs, which is a blessing for a lawless language like JavaScript.

    • Typescript types make refactoring larger applications possible 

    Refactoring a large JS application is painful because the language doesn't enforce function signatures, which means a JS function can easily be misused. For example, if we have a function myAPI that is used by 1000 different services:

    function myAPI(someNum, someString) {
    
      if (someNum > 0) {
        leakCredentials();
      } else {
        console.log(someString);
      }
    }

    and we want to change the call signature a bit:

    function myAPI(someString, someNum) {
    
      if (someNum > 0) {
        leakCredentials();
      } else {
        console.log(someString);
      }
    }

    we have to be 100% sure that every place the function is used (potentially thousands of places) is updated correctly, and if we miss even one, our credentials could leak. Here's the same case with TS:

    // before
    function myAPITS(someNum: number, someString: string) { ... }

    // after
    function myAPITS(someString: string, someNum: number) { ... }
    

    As we can see, the myAPITS function went through the same change as the JavaScript version, but with a different outcome: instead of valid JavaScript, it results in invalid TypeScript, because the thousands of places where it's used are now providing the wrong types. Thanks to type safety, those thousand cases will block compilation, and our credentials won't get leaked.

    • TypeScript makes team architecture communication easier 

    When TypeScript is set up correctly, it is difficult to write code without first defining interfaces and classes. This provides a way to share concise, communicative architecture proposals. Before TS, other solutions to this problem existed, but none solved it without making us do extra work. For example, if we want to propose a new Request type for our backend, we can send the following to a teammate using TS:

    interface BasicRequest {
    
      body: Buffer;
      headers: { [header: string]: string | string[] | undefined; };
      secret: Shhh;
    }

    When developers define interfaces and APIs first, the result is better code.

    Overall, TypeScript has evolved into a mature and more predictable alternative to vanilla JS.

    Use Modern Features 

     JavaScript is one of the most popular programming languages, and many changes and additions have been made to it over the years. Someone who started writing JS in the last two years came in without bias or expectations, which results in much more pragmatic choices about which features of the language to utilize and which to avoid.

    • async and await

    For many years, asynchronous, event-driven callbacks were an unavoidable part of JS development:

    // traditional callback
    makeHttpRequest('google.com', function (err, result) {
      if (err) {
        console.log('Oh boy, an error');
      } else {
        console.log(result);
      }
    });
    

    To solve the problems with callbacks, a new concept called Promises was added to the language. Promises allow us to write asynchronous logic while avoiding the nesting problem that previously plagued callback-based code.

    • Promises
    makeHttpRequest('google.com').then(function (result) {
      console.log(result);
    }).catch(function (err) {
      console.log('Oh boy, an error');
    });
    

    The biggest advantages of Promises over callbacks are readability and chainability.
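    As a small sketch of that chainability (using a plain resolved Promise instead of a real request), each .then receives the previous handler's return value:

```javascript
// Sketch: each .then receives the previous handler's return value.
Promise.resolve(2)
  .then((n) => n * 3)           // 6
  .then((n) => n + 1)           // 7
  .then((n) => console.log(n)); // prints 7
```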

    Promises are great, but they still left something to be desired. For many developers, the Promise experience was still too reminiscent of callbacks, and they were asking for an alternative to the Promise model. To solve this, the ECMAScript committee decided to add a new method of utilizing promises: async and await:

    // async and await
    try {
      const result = await makeHttpRequest('google.com');
      console.log(result);
    } catch (err) {
      console.log('Oh boy, an error');
    }
    

    Anything we await must return a Promise, which is exactly what declaring a function async guarantees:

    // required definition of makeHttpRequest from the previous example

    async function makeHttpRequest(url) {
      // ...
    }
    

    It is also possible to await a Promise directly, because an async function is essentially a fancy Promise wrapper. This means async/await code and Promise code are functionally equivalent, so feel free to use async/await without feeling guilty.
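    A minimal sketch of awaiting a plain Promise directly. Here fakeRequest is a hypothetical stand-in for makeHttpRequest: it is an ordinary function returning a Promise, not an async function, yet await handles it the same way:

```javascript
// fakeRequest is a made-up stand-in for a real network call:
// a plain function that returns a Promise.
function fakeRequest(url) {
  return Promise.resolve(`response from ${url}`);
}

async function main() {
  // awaiting a plain Promise works exactly like awaiting an async function
  const result = await fakeRequest('google.com');
  return result;
}

main().then((result) => console.log(result)); // prints "response from google.com"
```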

    • let and const

    For most of JavaScript's existence, there was only one variable-scope qualifier: var. var has some unique rules regarding how it handles scope; this scoping behavior is inconsistent and confusing, and it has resulted in unexpected behavior and bugs throughout JS's history. ES6 introduced alternatives to var: const and let. Any logic that uses var can easily be converted to equivalent const- and let-based code.

    We should start by declaring everything const. const is far more restrictive and "immutablish," which tends to result in better code. In practice, maybe 1 in 20 variables needs to be declared with let; the rest can be const.

    const is only "immutablish" because it does not work the same way const does in C/C++. To the JavaScript runtime, const means the reference held by the variable will never change; it does not mean the contents stored at that reference will never change. For primitive types (number, boolean, etc.) const does translate to immutability, because the value occupies a single memory address, but for objects (classes, arrays, dicts) const does not guarantee immutability.
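    A small sketch of this "immutablish" behavior: the reference cannot be reassigned, but the contents behind it can still change:

```javascript
const nums = [1, 2, 3];
nums.push(4);              // allowed: mutating the array's contents
console.log(nums);         // prints [ 1, 2, 3, 4 ]

// nums = [];              // not allowed: TypeError: Assignment to constant variable.

const config = { debug: false };
config.debug = true;       // allowed: mutating a property
console.log(config.debug); // prints true
```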

    • Arrow => Functions

    Arrow functions are a concise method of declaring anonymous functions in JS. Anonymous functions aren't explicitly named; they are usually passed as a callback or event hook.

    // vanilla anonymous function

    someMethod(1, function () { // has no name
      console.log('called');
    });
    

    Vanilla anonymous functions behave "uniquely" with regard to scope, which has resulted in many unexpected bugs. With arrow functions, we don't have to worry about that. Here is the same example implemented with an arrow function:

    // anonymous arrow function

    someMethod(1, () => { // has no name
      console.log('called');
    });
    

    Apart from being far more concise, arrow functions have more practical scoping behavior: they inherit this from the scope they were defined in.

    const isadded = [0, 1, 2, 3, 4].map((item) => item + 1);
    console.log(isadded) // prints "[1, 2, 3, 4, 5]"
    

    Arrow functions that fit on a single line, like the one above, include an implicit return statement; there is no need for braces or semicolons with single-line arrow functions.
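    A minimal sketch of that this-inheritance (counter is a made-up example object): the arrow callback sees the same this as the enclosing method, whereas a vanilla anonymous function would get its own this:

```javascript
const counter = {
  count: 0,
  incrementFor(items) {
    // the arrow function inherits `this` from incrementFor,
    // so `this.count` refers to counter.count
    items.forEach(() => {
      this.count += 1;
    });
  },
};

counter.incrementFor(['a', 'b', 'c']);
console.log(counter.count); // prints 3
```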

    • Spread Operator ...

    Extracting the key/value pairs of one object and adding them as children of another object is a very common scenario. Historically there have been a few ways to accomplish this, but all of those methods were clunky:

    const obj1 = { dog: 'woof' };
    const obj2 = { cat: 'meow' };
    const merged = Object.assign({}, obj1, obj2);
    console.log(merged) // prints { dog: 'woof', cat: 'meow' }
    

    This pattern is so common that the above approach becomes tedious, but thanks to the spread operator, there's never a need to use it again:

    const obj1 = { dog: 'woof' };
    const obj2 = { cat: 'meow' };
    console.log({ ...obj1, ...obj2 }); // prints { dog: 'woof', cat: 'meow' }
    

    The best part is, this also works seamlessly with arrays:

    const arr1 = [1, 2];
    const arr2 = [3, 4];
    console.log([ ...arr1, ...arr2 ]); // prints [1, 2, 3, 4]
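    As a small additional sketch (not from the original text), spread also makes shallow copies and expands arrays into function calls:

```javascript
const original = [3, 1, 2];
const copy = [...original]; // shallow copy; the original stays untouched
copy.sort();

console.log(original);              // prints [ 3, 1, 2 ]
console.log(copy);                  // prints [ 1, 2, 3 ]
console.log(Math.max(...original)); // prints 3
```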
    

    • Template Literals (Template Strings)

    Strings are among the most common programming constructs, yet for a long time JS sat in the "crappy string support" family. The addition of template literals puts JS in a category of its own: they natively and conveniently solve the two biggest problems with writing strings, adding dynamic content and writing strings that bridge multiple lines:

    const name = 'Ryland';
    const helloString =
    `Hello
     ${name}`;
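    Template literals can also interpolate arbitrary expressions, not just variables. A small sketch (the price/qty values are made up):

```javascript
const price = 9.99;
const qty = 3;
// any expression is allowed inside ${...}
const receipt = `Total: $${(price * qty).toFixed(2)}`;
console.log(receipt); // prints "Total: $29.97"
```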
    

    • Object Destructuring

    Object destructuring is a way to extract values from a data collection (object, array, etc.) without having to iterate over the data or access its keys explicitly:

    // old way
    function animalParty(dogSound, catSound) {}
    const myDict = {
      dog: 'woof',
      cat: 'meow',
    };
    animalParty(myDict.dog, myDict.cat);

    // destructuring
    function animalParty(dogSound, catSound) {}
    const myDict = {
      dog: 'woof',
      cat: 'meow',
    };
    const { dog, cat } = myDict;
    animalParty(dog, cat);
    

    We can also destructure directly in the signature of a function:

    // destructuring in a function signature

    function animalParty({ dog, cat }) {}
    const myDict = {
      dog: 'woof',
      cat: 'meow',
    };
    animalParty(myDict);
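    Destructuring also supports renaming and default values; a small sketch (myPets is a made-up object):

```javascript
const myPets = { dog: 'woof' };
// rename `dog` to `dogSound`, and give `cat` a default since myPets has none
const { dog: dogSound, cat: catSound = 'meow' } = myPets;

console.log(dogSound); // prints "woof"
console.log(catSound); // prints "meow" (the default)
```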
    

    Always Assume Your System is Distributed

    Whenever we write parallelized applications, our goal is to optimize the amount of work we do at one time. For example, if we have four available cores and our code can only utilize a single core, 75% of our potential is being wasted. This means that blocking, synchronous operations are the ultimate enemy of parallel computing.

    JavaScript is single-threaded, but that doesn't mean it can only do one thing at a time. Sending an HTTP request may take seconds or even minutes; if JS stopped executing code until a response came back, the language would be unusable.
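    A tiny sketch of this non-blocking behavior: a callback scheduled with setTimeout does not run until the current synchronous code has finished, even with a 0 ms delay:

```javascript
const order = [];

order.push('sync work 1');
setTimeout(() => order.push('callback'), 0); // queued, not run immediately
order.push('sync work 2');

// the callback has not run yet; synchronous code finishes first
console.log(order); // prints [ 'sync work 1', 'sync work 2' ]
```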

    JavaScript solves this problem with an event loop. The event loop cycles through registered events and executes them based on internal scheduling/prioritization logic. This is what enables sending thousands of simultaneous HTTP requests or reading multiple files from disk at the same time. The catch: JavaScript can only utilize this capability if we use the right features. The simplest example is the for-loop:

    let sum = 0;
    const numbers = [1, 2, 3, 4, /* ... */ 99, 100];
    for (let i = 0; i < numbers.length; i += 1) {
      sum += numbers[i];
    }


    The vanilla for-loop is one of the least parallel constructs in programming. The difficulty of parallelizing a for-loop stems from a few problematic patterns. Truly sequential for-loops, where one iteration depends on the results of previous iterations, are rare, but their mere possibility makes it impossible to guarantee that any given for-loop can be decomposed:

    let runningTotal = 0;
    for (let i = 0; i < numbers.length; i += 1) {
      if (i === 50 && runningTotal >= 50) {
        runningTotal = 0;
      }
      runningTotal += Math.random() + runningTotal;
    }
    

    The above code produces the intended result only if it is executed in order, iteration by iteration. If we tried to execute multiple iterations at once, the processor might branch incorrectly based on inaccurate values, invalidating the result. In JavaScript, traditional for-loops should only be used when necessary. Otherwise, utilize the following constructs:

    // map (the following constructs are in decreasing relevancy)
    const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
    const resultingPromises = urls.map((url) => makeHttpRequest(url));
    const results = await Promise.all(resultingPromises);

    // map with index
    const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
    const resultingPromises = urls.map((url, index) => makeHttpRequest(url, index));
    const results = await Promise.all(resultingPromises);

    // forEach
    const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
    // note: this is non-blocking
    urls.forEach(async (url) => {
      try {
        await makeHttpRequest(url);
      } catch (err) {
        console.log(`${err} bad practice`);
      }
    });
    

    Instead of executing each iteration in order (sequentially), constructs such as map take all of the elements and submit them as individual events to the user-defined map function. Since individual iterations usually have no inherent connection or dependence on each other, they can all run concurrently. Doing the equivalent with a for-loop would look something like this:

    const items = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    async function testCall(item) {
      // do async stuff here
    }
    for (let i = 0; i < items.length; i += 1) {
      testCall(items[i]);
    }
    

    As we can see, the for-loop doesn't prevent us from doing it the right way, but it sure doesn't make it any easier either. Compare that to the map version:

    const items = [1, 2, 3, 4, 5, 6, 7, 8, 9,10];
    items.map(async (item) => {
     // do async stuff here
    });
    

    As we can see, the map just works. The advantage of map becomes even clearer if we want to block until all of the individual async operations are done. With the for-loop code, we would need to manage an array of promises ourselves. Here's the map version:

    const items = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
     const allResults = await Promise.all(items.map(async (item) => {
      // do async stuff here
     }));
    

    It's really that easy.

    There are cases where a for-loop is just as performant as a map or forEach, but the for-loop is too generic to receive meaningful optimizations for these specific patterns.

    There are other valid async options outside of map and forEach, such as for-await-of.
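    A small sketch of for-await-of, assuming simulated async work (delayedDouble is a hypothetical stand-in for a real async call):

```javascript
function delayedDouble(n) {
  // simulate async work with a short timer
  return new Promise((resolve) => setTimeout(() => resolve(n * 2), 10));
}

async function doubleAll(nums) {
  const results = [];
  // the promises all start concurrently (map), but each iteration
  // awaits its value in order before continuing
  for await (const doubled of nums.map((n) => delayedDouble(n))) {
    results.push(doubled);
  }
  return results;
}

doubleAll([1, 2, 3]).then((results) => console.log(results)); // prints [ 2, 4, 6 ]
```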

    Lint Your Code and Enforce a Style

    Code without a consistent style (look and feel) is difficult to read and understand. A key aspect of writing high-quality code in any language is therefore having a consistent, sensible style. Due to the breadth of the JS ecosystem, there are many options for linters and styles. It matters more that we use a linter and enforce a style at all than which specific linter/style we choose.

    People often ask whether they should use ESLint or Prettier. The two serve very different purposes and are best used in conjunction. ESLint is a traditional linter: it identifies issues that have less to do with style and more to do with correctness. For example, we can use ESLint with the Airbnb rules.

    Prettier is a code formatter: it is less concerned with correctness and far more concerned with uniformity and consistency. Prettier is not going to complain about using var, but it will automatically align all the brackets in our code. In a personal development process, always run Prettier as the last step before pushing code to Git; this ensures that all code coming into source control has a consistent style and structure.

    Test Your Code

     Testing code is an indirect but effective method of improving the JS we write. It is worth becoming comfortable with a wide range of testing tools; testing needs vary, and no single tool can handle everything. There are many well-established testing tools in the JS ecosystem, and choosing between them mostly comes down to personal taste.

    • Test Driver – Ava

    AvaJS on Github

    Test drivers are frameworks that provide structure and high-level utilities for tests. They are usually used in conjunction with other, more specific testing tools, which can vary based on our testing needs.

    Ava strikes a good balance between expressiveness and conciseness. Tests that run fast save developers time and companies money. Ava has many built-in features, such as assertions, while managing to stay very minimal.

    Alternatives: Jest, Mocha, Jasmine

    • Spies and Stubs – Sinon

    Sinon on Github

    Spies give us function analytics, such as how many times a function was called, what it was called with, and other important data.
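    To make the idea concrete, here is a hand-rolled sketch of what a spy does (makeSpy is a made-up helper; Sinon's real spies offer far more than this):

```javascript
function makeSpy(fn) {
  const spy = (...args) => {
    spy.calls.push(args); // record the arguments of every call
    return fn(...args);
  };
  spy.calls = [];
  return spy;
}

const add = makeSpy((a, b) => a + b);
add(1, 2);
add(3, 4);

console.log(add.calls.length); // prints 2
console.log(add.calls[0]);     // prints [ 1, 2 ]
```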

    Sinon is a library that does a lot of things, but only a few of them super well, and it excels when it comes to spies and stubs. The feature set is rich, but the syntax is concise. This is especially important for stubs, which partially exist to save space.

    Alternatives: testdouble

    • Mocks – Nock

    Nock on Github

    HTTP mocking is the process of faking some part of the HTTP request process so that the tester can inject custom logic to simulate server behavior.

    HTTP mocking can be painful, but Nock makes it less so. Nock directly overrides Node.js's built-in request and intercepts outgoing HTTP requests, which gives us complete control over the response.

    • Web Automation – Selenium

    Selenium on Github

    Selenium is the most popular option for web automation: it has a large community and a vast set of online resources. Unfortunately, the learning curve is quite steep, and it depends on external libraries for real use. That said, it's the only truly free option, so unless we are doing enterprise-grade web automation, Selenium will do the job.

    Alternatives: Cypress, PhantomJS

    The Never Ending Journey

    Writing better JavaScript is a continuous process. Code can always be cleaner, new features will be added all the time, and there will never be enough tests. It may seem overwhelming, but with so many potential aspects to improve, we can progress at our own pace. Take things one step at a time, and before you know it, you'll be a JavaScript ace.


  1. Re: Give practical ways to write better JavaScript.

    It is a very interesting article. I have learned many useful methods which will help improve my JavaScript programming.
