
Demystifying Redux: A Beginner’s Guide


A Guide to Understanding Redux Thunk, Saga, and Observables

Redux is an extremely popular JavaScript library used to manage application state. An extremely loyal friend to the React ninjas amongst us, Redux can be thought of as the middleman between the frontend and the backend, whose job it is to store data temporarily. In this blog post, we will be examining Redux itself, along with companion middleware such as thunks, sagas, and observables.

Created by Dan Abramov, Redux provides a predictable approach to managing state that benefits from immutability, keeps business logic contained, acts as the single source of truth, and has a very small API. To use an analogy, if you equate Mario to React, Redux is what would tell the game how long Mario can remain in his Super form after consuming a Super Mushroom. The beauty of Redux lies in how easily it allows developers to scale a simple app into a large, complex one.

Redux is built on three main components, namely:

  • Actions: Payloads of information that send data from your application to your store. They are the only source of information for the store. You send them to the store by calling store.dispatch(), or through bindings such as react-redux’s connect.
  • Reducers: Pure functions that produce the next state of the store in response to dispatched actions, rather than mutating the current state.
  • Store: Stores the whole state of the app in an immutable object tree.
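The three pieces above can be sketched in a few lines of plain JavaScript. This is a hand-rolled, stripped-down stand-in for Redux’s own createStore, and the counter domain is purely illustrative:

```javascript
// Minimal stand-in for Redux's createStore, to show how the pieces fit.
const createStore = (reducer) => {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // reducer computes the next state
      return action;
    },
  };
};

// Action creator: a payload of information describing "what happened".
const increment = (amount) => ({ type: 'counter/incremented', amount });

// Reducer: computes the next state from the current state and an action.
const counter = (state = { count: 0 }, action) =>
  action.type === 'counter/incremented'
    ? { ...state, count: state.count + action.amount }
    : state;

const store = createStore(counter);
store.dispatch(increment(2));
console.log(store.getState()); // { count: 2 }
```

Note that the reducer never mutates `state`; it returns a new object, which is what keeps every mutation traceable to the action that caused it.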

The typical Redux app would have a single store with a single root-reducing function. As an app grows, all you have to do is to split the root reducer into smaller reducers independently operating on the different parts of the state tree. This structure allows Redux to be simple yet powerful because it is possible to trace every mutation to the action that caused it. You can even record user sessions and reproduce them just by replaying every action.
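That reducer-splitting idea can be sketched as follows. Redux ships this helper as combineReducers; it is hand-rolled here so the example is self-contained, and the todos/filter slices are illustrative names of our own:

```javascript
// Hand-rolled sketch of Redux's combineReducers: each slice reducer
// independently manages one key of the state tree.
const combineReducers = (reducers) => (state = {}, action) =>
  Object.fromEntries(
    Object.entries(reducers).map(([key, r]) => [key, r(state[key], action)])
  );

// Two independent slice reducers.
const todos = (state = [], action) =>
  action.type === 'todos/added' ? [...state, action.payload] : state;
const filter = (state = 'ALL', action) =>
  action.type === 'filter/changed' ? action.payload : state;

// The single root reducer the store uses.
const rootReducer = combineReducers({ todos, filter });
```

Dispatching `{ type: 'todos/added', payload: 'write blog' }` against the root reducer updates only the `todos` slice; the `filter` slice is left untouched.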

Redux Thunk

Remember, Redux is not an application framework and does not dictate how effects should be handled. For that, developers can adopt any preferred middleware, and redux-thunk is arguably the most primitive of them. Redux-thunk is noteworthy in that it allows you to dispatch actions asynchronously. Written by Dan Abramov himself as part of Redux before being split out into a separate package, redux-thunk’s original implementation is tiny enough to quote in its entirety:
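In essence, the middleware simply checks whether a dispatched “action” is a function and, if so, calls it with dispatch and getState instead of forwarding it to the reducer (the comments are ours):

```javascript
// The heart of redux-thunk: intercept dispatched functions.
function createThunkMiddleware(extraArgument) {
  return ({ dispatch, getState }) => (next) => (action) => {
    if (typeof action === 'function') {
      // A thunk was dispatched: invoke it instead of passing it on.
      return action(dispatch, getState, extraArgument);
    }
    // A plain action object: hand it to the next middleware / reducer.
    return next(action);
  };
}

const thunk = createThunkMiddleware();
thunk.withExtraArgument = createThunkMiddleware;
```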

In simple terms, redux-thunk is a functional programming technique used to delay computation. Instead of executing a function right away, redux-thunk can be optionally used to perform a function later.  

See for yourself:
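Here is the idea in plain JavaScript (the names are illustrative):

```javascript
// Eager: the (possibly expensive) work happens immediately.
const eager = 1 + 2;

// A thunk: wrap the work in a function so it runs only if and when called.
const lazySum = () => 1 + 2;

console.log(typeof lazySum); // "function" — nothing computed yet
console.log(lazySum());      // 3 — computed on demand
```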

redux-thunk can also wrap calculations that might be slow, or even unending, while other code components can decide whether to actually run the thunk.

The key benefit provided by redux-thunk is it allows us to avoid directly causing side effects in our actions, action creators, or components. Potentially messy code can be isolated in a thunk, leaving the rest of the code uncluttered. Middleware can later invoke the thunk to actually execute that function. Employing a code structure of this nature makes it easier to test, maintain, extend, and reuse all components in a given codebase.
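For instance, a hypothetical async action creator might look like this; the fetchUser name, the action types, and the /api/users endpoint are our own illustrations, not from any particular codebase:

```javascript
// A thunk action creator: returns a function rather than a plain action.
// With redux-thunk installed, the middleware will invoke it for us.
function fetchUser(id) {
  return async (dispatch, getState) => {
    dispatch({ type: 'user/fetchStarted', id });
    try {
      const response = await fetch(`/api/users/${id}`);
      const user = await response.json();
      dispatch({ type: 'user/fetchSucceeded', payload: user });
    } catch (error) {
      dispatch({ type: 'user/fetchFailed', error: error.message });
    }
  };
}

// Usage (in an app with the thunk middleware applied):
// store.dispatch(fetchUser(1));
```

All the messy network handling lives inside the thunk; components just dispatch `fetchUser(1)` and stay clean.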

But while redux-thunk works well for simple use cases, it may struggle to handle more complex scenarios. This brings us to…

Redux Saga

redux-saga is a different kind of middleware for handling asynchronous execution. It is an alternative to redux-thunk, with the added benefit of being able to easily handle complicated scenarios. redux-saga works by listening for dispatched actions, performing side effects, and dispatching new actions for the Redux reducer to handle.

Because redux-saga relies on ES6 generator functions, its code is simpler and more readable. In addition, asynchronous calls, which would sit directly inside an action creator with redux-thunk, are cleanly separated out in redux-saga.
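Here is a sketch of a user-fetching saga. In a real app, `call`, `put`, and `takeEvery` come from 'redux-saga/effects'; below they are minimal stand-ins so the example is self-contained, and fetchUserApi plus the action types are illustrative names of our own:

```javascript
// Minimal stand-ins for redux-saga's effect creators. Effects are plain
// descriptions of work; the saga middleware is what actually runs them.
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

// The side-effecting API call lives outside the saga.
async function fetchUserApi(id) {
  const response = await fetch(`/api/users/${id}`);
  return response.json();
}

// The saga is a generator: it *describes* effects instead of executing
// them, which is why it can be tested by stepping through synchronously.
function* fetchUserSaga(action) {
  try {
    const user = yield call(fetchUserApi, action.id);
    yield put({ type: 'user/fetchSucceeded', payload: user });
  } catch (error) {
    yield put({ type: 'user/fetchFailed', error: error.message });
  }
}
```

With the real library you would register this in a root saga, e.g. `takeEvery('user/fetchRequested', fetchUserSaga)`, so it runs on every matching action.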

The benefits of using redux-saga are many. Testing is much easier: because sagas merely describe effects, test cases become simple to write without needing to mock asynchronous behavior. The code is also more readable, making redux-saga a great fit for complex scenarios. However, redux-saga brings added complexity and additional dependencies.

Redux Observable

redux-observable is the new kid on the block and can accomplish pretty much everything redux-saga can. Both are middleware, but the difference between the two stems from how they operate: redux-saga uses generator functions, while redux-observable relies on RxJS observables.

redux-observable relies on ‘epics’: an epic is a function that takes a stream of actions and returns a modified stream of actions. You can think of an epic as a description of what additional actions redux-observable should dispatch. An epic is very similar to the concept of a “saga” in redux-saga.
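The shape of an epic can be sketched like this. Note the stand-ins: in real redux-observable, `ofType` comes from the library and `map`/`pipe` from RxJS, and the stream is an Observable rather than an array; arrays are used here only so the sketch is self-contained:

```javascript
// Minimal stand-ins for RxJS pipe/map and redux-observable's ofType,
// with arrays standing in for observable streams of actions.
const ofType = (type) => (actions) => actions.filter((a) => a.type === type);
const map = (fn) => (actions) => actions.map(fn);
const pipe = (...fns) => (actions) => fns.reduce((acc, f) => f(acc), actions);

// An epic: a function from a stream of actions to a stream of new
// actions for redux-observable to dispatch. 'PING'/'PONG' mirrors the
// classic example from the redux-observable docs.
const pingEpic = (action$) =>
  pipe(
    ofType('PING'),
    map(() => ({ type: 'PONG' }))
  )(action$);
```

Every `PING` flowing through the stream produces a `PONG`; all other actions are ignored by this epic.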

The benefits of redux-observable lie in its high function reusability and easy testing. However, in contrast to redux-saga, tests in redux-observable require mocking.

Which One Should I Use?

This is where things get tricky. As a rule, don’t bring in redux-saga or redux-observable before you need them. For most simple cases, redux-thunk will serve you well (and is very easy to learn, too). As your asynchronous logic becomes more complex, that’s when you should start thinking about bringing in redux-saga or redux-observable, because that is where they truly shine.

And how should you choose between redux-saga and redux-observable? 

That, young padawan, is a balancing act. Our advice is to weigh up the pros and cons and go with the one that promises a higher marginal benefit.

Happy coding!


Gehan Dias joins Calcey as General Manager


We are thrilled to announce the appointment of Gehan Dias as General Manager at Calcey. Gehan joins us after having served as General Manager of LSEG Technology (part of the London Stock Exchange Group). Formerly known as MillenniumIT, LSEG Technology is one of the best-known success stories in the Sri Lankan IT ecosystem, having grown from a startup into a leading provider of mission-critical, high-performance, high-availability systems for the world’s financial markets, culminating in an acquisition by the London Stock Exchange.

Gehan has spent almost 20 years managing large teams and delivering complex, multi-year enterprise projects. Aside from his time at MillenniumIT, he also founded one of the first mobile app development agencies in Sri Lanka – Appwolf – providing mobile app development services to clients around the globe. He holds a BA in Philosophy, Politics and Economics from the University of Oxford. 

Calcey’s CEO Mangala Karunaratne said, “We are thrilled to add Gehan to our team as we prepare for a period of aggressive growth. Despite the current challenges in the business climate, we expect the technology sector to thrive and perhaps even gain new momentum as digitization becomes a priority for all businesses. Gehan’s many years of managing complex projects and teams within a rapidly growing company are an ideal complement to our growth plans.”

Gehan said, “I’m delighted to join such an outstanding technology company just as it is about to enter its next phase of growth. Calcey has a 17-year track record of excellence in the software industry and I’m looking forward to helping it to reach new heights.”

Welcome on board Gehan!


Providing Business Continuity and Assistance to Fight a Global Pandemic – the Calcey Experience


COVID-19 has brought the world to an unprecedented standstill. The crisis has also resulted in a working-from-home experiment at a global scale. As a software product engineering company offering dedicated, remote teams that build digital products for technology companies in Silicon Valley, New York, London, and Gothenburg, working remotely is part of Calcey’s DNA. However, having the whole company working from home instead of at Calcey’s headquarters at Trace Expert City was still a new experience.

Nevertheless, Calcey’s team has kept work going at the same velocity and in line with all timelines previously committed to clients. As well as providing business continuity at a time of high uncertainty, Calcey is also making crucial contributions to the efforts of some of its clients battling the pandemic.

A team from Calcey is currently working around the clock, from their homes, building modules to support Fresh Fitness Food (FFF)’s efforts to provide meals for frontline NHS workers. FFF is a scale-up offering meals tailored to individual needs and health goals, delivered to the door in London. FFF has been on a steep growth trajectory in recent months, after having digitized its processes end-to-end with a customer and workflow management solution built by Calcey. It is now facing unprecedented demand, as its offering has become a magnet for Londoners facing a shortage of food supplies.

Another Calcey client, Compare Networks, a California-based technology company supplying media and platform products for the life sciences and healthcare industries, operates Biocompare.com, a global marketplace for life sciences products. Biocompare provides antibodies and testing resources, along with millions of other products, for researchers racing to find cures for COVID-19. Calcey’s team has been building and maintaining all technology platforms for Compare Networks for almost a decade and continues this work unabated during this crucial period.

Calcey has been able to manage this shift to working from home relatively smoothly, thanks to quick contingency planning and internal practices that support this style of working, which we have been refining for a while. We shared some of these practices in a recent blog post. Stay agile, stay safe, and let’s do our bit to support our customers, businesses, and communities through this period.


Keeping the Lights on in a Software Engineering Services Company During a Global Pandemic


As COVID-19 brings countries and economies to a standstill, here’s how we at Calcey are continuing to work, albeit remotely.

Image credits: digiday.com

As we write this, all of humanity is collectively focused on battling the COVID-19 pandemic. Towns, cities, and even countries have gone into lockdown, pushing businesses everywhere to allow their employees to work from home.

As a software engineering services provider, we are fortunate that our business model, industry, and circumstances allow us to keep going. But, we also recognise that keeping an entire workforce sane while working remotely for weeks is no walk in the park. It takes a lot of foresight, planning and trust to build a viable remote operating model. 

Oh, and a lot of experience, gained through trial and error over the years.

We thought it apt to share our model of remote work, which is allowing us to carry on unimpeded.

Prepare in advance

Almost a year ago, Sri Lanka was put under curfew in the wake of the tragic Easter Sunday attacks linked to an ISIS terror cell. That was the first time we tried this new model of remote work, and this year, we were able to use that experience to our advantage. Adversity is after all, a good teacher.

COVID-19’s spread around the world was slow at first, and that gave us a small window to draw up our plans. Two weeks before the Sri Lankan government imposed an island-wide curfew, we at Calcey conducted a few remote work trials in which the entire Calcey team participated.

Being a software engineering services provider, Calcey has always been proud to offer our team members the freedom and flexibility to manage their own time. For instance, Calcey had already implemented an optional work-from-home policy before this crisis hit. But we knew COVID-19 was going to have a huge impact on a country like Sri Lanka, where tourism is a key industry (i.e. it was only a matter of time before COVID-19 reached the island), so it only made sense to plan, execute, and learn.

Give people all the tools they need to do their job

Remote work models (and even regular work models) will quickly fall apart if people are not given all the tools they need to get their work done. Consider how sometimes, companies don’t allow offsite access to internal systems for no valid reason. In a remote work model, this kind of roadblock quickly leads to frustration, which in turn leads to a complete breakdown.

All Calcey team members were given access, beforehand, to all the systems and devices they needed. There were still a few challenges, which we eventually managed to overcome. For instance, the QA team faced the dilemma of figuring out how to share devices used for testing purposes while working remotely. To solve this, we turned to BrowserStack, a device cloud application that allows our QA team to test on different devices through the cloud, very much as they would inside a physical office space.

Build remote-friendly team structures

At Calcey, we have built team structures so that they are remote-friendly anyway. Each development team has a team lead or architect who is in charge of technology architecture, but also assists the developers in thinking through their problems, identifying the right libraries and tools to use, etc. The leads and architects are our very own walking, talking internal knowledge bases and have insight into all our projects by virtue of their experience.

Every day, all team members check in with their respective leads or architects and discuss what needs to be done for the day or week. The goal of this practice is to anticipate any potential roadblocks and proactively figure out how each developer would overcome them. In our view, this is both a good coding practice, as well as a sensible course of action to follow, in general. We first solve the problem, before writing any code. 

The regular check-ins with the leads and architects then become a collective problem-solving session. Having an experienced hand in the fray means that we apply standardized solutions to commonly faced problems, allowing everyone to focus on the novel engineering challenges of the project and turn this into a hands-on mentoring process. 

As a result, our developers are not really working alone even when they are physically away from each other. They don’t have to reinvent the wheel at every turn while trying to figure things out on their own, and can still show up at the office with a full head of hair once the curfew is lifted for good. Less frustration equals better productivity, after all.

Put in processes and trust people to make it work

Given how we operate to tight deadlines to ship code, we have put processes in place for everyone to check in their code. Team leads are not looking over people’s shoulders and micro-managing them. Instead, we rely on a powerful tool— trust. We trust all team members of Calcey to check in regularly, provide updates and raise questions where necessary, and work together as a team.

In our experience, trust is what makes remote work (and a whole lot of other things) possible. Calcey team members know and understand that they have the freedom to manage their time so that they can both get their work done and also live a full life. We also have a results-based culture, which goes a long way toward helping make Calcey remote-friendly. 

Remember, employees are also human


Being forced by the government to remain indoors at all times, and the limited human interaction that comes with it, can quickly take a toll on an individual’s mental health. People need breaks, and perhaps even an occasional distraction to calm their minds during trying times like these. It is with this in mind that we took the initiative to organise an e-sports tournament in which the entire company could participate. Never one to rest on our laurels, we are looking at putting together a roster of similar activities for everyone to take part in, from the safety and comfort of their own homes.

What systems, tools, and processes have you employed in your company to make remote work possible? Let us know in the comments.

And finally, we would also like to salute and thank all first responders, medical professionals, and key workers who are tirelessly working day and night to ensure that we are protected from the threat that is COVID-19.

Stay safe!


Navigating The Maze Of Tech Stacks


What You Need To Know Before Choosing A Tech Stack For Your App

Image Credits: mindinventory.com

When building an app, deciding on which tech stack to use is perhaps one of the biggest obstacles to overcome. The right tech stack can help provide the user with a great experience, thus helping drive adoption and growth in the early stages of an app’s lifecycle. But if the wrong tech stack is chosen, the consequences can be dire. There is often no going back, and development teams will have no choice but to scrap everything, move to a new stack, and restart development efforts all over again.

There are a few important factors to consider when choosing a tech stack. They are:

  • Current requirements and feature roadmap
  • Budget (especially in the case of startups)
  • Competency of the development team

However, care must be taken to not let the capabilities of the development team override or constrain the feature roadmap.

Next, it is important to pay attention to the proposed architecture of the app. For instance, one can choose to build a native app, a cross-platform app, or a hybrid app. Today, ‘Progressive Web Apps’ are also popular, but we don’t think it is apt to consider them as a distinct application architecture, primarily because they are essentially repackaged web apps.

Let’s now compare the pros and cons of each architecture.

Native Apps

Native apps are specially made and coded for a specific mobile platform in its native programming language, and as such are extremely suitable for processor-intensive and GPU-intensive apps. Native apps make full use of technologies provided by the platform itself, and hence there is minimal chance of running into issues. The development of native apps is also relatively straightforward. Components are provided out of the box, and connecting them to an app is quite simple. 

The most obvious drawback with opting for a native tech stack is that if you decide to build apps for multiple platforms, you also have to build separate versions of the app. Native apps do not allow for code sharing between platforms, and as a result development times are longer and require a higher investment. By virtue of having two separate codebases, maintenance can also be challenging: whenever a new feature is rolled out, your development team will have to build it into two different codebases.

  • Technologies available:  Swift (iOS), Kotlin (Android), Objective-C, Java
  • Native apps: Uber, Pinterest, WhatsApp (These apps all make use of extensive functionalities available on the device, hence the need to go with a native tech stack)

Cross Platform

Cross-platform apps can be deployed or published on multiple platforms using a single codebase, instead of having to deploy multiple native apps, one for each platform.

A cross-platform tech stack allows you to reuse potentially up to 80% of an app’s code across multiple platforms. This is perhaps the biggest advantage of opting for a cross-platform stack. Apart from this, there is also the benefit of being able to quickly render UI elements using native controls, very much as a native app would.

However, the very characteristics which make cross-platform tech stacks attractive can also be their downfall, depending on the envisaged use case. The fact that not all code can be shared necessitates extra, often tedious, platform-specific development. Further, a cross-platform stack may not be as fast as a native stack, and the level to which it can interact with the device is largely dependent on the framework.

  • Technologies available:  React Native, Flutter, Xamarin, NativeScript
  • Cross-platform apps: Uber Eats, FB, CitiBank, Instagram

Hybrid Apps

A hybrid app is created as a single app for use on multiple platforms such as Android, iOS, and Windows. From a technical standpoint, hybrid apps are a combination of native apps and web apps. As a result, a single hybrid app will work seamlessly on any of these operating systems.

Hybrid tech stacks allow for a significant degree of code sharing between different platforms. In a boon for developers, hybrid stacks also allow for the core part of an app to be built using web technologies, paving the way for shorter development times. The web app underpinnings of hybrid tech stacks also mean that the core codebase of a hybrid web app can always be updated via a ‘hot code push’, bypassing the formal App Store and Play Store channels.

Hybrid tech stacks offer lower performance than native or cross-platform stacks, since all in-app interaction is routed through an embedded web browser control and UI elements are rendered as HTML components instead of native elements. They also suffer from a design limitation whereby not all code can be shared between platforms, so a certain degree of native code development becomes mandatory. A good example of how this can go wrong comes from Facebook, which in 2012 disastrously bet on an HTML5 stack for its apps. Today, all of Facebook’s apps are built on React Native, a cross-platform tech stack.

  • Technologies available: Ionic, Mobile Angular UI, Bootstrap
  • Hybrid apps: Diesel, MarketWatch, Mcdonald’s, Sworkit

So Which Tech Stack Is The Best?

There’s no definitive answer to this question, and the decision would always depend on factors such as current requirements, the feature roadmap, budget, etc. as we mentioned earlier. But, what is important is to choose the right stack for the job. A misstep here can often be the difference between success and failure for your app.


Easy API Testing With Postman


Understanding Postman, the app that has become the darling of code testers around the world

Image credits: meshworld.in

Any given app in this day and age may employ a number of different APIs from various services such as Google Analytics, Salesforce CRM, Paypal, Shopify etc. This complex combination of multiple APIs which interact seamlessly with each other through a common application codebase is what has freed us from the need to be bound to our desks. Thanks to APIs, people today can choose to even run entire businesses on the move.

However, while there is no doubt that the task of imparting various functionalities into an app has been made easier thanks to APIs, these very APIs also complicate the job of a Quality Assurance engineer in many ways, the most obvious being that every time the core codebase is modified for any reason, the APIs must also be tested for compatibility with the new code. Naturally, testing several APIs over and over again is quickly going to get tedious.

This is where Postman comes in, to help with the tedious task of API testing. API testing involves testing a collection of APIs and checking that they meet expectations for functionality, reliability, performance, and security, and that they return the correct responses.

Postman is an API client which can be used to develop, test, share and document APIs and is currently one of the most popular tools used in API testing. Its features allow code testers to speed up their workflow while reaping the benefits of automation as much as possible. Postman’s sleek user interface is a boon to testers, who don’t have to go through the hassle of writing lots of code to test the functionality of an API.

Postman also has the following features on offer:

Accessibility

Once installed, Postman allows users to create an account which then syncs their files to the cloud. Once complete, users can access their files from any computer which has the Postman application installed.

In addition, it is also possible for users to share collections of testing requests via a unique URL or even by generating a JSON file.

Workspaces & Collections

Postman’s interface is built around workspaces and collections. Think of a workspace as an isolated container within which a tester can store, group, and manage all their code test requests. Workspaces are further divided into Personal and Team workspaces. As their names indicate, personal workspaces are visible only to a user, while team workspaces can be made available to a team. Each team gets one common workspace by default, with the option to create an unlimited number of new workspaces.

Collections are simply a collection of pre-built requests that can be organized into folders, and they can be easily exported and shared with others.

Ability to create Environments

In Postman, environments allow users to run requests and collections against different data sets. For example, users can create one environment for development, one for testing, and another for production. In such a scenario, authentication parameters such as usernames and passwords can change from environment to environment. Postman handles this by allowing users to create, say, a staging environment and assign a staging URL, username, and password. These variables can then be passed between requests and tests, allowing users to easily switch between environments.

Parameterization

Postman allows users to parameterize requests as variables, thus granting users the ability to store frequently used parameters in test requests and scripts. Postman supports 5 different types of variable scopes namely Global, Collection, Environment, Data, and Local.

Scopes can be thought of as different “buckets” in which values reside. If a variable exists in multiple “buckets”, the scope with the higher priority wins and the variable takes its value from there. Postman resolves variables using a hierarchy that progresses from the broadest scope (Global) to the narrowest (Local).

Creation of Tests

It is also possible for users to create custom tests which can be added to each API call. For instance, a 200 OK request test can be created to check if an API successfully returns a given request.

Postman also contains a very helpful Snippets section which contains a set of pre-written tests which can be deployed with a single click.

Testing each field of a JSON RESTful service manually every time there is a change can be very time-consuming, so the best approach is to validate the structure using a schema. Given below are the steps to follow to validate a schema using Postman.

Step 1: Assuming that we already have a JSON structure, we will start with schema generation. We will use https://jsonschema.net/#/ to generate the schema: copy and paste the JSON document into the JSON Instance field, and it will generate the schema for us.

Step 2: After generating the schema, we will go to the Tests tab in Postman and declare a variable schema, pasting the generated schema as follows:

var schema = {
    // <Insert Schema here>
};

Step 3: After that, we will write the test as follows to perform the validation:

pm.test('Schema is valid', function () {
    pm.expect(tv4.validate(pm.response.json(), schema)).to.be.true;
});


Automation Testing

Postman has a complementary command-line interface known as Newman which can be installed separately. Newman can then be used to run tests for multiple iterations.

Consider a situation where there is a need to run a selected collection of written tests automatically, without opening Postman and manually triggering those tests. This is where Newman comes in, thanks to its ability to work with any program that can trigger a command, such as Jenkins or Azure DevOps. For example, with the help of Newman our tests can be integrated with CI: if any code change is pushed, CI will run the Postman collections, which in turn helps developers obtain quick feedback on how their APIs perform after code changes.

Postman can be used to automate many types of tests including unit tests, functional tests, and integration tests, thus helping to reduce the amount of human error involved.

Newman is also special in that it allows users to deploy collections on computers which may not be running Postman. Collections can be fetched through the CLI of a host computer, by running a few commands.

For the uninitiated, here’s a quick tutorial on how to install Newman:

Note: Installing Newman requires the prior installation of Node.js as well as NPM (Node Package Manager).

  1. Open the command prompt (Terminal on macOS).
  2. Type npm install -g newman
    Newman is now installed on your system.
  3. Export the collection you want to run as a JSON file (for instance, collectionFile.json).
  4. On the command prompt, go to the location of the collection JSON file and run the command
    newman run collectionFile.json
  5. If you want to run the tests with environment variables, export the environment as a JSON file (for instance, environmentFile.json).
  6. You can then run the tests with the environment variables using the command
    newman run collectionFile.json -e environmentFile.json

Following are some of the other options that can be used to customize the tests:

-d, --data [file] Specify a data file to use either json or csv

-g, --global [file] Specify a Postman globals file as JSON [file]

-n, --iteration-count [number] Define the number of iterations to run

--delay-request [number] Specify a delay (in ms) between requests [number]

--timeout-request [number] Specify a request timeout (in ms) for a request

--bail Stops the runner when a test case fails

Easier Debugging

The consoles contained within Postman can be used to debug any errors that may arise. Postman contains two debugging tools. One is the Postman console itself, which records any errors that take place while testing an API. The second is the DevTools console, which helps debug any errors occurring in the Postman app itself. For instance, if Postman crashes while executing a test, the DevTools console is where you would look to diagnose the problem.

Support for Continuous Integration

Through its open-source command-line companion Newman, Postman works with a wide variety of Continuous Integration (CI) tools such as Jenkins and Bamboo. This helps ensure that development practices are kept consistent across various teams.

With so many features on offer to make life easier for code testers, it is not surprising that in the world of code testing, Postman is heralded as the best thing since sliced bread.


Calcey organizes Colombo React Native Meetup


Calcey organized the very first Colombo React Native Meetup last week. Premuditha Perera, one of Calcey’s Software Architects, conducted the session (see his presentation here). As this was the first session, the focus was on covering the core principles of React, laying the foundation for future sessions that will concentrate on hands-on coding and feature implementation. Future sessions will provide deep dives into both React and React Native, enabling our community to develop both web and mobile apps using React-based technologies.

We had an excellent turnout at the meetup, with a full house of over 250 participants. Experienced developers working in the industry and many university students were among the audience. We hope to see the same level of enthusiasm and attendance at our next meetup!

If you haven’t done so already, join our meetup.com community to ensure that you are notified of our next meetup.

Startups, Trends

Open Banking For Dummies


Everything you need to know about the newest buzzword everyone in the banking industry is talking about.

Banks, by nature, are extremely protective of the information they hold within their ageing filing cabinets, and for obvious reasons: money is a touchy subject, and people prefer to keep details about their finances private. However, with the rise of the data economy, everyone from retail banks to central banks is realising that because practically every bank runs the same business model, a huge amount of data is unwittingly duplicated. If banks simply started sharing such data with each other, wouldn’t banking services become far less cumbersome? And with easier banking, wouldn’t life be much better?

What Is Open Banking?

In layman’s terms, open banking is about sharing financial information securely, in a standardised format, so that companies can deliver services more efficiently. Under current practices, customers and merchants maintain separate relationships with different financial institutions, and third parties that want a consolidated view often resort to screen scraping: a third-party company creates a mirrored login page that looks and feels like a bank’s or credit card issuer’s online login page. The customer enters their login details, passwords, and additional security answers such as their pet’s name, which the third party then uses to log in as the customer. Once logged in, screen-scraping tools copy the available data to an external database for use outside the financial institution. This is obviously dangerous, and leaves the customer extremely vulnerable to man-in-the-middle attacks.

Open Banking, by contrast, offers the customer a more consolidated experience by allowing banks to expose their functionality via APIs, subject to the customer’s explicit consent and to strict information security requirements imposed by the UK’s Financial Conduct Authority.
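The contrast between credential sharing and consent-based APIs can be sketched in code. This is a purely illustrative example: the endpoint, host, and function below are hypothetical, not any real bank's Open Banking API.

```javascript
// Sketch of the consent-based model: the third party presents a
// bank-issued access token and never sees the customer's password.
// The host and paths are invented for illustration only.
function buildAccountRequest(accessToken, accountId) {
  if (!accessToken) {
    // No consent token, no request: there is no fallback to
    // impersonating the customer with their credentials.
    throw new Error('Customer consent (access token) is required');
  }
  return {
    method: 'GET',
    url: `https://api.examplebank.com/open-banking/accounts/${accountId}/transactions`,
    headers: {
      Authorization: `Bearer ${accessToken}`, // issued during the customer's consent flow
      Accept: 'application/json',
    },
  };
}

const req = buildAccountRequest('token-from-consent-flow', 'acc-123');
console.log(req.headers.Authorization); // "Bearer token-from-consent-flow"
```

The point of the sketch is that the token, not the password, is the only credential that ever leaves the bank's own login flow.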

The concept of Open Banking has its roots in the United Kingdom. In 2016, the Competition and Markets Authority ordered the nine biggest UK banks to allow licensed startups direct access to their data, right down to the level of current account transactions. Again, account holders must approve any exchange.

When talking about Open Banking, you will often hear ‘PSD2’ being referred to. PSD2 is the European version of Open Banking, and refers to the second Payments Services Directive which modernises European payment regulations, thereby enabling consumers and small businesses to have greater control over their data. There is just one small difference between Open Banking and PSD2. Whilst PSD2 requires banks to open up their data to third parties, Open Banking dictates that they do so in a standard format.

Open Banking is now being spoken about everywhere / Credits: Business Insider

How Will Open Banking benefit customers?

The various ways in which open banking will be used to create new services is anyone’s guess, but there are three distinct areas in which Open Banking is starting to make waves.

Money management

At the moment, customers who maintain accounts with two different banks have no choice but to look at them separately, because the banks’ systems are resolutely incompatible. Open Banking will allow customers to manage their money from within a single app, which should make things much easier.

Banks and startups are already sensing an opportunity in this space. Dutch bank ING has an app called Yolt, while third party app Money Dashboard provides a similar service in the UK.

The Yolt app / Credits: ING Bank

Lending

When a customer takes out a loan, they are sometimes required to provide details of their finances to prove that they are ‘credit-worthy’. Open Banking will allow customers to provide such information online – for instance, by giving a lender one-off access to 12 months of income and spending history.

There are services which already do this, but using them means handing over your login details – which is neither as secure nor as seamless. Open Banking data will also be more accurate, which should help people with what are known as “thin” credit files – for instance, customers who haven’t worked or haven’t been in the country for long.

Payments

The current banking payment infrastructure used around the globe is very much a multi-layered one. For instance, when a purchase is made on Amazon, the retailer contacts an “acquirer”, such as WorldPay or Global Payments, which in turn contacts Visa or MasterCard to deduct the payment from the customer’s account. Cue much fumbling around with cards and passwords.

By opening up banks’ data, Open Banking makes it possible to pay directly from a bank account – which should be both quicker and (since the various middlemen each charge for their service) cheaper. The bank authenticates the purchase without involving other organisations.

Open Banking will give rise to Banking-as-a-Service (BaaS) / Credits: Bankable

Is it safe?

From a technical point of view, Open Banking is at least as safe as online banking. APIs – the technology used to move the data – are trusted and the law requires account providers to use strong customer authentication, a procedure which allows the payment service provider to verify the identity of both the user and the service.

The key thing to remember is that anyone using an Open Banking service will not need to share their banking login or password with anyone but the bank. This is actually an improvement on existing services, which sometimes require this as a workaround for existing incompatibility.

All in all, Open Banking has the potential to upend the way we bank, disrupting the sector in the same way as media or retail. It could, for instance, enable digital-only banks that manage money automatically via intelligent software. Banking-as-a-Service (BaaS) too, will go mainstream, bringing to life a whole ecosystem of services running on top of an Open Banking layer. Personal finance, now an arcane subject, will become transparent and easy for everyone. Whether this is a dystopian or utopian future depends on one’s perspective – either way, it just appears to be more likely now.

Startups, Trends

What Is Spooking Casper?

Credits: Travel Wire News

Casper, the Direct-to-Consumer (DTC) mattress company that bills itself as ‘The Sleep Company’, has filed to go public. Founded in 2014, Casper sells relatively good-quality mattresses online. Thanks to savvy marketing and a 100-day risk-free return policy, Casper thrived, going on to become the best-known DTC mattress company in the US. At first glance, this all looks good, and it is on the back of this success that Casper is now trying to raise funds from the public markets.

So What’s The Problem?

While things may look rosy on the surface, underneath Casper’s hood is a can of worms. This has prompted a slew of commentators, including Forbes magazine, to publish scathing criticisms of Casper’s business model. What are these criticisms, and most importantly, what can other startups learn from Casper’s mistakes? These are the questions we will try to find answers to in this blog post.

Casper has a poor competitive advantage

One of the most often repeated truths in business circles is that a business needs a competitive advantage. In simple terms, a competitive advantage is what allows a firm to perform at a higher level compared to its competitors in the same industry or market. That is why maintaining a competitive advantage becomes important if a firm intends to become profitable and reward its investors.

But for a firm operating in the DTC sector, it is very hard to own a competitive advantage. Your competitors can copy your marketing, your physical product distribution is mostly outsourced, and for existing categories like mattresses, price comparison is easy.

Casper’s initial success spawned hundreds of competitors (literally), who swiftly copied Casper without much trouble. Fast Company estimates that nearly 178 bed-in-a-box companies have followed Casper’s path.

Some of Casper’s competitors /Credits: CNBC

“The products that you’re buying — there are many similarities and only some minor differences,” said Seth Basham, an analyst at Wedbush Securities who covers the mattress industry. Profit is hard to come by because the ease of forming an online mattress company makes the market competitive, according to Basham. “Barriers to entry are low, but barriers to profitability are high,” he said. “It doesn’t take that much to design a mattress, a marketing campaign, put up a website, and have one of these big companies like Carpenter do the fulfillment for you,” he said, referring to one of the key mattress manufacturing companies.

Casper has bad unit economics

If someone were to pore over Casper’s S-1 filing with the SEC, one thing becomes absolutely clear: Casper has mastered marketing. It has spent a significant amount of capital on promotions such as ‘napmobiles’, a cruise around Manhattan, and a hotline that helped people fall asleep.

All this spending would be okay…if it made sense.

Prof. Scott Galloway of NYU writes that, by his calculations, Casper spends USD 480 on marketing for every mattress it sells, going on to make a loss of USD 349 per mattress. And if Casper chooses to grow bigger (which it will have to, in order to satisfy investors), it will have to continue to lose money on every mattress. Basically, Casper’s unit economics don’t look great. Worse yet, it’s hard to imagine they will get better.

Instead of spending money on marketing, Casper can send every customer $300 and still be profitable /Credits: Scott Galloway/No Mercy No Malice

Why?

Selling a durable product tied to housing makes you vulnerable to the economic cycle, and the long replacement cycle of mattresses makes it hard to build brand loyalty. Since mattress replacement cycles stretch into years, Casper has to bombard each customer with marketing for 5 or 10 years till the customer decides to buy a new mattress. This is expensive, and it is not sensible to assume that one can just blast consumers with marketing emails and hope they click “buy” before they click “unsubscribe.”

This is not just a hypothesis. Casper mentions this in its S-1, but a sharp eye is needed to decode this hidden message. Something which Byrne Smith of MAKER clearly has.

From Casper’s S-1:
“From Casper’s beginning through September 30, 2019, we have seen more than 16% of customers who have purchased at least once through our direct-to-consumer channel return to purchase another product. Importantly, 14% of our customers returned within a year of their original purchase.”

Byrne opines that a 16% repurchase rate and a 14% first-year repurchase rate imply that only about 2% of customers buy something new after a year. What this means is that since mattresses have about a 10-year replacement cycle, Casper loses the vast majority of its ongoing customer relationships before the next mattress purchase.
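The arithmetic behind that roughly 2% figure is simple enough to check:

```javascript
// Back-of-the-envelope reading of the S-1 figures quoted above.
const lifetimeRepurchaseRate = 0.16;  // customers who ever bought a second product
const firstYearRepurchaseRate = 0.14; // customers who bought again within 12 months

// Share of customers who come back only after their first year:
const afterYearOne = lifetimeRepurchaseRate - firstYearRepurchaseRate;
console.log((afterYearOne * 100).toFixed(0) + '%'); // "2%"
```

With a roughly 10-year replacement cycle, that 2% is the pool of customers still engaged when the next mattress purchase comes around.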

Economics, one. Casper, zero.

Growth Hacks can become poison too (if you are not careful)

When it launched, Casper’s claim to fame was its 100-day risk-free return option. But returning mattresses is not like returning shoes or dresses. Casper provides information about its return rates in its S-1, and the trend is far from inspiring: returns were 15.4% of gross sales in 2017, 18.4% in 2018, and 20.4% in the first three quarters of 2019. When you’re shipping a 90-pound package to the customer, and they’re shipping it back, the costs add up quickly.

Casper’s return policy is a drain on working capital /Credits: Casper

Also, under U.S. law, companies aren’t allowed to sell used mattresses as new. So instead of shipping returned mattresses halfway across the country to be refurbished, Casper donates them to charities. This might look like a smart business decision at the outset, but consider this: Casper’s free return policy has been replicated by everyone, and if all 178 bed-in-a-box companies resort to donating mattresses, the capacity to absorb donated mattresses will dry up pretty quickly. And while a donation may earn a small tax benefit under the U.S. tax code, the cost of manufacturing the mattress still eats into profits. Therein lies the fault in Casper’s key growth hack – the very thing that got Casper noticed has become a ticking financial time bomb.

To reiterate, Casper is not a bad company. It is just a good company stuck in a bad business, as a result of which its entire business model stands on shaky ground. While it remains to be seen how Casper will claw its way out of this predicament, startup founders everywhere would do well to learn from Casper’s missteps.

How to

A Picture Is Worth A 1000 Words, But What If Nobody Can See It?


A trick to speed up the loading of cached images on React Native

Be it an app or a website, speed is crucial. Content that takes too long to load can turn away users, putting to waste hundreds, if not thousands, of man-hours spent painstakingly writing and reviewing code.

Quite recently, some of our code ninjas got together and built a nifty little component that can be made use of by anyone coding in a React Native environment. We have now made the code freely accessible on GitHub for you to experiment with.

Note: This library supports React Native 0.60 or above, which means iOS 9.0 and Android 4.1 (API 16) or newer.

Our module in action

What Does This Module Do?

In a nutshell, this module is a simple, lightweight CachedImage component that is adept at handling how an image will load based on network connectivity. It can deliver significant performance improvement, particularly when called upon to fetch high-resolution images within an app.

This module can be used to:

  • Manage how images are rendered within an app while reducing a user’s mobile data usage
  • Enable offline support for online images
  • Manage an application cache within a predefined cache storage limit, or reduce/limit how much of a device’s internal storage an app can use
  • Reduce loading times of static online images by switching between low-resolution and high-resolution images

How Does This Module Work?

The module itself is built on React 16’s Context API, which allows a cache manager to be initialised when the application starts up. A developer can define a cache limit for the whole application (500 MB, for instance), and the module will cache images up to that limit without the need for concurrent caching processes. The cache manager will also scrap any data which exceeds the defined storage limit.
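A size-capped cache manager of the kind described above can be sketched as follows. This is a minimal illustration under assumptions of our own (the class name and oldest-first eviction order are not taken from the module), not the module's actual implementation.

```javascript
// Minimal sketch of a cache manager that enforces a byte limit by
// scrapping the oldest entries first. A Map preserves insertion
// order, which is what makes oldest-first eviction trivial here.
class CacheManager {
  constructor(limitBytes) {
    this.limitBytes = limitBytes;
    this.entries = new Map(); // url -> size in bytes
    this.usedBytes = 0;
  }

  add(url, sizeBytes) {
    // Evict oldest entries until the new file fits within the limit.
    while (this.usedBytes + sizeBytes > this.limitBytes && this.entries.size > 0) {
      const [oldestUrl, oldestSize] = this.entries.entries().next().value;
      this.entries.delete(oldestUrl);
      this.usedBytes -= oldestSize;
    }
    this.entries.set(url, sizeBytes);
    this.usedBytes += sizeBytes;
  }

  has(url) {
    return this.entries.has(url);
  }
}

const cache = new CacheManager(100); // tiny limit, for illustration
cache.add('a.jpg', 60);
cache.add('b.jpg', 30);
cache.add('c.jpg', 50); // exceeds the limit, so a.jpg is evicted
console.log(cache.has('a.jpg')); // false
console.log(cache.has('c.jpg')); // true
```

Note how a low limit relative to file sizes causes aggressive eviction, which is exactly why the developers warn below against setting the cache limit too low.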

The freedom to define a cache limit opens up the opportunity for developers to deploy our module on lower-end devices, which often possess relatively low internal memory capacities.

It is important to note that our module favors a client-side caching approach. Images are downloaded and stored using a unique file-naming pattern derived from their original URLs. Whenever the application next requests the same URL, the caching module steps in to serve up the relevant cached image.

Since the whole point of a caching module is to reduce demands on device and network resources, our module makes use of the react-native-fetch-blob module to handle native (Android/iOS) file system access. Not only does this keep our module lightweight, it also reduces the need for excessive boilerplate code.

A Special Note From Our Developers

  1. This module provides a simple solution for handling a size-limited cache, but only for images. In practice, requirements vary from application to application, so feel free to use the architecture/structure of this module to build customizable, scalable, and configurable caching modules that support other file types as well.
  2. Currently, we have not implemented a validation method to prevent the scrapping of cache data that is still in use. Because of this, defining a low cache limit could lead to corrupted images, so use your judgment when deciding on the limit.

That’s about it. Do play around with this little module, and let us know what you think!

Cover image credits: salonlfc.com/