Opinion, Trends

Lessons for startups from Zoom

Calcey

Zoom, the video conferencing startup which managed to beat Cisco’s WebEx at its own game, recently went public. Leaving the IPO aside, there was a lot of media attention on Zoom’s history as a company, since it very much broke the stereotype of the ‘hot Silicon Valley startup’.

Before Zoom arrived on the scene, many thought that the problem of video conferencing had been solved thanks to Cisco’s WebEx and Skype. But that’s not what Eric Yuan thought. A founding engineer on the WebEx team, Eric was passionate about building a video conferencing solution that just worked. He tried to implement his ideas at WebEx, but his bosses didn’t want to listen, and Eric left WebEx to found Zoom.

Eric Yuan, founder of Zoom / Source: Thrive Global

Having watched Zoom's growth from afar, here is what we think other startups can learn from it:

Be maniacally focused on the product

This story about how focused Zoom is on improving its product comes directly from Sequoia Capital, one of Zoom's investors. But before it became an investor, Sequoia was a paying customer of Zoom.

“When Sequoia first reached out to Eric in 2014, he told us he admired our work but wasn’t looking for funding. Then he asked for an intro to our IT department, to see if they’d be interested in using Zoom. He cared more about our business than he did about our money — because he was, as he is today, singularly focused on his mission of making people happy.”

-Carl Eschenbach & Pat Grady, Sequoia Capital

Many early-stage startups suffer from a tendency to focus on securing funding instead of focusing on their product and acquiring paying customers. Zoom did the opposite: it focused on acquiring paying customers, which indirectly gave it more leverage when negotiating with investors later.

To see how focused Zoom is on making its product good, consider this: in a recent feature on Zoom, Forbes writer Alex Konrad noted that Zoom could operate well even on a connection with 40% packet loss, a boon for those on spotty or slow connections.

Zoom’s platform / Source: Company S-1

Build sustainable revenue streams

In Silicon Valley, there is a tendency to chase revenue growth fuelled by deep discounts and/or by running at a loss. A ready example can be found in the meal delivery startup sector, where profitability remains elusive yet discounts remain plentiful. Essentially, most startups in the sector are hemorrhaging money to make a little money, or none at all. Worse yet, some may not see a cent in profit for a very, very long time, if ever. Not so with Zoom.

Consider the following, taken from the second page of Zoom’s S-1 document:

“Our revenue was $60.8 million, $151.5 million and $330.5 million for the fiscal years ended January 31, 2017, 2018 and 2019, respectively, representing annual revenue growth of 149% and 118% in fiscal 2018 and fiscal 2019, respectively.”

But the next section makes things even more interesting:

“We had a net loss of $0.0 million and $3.8 million for the fiscal years ended January 31, 2017, and 2018, respectively, and net income of $7.6 million for the fiscal year ended January 31, 2019.”

Simply put, Zoom was already a profitable company when it sought to list its shares, a rare achievement in the startup world. For comparison, look at the finances of some other startups which went public in recent times:

  • Pinterest, which filed on March 22nd, the same day as Zoom, made $755M in revenue in fiscal year 2018, but a net loss of $63M.
  • PagerDuty, which filed on March 16th, made $79.6M in revenue in fiscal year 2018, but a net loss of $38.1M.
  • Lyft, which filed on March 1st, made $2.2B in revenue in fiscal year 2018, but a net loss of $911.3M.

In the technology world, running at a loss in order to get a shot at an IPO is widely considered a necessary evil. But Zoom was comfortably in the black, which allowed the company to list at a valuation of USD 8.98 billion.

Zoom’s financials remain healthy / Source: Forbes

Your users can be your best evangelists

Zoom credits its growth to its bottom-up user acquisition cycle, which conceptually shares a few similarities with Dropbox's famous referral system. With Zoom, a user can sign up and invite others to a meeting for free; when the invitees realize how easy to use and great the product is, they sign up too, and eventually pay for more features.

Zoom's S-1 states that, among other things, the company had 344 customers who each generated more than $100K in annual revenue, up 141% YoY. This customer segment accounted for 30% of Zoom's revenue in FY'19, and 55% of those 344 customers started with at least one free host prior to subscribing. As more and more customers invite people to meetings held on Zoom, those numbers are only going to rise. Consider this quote from a Sequoia spokesperson:

“We had been watching the video conferencing space for many years because we were convinced that it was a huge market primed for innovation. But we couldn’t find a product that customers loved to use. That changed when our portfolio companies started raving to us about Zoom.”

Execution matters

When Eric Yuan decided to build Zoom, the problem of video conferencing was, for all intents and purposes, considered solved. There were many incumbents, ranging from WebEx to Skype and Google Hangouts, but they were full of problems: some were built for an age where video conferencing was done in front of a computer, some lacked features such as file sharing from mobile, and so on. In trying to build a better video conferencing product that truly lived on the cloud and scaled simply and well, Zoom did not try to reinvent the wheel. Instead, it set out to make a motorized car while the rest of the world was content to ride in horse-drawn carriages. Unsurprisingly, Zoom is a company favoured by Social Capital CEO Chamath Palihapitiya, who ranks it on the same level as Slack, another successful tech startup (in which Palihapitiya is an investor).

If you’re building a startup yourself, we highly recommend that you keep an eye on Eric and his team. In the meantime, if you are a user of Zoom, what was your experience with the product like? Do you think Zoom will become the next Slack? Let us know in the comments!


Trends

Reactive Programming: How Did We Get Here?

Calcey

In a world that continues to evolve rapidly, the way we build software too is in a constant state of flux. The heavy architectures of yesterday have given way to new, lighter, more agile architectures such as reactive programming.

What Is Reactive Programming?

At its core, reactive programming is a way of defining the communication and codependent behavior of program components so that there is minimal interdependence between them.

In simple terms, this is achieved by each component exposing data about changes happening within it, in a format and medium accessible to others, while allowing other components to act upon this data if it is of any relevance to them.

In today's always-on, completely mobile world, users expect responses in milliseconds, along with 100% uptime. Only systems that are responsive, resilient, elastic, and message-driven can deliver this kind of performance, which is why they are termed 'reactive systems'. And in order to build reactive systems, we must employ 'reactive programming'.

How Did Reactive Programming Come To Be?

As a technique, reactive programming has been in existence since the seventies (and perhaps even before) and is not something that rose to prominence recently. For instance, when the Graphical User Interface (GUI) was first introduced, reactive programming techniques could have been used to reflect changes in the mouse pointer’s position on the screen.

Examples of Reactive Programming At Work

In general, reactive programming can be seen in action in the following instances:

  • External Service Calls
    Under reactive techniques, a developer will be able to optimize the execution of any external HTTP or REST calls, thus benefiting from the promise of ‘composability’ offered by reactive programming.
  • Highly Concurrent Message Consumers
    Reactive programming can also be used in situations where messages need to be processed between multiple systems, a need that frequently arises in the enterprise space. The patterns of reactive programming are a perfect fit for message processing since events usually translate well into a message.
  • Spreadsheets
    Often the favourite tool (or bane) of many cubicle dwellers, Excel is another perfect example of reactive programming at play. Think of a scenario where you build a model with interdependencies between several cells: a group of cells may be linked to one cell, or even to another spreadsheet. Making a change in the precedent cell automatically forces changes in the dependent cells. This is, in effect, reactive programming at work (see the sketch below).
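
To make the spreadsheet analogy concrete, here is a minimal sketch in TypeScript using RxJS (our choice of library for illustration; the article prescribes none). Two 'cells' are modelled as streams, and a dependent 'cell' recomputes whenever either one changes:

import { BehaviorSubject, combineLatest } from 'rxjs';
import { map } from 'rxjs/operators';

// Two independent "cells" that hold values and emit on every change
const a = new BehaviorSubject(2);
const b = new BehaviorSubject(3);

// A dependent "cell", recomputed whenever a or b changes,
// much like a spreadsheet formula =A1+B1
const sum = combineLatest([a, b]).pipe(map(([x, y]) => x + y));

sum.subscribe(v => console.log('sum =', v)); // prints: sum = 5
a.next(10);                                  // prints: sum = 13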

When To Use Reactive Programming

In practice, programmers use both reactive and traditional techniques. As such, there is no definitive guide on when to use reactive programming and when not to. It’s more of an intuitive understanding, which a developer will gain over time through experience and countless hours of coding.

As a rule of thumb, if an application’s architecture is simple and straightforward, a developer may be better served by sticking to traditional code structuring methods. Breaking this rule may leave you with an over-engineered product on your hands, which is undesirable.

But, as with all things, proceed with caution. If you do end up choosing a reactive technique over an imperative or other approach, you will essentially be accepting a much higher level of code complexity in return for more flexibility and robustness in the components of your program. Therefore, it is up to you to weigh the costs against the benefits.

The reactive landscape has evolved so much that today, we have Reactive UIs, ReactiveX APIs, and even Reactive microservices. Overall, these developments point towards a very bright future for reactive programming as a practice.

That wraps up our thoughts on the evolution of reactive programming.

What would you like to see us discuss next? Let us know in the comments below.


Announcements, Life at Calcey

Empowering the Future Generation with Coding

Calcey

Education is the passport to the future; for tomorrow belongs to those who prepare for it today

Malcolm X

Keeping this quote in mind, Calcey recently took the initiative to empower disadvantaged youth. Our goal was to create and support a full-time training program that would give young people who had completed their A/Ls, but had not been selected to local universities, a foundation in IT and software development, preparing them to take up internships in software companies within six to eight months.

A call for applications was sent out, and participants were chosen through a shortlisting process. It was encouraging to see a significant number of female applicants. The program curriculum was designed by Calcey, and YMBA Maharagama provided a venue for conducting classes. Calcey interviewed and hired a full-time instructor, and the program kicked off on the 27th of June, 2019. It is now underway, with sessions also being delivered by Calcey team members who have industry experience and expertise in the technology and subject areas being taught.

We’ve been thrilled with the feedback we’ve got so far. It’s great to see the students enjoying the curriculum we designed and wonderful to see their enthusiasm to learn. Our team members facilitating the program are energized by the thought of supporting these youth to become self-sufficient and acquire skills in a growing industry that can take them anywhere in the world.

Calcey has conducted a similar program in Rambuka before, and its success led to many requests for another batch to be given the same opportunity. This time, we chose to locate the program in Maharagama, so that it's easier on our team members who are volunteering their time for this worthy cause.

Cheers to more sessions to come.

How to

React Native: Advanced Mobile Application Development

Calcey

New to React Native?

The technically inclined amongst us may already know the story of React Native. Developed by Facebook, it is essentially a set of libraries that lets JavaScript code communicate with the corresponding native platform APIs. This is where the 'Native' tag comes in. By design, React Native can easily access features native to the device it runs on, be it a phone or tablet running Android, iOS, or even Windows, connecting native threads and JavaScript threads through its event bridge.

React Native uses a mix of JavaScript and XML-like syntax, known as JSX, to render the user interface as a function of the application's current state. This makes it much more interesting to build component-rich UIs with principles like stateful components, a layout engine, a virtual DOM, etc.

Let’s go deep.

Here at Calcey, React Native is one of our favorite tools to work with. Along the way, we've picked up a few tricks for scalable React Native app development, which we'll be sharing today.

Write reusable components (But don't overdo it)

React recommends creating reusable components as much as you can; this makes maintenance and debugging considerably easier. However, as any experienced coder knows, defining components with too much specificity can render them useless, while defining them too loosely will complicate things.

Take the example of building a screen for an app. A screen is essentially a group of components. Intuitively, it makes sense to write common UI elements such as buttons, lists, etc. as reusable blocks of code. This will not only save time but also make your code cleaner.
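
For instance, here is a minimal sketch of a reusable button (the PrimaryButton name and styling are our own illustration, not from the original post):

import React from 'react';
import { TouchableOpacity, Text, StyleSheet } from 'react-native';

// The label and handler vary per screen; the look stays consistent app-wide
type Props = {
  label: string;
  onPress: () => void;
};

export const PrimaryButton = ({ label, onPress }: Props) => (
  <TouchableOpacity style={styles.button} onPress={onPress}>
    <Text style={styles.label}>{label}</Text>
  </TouchableOpacity>
);

const styles = StyleSheet.create({
  button: { backgroundColor: '#2e86de', padding: 12, borderRadius: 6 },
  label: { color: '#ffffff', textAlign: 'center' },
});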

Safe coding

Safety here means how far the platform goes to prevent the developer from making mistakes when writing applications. Because JavaScript gives developers the freedom to pick a coding style based on personal preference, code safety becomes an important factor, especially when building scalable apps.

React Native supports both Flow and TypeScript to guard against such mistakes, should the developer decide to use them. Flow grants us the ability to easily add static type checking to our JavaScript; it helps prevent bugs and allows for better code documentation. Meanwhile, TypeScript provides great tooling and language services for autocompletion, code navigation, and refactoring. The ecosystem you work in usually has a major influence on what you choose, as does your previous exposure to static type systems.

At Calcey, we use these tools to make sure developers benefit from them when it comes to code readability and coding standards.
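
As a minimal illustration (the function and types are our own, not from the post), a TypeScript annotation turns a silent runtime bug into a compile-time error:

type User = { id: number; name: string };

// The parameter type documents and enforces what the function expects
function greet(user: User): string {
  return `Hello, ${user.name}!`;
}

greet({ id: 1, name: 'Amal' }); // OK
// greet({ id: 1 });            // compile-time error: property 'name' is missing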

Extract, extract, extract

React Native projects tend to include a large number of common elements such as styles, images, and global functions (functions that format dates and times, make requests to a server, etc.). At Calcey, we generally encourage our developers to keep such elements separate from the component code. This makes it easier to share elements from anywhere within the app, while also making a given app’s codebase cleaner, and easier to maintain and scale.

Here’s an example of a color.js file coded by one of our developers:

export function hexToRgbA(hex: string, opacity: number) {
  if (/^#([A-Fa-f0-9]{3}){1,2}$/.test(hex)) {
    let c = hex.substring(1).split('');
    if (c.length === 3) {
      // Expand the shorthand form (#abc becomes #aabbcc)
      c = [c[0], c[0], c[1], c[1], c[2], c[2]];
    }
    // Parse the six hex digits into a single integer
    const n = parseInt(c.join(''), 16);
    return `rgba(${[(n >> 16) & 255, (n >> 8) & 255, n & 255].join(',')}, ${opacity})`;
  }
  throw new Error('Bad Hex');
}

Store management

To most React Native developers, Redux is an absolute necessity. But at Calcey, we believe that Redux is not a necessity for the most part. The way we see it, bringing Redux into the picture would be akin to using a hammer to crack open an egg.

In our experience, Redux has proven necessary only for the most complex of apps, where immense scalability is required. To understand this better, consider why the pattern was developed in the first place. As Facebook grew to become what was essentially the biggest web app in the world, it had to contend with the headache of not being able to show the correct number of notifications in the header bar. At the time, it was just difficult for Facebook (or any other web app) to recognize changes in one part of the app (e.g. when you read a comment on a post) and reflect that change in another area (i.e. reduce the number of unread notifications by one). Facebook wasn't happy with forcing a web page refresh to solve the problem, so it built the Flux architecture as a solution, and Redux later refined Flux's ideas into the library most React developers use today.

Redux works by storing an app's information in a single JavaScript object. Whenever a part of the app needs to show some data, it requests the information from the server, updates the single JavaScript object, and then shows that data to users. By storing all information in one place, the app always displays the correct information, no matter where it appears, thereby solving Facebook's notification problem.

Problems cropped up when other independent developers began using a single object to store all their information, basically every single piece of data provided by the server. This approach has three main drawbacks: it introduces a need for extra boilerplate code; it creates the problem of 'stale data', whereby unwanted data from a previous state appears within the app; and it increases the learning curve for new developers.

So how does one overcome this problem? By planning ahead and properly identifying requirements. If you envision that your app will face extreme scalability demands in the future, it may be better to employ Redux from day one. Otherwise, deploying Redux selectively is wiser; after all, it is possible to apply ideas from Redux without using the Redux library itself. An example of a React component with local state is given below:

import React, { Component } from 'react';
import { View, Button } from 'react-native';

class Counter extends Component {
  state = { value: 0 };

  increment = () => {
    this.setState(prevState => ({
      value: prevState.value + 1
    }));
  };

  decrement = () => {
    this.setState(prevState => ({
      value: prevState.value - 1
    }));
  };

  render() {
    return (
      <View>
        {/* ChildComponent (defined elsewhere) receives the value as a prop */}
        <ChildComponent value={this.state.value} />
        <Button title="+" onPress={this.increment} />
        <Button title="-" onPress={this.decrement} />
      </View>
    );
  }
}

We can pass these values or functions down to any depth of the component tree and use them inside those components. This mechanism is called prop drilling. Be warned though: it's not a good idea to drill through multiple layers unless you have a clear understanding of where the props are coming from and where they are going next.

Another solution is the Context API provided by React itself. The Context API allows us to access props of a parent from any child or parallel component using the provider/consumer pattern. All of these options are used at Calcey, depending on the use case.
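
A minimal sketch (the component names are our own) of sharing a value through context instead of drilling it down through every layer:

import React, { createContext, useContext } from 'react';
import { View, Text } from 'react-native';

// A context with a default value
const CounterContext = createContext(0);

// Any descendant can read the value without receiving it as a prop
const CounterLabel = () => {
  const value = useContext(CounterContext);
  return <Text>Count: {value}</Text>;
};

// The provider makes the value available to the entire subtree
export const Screen = () => (
  <CounterContext.Provider value={42}>
    <View>
      <CounterLabel />
    </View>
  </CounterContext.Provider>
);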

These are a few of our internal React Native practices and tricks. What are yours? Let us know in the comments below!

How to

Automating The eSignature Process Using DocuSign

Calcey

In an ever-evolving digital world, legal documents with dotted lines for signatures are perhaps one of the last remaining analog holdouts. However, that too is now going digital, with e-signatures gaining more widespread acceptance.

There is a plethora of online services that allow users to sign documents electronically. DocuSign is one of the best known, while HelloSign, SignNow, and Citrix RightSignature are a few of the others that make up the rest of the pack.

The basic premise of eSignature services

In order to use an eSignature service, a user must first upload a document that will be scanned by the service. Next, the user will be allowed to define the areas on the document where a signature or some other type of input is required from the signees. Once all this is done, the signable document will be delivered to the specified signees via email.

Everything works seamlessly when it is just one document that needs to be sent across at any given time. However, what if a user needs to frequently send similar sets of documents to different groups of signees, perhaps on a daily basis?

In such scenarios, it may not be wise to require a user to upload documents and define input areas several times over. Not only is this time consuming, but it is also extremely tedious.

Faced with this problem, one of our own clients recently turned to us for help.

Our Solution

Having identified the scale of the problem, our engineers set out to develop a solution that could unite the convenience provided by a service such as DocuSign with the simplicity and seamlessness promised by automation.

Since the client was already using the DocuSign platform to send documents to signees, our engineers decided to build a layer of code that would sit above DocuSign, thus essentially building a customized eSignature platform for the client.

Our solution allows all details relevant to a signee, such as full name, address, etc., to be entered into a database. Once the data has been input, all the client has to do is select the relevant document and the name of the signee, and the code takes over the task of populating all the relevant fields with the correct information.

How We Built It

In order to build a code layer that runs atop DocuSign, one must first sign up for a DocuSign developer account at https://developers.docusign.com/ and create a sandbox.

Next, an authorization method must be chosen. Because the application needs to access the DocuSign API without any human interaction, Calcey's engineers chose JWT (JSON Web Token) Grant as the authorization model. With JWT in place, our custom application impersonates a user with a DocuSign login. For the impersonation to work smoothly, the application must be registered with DocuSign, and the target user must grant explicit permission for the API to use their credentials. Note that granting this permission is a one-time action.
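
For illustration, here is a minimal sketch of the JWT Grant flow using DocuSign's Node.js SDK (docusign-esign); the integration key, user ID, and RSA key are placeholders for values from a developer sandbox:

import docusign from 'docusign-esign';
import * as fs from 'fs';

async function getApiClient() {
  const apiClient = new docusign.ApiClient();
  apiClient.setOAuthBasePath('account-d.docusign.com'); // developer sandbox

  // The user consents once; afterwards the app can fetch tokens unattended
  const results = await apiClient.requestJWTUserToken(
    process.env.INTEGRATION_KEY!,    // the app registered with DocuSign
    process.env.USER_ID!,            // the user being impersonated
    ['signature', 'impersonation'],  // requested scopes
    fs.readFileSync('private.key'),  // RSA private key for the integration
    3600                             // token lifetime in seconds
  );

  apiClient.addDefaultHeader('Authorization', `Bearer ${results.body.access_token}`);
  return apiClient;
}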

You can now create an envelope template, which can hold a set of documents that require signing. Once the documents have been uploaded, the user needs to manually specify where data input is necessary on each document.
Note: when creating placeholders, make sure the template contains one or more signees. It is also important to specify only the role of each signee when creating the template, since all other relevant information will be taken care of by the application.

Once all placeholders have been defined, we can consider the template 'ready'. Now, whenever a user wants to send out documents, the DocuSign API can fetch a list of pre-uploaded templates, allowing the user to pick the correct set of documents to send. With the aid of the template ID, the DocuSign API will create what is known as an 'envelope' and automatically deliver the documents to the intended recipients.
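
Sending documents then becomes a single call (again a sketch; the email, role name, and IDs are placeholders):

import docusign from 'docusign-esign';

async function sendFromTemplate(apiClient: docusign.ApiClient) {
  const envelopesApi = new docusign.EnvelopesApi(apiClient);

  // Bind the signee pulled from our database to the role defined on the template
  const definition = new docusign.EnvelopeDefinition();
  definition.templateId = process.env.TEMPLATE_ID!;
  definition.templateRoles = [
    { email: 'signee@example.com', name: 'A. Signee', roleName: 'signer' },
  ];
  definition.status = 'sent'; // 'sent' delivers immediately; 'created' saves a draft

  return envelopesApi.createEnvelope(process.env.ACCOUNT_ID!, {
    envelopeDefinition: definition,
  });
}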

How to

Skyrocketing with Android Jetpack

Calcey

In 2018, at Google I/O, Google introduced a next-generation suite called Jetpack to accelerate Android development. Android Jetpack is a set of components, tools, and architectural guidance that makes it quick and easy to build great Android apps. The components are unbundled but built to work together, while leveraging Kotlin language features to make developers more productive. Technically, Jetpack bundles the existing support library, the architecture components, and Android KTX into separate, rebranded modules, providing coverage for lifecycle management, robustness of data states, background tasks, navigation, and much more.

Source: https://android.jlelse.eu/what-is-android-jetpack-737095e88161

As represented in the illustration above, Jetpack combines four major categories.

  • Foundation
  • Architecture
  • Behavior
  • UI

Each category consists of both older and newer components; the older components have been in use for quite a while. This post will focus mainly on a few of the newly developed ones: Navigation, Paging, Android KTX, and WorkManager.

Navigation

Source: https://medium.com/@Alex.v/android-navigation-architecture-component-25b5a7aab8aa

The navigation component

  • Reduces the boilerplate code needed for fragment transactions and back navigation (the component is smart enough to handle navigation itself), and can include bundle data at runtime, based on the declared navigation destinations and actions.
  • Gives developers a way to visualize and navigate the view hierarchy, similar to a storyboard in Xcode.

When it comes to passing data through the bundle, the navigation component library comes with a Gradle plugin called Safe Args to avoid mistakes made by developers such as passing random bundles or using the wrong keys to extract data.

Migrating to the navigation component is pretty straightforward; simply following the steps below would be adequate.

  • Create a navigation graph for each activity, if required.
  • Link separate activities through activity destinations, replacing existing startActivity() calls.
  • Where multiple activities share the same layout, combine their navigation graphs, replacing navigate calls to activity destinations with references to the combined graph.

Paging

Apps often work with enormous datasets but need to load only a small portion of that data at any given time. This should be a key consideration for a developer, since loading more than necessary drains the battery and wastes bandwidth. Jetpack provides the Paging library to overcome this challenge by enabling gradual, graceful data loading. It integrates with RecyclerView and works with both LiveData and RxJava.

The Paging library consists of the following core elements.

  • PagedList
  • DataSource

PagedList is a collection that can load its data in chunks, asynchronously.

DataSource is the base class for loading snapshots of data into the PagedList. The flow described below provides an easy guide to how data moves from the data layer to the UI components.

Assuming the database is your data source, the DataSource feeds data into a repository, where LivePagedListBuilder creates a LiveData of PagedList. Through the ViewModel, the data then reaches the PagedListAdapter, which the Paging library provides to present data from the PagedList in a RecyclerView. The PagedListAdapter uses the DiffUtil class to detect changes in the data and notify the view automatically.

Refer to the following links for more details:

https://developer.android.com/topic/libraries/architecture/paging/

https://medium.com/@sharmadhiraj.np/android-paging-library-step-by-step-implementation-guide-75417753d9b9

https://medium.com/@Ahmed.AbdElmeged/android-paging-library-with-rxjava-and-rest-api-e5c229fd70ba

Android KTX

Android KTX is another feature that comes with Jetpack: a set of Kotlin extensions whose purpose is to make code more concise and readable. Consider the following samples.

Kotlin

sharedPreferences.edit()
    .putBoolean("key", value)
    .apply() 

Kotlin + KTX

sharedPreferences.edit {
    putBoolean("key", value)
} 

Kotlin

Toast.makeText(this, R.string.text, Toast.LENGTH_SHORT)
    .show()

Kotlin + KTX

context.toast(R.string.text)

Kotlin

for (recipe in recipes) print(recipe)

Kotlin + KTX

recipes.forEach {
    print(it)
}

Pretty simple, isn't it?

WorkManager

If you need to execute a task immediately or at a pre-scheduled time, Jetpack provides an optimal solution called WorkManager. WorkManager is smart enough to execute the task appropriately based on the device's API level and the app's state.

If the application wants to run a task while in the foreground, WorkManager runs it on a separate thread inside the app's process. If the app is in the background, WorkManager schedules a background job based on the device's capabilities, delegating to JobScheduler, Firebase JobDispatcher, or AlarmManager. In essence, WorkManager selects the best option available on the device and executes the appropriate API, sparing you the boilerplate of working out the device's potential state yourself.

With all the new features mentioned above, it is evident that Jetpack is a great option for developing Android apps. I personally love Jetpack for the boost in efficiency it brings, and for allowing me to focus more on application logic by reducing boilerplate code to a minimum.

How to

How to Build a Simple Static Website with Jekyll

Calcey

HTML and CSS can be considered the bread and butter of any website: HTML is the standard markup language for creating web pages, and CSS is a language that describes the style of HTML elements. Be it a complex website like Amazon or a simple static one, the information is ultimately displayed to end users as rendered HTML. Whether you are a rockstar developer or a newbie, you might still bang your head against a wall trying to figure out the ideal tech stack and framework for building a website.

The goal of this article is to help you understand how easy it is to build a simple, blog-aware, static website with Jekyll in no time.

Jekyll is a static site generator written in Ruby by Tom Preston-Werner, GitHub’s co-founder. Jekyll is at its best when it comes to personal blogs, portfolios, and static websites. The real beauty in Jekyll is that you can provide the content you want to publish on a website in your favorite markup language (as plain text) and Jekyll will automagically generate static HTML pages for you.

If you already have a Ruby development environment, you can get a simple static website up and running in just four steps. [Ruby development environment install guide]

1. Install Jekyll and bundler. If you have already installed these gems, you can skip this step.

gem install jekyll bundler

2. Create a new project named personal-blog.

jekyll new personal-blog

3. Change into the project directory.

cd personal-blog 

4. Build the project and serve the site using a development server.

bundle exec jekyll serve

Open your favorite web browser and navigate to http://localhost:4000 to view the website just created. If everything has gone well, you should get the webpage shown below.

Let's take a step back and see exactly what Jekyll did, and which files were generated for us, when we created the new project.

├── 404.html	  # The default 404 error page
├── Gemfile	  # Project related Gem dependencies
├── Gemfile.lock  # Used by Bundler to record installed Gem versions
├── _config.yml	  # The main configuration file of the project
├── _posts/	  # Holds the blog posts
├── _site/        # Holds the generated site
├── about.md	  # The default about page
└── index.md	  # The home page

The auto-generated file structure is pretty straightforward. But if you look at our website, you will notice that it's already styled. That's because Jekyll uses a default theme called minima, which is specified in the _config.yml file. Jekyll comes with an extensive theming system (layouts, in Jekyll nomenclature) and provides full support for community-maintained templates. The minima theme ships with the Jekyll gem; if you want to customize the look and feel of the site, copy minima into the project directory and make the required changes.
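
For reference, the generated _config.yml looks roughly like this (the values shown are illustrative):

# _config.yml : site-wide settings
title: Personal Blog
email: you@example.com
description: A simple, blog-aware static website built with Jekyll
baseurl: ""   # the subpath of your site, e.g. /blog
url: ""       # the base hostname, e.g. https://example.com
theme: minima # the default theme; change this to restyle the site
plugins:
  - jekyll-feed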

The next challenge is to deploy this website and make it available to public users. When it comes to deployment, you can go ahead with one of the following options:

A. Web Servers – NGINX/Apache
B. AWS S3 for static site hosting
C. GitHub Pages

If you want to go ahead with option A or B, you need to build the project to get the distribution-ready version of the website, which you can do by executing the following command in the project directory:

jekyll build

Compared to options A and B, option C is very straightforward and hassle-free. It involves no cost: you can host your website for free with GitHub Pages. You also do not have to build the site each time you make a change; just commit your changes to GitHub, and Jekyll will automagically build and publish your website.

Resources

Hosting a Static Website on Amazon S3

GitHub Pages – Websites for you and your projects

Hosting on Github Pages

How to, Trends

Efficient Engineering: How We Used Talend To Supercharge Business Intelligence

Calcey

Despite the availability of a multitude of tools, data can be quite a beast to tame. Yet in the world we live in, 'data has become the new oil', especially when it comes to business. Businesses have evolved to the point where they consider data their competitive advantage; from Amazon to Google, Spotify, and Tesco, the examples are numerous.

The Problem

However, large volumes of data can make it extremely hard to glean information. This was a recent problem faced by one of Calcey’s very own European clients. The client is in the business of providing cloud-based Point of Sale (POS) solutions to independent restaurants in Northern Europe.

As it set about scaling its operations by signing up new restaurants, the company understood that the sheer volume and complexity of data rendered analysis (in the traditional sense) a wasteful affair. To understand this problem better, consider how a standalone restaurant stores its transaction data. There could be hundreds of SKUs, all recorded using a naming convention chosen by the owner of the restaurant. The data would most likely be stored in a proprietary database, or even in Microsoft Excel. When you consider how a cloud-based solution provider will now have to aggregate all this data across hundreds of restaurants in many different municipalities, the complexity of the task at hand becomes apparent.

The legacy system our client had to contend with before they approached us creaked under the weight of the data it had to bear. Database timeouts were common, and a single report took around fifteen minutes to compile. The client had also resigned themselves to generating only daily reports, since the legacy system could not aggregate data into weekly or monthly reports.

So, how does one sanitize and unify all this data, so that actionable information can be gleaned at the click of a button?

Our Solution

In consultation with the client, we opted to conduct a pilot using the data set belonging to a single restaurant. Since unstructured data must first be sanitized, we chose Talend Cloud as the overall data integration and governance platform, primarily because of its flexibility and speed. Talend’s support for integrating third-party business intelligence (BI) tools was also a definite advantage. This allowed Calcey’s engineers to map the database structure to a set of API endpoints, thereby allowing the BI tool to access a dataset asynchronously.

The proposed system architecture

Second, we opted to use HSQLDB to improve query performance. With HSQLDB, our engineers were able to create an in-memory cache of the dataset, which improved the speed of the API and the application's overall performance while reducing the load on the back-end infrastructure. As a result, Calcey's solution delivered a welcome cost saving to the client.

How the caching works
The caching mechanism within Talend

The Results
By virtue of using an in-memory database to crunch the data, we shortened the time it takes our client to generate a report from around fifteen minutes to mere seconds. The in-memory structure also allows for real-time filtering of data. Additionally, we integrated the database with Power BI through the Talend API, granting our client the ability to generate deep, detailed, and actionable business insights.

How the API works
The API within Talend

Since the API works by obtaining data directly from the cache, we also built a job within Talend (an updater module) which automatically runs on a predetermined schedule, saving time and reducing the system administrator's workload.

Trends

3D Secure authentication is set for mass adoption in the EU in 2 months. Are you ready?

Calcey

This September, Europe will see the introduction of new requirements for authenticating online payments, as part of the second Payment Services Directive (PSD2). These requirements, also known as ‘Strong Customer Authentication’, are going to significantly change how online retailers process payments within Europe. Here at Calcey, we do a lot of work with European clients, who have had to migrate to 3D Secure-compliant processes. Here are a few things which we have learned along the way.

What is Strong Customer Authentication (SCA)?

European regulators introduced SCA as a way to reduce fraud and make online transactions more secure. Once SCA becomes legally binding in September 2019, merchants (especially those who transact online) will have to build an additional authentication component into their checkout flows. For SCA to work properly, every authentication request has to include any two of the following:

  1. Something the customer knows (e.g. a PIN or a password)
  2. Something the customer has (e.g. a hardware token, or a phone)
  3. Something the customer is (e.g. a fingerprint or face recognition)


From September 14 onwards, banks will be able to decline transactions which don’t meet the SCA criteria.

How SCA Works / Credit: WP Simple Pay

How Authentication Works

Currently, the most popular way of authenticating a card payment is via 3D Secure 1, a protocol supported by the vast majority of cards globally. You know 3D Secure is in place when you try to check out and are prompted to enter an OTP or a password. This extra authentication layer also lets merchants shift liability for fraudulent transactions to the card issuer.

3D Secure 1 was first rolled out in 2001, and though it gained popularity as an effective tool for reducing card fraud, it had its own problems. Chief among the grievances against it was that the additional step required to complete a transaction didn't mesh well with the payment flow, leading to high cart abandonment rates. Secondly, many banks forced their customers to remember static passwords to complete 3D Secure authentication, and naturally, this didn't work out too well.

Enter 3D Secure 2: Frictionless And Better Looking

3D Secure 2 aims to address these drawbacks while simultaneously strengthening security. One of its main features is the introduction of Risk-Based Authentication (RBA) for transactions, made possible by its ability to send multiple data elements. These include payment-specific data, such as the shipping address, as well as contextual data, such as the customer's device ID or previous transaction history.

The cardholder’s bank can then use this information to assess the risk level of the transaction and decide on an appropriate response to go along with it:

  • If the data is adequate for the bank to trust that the real cardholder is carrying out the purchase, the transaction goes through the “frictionless” flow and the authentication is completed without any additional input from the cardholder.
  • If the bank decides that it needs further proof, the transaction is sent through the “challenge” flow and the customer is asked to provide additional input to authenticate the payment.

Secondly, 3D Secure 1 was developed well before the rise of the smartphone, and today we live our lives on our smartphones. As a product of its time, 3D Secure 1 was very unpleasant to interact with unless you were in front of a PC: it forced a full-page redirect, which was cumbersome and left customers potentially vulnerable to 'man-in-the-middle' attacks.

This has been rectified with 3D Secure 2, and banks can now offer a more seamless, less disruptive authentication experience. Instead of entering a password or waiting for an OTP-bearing text message to arrive, customers can authenticate the payment via fingerprint, face scanning, or even the mobile banking app installed on their phone.

3D Secure 2 has also been designed so that the challenge flow can be embedded directly within web and mobile checkout flows, without requiring full-page redirects. This is a boon for any developer concerned with user experience, as we are at Calcey. If a customer initiates an authentication on your site or web page, the 3D Secure prompt now appears by default in a modal on the checkout page (the browser flow).

3D Secure 1 left the user open to ‘Man-in-the-middle’ attacks / Credits: Unsplash

Card networks such as Visa and Mastercard have now made available mobile SDKs which make it easier to build 'in-app' authentication flows. Both networks have also published UI guidelines to help developers sidestep the problem of cart abandonment due to poor UI, something banks can be notorious for.

New age payment systems such as Apple Pay and Google Pay already support 3D Secure 2, and enabling these as payment options on your ecommerce site will allow you to quickly offer a seamless checkout and authentication experience.

While traditional banks may take some time to fully comply with SCA, payment processors such as Stripe and Braintree are already fully compliant. For instance, if you're using Stripe to process payments, a quick upgrade of your Checkout integration is all you need to be fully compliant with 3D Secure 2.
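
As a rough illustration (the amount, currency, and API version are our own placeholders; consult Stripe's documentation for the current integration), a server-side PaymentIntent can ask Stripe to apply 3D Secure whenever the issuing bank requires it:

import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, {
  apiVersion: '2022-11-15',
});

// 'automatic' is the SCA-ready default: the frictionless flow is used
// where possible, and the challenge flow only when the bank demands it.
async function createPayment() {
  return stripe.paymentIntents.create({
    amount: 1999,                    // EUR 19.99, in the smallest currency unit
    currency: 'eur',
    payment_method_types: ['card'],
    payment_method_options: {
      card: { request_three_d_secure: 'automatic' },
    },
  });
}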

Payment providers such as Stripe, Braintree, Square etc. are already SCA compliant / Credits: Unsplash

I run a small e-commerce startup. Should I worry about 3D Secure 2?

Not every online retailer needs to migrate to 3D Secure 2 immediately. If you run a small e-commerce site, you can postpone worrying about 3D Secure 2 for now, since 3D Secure 1 and 3D Secure 2 are expected to co-exist for some time. However, if your web analytics tools are telling you that you're losing a lot of customers at the checkout stage due to 3D Secure 1, you may be better off considering an immediate shift. While you're at it, we would also recommend overhauling your backend infrastructure so that it is upgrade-friendly, perhaps by integrating with Stripe, Shopify, or something similar. This will free you from the headache of keeping your site's payment code up to date, since these third-party platforms will take care of everything for you. And if you need help, feel free to contact us.

References

https://developers.braintreepayments.com/guides/3d-secure/overview

https://stripe.com/docs/payments/3d-secure

https://stripe.com/guides/3d-secure-2

https://www.adyen.com/blog/3d-secure-20-a-new-authentication-solution

https://developer.visa.com/pages/visa-3d-secure

Life at Calcey

Mohomed Thahsan: How I Got Into Code

Calcey

We at Calcey consider ourselves to be different: the square pegs in the round holes, if you will. This is reflected in our hiring practices. Most other software firms in the country choose to recruit only from the top universities, but not us. The way we see it, a talented problem solver is a much more valuable asset than an expensive degree.

Mohomed Thahsan is an Associate Tech Lead at Calcey who joined us three and a half years ago. A proud self-taught coder, Thahsan is a living testament to how powerful a cocktail of passion and hard work can be.

Thahsan, extreme left, with some of his mates at a coding competition in Sri Lanka

Q: What piqued your interest in coding?

Growing up, I was the less talented sibling in my family. My brother was leagues ahead; in other words, a code-junkie through and through. Naturally, I didn't want to be like him. In my mind, I crafted an imaginary future for myself away from brightly lit screens.

I grew up, did my Advanced Level exams, and just about managed to pass. As I sat at home pondering my future, a fuzzy abstraction I had no means of comprehending, a relative told me to give IT a shot. In the absence of any worthwhile alternatives, I decided to try my hand at coding. And so it all began.

Q: As with every story, did yours have an important turning point?

Of course, it did. I enrolled in a short course with the aim of learning Android app development. I don’t believe I gained much out of it, but it did give me the impetus to start experimenting on my own. I began trying to develop small apps. The breakthrough came when I managed to develop a basic calculator app and get it running. Observing my own creation come alive on the screen was all that it took to solidify my path as a coder.

As a coder, that first success is quite important. It is the fuel that keeps you going till you bag your next win.

Q: What brought you to Calcey?

While I was busy teaching myself Android app development, I got a job at a small software development firm. Since there were only a handful of employees and a flat structure, I was involved in all of the firm's development efforts, so I had the chance to frequently challenge my own capabilities. Sitting next to a colleague who was well versed in Java also proved advantageous. A client of that firm referred me to Calcey, and here I am.

Q: How has Calcey helped you grow?

Calcey is where I came into my own as a well-rounded developer. Of course, I had my fair share of struggles. In the first few days at Calcey, I left work exhausted simply because I was learning so many new things in such a short timeframe. Fortunately, Calcey gave me a mentor, Pramuditha, who kindly showed me the ropes. Things were much better from thereon.

Q: Do you have any particular method you use to help you learn?

The most important thing is to break problems down into smaller pieces. Then I try to look for something I already understand very well, even a basic ‘Hello World’ function would do. Using that as a platform, I try to put the disparate pieces of the puzzle together, eventually solving whatever problem I originally faced.

Q: Did you ever think of giving up at any point?

I did, and most people will. That’s normal. But what I realised was that entertaining the notion of giving up, along with all the mental torment that comes with it, was part and parcel of every amateur coder’s journey towards becoming a professional. The ‘Learn-To-Code’ journey is perfectly illustrated by Thinkful’s blog post.

The journey to coding competence is full of valleys and peaks / Credits: Thinkful

In essence, the journey towards becoming an accomplished coder can be divided into four parts.

  1. The Hand-Holding Honeymoon: You get to make use of all the well-polished tutorials and learning material that is available to you. You will still be learning the basics, but you will feel good about your accomplishments.
  2. The Cliff of Confusion: Stuck in a constant loop of debugging, you realise that coding is a much harder affair than you initially thought.
  3. The Desert of Despair: A long and lonely journey through a pathless landscape where every new direction seems correct but you’re frequently going in circles and you’re starving for the resources to get you through it.
  4. The Upswing of Awesome: Once you reach this stage, you realise that you’ve finally found a path through the desert and pulled together an understanding of how to build applications. But your code is still siloed and brittle, much like a house of cards. Now comes your search for a job.

Q: Any words of advice to an aspiring coder?

The best advice I can give anyone is to keep learning and keep experimenting. Follow your curiosity and start learning to code in the direction that your curiosity guides you. There will be times when you will be tempted to tear out your hair in frustration, but don’t. Spend your energy working through things, one step at a time.

Second, keep an eye out for new trends. I’ve subscribed to the Medium Daily Digest so that I can keep up with everything going on in the world of tech. It’s quite helpful and saves me a lot of time, which I would otherwise spend on mindless browsing.

Third, find a good environment to help you grow. For me, Calcey was the place which helped me improve my skills and become the proficient coder I am today. It’s an opportunity that I’m very grateful for.