OpinionTrends

Data May Be The New Oil, But Don’t Be A Rockefeller


Is there a right way to use your customers’ data?

In a world where data was touted as the ‘new oil’, it was only a matter of time before the debate between privacy and data sharing reached a new crescendo. Starting with Cambridge Analytica, scandal after scandal has kept alive the ever-evolving debate around how our personal data is collected and used.

In short… don’t be Dogbert Credit: Scott Adams/Dilbert

To begin with, sharing data does have its merits. For instance:

  • Medical researchers need access to confidential patient data to study diseases and identify cures.
  • Retail chains need consumer data to identify markets that can support new stores while meeting demand.
  • Municipalities need to share data to improve transit systems and public safety.
  • Makers of intelligent connected cars need to enable vehicle data exchange and the monetization of vehicle data while protecting data privacy.

However, when companies and app developers start using this vast pool of data at their disposal to create new revenue streams by essentially commoditizing the user, there arises a question about the ethics of such practices. For instance, The Verge revealed that while struggling to generate revenue post-IPO, Facebook considered selling access to user data in order to make money. Last year, Buzzfeed News revealed that out of nearly 160,000 free Android apps available on the Google Play Store, nearly 55% tried to extract and share the user’s location with third parties, while 30% accessed the device’s contact list.

In light of all this, it is only natural for users to start worrying about the privacy of their data, prompting governments to crack down hard on firms and developers who misuse personal data. But, as developers, how do we ensure that the data we collect is used for the common good, and not for any nefarious purposes (even by accident)? Where do we draw the line when it comes to data collection practices?

Here is a list of best practices (and common sense) which we advise our clients to follow:

Have a privacy policy

Before you try to collect any data at all, it is important to think really hard about why you want to collect customer data, how you want to use it, and whether or not you will be sharing this data with external parties. Once these basics have been figured out, build upon them to formulate a data collection and privacy policy for your company, product, or app. Use simple, clear language (because nobody understands legalese), but run it past your lawyer to make sure that everything is okay. Finally, make the policy available and easily accessible on your website and app.

Be transparent

While the law may shape how you disclose your policies and handle your data, being transparent with your users about how their data is collected, used, and shared is a very good idea. After all, being transparent builds trust. Providing users with the power to control the data they share with you is also a giant leap forward. For instance, if you’re developing an app, consider providing users the ability to view, limit, or delete the data they have shared with you. This will ensure that whatever data you have with you, has been collected entirely with the consent of your users.

Designing self-service pages where users can control their data can be a huge step forward for user privacy and consensual collection. Users can understand the data they’ve explicitly provided, the data you’ve gathered in the background based on their usage, and the ongoing ways that data is currently entering your systems. This encourages users to take an active and considered approach to their own privacy and allows users to refuse specific types of collection with an understanding of how that may affect their access.

When given a choice between collecting and correlating data in the background and asking for it explicitly from users, it is usually best to tend towards the latter. While your privacy policy may outline various ways that you may gather data, asking directly will minimize surprises and help build trust. Users may be willing to provide more information when they feel like they control the interaction rather than when it is collected by monitoring behavior, which can feel intrusive.

If you’re domiciled in a locality where GDPR applies, then it goes without saying that almost all of the above are requirements that you must comply with. GDPR is essentially a legal framework which governs how firms can collect and handle user data, while providing greater protection and rights to individuals. The costs of non-compliance with GDPR can be quite high. Smaller offences could result in fines of up to EUR 10 million or two per cent of a firm’s global turnover (whichever is greater). Those with more serious consequences can have fines of up to EUR 20 million or four per cent of a firm’s global turnover (whichever is greater). For more information, see what The Guardian has to say.

Build strong safeguards

If you are collecting user data, a data breach can be your worst nightmare. Not only would it be a public-relations disaster, but in a worst-case scenario, it could spell the end of your company or startup. Data breaches lead to people’s identities being stolen, credit cards being opened in their name without them knowing it, and even fraudulent tax returns being filed. If you’re going to collect all this personal data, it’s your responsibility to safeguard the data you collect.

To that end, we recommend that you:

  • Back up your data in case your systems crash
  • Ensure there is no personally identifiable information within your database (make sure it’s all encrypted or anonymized)
  • Have malware, antivirus software, and firewalls that protect from data breaches (and make sure it’s all up to date)
  • Have an emergency plan in the event of a data breach

Minimise permissions

When you ask users permission to access certain data or services on their phones, ensure that you are only asking for permissions that are appropriate, and not excessively intrusive. For example, if your app is a simple tic tac toe game, it doesn’t make sense to ask the user for permission to access the camera on their device.

Don’t use code you don’t understand

Developers usually work with a lot of open-source software when building apps, and it is a very common (and good) practice to rely on other people’s code snippets, be it in the form of frameworks or libraries, where relevant. Platforms such as GitHub are a treasure trove of top-notch code snippets, which can often cut development time by a significant amount. But if that code is handling your users’ information inappropriately, it’s your problem. So make a point of checking code before you rely on it.

What are your thoughts on the data privacy vs. data sharing debate? Let us know in the comments below!

Cover image credits: Unsplash

How to

How to set up Kafka in a Docker container

Calcey

At Calcey, we recently found ourselves having to link a legacy system with a new information system on behalf of a client. In order to avoid complications, we explored the possibility of deploying Kafka within a Docker container.

What is Kafka?

Kafka is an open-source, fault-tolerant event streaming platform. Kafka can help bridge the information gap between legacy systems and newer systems. Imagine a situation where you have a newer, better system that needs data from an older, legacy system. Kafka can fetch this data on behalf of the developer without the need to build an actual connection between the two systems.

Kafka, therefore, will behave as an intermediary layer between the two systems.

In order to speed things up, we recommend using a ‘Docker container’ to deploy Kafka. For the uninitiated, a ‘Docker container’ is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

To deploy Kafka, three pieces of the puzzle need to fall into place: the ZooKeeper server, the Kafka server, and a connector to the data source. In addition, we will be making use of SQL Server’s Change Data Capture (CDC) feature to feed data into Kafka. CDC records any insertion, update, and deletion activity that is applied to a SQL Server table. This makes the details of the changes available in an easily consumed relational format. But that’s a topic for another day.

The easiest way to set all this up is to use Debezium. We recommend using the Debezium images described in the tutorial at https://debezium.io/docs/tutorial/, which let you configure ZooKeeper, Kafka, and the connector in one go.
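
Under the hood, the tutorial boils down to starting three linked containers. As a rough sketch (the image tags, topic names, and flags below follow the tutorial’s defaults and may differ for your setup):

# Start ZooKeeper, which Kafka uses for coordination
docker run -d --name zookeeper -p 2181:2181 debezium/zookeeper:0.9

# Start the Kafka broker, linked to ZooKeeper
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:0.9

# Start Kafka Connect with the Debezium connectors pre-installed
docker run -d --name connect -p 8083:8083 \
  -e GROUP_ID=1 \
  -e CONFIG_STORAGE_TOPIC=connect_configs \
  -e OFFSET_STORAGE_TOPIC=connect_offsets \
  -e STATUS_STORAGE_TOPIC=connect_statuses \
  --link zookeeper:zookeeper --link kafka:kafka \
  debezium/connect:0.9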

With both ZooKeeper and Kafka now set up, all you have to do is tell Kafka where your data is located. To do so, you can connect Kafka to a data source by means of a ‘connector’. While there is a wide range of connectors available to choose from, we opted to use the SQL Server connector image created by Debezium. Once a connection is established with the data source, pointing the connector back to the Kafka server will ensure that all changes are persisted.
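
Registration of the connector happens through Kafka Connect’s REST API. A hedged sketch of what such a request might look like for the Debezium SQL Server connector (the hostnames, credentials, and table names are placeholders, and the exact property names depend on your Debezium version):

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "legacy-orders-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "********",
    "database.dbname": "LegacyDB",
    "database.server.name": "legacy",
    "table.whitelist": "dbo.Orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.legacy"
  }
}'

Once the connector is running, every committed change to the whitelisted tables appears as a message on a Kafka topic, where the new system can consume it at its own pace.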

And that’s all there is to deploying Kafka in a Docker Container!

OpinionTrends

Lessons for startups from Zoom

Calcey

Zoom, the video conferencing startup which managed to beat Cisco’s WebEx at its own game, recently went public. Leaving the IPO aside, there was a lot of media attention on Zoom’s history as a company, since it very much broke the stereotype of the ‘hot Silicon Valley startup’.

Before Zoom arrived on the scene, many thought that the problem of video conferencing had been solved thanks to Cisco’s WebEx and Skype. But that’s not what Eric Yuan thought. A founding engineer on the WebEx team, Eric was passionate about building a video conferencing solution that just worked. He tried to implement his ideas at WebEx, but his bosses didn’t want to listen, and Eric left WebEx to found Zoom.

Eric Yuan, founder of Zoom / Source: Thrive Global

Having looked at Zoom’s growth from afar, here’s what we think all other startups can learn from Zoom:

Be focused on the product, maniacally

This story about how focused Zoom is on improving its product comes directly from Sequoia Capital, one of Zoom’s investors. But before it became an investor, Sequoia was a paying customer of Zoom.

“When Sequoia first reached out to Eric in 2014, he told us he admired our work but wasn’t looking for funding. Then he asked for an intro to our IT department, to see if they’d be interested in using Zoom. He cared more about our business than he did about our money — because he was, as he is today, singularly focused on his mission of making people happy.”

-Carl Eschenbach & Pat Grady, Sequoia Capital

Many early-stage startups suffer from a tendency to focus on securing funding instead of focusing on their product and acquiring paying customers. Zoom took the opposite approach, focusing on acquiring paying customers, which indirectly gave it more leverage when negotiating with investors later.

To see how focused Zoom is on making its product good, consider this: in a recent feature on Zoom, Forbes writer Alex Konrad noted that Zoom could operate well even on a connection with 40% packet loss, which is a boon for those on spotty or slow connections.

Zoom’s platform / Source: Company S-1

Build sustainable revenue streams

In Silicon Valley, there is a tendency to chase revenue growth which is usually fuelled by deep discounts and/or by running at a loss. A ready example can be found in the meal delivery startup sector, where profitability remains elusive yet discounts, plentiful. Essentially, most startups in the sector are hemorrhaging money to make a little bit of money or no money at all. Worse yet, some will never see a cent in profits for a very, very long time. Not so with Zoom.

Consider the following, taken from the second page of Zoom’s S-1 document:

“Our revenue was $60.8 million, $151.5 million and $330.5 million for the fiscal years ended January 31, 2017, 2018 and 2019, respectively, representing annual revenue growth of 149% and 118% in fiscal 2018 and fiscal 2019, respectively.”

But the next section makes things even more interesting:

“We had a net loss of $0.0 million and $3.8 million for the fiscal years ended January 31, 2017, and 2018, respectively, and net income of $7.6 million for the fiscal year ended January 31, 2019.”

Simply put, Zoom was already a profitable company when it sought to list its shares, a rare achievement in the startup world. For comparison, look at the finances of some other startups which went public in recent times:

  • Pinterest, who filed on March 22nd, the same day as Zoom, made $755M in revenue in the fiscal year 2018 but a net loss of $63M.
  • PagerDuty, who filed on March 16th, made $79.6M revenue in the fiscal year 2018, but a net loss of $38.1M.
  • Lyft, who filed on March 1st, made $2.2B revenue in the fiscal year 2018, but a net loss of $911.3M.

In the technology world, running at a loss in order to get a shot at an IPO is widely considered a necessary evil. But Zoom was comfortably in the black, which allowed the company to list at a valuation of USD 8.98 billion.

Zoom’s financials remain healthy / Source: Forbes

Your users can be your best evangelists

Zoom credits its growth to its bottom-up user generation cycle, which conceptually, shares a few similarities with Dropbox’s famous referral system. With Zoom, users can sign up and invite others to a meeting (for free) and when they realize how easy-to-use and great the product is, they sign up too and then pay for more features.

Zoom’s S-1 states that amongst others, the company had 344 customers who generated more than $100K in annual revenue, up 141% YoY. This customer segment accounted for 30% of Zoom’s revenues in FY’19. 55% of those 344 customers started with at least one free host prior to subscribing. As more and more customers invite people to meetings held on Zoom, those numbers are only going to rise. Consider this quote from a Sequoia spokesperson:

“We had been watching the video conferencing space for many years because we were convinced that it was a huge market primed for innovation. But we couldn’t find a product that customers loved to use. That changed when our portfolio companies started raving to us about Zoom.”

Execution matters

When Eric Yuan decided to build Zoom, the problem of video conferencing was, for all intents and purposes, considered to be solved. There were many incumbents, ranging from WebEx to Skype and Google Hangouts. But they were full of problems. Some were built for an age where video conferencing was done in front of a computer; some lacked features such as file sharing from mobile. In trying to build a better video conferencing product that truly lived off the cloud and scaled simply and well, Zoom did not try to reinvent the wheel. Instead, they set out to make a motorized car while the rest of the world was content to ride on horse-drawn carriages. Unsurprisingly, Zoom is a company favoured by Social Capital CEO Chamath Palihapitiya, who ranks it on the same level as Slack, another successful tech startup (in which Palihapitiya is an investor).

If you’re building a startup yourself, we highly recommend that you keep an eye on Eric and his team. In the meantime, if you are a user of Zoom, what was your experience with the product like? Do you think Zoom will become the next Slack? Let us know in the comments!


Trends

Reactive Programming: How Did We Get Here?

Calcey

In a world that continues to evolve rapidly, the way we build software, too, is in a constant state of flux. The heavy architectures of yesterday have given way to newer, lighter, more agile approaches such as reactive programming.

What Is Reactive Programming?

At its core, reactive programming is a way of defining the communication and codependent behaviour of program components so that there is minimal interdependence between them.

In simple terms, this is achieved by each individual component exposing data about changes happening within them in a format and medium accessible to others, while allowing other components to act upon this data if it is of any relevance to them.
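
As a minimal sketch of that idea (using RxJS, a popular reactive-extensions library for JavaScript; the event shape below is illustrative), one component publishes a stream of changes and another reacts only to the changes that are relevant to it:

// Minimal reactive sketch with RxJS: a producer exposes a stream of changes,
// and a consumer reacts only to the events that matter to it.
const { Subject } = require('rxjs');
const { filter, map } = require('rxjs/operators');

const priceChanges$ = new Subject(); // producer: emits { symbol, price } updates

priceChanges$
  .pipe(
    filter(update => update.symbol === 'ACME'),   // react only to relevant changes
    map(update => `ACME is now ${update.price}`)  // transform into what this component needs
  )
  .subscribe(message => console.log(message));

priceChanges$.next({ symbol: 'ACME', price: 101.5 });  // logged
priceChanges$.next({ symbol: 'OTHER', price: 12.0 });  // ignored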

In today’s always-on, completely mobile world, users expect responses in milliseconds along with 100% uptime. Only systems that are responsive, resilient, elastic, and message-driven can deliver this kind of performance, which is why they are termed ‘reactive systems’. And in order to build reactive systems, we must employ reactive programming.

How Did Reactive Programming Come To Be?

As a technique, reactive programming has been in existence since the seventies (and perhaps even before) and is not something that rose to prominence recently. For instance, when the Graphical User Interface (GUI) was first introduced, reactive programming techniques could have been used to reflect changes in the mouse pointer’s position on the screen.

Examples of Reactive Programming At Work

In general, reactive programming can be seen in action in the following instances:

  • External Service Calls
    Under reactive techniques, a developer will be able to optimize the execution of any external HTTP or REST calls, thus benefiting from the promise of ‘composability’ offered by reactive programming.
  • Highly Concurrent Message Consumers
    Reactive programming can also be used in situations where messages need to be processed between multiple systems, a need that frequently arises in the enterprise space. The patterns of reactive programming are a perfect fit for message processing since events usually translate well into a message.
  • Spreadsheets
    Often the favourite tool (or bane) of many cubicle dwellers, Excel is another perfect example of reactive programming at play. Think of a scenario where you have built a model with interdependencies between several cells. A group of cells will be linked to one cell, or even another spreadsheet. Making a change in the precedent cell will automatically force changes in the dependent cells. This is, in effect, reactive programming at play.

When To Use Reactive Programming

In practice, programmers use both reactive and traditional techniques. As such, there is no definitive guide on when to use reactive programming and when not to. It’s more of an intuitive understanding, which a developer will gain over time through experience and countless hours of coding.

As a rule of thumb, if an application’s architecture is simple and straightforward, a developer may be better served by sticking to traditional code structuring methods. Breaking this rule may leave you with an over-engineered product on your hands, which is undesirable.

But, as with all things, proceed with caution. If you do end up following a reactive programming technique over an imperative or some other technique, you will essentially be accepting a much higher level of code complexity in return for more flexibility and robustness in the components of your program. Therefore, it is up to you to weigh the costs against the benefits.

The reactive landscape has evolved so much that today, we have Reactive UIs, ReactiveX APIs, and even Reactive microservices. Overall, these developments point towards a very bright future for reactive programming as a practice.

That wraps up our thoughts on the evolution of reactive programming.

What would you like to see us discuss next? Let us know in the comments below.


AnnouncementsLife at Calcey

Empowering the Future Generation with Coding

Calcey

Education is the passport to the future; for tomorrow belongs to those who prepare for it today

Malcolm X

Keeping this quote in mind, Calcey recently took the initiative to empower disadvantaged youth. Our goal was to create and support a full-time training program that would give young people who had completed their A/Ls but were not selected to local universities a foundation in IT and software development, preparing them to take up internships at software companies within six to eight months.

A call for applications was sent out, and participants for the program were chosen through a shortlisting process. It was encouraging to see a significant number of female applicants. The program curriculum was designed by Calcey, YMBA Maharagama provided a venue for conducting classes, and Calcey interviewed and hired a full-time instructor. The program kicked off on the 27th of June 2019 and is now underway, with sessions also delivered by Calcey team members who have industry experience and expertise in the technology and subject areas being covered.

We’ve been thrilled with the feedback we’ve got so far. It’s great to see the students enjoying the curriculum we designed and wonderful to see their enthusiasm to learn. Our team members facilitating the program are energized by the thought of supporting these youth to become self-sufficient and acquire skills in a growing industry that can take them anywhere in the world.

Calcey has conducted a similar program in Rambuka before, and its success led to many requests for another batch to be given the same opportunity. This time we chose to locate the program in Maharagama, so that it’s easier on our team members who are volunteering their time for this worthy cause.

Cheers to more sessions to come.

How to

React Native Advanced Mobile Application Development

Calcey

New to React Native?

The technically inclined amongst us may already know the story of React Native. Developed by Facebook, it is essentially a set of libraries that communicate with their corresponding native APIs; this is where the ‘Native’ tag comes in. By design, React Native is able to easily access features native to the device it is being run on, be it a phone or tablet running Android, iOS, or even Windows, connecting native threads and JavaScript threads through its event bridge.

React Native uses a mix of JavaScript and XML-like syntax, known as JSX, to render the user interface as a function of the application’s current state. This makes it much easier to build component-rich UIs with principles like stateful components, a layout engine, a virtual DOM, etc.

Let’s go deep.

Here at Calcey, React Native is one of our favorite tools to work with. Along the way, we’ve picked up a few tricks useful for scalable React Native app development, which we’ll be sharing today.

Write reusable components (But don’t overdo it)

React recommends creating reusable components as much as you can. Obviously, this makes maintenance and debugging considerably easier. However, as any experienced coder knows, defining components with too much specificity can actually render them useless. Similarly, defining components too loosely will complicate things.
Take the example of building a screen for an app. A screen is essentially a group of components. Intuitively, it makes sense to write common UI elements such as buttons, lists, etc. as reusable blocks of code. This will not only save time but also make your code cleaner.
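
For example, a generic button like the hypothetical one below is specific enough to be useful, yet loose enough to be reused on any screen:

import React from 'react';
import { TouchableOpacity, Text, StyleSheet } from 'react-native';

// A hypothetical reusable button: one source of truth for styling and behaviour,
// configured through a small set of props.
const AppButton = ({ title, onPress, disabled = false }) => (
  <TouchableOpacity style={styles.button} onPress={onPress} disabled={disabled}>
    <Text style={styles.label}>{title}</Text>
  </TouchableOpacity>
);

const styles = StyleSheet.create({
  button: { padding: 12, borderRadius: 6, backgroundColor: '#2d6cdf' },
  label: { color: '#ffffff', textAlign: 'center' },
});

export default AppButton;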

Safe coding

Safety is determined by how far the platform goes to prevent the developer from making mistakes when writing applications. Because JavaScript gives developers the freedom to choose their own coding style, code safety becomes an important factor, especially when dealing with scalable apps.

React Native supports both Flow and TypeScript to help avoid such mistakes, should the developer decide to use them. Flow grants us the ability to easily add static type checking to our JavaScript; it also helps prevent bugs and allows for better code documentation. TypeScript, meanwhile, provides great tooling and language services for autocompletion, code navigation, and refactoring. The ecosystem you work in usually has a major influence on which one you pick, as does your previous exposure to static type systems.

At Calcey, we use these tools to improve the readability of our code and to enforce consistent coding standards.
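
As a small illustration of what that safety buys (the component and its props are hypothetical), typing a component’s props lets the compiler catch a missing or wrongly typed value long before the app runs:

import React from 'react';
import { Text } from 'react-native';

// Hypothetical typed component: passing a string for `amount`, or omitting it,
// becomes a compile-time error instead of a runtime surprise.
type PriceTagProps = {
  amount: number;
  currency?: string; // optional, defaults to 'USD'
};

const PriceTag = ({ amount, currency = 'USD' }: PriceTagProps) => (
  <Text>{`${currency} ${amount.toFixed(2)}`}</Text>
);

export default PriceTag;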

Extract, extract, extract

React Native projects tend to include a large number of common elements such as styles, images, and global functions (functions that format dates and times, make requests to a server, etc.). At Calcey, we generally encourage our developers to keep such elements separate from the component code. This makes it easier to share elements from anywhere within the app, while also making a given app’s codebase cleaner, and easier to maintain and scale.

Here’s an example of a color.js file coded by one of our developers:

// Converts a 3- or 6-digit hex colour (e.g. '#0af' or '#00aaff') into an rgba() string with the given opacity
export function hexToRgbA(hex: string, opacity: number) {
  let c;
  if (/^#([A-Fa-f0-9]{3}){1,2}$/.test(hex)) {
    c = hex.substring(1).split('');
    if (c.length === 3) {
      c = [c[0], c[0], c[1], c[1], c[2], c[2]];
    }
    c = `0x${c.join('')}`;
    return `rgba(${[(c >> 16) & 255, (c >> 8) & 255, c & 255].join(',')}, ${opacity})`;
  }
  throw new Error('Bad Hex');
}

Store management

To most React Native developers, Redux is an absolute necessity. But at Calcey, we believe that Redux is not a necessity for the most part. The way we see it, bringing Redux into the picture would be akin to using a hammer to crack open an egg.

In our experience, Redux has only proved necessary for the most complex of apps, where immense scalability is required. To understand this better, consider why this pattern was developed in the first place. As Facebook grew to become what was essentially the biggest web app in the world, it had to contend with the headache of not being able to show the correct number of notifications in the header bar. At the time, it was just difficult for Facebook (or any other web app) to recognize changes in one part of the app (e.g. when you read a comment on a post) and reflect that change in another area (i.e. reduce the number of unread notifications by one). Facebook wasn’t happy with forcing a web page refresh to solve the problem, so it built the Flux architecture, which later inspired Redux, as a solution.

Redux works by storing an app’s information in a single JavaScript object. Whenever a part of the app needs to show some data, it requests the information from the server, updates the single JavaScript object, and then shows that data to users. By storing all information in one place, the app always displays the correct information, no matter where it is shown, thereby solving Facebook’s notification problem.

Problems cropped up when other independent developers began using a single object to store all their information—basically, every single piece of data provided by the server. This approach has three main drawbacks: it introduces a need for extra code, it creates the problem of ‘stale data’, whereby unwanted data from a previous state appears within the app, and it increases the learning curve for new developers.

So how does one overcome this problem? By planning ahead, and using proper requirement identification. If you envision that your app will face extreme scalability demands in the future, it may be better to employ Redux from day one. Otherwise, deploying Redux selectively is wiser. After all, it is possible to apply ideas from Redux without using Redux itself. An example of a React component with a local state is given below:

import React, { Component } from 'react';
import { View, Button } from 'react-native';

class Counter extends Component {
  state = { value: 0 };

  increment = (): void => {
    this.setState(prevState => ({
      value: prevState.value + 1
    }));
  };

  decrement = (): void => {
    this.setState(prevState => ({
      value: prevState.value - 1
    }));
  };

  render() {
    return (
      <View>
        {/* The local state is passed down to the child as a prop */}
        <ChildComponent value={this.state.value} />
        <Button title="+" onPress={this.increment} />
        <Button title="-" onPress={this.decrement} />
      </View>
    );
  }
}

We can pass these values and functions down to any depth of the component tree and use them inside those components. This mechanism is called prop drilling. Be warned, though: it’s not a good idea to drill through multiple layers unless you have a clear understanding of where the props are coming from and where they are going next.

Another solution is the Context API provided by React itself. The Context API allows any child, however deep in the tree, to access values provided by a parent through a consumer, without the props being passed down level by level. All of these options are used at Calcey, depending on the use case.
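
Below is a minimal sketch of the Context API in use (the names are illustrative): a provider exposes a value near the top of the tree, and any descendant reads it through a consumer instead of having the value drilled through every level in between.

import React, { createContext } from 'react';
import { View, Text } from 'react-native';

// Illustrative context: the theme is provided once and read wherever it is needed.
const ThemeContext = createContext('light');

const DeeplyNestedLabel = () => (
  <ThemeContext.Consumer>
    {theme => <Text>Current theme: {theme}</Text>}
  </ThemeContext.Consumer>
);

const App = () => (
  <ThemeContext.Provider value="dark">
    <View>
      <DeeplyNestedLabel />
    </View>
  </ThemeContext.Provider>
);

export default App;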

These are a few of our internal React Native practices and tricks. What are yours? Let us know in the comments below!

How to

Automating The eSignature Process Using DocuSign

Calcey

In an ever-evolving digital world, legal documents with dotted lines for signatures are perhaps one of the last remaining analog holdouts. However, that too is now going digital, with e-signatures gaining more widespread acceptance.

There are a plethora of services online which allow users to sign documents electronically. DocuSign is one of the most well known, while HelloSign, SignNow, and Citrix RightSignature are a few others that make up the rest of the pack.

The basic premise of eSignature services

In order to use an eSignature service, a user must first upload a document that will be scanned by the service. Next, the user will be allowed to define the areas on the document where a signature or some other type of input is required from the signees. Once all this is done, the signable document will be delivered to the specified signees via email.

Everything works seamlessly when it is just one document that needs to be sent across at any given time. However, what if a user needs to frequently send similar sets of documents to different groups of signees, perhaps on a daily basis?

In such scenarios, it may not be wise to require a user to upload documents and define input areas several times over. Not only is this time consuming, but it is also extremely tedious.

Faced with this problem, one of our own clients recently turned to us for help.

Our Solution

Having identified the scale of the problem, our engineers set out to develop a solution that could unite the convenience provided by a service such as DocuSign with the simplicity and seamlessness promised by automation.

Since the client was already using the DocuSign platform to send documents to signees, our engineers decided to build a layer of code that would sit above DocuSign, thus essentially building a customized eSignature platform for the client.

Our solution allows all details relevant to a signee, such as full name, address, etc., to be entered into a database. Once the data has been entered, all the client has to do is select the relevant document and the name of the signee, and the code takes over the task of populating all the relevant fields with the correct information.

How We Built It

In order to build a code layer that runs atop DocuSign, one must first sign up for a DocuSign developer account and create a sandbox at https://developers.docusign.com/.

Next, an authorization method must be chosen. Because the application needs to access the DocuSign API without any human interaction, Calcey’s engineers chose the JWT Grant as the authorization model. With JWT in place, our custom application impersonates a user with a DocuSign login. To allow this impersonation to happen smoothly, we must register the application with DocuSign and ensure that the target user grants explicit permission for the API to use their credentials. It is important to note that granting this permission is a one-time action.
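
Conceptually, the JWT Grant boils down to building a JWT assertion signed with the integration key’s RSA private key and exchanging it for an access token at DocuSign’s OAuth endpoint. A rough sketch of that exchange against the developer sandbox (the assertion itself is a placeholder here):

curl -X POST https://account-d.docusign.com/oauth/token \
  -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
  -d "assertion=<signed-jwt-assertion>"

# The JSON response contains an access_token, which the application then
# sends as a Bearer token on every subsequent DocuSign API call.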

You can now choose to create an envelope template, which can hold a set of documents that require signing. Once the documents have been uploaded, the user needs to manually specify where data input is necessary on each document.
Note: When creating placeholders, ensure that the template contains one or more signees. It is also important to specify only the role of each signee when creating the template, since all other relevant information will be filled in by the application.

Once all placeholders have been defined, we can consider the template ‘ready’. Now, whenever a user wants to send out documents, the DocuSign API can fetch a list of pre-uploaded templates, allowing the user to pick and choose the correct set of documents to send out. With the aid of the Template ID, the DocuSign API will create what is known as an ‘envelope’ and automatically deliver the documents to the intended recipients.
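
For reference, sending a templated envelope boils down to a single API call. A hedged sketch of the request body (the IDs, role name, and recipient details are placeholders):

POST /restapi/v2.1/accounts/{accountId}/envelopes

{
  "templateId": "<template-id>",
  "templateRoles": [
    {
      "email": "signee@example.com",
      "name": "Jane Doe",
      "roleName": "Signer"
    }
  ],
  "status": "sent"
}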

How to

Skyrocketing with Android Jetpack

Calcey

In 2018, at Google I/O, Google introduced a next-generation suite called Jetpack to accelerate Android development. Android Jetpack is a set of components, tools, and architectural guidance that makes it quick and easy to build great Android apps. Components are unbundled but built to work together, while leveraging Kotlin language features to make developers more productive. Technically, Jetpack packages the existing support library, the architecture components, and Android KTX as separate, rebranded modules, covering lifecycle management, robustness of data states, background tasks, navigation, and much more.

Source: https://android.jlelse.eu/what-is-android-jetpack-737095e88161

As represented in the illustration above, Jetpack combines four major categories.

  • Foundation
  • Architecture
  • Behavior
  • UI

Each section consists of both old and latest components. The older components have been in use for quite a while. This post will focus mainly on a few newly developed components such as navigation, paging, Android KTX, and WorkManager.

Navigation

Source: https://medium.com/@Alex.v/android-navigation-architecture-component-25b5a7aab8aa

The navigation component

  • Reduces boilerplate code for fragment transactions and back-stack handling – the component is smart enough to navigate on its own, and to include bundle data at runtime if needed, based on the navigation destinations and actions you provide.
  • Gives developers a way to visualize and navigate the view hierarchy, similar to a storyboard in Xcode.

When it comes to passing data through the bundle, the navigation library comes with a Gradle plugin called Safe Args, which prevents mistakes such as passing arbitrary bundles or using the wrong keys to extract data.
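
As a quick sketch of what Safe Args gives you (the fragment and argument names below are illustrative), the plugin generates a typed Directions class, so arguments cannot be passed under the wrong key or with the wrong type:

// RecipeListFragmentDirections is generated by the Safe Args Gradle plugin
// from the action and argument declared in the navigation graph.
val action = RecipeListFragmentDirections.actionRecipeListToRecipeDetail(recipeId = 42L)
findNavController().navigate(action)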

Migrating to the navigation component is pretty straightforward; simply following the steps below would be adequate.

  • Create a navigation graph for separate activities if required.
  • Link separate activities through activity destinations, replacing existing startActivity() calls.
  • Where multiple activities share the same layout, combine their navigation graphs and replace navigation to activity destinations with navigation to the combined graph.

Paging

Apps often work with enormous sets of data but only need to load a small portion of it at any given time. This should be a key consideration for a developer, since loading everything at once drains the battery and wastes bandwidth. Jetpack provides the Paging library to overcome this challenge by enabling gradual, graceful data loading. Furthermore, it integrates with RecyclerView and works with both LiveData and RxJava.

The Paging library consists of the following core elements.

  • PagedList
  • DataSource

PagedList is a collection that loads data in chunks, asynchronously.

DataSource is the base class for loading snapshots of data into a PagedList. Data flows from the data layer to the UI components as follows.

Assuming the database is your data source, Room can generate a DataSource for you, and the repository wraps it in LiveData using LivePagedListBuilder. The ViewModel then exposes this LiveData to the UI, where a PagedListAdapter (provided by the Paging library) presents the data from the PagedList in a RecyclerView. The PagedListAdapter uses the DiffUtil class to detect new data and notify the view automatically.
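
Wiring this up is fairly compact. Below is a minimal sketch using the Paging v2 APIs, assuming a Room-backed data source (the entity, DAO, and page size are illustrative):

import androidx.lifecycle.LiveData
import androidx.lifecycle.ViewModel
import androidx.paging.DataSource
import androidx.paging.LivePagedListBuilder
import androidx.paging.PagedList
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "recipes")
data class Recipe(@PrimaryKey val id: Long, val name: String)

// The DAO exposes a DataSource.Factory instead of a plain list;
// Room generates the paged data source for us.
@Dao
interface RecipeDao {
    @Query("SELECT * FROM recipes ORDER BY name")
    fun allRecipes(): DataSource.Factory<Int, Recipe>
}

// The ViewModel turns that factory into LiveData<PagedList<Recipe>>,
// loading the table in pages of 20 items.
class RecipeViewModel(dao: RecipeDao) : ViewModel() {
    val recipes: LiveData<PagedList<Recipe>> =
        LivePagedListBuilder(dao.allRecipes(), 20).build()
}

A PagedListAdapter observing recipes then renders each page into the RecyclerView, with DiffUtil working out what actually changed.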

Refer to the following links for more details:

https://developer.android.com/topic/libraries/architecture/paging/

https://medium.com/@sharmadhiraj.np/android-paging-library-step-by-step-implementation-guide-75417753d9b9

https://medium.com/@Ahmed.AbdElmeged/android-paging-library-with-rxjava-and-rest-api-e5c229fd70ba

Android KTX

Android KTX is another part of Jetpack; it provides a set of Kotlin extensions whose purpose is to make code more concise and readable by cutting down boilerplate. Refer to the following code samples.

Kotlin

sharedPreferences.edit()
    .putBoolean("key", value)
    .apply() 

Kotlin + KTX

sharedPreferences.edit {
    putBoolean("key", value)
} 

Kotlin

Toast.makeText(this,
    R.string.text,
    Toast.LENGTH_SHORT)
.show()

Kotlin + KTX

context.toast(R.string.text)

Kotlin

for (recipe in recipes) print(recipe)

Kotlin + KTX

recipes.forEach {
    print(it)
}

Pretty simple, isn’t it? It’s fun and simple to understand.

WorkManager

If you need to execute a task immediately or at a pre-scheduled time, Jetpack provides a solution called WorkManager. WorkManager is smart enough to execute the task appropriately based on the device’s API level and the app’s state.

If the application wants to run a task while in the foreground, WorkManager runs it on a separate thread inside the app’s process. If the app is in the background, it schedules the work based on the device’s capabilities: WorkManager might use JobScheduler, Firebase JobDispatcher, or AlarmManager. In short, WorkManager selects the best option available on the device and calls the appropriate API, sparing you the boilerplate of working out the device’s state yourself.
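
A minimal sketch of what that looks like in code (the worker below is illustrative and assumes a recent androidx.work dependency):

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Illustrative worker: doWork() holds whatever task your app needs to run.
class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... perform the upload ...
        return Result.success()
    }
}

// Enqueue the task; WorkManager picks the right scheduler for the device and app state.
fun scheduleUpload(context: Context) {
    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED) // only run when online
                .build()
        )
        .build()
    WorkManager.getInstance(context).enqueue(request)
}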

With all the new features mentioned above, it is evident that Jetpack is a great option for developing Android apps. I personally love Jetpack because of the boost in efficiency that it brings and for allowing me to focus more on application logic, reducing boilerplate code writing to a minimum.

How to

How to Build a Simple Static Website with Jekyll

Calcey

HTML and CSS can be considered the bread and butter of any website. HTML is the standard markup language for creating web pages, and CSS is a language that describes the style of an HTML element. Be it a complex website like Amazon or a simple static website, the information is ultimately displayed to end users as rendered HTML. Whether you are a rockstar developer or a newbie, you might have to bang your head against a wall to figure out the ideal tech stack and framework for building a website.

The goal of this article is to help you understand how easy it is to build a simple, blog-aware, static website with Jekyll in no time.

Jekyll is a static site generator written in Ruby by Tom Preston-Werner, GitHub’s co-founder. Jekyll is at its best when it comes to personal blogs, portfolios, and static websites. The real beauty in Jekyll is that you can provide the content you want to publish on a website in your favorite markup language (as plain text) and Jekyll will automagically generate static HTML pages for you.

If you already have a Ruby development environment, you can get a simple static website up and running in just four steps. [Ruby development environment install guide]

1. Install Jekyll and bundler. If you have already installed these gems, you can skip this step.

gem install jekyll bundler

2. Create a new project named personal-blog.

jekyll new personal-blog

3. Change into the project directory.

cd personal-blog 

4. Build the project and serve the site using a development server.

bundle exec jekyll serve

Open your favorite web browser and navigate to http://localhost:4000 to view the website just created. If everything has gone well, you should get the webpage shown below.

Let’s take a step back and see exactly what Jekyll had done and the files that were generated for us when we created the new project.

├── 404.html	  # The default 404 error page
├── Gemfile	  # Project related Gem dependencies
├── Gemfile.lock  # Used by Bundler to record installed Gem versions
├── _config.yml	  # The main configuration file of the project
├── _posts/	  # Holds the blog posts
├── _site/        # Holds the generated site
├── about.md	  # The default about page
└── index.md	  # The home page

The auto-generated file structure is pretty straightforward. But if you look at our website, you will notice that it’s already styled. That’s because Jekyll uses a default theme called minima, which is specified in the _config.yml file. Jekyll comes with an extensive theming system (themes bundle layouts, includes, and stylesheets) and fully supports community-maintained templates. The minima theme ships with the Jekyll gem; if you want to customize the look and feel of the site, copy the theme files you want to override into the project directory and make the required changes.
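
Adding content is just as simple: drop a Markdown file with YAML front matter into the _posts/ directory (named, for example, 2019-08-01-hello-jekyll.md) and Jekyll turns it into a page on the next build. A minimal, illustrative post looks like this:

---
layout: post
title: "Hello, Jekyll"
date: 2019-08-01 10:00:00 +0530
categories: blog
---

This post is written in plain Markdown; Jekyll renders it into static HTML.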

The next challenge is to deploy this website and make it available to public users. When it comes to deployment, you can go ahead with one of the following options:

A. Web Servers – NGINX/Apache
B. AWS S3 for static site hosting
C. GitHub Pages

If you want to go ahead with option A or B, you need to build the project to get the distribution ready version of the website which you can achieve by executing the following command in the project directory.

 

jekyll build

Compared to options A and B, option C is very straightforward and hassle-free. It does not involve any cost: you can host your website for free with GitHub Pages. Also, you do not have to build the site each time you make a change; just push your changes to GitHub and Jekyll will automatically build and publish your website.

Resources

Hosting a Static Website on Amazon S3

GitHub Pages – Websites for you and your projects

Hosting on Github Pages

How toTrends

Efficient Engineering: How We Used Talend To Supercharge Business Intelligence


Despite the availability of a multitude of tools, data can be quite a beast to tame. Yet in the world we live in, ‘data has become the new oil’, especially when it comes to business. Many businesses have evolved to the point where they consider data their competitive advantage; from Amazon to Google, Spotify, and Tesco, the examples are numerous.

The Problem

However, large volumes of data can make it extremely hard to glean information. This was a recent problem faced by one of Calcey’s very own European clients. The client is in the business of providing cloud-based Point of Sale (POS) solutions to independent restaurants in Northern Europe.

As it set about scaling its operations by signing up new restaurants, the company understood that the sheer volume and complexity of data rendered analysis (in the traditional sense) a wasteful affair. To understand this problem better, consider how a standalone restaurant stores its transaction data. There could be hundreds of SKUs, all recorded using a naming convention chosen by the owner of the restaurant. The data would most likely be stored in a proprietary database, or even in Microsoft Excel. When you consider how a cloud-based solution provider will now have to aggregate all this data across hundreds of restaurants in many different municipalities, the complexity of the task at hand becomes apparent.

The legacy system our client had to contend with before they approached us creaked under the weight of the data it had to bear. Database timeouts were common, and it took around fifteen minutes for a single report to be compiled. The client also had to resign themselves to generating only daily reports, since the legacy system could not aggregate data to provide a weekly or monthly report.

So, how does one sanitize and unify all this data, so that actionable information can be gleaned at the click of a button?

Our Solution

In consultation with the client, we opted to conduct a pilot using the data set belonging to a single restaurant. Since unstructured data must first be sanitized, we chose Talend Cloud as the overall data integration and governance platform, primarily because of its flexibility and speed. Talend’s support for integrating third-party business intelligence (BI) tools was also a definite advantage. This allowed Calcey’s engineers to map the database structure to a set of API endpoints, thereby allowing the BI tool to access a dataset asynchronously.

The proposed system architecture

Second, we opted to use HSQLDB to improve query performance. By using HSQLDB, our engineers were able to create an in-memory cache of the dataset, which improved the speed of the API and the application’s performance while reducing the load on the back-end infrastructure. As a result of this structure, Calcey’s solution was able to deliver a much-welcome cost saving to the client.

How the caching works
The caching mechanism within Talend

The Results
By virtue of using an in-memory database to crunch the data, we managed to shorten the time it takes for our client to generate a report to mere seconds, compared to the fifteen minutes it took previously. The in-memory database structure also allows for real-time filtering of data. Additionally, we were able to integrate the database with Power BI through the Talend API, which granted our client the ability to generate deep, detailed, and actionable business insights.

How the API works
The API within Talend

Since the API works by obtaining data directly from the cache memory, we undertook to build a job within Talend (i.e. an updater module) which automatically runs according to a predetermined schedule, thus saving time and reducing the workload of the system administrator.