London’s leading precision nutrition service goes Mobile! Here’s how Calcey made it happen

The backstory 

Over a year ago, Fresh Fitness Food (FFF) introduced its web platform to the market. Now, FFF is kicking off 2021 with a brand new mobile app, live and ready to download!

For those unfamiliar with the brand, Fresh Fitness Food combines precision nutrition with convenience by offering individualized meals, cooked and delivered to your door daily. In a market brimming with generic healthy meal providers, what sets FFF apart is its ability to tailor meals to every customer individually, taking into account their nutritional needs, health goals, allergies, and food preferences. Selecting appropriate meals per customer, then preparing and sizing them to suit individual nutritional requirements is daunting enough – but to do this at scale, particularly at the scale at which FFF operates, is extremely complex. Currently, FFF delivers more than 70,000 meals per month in London and the suburbs.

This is largely made possible through the end-to-end web platform that Calcey developed for FFF to manage its entire business. This mini-ERP manages FFF's complete workflow, from customer sign-up onwards, and automates meal planning to ensure that users receive the exact nutrition they need each day. It even helps FFF manage cooking manifests and deliveries. Upon launching the solution, FFF realized an immediate 14% improvement in gross margins.

The FFF mobile app

FFF's mobile app is not simply an extension of this web solution providing another interface for busy customers to reschedule their deliveries. Of course, it offers these capabilities to current FFF customers, but it also offers a whole lot more – essentially a 'complete fitness center in your pocket' for any user. If improving your fitness is on your new year resolution list, this is the app you can't do without. It offers:

Calorie tracking 

Understanding the exact nutrients in a broad range of food items is the foundation of FFF's business model. The mobile app is integrated with the world's leading food databases, cataloging nutritional data for millions of food items, and FFF's team of nutrition experts is constantly validating and improving the accuracy of these measurements.

Activity tracking 

The app can be connected to Apple Health and Fitbit to seamlessly track daily activity levels.

Meal plan and recipes 

Since the app knows the exact calories you've consumed so far in the day, how active you've been, and your health goals (e.g. gaining muscle, losing fat), it's better placed than anyone to recommend what's optimal for your next meal. It does exactly this, and provides recipes as well!

Guided workouts 

The app has a complete section dedicated to guided workouts by top trainers. As a bonus, on the days you work out, the app will adjust your daily caloric expenditure accordingly and perhaps let you fit in that sweet treat, without letting you get derailed from your overall plan.

The holy grail of nutrition 

What does all this mean for current FFF customers? In a nutshell, they now have access to the holy grail of nutrition. Imagine having a personal chef (and nutritionist) who tracked all your snacks and your activity level on a daily basis, and adjusted your main meals in real time to keep you on track with your health goals. With the new mobile app, this is what FFF's customers now experience.

Changing the game and going international 

If you've read this far, you'll understand how this mobile app dramatically increases the value of FFF's service to its existing customers. What's more, it will attract a completely new set of users drawn by its calorie and activity tracking capabilities, guided workouts, and healthy meal plans – in short, exactly the type of people who will find FFF's service valuable. Attracting your target audience by providing them great value – we can get behind that marketing strategy!

Scaling any business worldwide is tough. Tech businesses find it easier because releasing an app in a new market is less risky and requires less investment than putting up physical infrastructure, creating delivery fleets, and so on. This mobile app marks FFF's transition from a click-and-mortar operation to a tech business. Today it can launch its app in any city, assess the interest level around its core ethos of healthy living, and directly engage the people with this mindset. The core IP embodied in its digital properties is scalable and franchisable worldwide.

Want to plan out a similarly transformative journey for your business? Get in touch and we’d be happy to help. 

Easy API Testing With Postman

Understanding Postman, the app that has become the darling of code testers around the world

Image credits: meshworld.in

Any given app in this day and age may employ a number of different APIs from various services such as Google Analytics, Salesforce CRM, PayPal, Shopify, etc. This complex combination of multiple APIs which interact seamlessly with each other through a common application codebase is what has freed us from the need to be bound to our desks. Thanks to APIs, people today can choose to run even entire businesses on the move.

However, while there is no doubt that the task of imparting various functionalities into an app has been made easier thanks to APIs, these very APIs also complicate the job of a Quality Assurance engineer in many ways, the most obvious being that every time the core codebase is modified for any reason, the APIs must also be tested for compatibility with the new code. Naturally, testing several APIs over and over again is quickly going to get tedious.

This is where Postman comes in, to help with the tedious task of API testing. API testing involves testing a collection of APIs and checking whether they meet expectations for functionality, reliability, performance, and security, and return the correct responses.

Postman is an API client which can be used to develop, test, share and document APIs and is currently one of the most popular tools used in API testing. Its features allow code testers to speed up their workflow while reaping the benefits of automation as much as possible. Postman’s sleek user interface is a boon to testers, who don’t have to go through the hassle of writing lots of code to test the functionality of an API.

Postman also has the following features on offer:

Accessibility

Once installed, Postman allows users to create an account, which then syncs their files to the cloud. Once signed in, users can access their files from any computer which has the Postman application installed.

In addition, it is also possible for users to share collections of testing requests via a unique URL or even by generating a JSON file.

Workspaces & Collections

Postman’s interface is built around workspaces and collections. Think of a workspace as an isolated container within which a tester can store, group, and manage all their code test requests. Workspaces are further divided into Personal and Team workspaces. As their names indicate, personal workspaces are visible only to a user, while team workspaces can be made available to a team. Each team gets one common workspace by default, with the option to create an unlimited number of new workspaces.

Collections are simply groups of pre-built requests that can be organized into folders, and they can be easily exported and shared with others.

Ability to create Environments

In Postman, environments allow users to run requests and collections against different data sets. For example, users can create different environments: one for development, one for testing, and another for production. In such a scenario, authentication parameters such as usernames and passwords can change from environment to environment. Postman handles this by allowing users to create, say, a staging environment and assign it a staging URL, username, and password. These variables can then be passed between requests and tests, allowing users to easily switch between environments.

Parameterization

Postman allows users to parameterize requests with variables, granting users the ability to store frequently used parameters in test requests and scripts. Postman supports five different variable scopes, namely Global, Collection, Environment, Data, and Local.

Scopes can be thought of as different "buckets" in which values reside. If a variable exists in multiple "buckets", the scope with the higher priority wins and the variable takes its value from there. Postman resolves variables using this hierarchy, progressing from the broadest to the narrowest scope.
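
For example, a base URL can be stored once and referenced in any request. A minimal sketch (the variable name and URL here are illustrative):

// In a pre-request or test script:
pm.environment.set("base_url", "https://staging.example.com");

// The variable can then be referenced anywhere a request is defined:
// GET {{base_url}}/api/users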

Creation of Tests

It is also possible for users to create custom tests which can be added to each API call. For instance, a test can be created to check whether an API call successfully returns a 200 OK status.
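
Such a test is only a few lines of JavaScript in the Tests tab:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});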

Postman also contains a very helpful Snippets section which contains a set of pre-written tests which can be deployed with a single click.

Testing each field of a JSON RESTful service manually every time there is a change can be very time-consuming; the better way is to validate the structure using a schema. Given below are the steps to follow to validate a schema using Postman.

Step 1: Assuming that we already have a JSON structure, we start with schema generation. We will use https://jsonschema.net/#/ to generate the schema: copy and paste the JSON document into the JSON Instance field, and it will generate the schema for us.

Step 2: After generating the schema, we go to the Tests tab in Postman, declare a schema variable, and paste the schema as follows:

var schema = {
    // <Insert Schema here>
};

Step 3: After that, we write the test as follows to perform the validation:

pm.test('Schema is valid', function () {
    pm.expect(tv4.validate(pm.response.json(), schema)).to.be.true;
});


Automation Testing

Postman has a complementary command-line interface known as Newman which can be installed separately. Newman can then be used to run tests for multiple iterations.

Consider a situation where there is a need to run a selected collection of written tests automatically, without opening Postman and manually triggering those tests. This is where Newman comes in, thanks to its ability to work with any program that can trigger a command, such as Jenkins or Azure DevOps. For example, with the help of Newman our tests can be integrated with CI, and if any code change is pushed, CI will run the Postman collections, which in turn helps developers obtain quick feedback on how their APIs perform after code changes.

Postman can be used to automate many types of tests including unit tests, functional tests, and integration tests, thus helping to reduce the amount of human error involved.

Newman is also special in that it allows users to run collections on computers which may not be running Postman. Collections can be fetched and run through the CLI of a host computer by running a few commands.

For the uninitiated, here's a quick tutorial on how to install Newman:

Note: Installing Newman requires the prior installation of Node.js as well as NPM (Node Package Manager).

  1. Open the command prompt (Terminal on a Mac).
  2. Type npm install -g newman
     Newman is now installed on your system.
  3. Export the collection you want to run as a JSON file (for instance, collectionFile.json).
  4. In the command prompt, go to the location of the collection JSON file and run the command
     newman run collectionFile.json
  5. If you want to run the tests with environment variables, export the environment as a JSON file (for instance, environmentFile.json).
  6. You can then run the tests with the environment variables using the command
     newman run collectionFile.json -e environmentFile.json

Following are some of the other options that can be used to customize test runs (a combined example follows the list):

-d, --iteration-data [file]  Specify a data file to use, either JSON or CSV

-g, --globals [file]  Specify a Postman globals file as JSON

-n, --iteration-count [number]  Define the number of iterations to run

--delay-request [number]  Specify a delay (in ms) between requests

--timeout-request [number]  Specify a request timeout (in ms) for a request

--bail  Stop the runner when a test case fails
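
For instance, a collection could be run five times against a staging environment, with a delay between requests, stopping at the first failure:

newman run collectionFile.json -e environmentFile.json -n 5 --delay-request 200 --bail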

Easier Debugging

The consoles contained within Postman can be used to debug any errors that may arise. Postman contains two debugging tools. One is the Postman console itself, which records any errors which take place while testing an API. The second is the DevTools console, which helps debug any errors occurring within the Postman app itself. For instance, if Postman crashes while executing a test, the DevTools console is where you would look to diagnose the problem.

Support for Continuous Integration

Through Newman, Postman can be hooked into a wide variety of Continuous Integration (CI) tools such as Jenkins and Bamboo. This helps ensure that development practices are kept consistent across various teams.

 

With so many features on offer to make life easier for code testers, it is not surprising that in the world of code testing, Postman is heralded as the best thing since sliced bread.

A Picture Is Worth A 1000 Words, But What If Nobody Can See It?

A trick to speed up the loading of cached images on React Native

Be it an app or a website, speed is crucial. Too much time taken to load content can turn away users, putting to waste hundreds, if not thousands, of man-hours spent painstakingly writing and reviewing code.

Quite recently, some of our code ninjas got together and built a nifty little component which can be used by anyone coding in a React Native environment. We have now made the code freely accessible on GitHub for you to experiment with.

Note: This library supports React Native 0.60 or above, which means iOS 9.0 and Android 4.1 (API 16) or newer.

Our module in action

What Does This Module Do?

In a nutshell, this module is a simple, lightweight CachedImage component that is adept at handling how an image will load based on network connectivity. It can deliver significant performance improvement, particularly when called upon to fetch high-resolution images within an app.

This module can be used to:

  • Manage how images are rendered within an app while reducing a user's mobile data usage
  • Enable offline support for online images
  • Manage an application cache within a predefined cache storage limit, or reduce/limit how much of a device's internal storage can be used by an app
  • Reduce loading times of static online images by switching between low-resolution and high-resolution images

How Does This Module Work?

The module itself is built on the Context API introduced in React 16, which allows a cache manager to be initiated when an application starts up. A developer can define a cache limit for the whole application (500 MB, for instance) and the module will cache images up to the defined limit without the need for concurrent caching processes. The cache manager will also scrap any data which exceeds the defined storage limit.

The freedom to define a cache limit opens up the opportunity for developers to deploy our module on lower-end devices, which often possess relatively low internal memory capacities.

It is important to note that our module favors a client-side caching approach. Images are downloaded and stored using a unique file naming pattern derived from their original URLs. Whenever the application next requests the same URL, the caching module steps in to serve up the relevant cached image.
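
To give a feel for the developer experience, here is a minimal usage sketch; the component and prop names are assumptions based on the description above, so check the GitHub README for the actual interface:

import React from 'react';
// hypothetical import path; see the README for the real package name
import CachedImage from 'react-native-cached-image';

const Avatar = () => (
  <CachedImage
    // downloaded once, then served from the cache on subsequent requests
    source={{ uri: 'https://example.com/avatar-high-res.jpg' }}
    style={{ width: 120, height: 120 }}
  />
);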

Since the whole point of a caching module is to reduce demands on device and network resources, our module makes use of the react-native-fetch-blob module to handle native (Android/iOS) file system access requests. Not only does this make our module lightweight, but it also reduces dependence on excessive boilerplate code.

A Special Note From Our Developers

  1. This module provides a simple solution for handling cache with limits, but only for images. In practice though, requirements may vary from application to application. So feel free to use the architecture/structuring of this module to come up with customizable, scalable, and configurable advanced caching modules that support other file types as well.
  2. Currently, we have not implemented a validation method to prevent the scrapping of cache data which is in use. Because of this, defining a low value for the cache limit could lead to corrupted images. Therefore, use your judgment when deciding on the cache limit.

That’s about it. Do play around with this little module, and let us know what you think!

Cover image credits: salonlfc.com/

Not just a 'one-trick' pony: How to use .NET Core to Monitor App Health

Monitoring app health is of utmost importance in order to ensure that bugs and other vulnerabilities are patched on time. Today, let's delve into ASP.NET Core's health checks middleware, which provides support for conducting health checks on an application.

First introduced in ASP.NET Core 2.2, the health check feature is exposed via configurable HTTP endpoints. These health checks can be used to check whether a database is responding, whether all dependencies are in order, and more.

Getting started

To get started, create an ASP.NET Core project in Visual Studio 2017. To do so,

  1. Launch the Visual Studio 2017 IDE.
  2. Click on File > New > Project.
  3. Select "ASP.NET Core Web Application (.NET Core)" from the list of templates displayed.
  4. Specify a name for the project.
  5. Click OK to save the project.
  6. A new window, "New .NET Core Web Application…", is shown next.
  7. Select .NET Core as the runtime and ASP.NET Core 2.2 (or later) from the drop-down list at the top.
  8. Select API as the project template.
  9. Ensure that the checkboxes "Enable Docker Support" and "Configure for HTTPS" are unchecked.
  10. Ensure that "No Authentication" is selected, as authentication is not necessary here.
  11. Click OK.

Register health check services

Next, proceed to call the AddHealthChecks method in the ConfigureServices method of the Startup class. The health check middleware can then be added to the request pipeline by calling UseHealthChecks, as shown in the code snippet below.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseHealthChecks("/health");
    app.UseStaticFiles();
    app.UseCookiePolicy();
    app.UseMvc();
}

Do note that both the ConfigureServices and Configure methods are called by the runtime.

Built-in vs. custom health checks

Now we come to a fork in the road. ASP.net provides us the ability to either use the built-in health check or to deploy custom health checks.

The built-in health checks allow you to take advantage of the Entity Framework Core DbContext check, which reports whether the Entity Framework Core DbContext is able to connect to a given database. To do this, add the Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore NuGet package and configure health checks in the ConfigureServices method as shown below.

services.AddHealthChecks()
    .AddDbContextCheck<MyDbContext>("IDGDbContextHealthCheck");

Do remember that you always have the option of using other health check packages available on NuGet. These include SQL Server, MySQL, MongoDB, Redis, RabbitMQ, Elasticsearch, Hangfire, Kafka, Oracle, Azure Storage, and more. These community packages are available in the AspNetCore.Diagnostics.HealthChecks repository on GitHub.
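
For example, with the AspNetCore.HealthChecks.SqlServer package installed, a SQL Server check can be registered in a single line (the connection string name below is a placeholder):

services.AddHealthChecks()
    .AddSqlServer(Configuration.GetConnectionString("DefaultConnection"));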

This doesn’t work for me. I want to go custom.

Assume you want to verify whether the application is able to connect to a database or an external service. If you decide to create a custom health check, implement the IHealthCheck interface and its CheckHealthAsync method.

public class MyCustomHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        throw new System.NotImplementedException();
    }
}

Note how the HealthCheckResult struct is used in the implementation below.

public async Task<HealthCheckResult> CheckHealthAsync(
    HealthCheckContext context,
    CancellationToken cancellationToken = default(CancellationToken))
{
    if (IsDBOnline())
    {
        return HealthCheckResult.Healthy();
    }
    return HealthCheckResult.Unhealthy();
}

The IsDBOnline method can be used to check if the database is working as intended.

private bool IsDBOnline()
{
    // Placeholder: supply the real connection string for your database
    string connectionString =
        "some connection string to connect to the database";
    try
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            if (connection.State != System.Data.ConnectionState.Open)
            {
                connection.Open();
            }
        }
        return true;
    }
    catch (System.Exception)
    {
        // Any failure to open the connection is treated as the database being offline
        return false;
    }
}

The HealthCheckResult object shown in the code above allows us to pass description, exception, and status data represented as a dictionary of key-value pairs. This information can then be presented on a health check web page. After building your custom health check, remember to register the custom health check type in the ConfigureServices method of the Startup class to start leveraging it, as sketched below.
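
A minimal sketch of that registration (the check name is arbitrary and will appear in the health report):

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddCheck<MyCustomHealthCheck>("MyCustomHealthCheck");
}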

Visualize your health check

If you wish to view the results of your health check in a more visually appealing format, you can use an open-source visualization tool named HealthChecksUI. To use this tool, install it from NuGet by using the following command at the package manager console window.

Install-Package AspNetCore.HealthChecks.UI

Once the installation is complete, configure the package in the ConfigureServices and Configure methods of the Startup class.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecksUI();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseHealthChecks("/health", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
    app.UseHealthChecksUI();
}

Finish things off by adding the following configuration in the appsettings.json file to let HealthChecksUI know where to fetch the health check information from.

"HealthChecks-UI": {
	"HealthChecks": [
  	{
    	"Name": "Local",
    	"Uri": "http://localhost:1994/health"
  	}
	],
	"EvaluationTimeOnSeconds": 10,
	"MinimumSecondsBetweenFailureNotifications": 60
  }

Fire up your application and navigate to /healthchecks-ui to see your health check in action!

RxSwift: Noob to Student in 10 minutes

Some time ago, we wrote a blog post on what Reactive programming is. As a technique, reactive programming has become so popular that more than 40 languages, including Java, have evolved to support it. Reactive programming powers the codebases of some of the world's largest online platforms, such as Netflix, Airbnb, and Crunchbase. So clearly, it's a technique worth familiarising yourself with.

One of the key principles underpinning Reactive programming is the use of asynchronous data streams. But what is an asynchronous data stream? Simply put, an asynchronous data stream is a stream of data where values are emitted, one after another, with a delay between them. The word asynchronous means that the data emitted can appear anywhere in time, after one second or even after two minutes, for example. 

With Reactive programming, data streams will become the spine of your application. Events, messages, calls, and even failures will be conveyed by a data stream. In a reactive programming environment, these streams will be observed and reacted to, when a value is emitted.

What are the key benefits of using Reactive techniques in your codebase?

  • Functional
    Reactive programming allows you to avoid intricate stateful programs, making use of clean input/output functions over observable streams instead.
  • Asynchronous
    Traditional try/catch methods cannot handle errors in asynchronous computations, but ReactiveX is equipped with better mechanisms to handle errors.
  • Less is more
    Reactive programming provides developers with operators and transformation elements, which can be used to convert boilerplate into fewer lines of code.
  • Concurrency
    Reactive programming also provides schedulers and observers to handle threading and queues.

Getting Started

RxSwift is one of the best ways to deploy reactive code in your application, especially if you develop for iOS. Essentially, it is Swift's own version of ReactiveX (or Rx). The more technically inclined amongst us would describe RxSwift as a library for composing asynchronous and event-based code using observable sequences and functional-style operators, allowing for parameterized execution through schedulers.

RxSwift can be installed through CocoaPods just like any other pod library. A typical Podfile would look something like this:

# Podfile
use_frameworks!

target 'YOUR_TARGET_NAME' do
    pod 'RxSwift',    '~> 4.0'
    pod 'RxCocoa',    '~> 4.0'
end

Next, run pod install from the directory containing the Podfile to install the RxSwift library into your project.

$ pod install

Understanding the RxSwift Landscape

There are a few key elements in the RxSwift universe which you must keep in mind at all times.

Observable Sequences, Observables and Observers

Everything in RxSwift is an observable sequence, or something that operates on or subscribes to events emitted by an observable sequence. Observable sequences which will emit data continuously for one or more instances are simply called ‘Observables’.

Observers on the other hand, can subscribe to these observable sequences to receive asynchronous notifications as new data is gathered to perform operations.
Observable sequences can emit zero or more events over their lifetimes.

In RxSwift, an Event is just an enumeration type with three possible states (a short example follows the list):

  • .next(value: T)
    When a value or collection of values is added to an observable sequence it will send the next event to its subscribers. The associated value will contain the actual value from the sequence.
  • .error(error: Error)
    If an Error is encountered, a sequence will emit an error event. This will also terminate the sequence.
  • .completed
    If a sequence ends normally it sends a completed event to its subscribers.

Subjects

Subjects are a special form of observable sequence: you can subscribe to them and also dynamically add elements to them (a sketch follows the list below). Currently, RxSwift has four different kinds of subjects.

  • PublishSubject:
    If subscribed to, you will be notified of all events that happen after you subscribed.
  • BehaviorSubject:
    A behavior subject will give any subscriber the most recent element, plus everything that is emitted by the sequence after the subscription.
  • ReplaySubject:
    If you want to replay more than the most recent element to new subscribers on the initial subscription, you need a ReplaySubject. With a ReplaySubject, you can define how many recent items should be emitted to new subscribers.
  • Variable:
    A Variable is nothing but a BehaviorSubject wrapper which feels more natural to non-reactive programmers. It can be used just like a normal variable.
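
As a quick sketch of the difference, a PublishSubject only delivers elements emitted after the subscription:

let subject = PublishSubject<String>()

subject.onNext("missed")   // no subscribers yet, so this element is lost

subject
    .subscribe(onNext: { print($0) })
    .disposed(by: disposeBag)

subject.onNext("received") // printed, because the subscription now exists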

Operators

Operators are used to filter, transform, or combine data sequences before sending them to subscribers. The Rx documentation uses what are known as 'Marble Diagrams' to help you select the operators you need. A Marble Diagram visualizes the transformation of an observable sequence: it consists of the input stream on top, the output stream at the bottom, and the actual transformation function in the middle.
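
For instance, filter and map can be chained before the subscription; this sketch prints 20, 40, and 60:

Observable.of(1, 2, 3, 4, 5, 6)
    .filter { $0 % 2 == 0 }             // keep even numbers only
    .map { $0 * 10 }                    // transform each remaining value
    .subscribe(onNext: { print($0) })
    .disposed(by: disposeBag)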

Schedulers

Schedulers are used to create thread-safe operations. Generally, operators will work on the same thread where the subscription was created. With the use of a scheduler, operators can be forced to work on a specific queue (see the sketch after the list below).

RxSwift has five types of schedulers:

  • MainScheduler
    This scheduler abstracts work that needs to be performed on the main thread. If schedule methods are called from the main thread, it will perform the action immediately without scheduling. This scheduler is usually used to perform UI-related work.
  • CurrentThreadScheduler
    This scheduler schedules units of work on the current thread. It is the default scheduler for operators which generate elements.
  • SerialDispatchQueueScheduler
    This scheduler abstracts work that needs to be performed on a specific dispatch_queue_t. It will make sure that even if a concurrent dispatch queue is passed, it is transformed into a serial one. Serial schedulers enable certain optimizations for observeOn. The main scheduler is an instance of SerialDispatchQueueScheduler at work.
  • ConcurrentDispatchQueueScheduler
    This scheduler abstracts work that needs to be performed on a specific dispatch_queue_t. You can also pass a serial dispatch queue, and it should not cause any problems. This scheduler can be used when some work needs to be performed in the background of the application.
  • OperationQueueScheduler
    This scheduler abstracts work that needs to be performed on a specific NSOperationQueue. It is suitable for instances where there is a bigger chunk of work that needs to be performed in the background, and you want to fine-tune concurrent processing using maxConcurrentOperationCount.

And that's about it. You have now successfully learned the basics of RxSwift. Feel free to give RxSwift a try yourself with this sample project: https://github.com/ameera/LoginWithRxSwift

Happy coding!

References:

  1. Ameera Damsika: Tech Talk on RxSwift
    https://www.facebook.com/calcey/videos/vb.199985621561/2478458815600885/?type=2&theater
  2. https://medium.com/@duydtdev/concepts-about-rxswift-and-how-to-install-rxswift-library-to-project-5a1c3484ca6e
  3. https://medium.com/ios-os-x-development/learn-and-master-%EF%B8%8F-the-basics-of-rxswift-in-10-minutes-818ea6e0a05b

How to set up Kafka in a Docker container

At Calcey, we recently found ourselves having to link a legacy system with a new information system on behalf of a client. In order to avoid complications, we explored the possibility of deploying Kafka within a Docker container.

What is Kafka?

Kafka is an open-source, fault-tolerant event streaming platform. Kafka can help bridge the information gap between legacy systems and newer systems. Imagine a situation where you have a newer, better system that needs data from an older, legacy system. Kafka can fetch this data on behalf of the developer without the need to build an actual connection between the two systems.

Kafka, therefore, will behave as an intermediary layer between the two systems.

In order to speed things up, we recommend using a Docker container to deploy Kafka. For the uninitiated, a Docker container is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

To deploy Kafka, three pieces of the puzzle need to fall into place: the ZooKeeper server, the Kafka server, and a connector to the data source. In addition, we will be making use of SQL Server's Change Data Capture (CDC) feature to feed data into Kafka. CDC records any insertion, updating, and deletion activity applied to a SQL Server table, making the details of the changes available in an easily consumed relational format. But that's a topic for another day.

The easiest way to set all this up is to use Debezium. We recommend using the Debezium images, which are documented at https://debezium.io/docs/tutorial/. These images allow you to configure ZooKeeper, Kafka, and the Connector in one go.
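
As a rough sketch, the three services can be started with commands along these lines; the image tags here are assumptions, so check the tutorial for current versions:

docker run -d --name zookeeper -p 2181:2181 debezium/zookeeper:1.0
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:1.0
docker run -d --name connect -p 8083:8083 --link kafka:kafka \
    -e BOOTSTRAP_SERVERS=kafka:9092 -e GROUP_ID=1 \
    -e CONFIG_STORAGE_TOPIC=connect_configs -e OFFSET_STORAGE_TOPIC=connect_offsets \
    -e STATUS_STORAGE_TOPIC=connect_statuses \
    debezium/connect:1.0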

With both ZooKeeper and Kafka now set up, all you have to do is tell Kafka where your data is located. To do so, you can connect Kafka to a data source by means of a 'connector'. While there is a wide range of connectors available to choose from, we opted to use the SQL Server connector image created by Debezium. Once a connection is established with the data source, pointing the connector back to the Kafka server ensures that all changes are persisted.
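
Registering the connector then amounts to POSTing a configuration to Kafka Connect's REST API. A minimal sketch, with hostnames, credentials, and names as placeholders:

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "<password>",
    "database.dbname": "InventoryDB",
    "database.server.name": "server1",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.inventory"
  }
}'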

And that’s all there is to deploying Kafka in a Docker Container!

React Native: Advanced Mobile Application Development

New to React Native?

The technically inclined amongst us may already know the story of React Native. Developed by Facebook, it is essentially a set of libraries that communicate with the corresponding native APIs. This is where the 'Native' tag comes in. By design, React Native is able to easily access features native to the device it is being run on, be it a phone or tablet running Android, iOS, or even Windows, connecting native threads and JavaScript threads through its event bridge.

React Native uses a mix of JavaScript and XML-like syntax, known as JSX, to render the user interface as a function of the application's current state. This makes it much more interesting to build component-rich UIs with principles like stateful components, a layout engine, a virtual DOM, etc.

Let’s go deep.

Here, at Calcey, React Native is one of our favorite tools to work with. Along the way, we’ve picked up a few tricks useful for scalable react-native app development which we’ll be sharing today.

Write reusable components (But don't overdo it)

React recommends creating reusable components as much as you can. Obviously, this makes maintenance and debugging considerably easier. However, as any experienced coder knows, defining components with too much specificity can actually render them useless. Similarly, defining components too loosely will complicate things.

Take the example of building a screen for an app. A screen is essentially a group of components. Intuitively, it makes sense to write common UI elements such as buttons, lists, etc. as reusable blocks of code. This will not only save time but also make your code cleaner.

Safe coding

Safety is determined by how far the platform will go to prevent the developer from making mistakes when writing applications. Given the freedom JavaScript grants developers to choose their own coding style, code safety becomes an important factor, especially when dealing with scalable apps.

React Native has a few tricks of its own, supporting Flow and TypeScript to avoid such cases if the developer decides to use them. Flow grants us the ability to easily add static type checking to our JavaScript. It will also help prevent bugs and allow for better code documentation. Meanwhile, TypeScript provides great tooling and language services for autocompletion, code navigation, and refactoring. The ecosystem you work in usually has a major influence on helping you decide what to use, as does your previous exposure to static type systems.

At Calcey, we use these tools to make sure that developers benefit from them when it comes to the readability of the code and adherence to code standards.

Extract, extract, extract

React Native projects tend to include a large number of common elements such as styles, images, and global functions (functions that format dates and times, make requests to a server, etc.). At Calcey, we generally encourage our developers to keep such elements separate from the component code. This makes it easier to share elements from anywhere within the app, while also making a given app’s codebase cleaner, and easier to maintain and scale.

Here’s an example of a color.js file coded by one of our developers:

// Converts a hex color string (e.g. '#f00' or '#ff6347') to an rgba() string
export function hexToRgbA(hex: string, opacity: number) {
  let c;
  if (/^#([A-Fa-f0-9]{3}){1,2}$/.test(hex)) {
    c = hex.substring(1).split('');
    // Expand the shorthand form ('f00') to the full form ('ff0000')
    if (c.length === 3) {
      c = [c[0], c[0], c[1], c[1], c[2], c[2]];
    }
    c = `0x${c.join('')}`;
    // Extract the red, green, and blue channels with bit shifts
    return `rgba(${[(c >> 16) & 255, (c >> 8) & 255, c & 255].join(',')}, ${opacity})`;
  }
  throw new Error('Bad Hex');
}
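
For example:

hexToRgbA('#ff6347', 0.8); // returns 'rgba(255,99,71, 0.8)'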

Store management

To most React Native developers, Redux is an absolute necessity. But at Calcey, we believe that Redux is not a necessity for the most part. The way we see it, bringing Redux into the picture would be akin to using a hammer to crack open an egg.

Ever since we started using Redux, it has only proven necessary for the most complex of apps, where immense scalability is required. To understand this better, consider why Redux was developed in the first place. As Facebook grew to become what was essentially the biggest web app in the world, it had to contend with the headache of not being able to show the correct number of notifications in the header bar. At the time, it was just difficult for Facebook (or any other web app) to recognize changes in one part of the app (e.g. when you read a comment on a post) and reflect that change in another area (i.e. reduce the number of unread notifications by one). Facebook wasn't happy with forcing a web page refresh to solve the problem, so it created the Flux architecture as a solution – the approach that later inspired Redux.

Redux works by storing all of an app's state in a single JavaScript object. Whenever a part of an app needs to show some data, it requests the information from the server, updates the single JavaScript object, and then shows the data to users. By storing all information in one place, the app always displays the correct information, no matter where, thereby solving Facebook's notification problem.

Problems cropped up when other independent developers began using a single object to store all their information – basically every single piece of data provided by the server. This approach has three main drawbacks: it introduces a need for extra code, it creates the problem of 'stale data', whereby data from a previous state lingers within the app, and it increases the learning curve for new developers.

So how does one overcome this problem? By planning ahead and using proper requirement identification. If you envision that your app will have extreme scalability needs in the future, it may be better to employ Redux from day one. Otherwise, deploying Redux selectively is wiser. After all, it is possible to apply ideas from Redux without using Redux itself. An example of a React component with local state is given below:

import React, { Component } from 'react';
import { View, Button } from 'react-native';

class Counter extends Component {
  state = { value: 0 };

  increment = (): void => {
    this.setState(prevState => ({
      value: prevState.value + 1
    }));
  };

  decrement = (): void => {
    this.setState(prevState => ({
      value: prevState.value - 1
    }));
  };

  render() {
    return (
      <View>
        {/* ChildComponent (defined elsewhere) receives the value as a prop */}
        <ChildComponent value={this.state.value} />
        <Button title="+" onPress={this.increment} />
        <Button title="-" onPress={this.decrement} />
      </View>
    );
  }
}

We can pass these attributes or functions down to any depth of the component tree and use them inside those components. This mechanism is called prop-drilling. Be warned though: it's not a good idea to drill through multiple layers unless you have a clear understanding of where the props are coming from, and where they are going next.

Another solution is the Context API provided by React itself. The Context API allows us to access props of a parent from any child or parallel component, using the consumer design principle. All these options are used at Calcey, depending on the use case.
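
A minimal sketch of the Provider/Consumer pattern (the context name and values here are illustrative):

import React from 'react';
import { Text } from 'react-native';

const ThemeContext = React.createContext('light'); // default value

// Any component below a Provider can read the value without prop-drilling
const ThemedLabel = () => (
  <ThemeContext.Consumer>
    {theme => <Text>Current theme: {theme}</Text>}
  </ThemeContext.Consumer>
);

const App = () => (
  <ThemeContext.Provider value="dark">
    <ThemedLabel />
  </ThemeContext.Provider>
);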

These are a few of our internal React Native practices and tricks. What are yours? Let us know in the comments below!

Automating The eSignature Process Using DocuSign

In an ever-evolving digital world, legal documents with dotted lines for signatures are perhaps one of the last remaining analog holdouts. However, that too is now going digital, with e-signatures gaining more widespread acceptance.

There are a plethora of services online which allow users to sign documents electronically. DocuSign is one of the most well known, while HelloSign, SignNow, and Citrix RightSignature are a few others that make up the rest of the pack.

The basic premise of eSignature services

In order to use an eSignature service, a user must first upload a document that will be scanned by the service. Next, the user will be allowed to define the areas on the document where a signature or some other type of input is required from the signees. Once all this is done, the signable document will be delivered to the specified signees via email.

Everything works seamlessly when it is just one document that needs to be sent across at any given time. However, what if a user needs to frequently send similar sets of documents to different groups of signees, perhaps on a daily basis?

In such scenarios, it may not be wise to require a user to upload documents and define input areas several times over. Not only is this time-consuming, but it is also extremely tedious.

Faced with this problem, one of our own clients recently turned to us for help.

Our Solution

Having identified the scale of the problem, our engineers set out to develop a solution that could unite the convenience provided by a service such as DocuSign with the simplicity and seamlessness promised by automation.

Since the client was already using the DocuSign platform to send documents to signees, our engineers decided to build a layer of code that would sit above DocuSign, thus essentially building a customized eSignature platform for the client.

Our solution allows all details relevant to a signee, such as full name, address, etc., to be entered into a database. Once the data has been input, all the client has to do is select the relevant document and the name of the signee, and the code takes over the task of populating all the relevant fields with the correct information.

How We Built It

In order to build a code layer that runs atop DocuSign, one must first sign up for a DocuSign developer account and build a sandbox. Visit https://developers.docusign.com/ and sign up to create a sandbox.

Next, an authorization method must be chosen. Due to the need to ensure that the application is able to access the DocuSign API without any human interaction, Calcey's engineers chose to use JWT as the authorization model. With JWT in place, our custom application will seek to impersonate a user with a DocuSign login. In order to allow the impersonation to take place smoothly, we must register the application with DocuSign and ensure that the target user provides explicit permission for the API to use their credentials. It is important to note that granting permission to use one's login credentials is a one-time action.

You can now choose to create an envelope template, which can hold a set of documents that require signing. Once the documents have been uploaded, the user needs to manually specify where data input is necessary on each document.

Note: When creating placeholders, ensure that the template contains one or more signees. It is also important to insert only the role of the signee when creating the template, since all other relevant information will be taken care of by the application.

Once all placeholders have been defined, we can consider the template ‘ready’. Now, whenever a user wants to send out documents, the DocuSign API can fetch a list of pre-uploaded templates, allowing the user to pick and choose the correct set of documents to send out. With the aid of the Template ID, the DocuSign API will create what is known as an ‘envelope’ and automatically deliver the documents to the intended recipients.
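
As a rough sketch, creating an envelope from a template boils down to a single call to the eSignature REST API; the account ID, template ID, and signee details below are placeholders:

POST /restapi/v2.1/accounts/{accountId}/envelopes

{
  "templateId": "<template-id>",
  "templateRoles": [
    {
      "email": "jane.doe@example.com",
      "name": "Jane Doe",
      "roleName": "Signer"
    }
  ],
  "status": "sent"
}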

Skyrocketing with Android JetPack

In 2018, at Google I/O, Android introduced a next-generation suite called Jetpack to accelerate Android development. Android Jetpack is a set of components, tools, and architectural guidance that makes it quick and easy to build great Android apps. Components are unbundled but built to work together, while leveraging Kotlin language features to make developers more productive. Technically, Jetpack consists of the existing support library, the architecture components, and Android KTX, in separate modules, rebranded in an adaptive way and providing coverage for lifecycle management, robustness of data states, background tasks, navigation, and much more.

Source: https://android.jlelse.eu/what-is-android-jetpack-737095e88161

As represented in the illustration from the source above, Jetpack combines four major categories.

  • Foundation
  • Architecture
  • Behavior
  • UI

Each section consists of both old and latest components. The older components have been in use for quite a while. This post will focus mainly on a few newly developed components such as navigation, paging, Android KTX, and WorkManager.

Navigation

Source: https://medium.com/@Alex.v/android-navigation-architecture-component-25b5a7aab8aa

The navigation component:

  • Reduces the boilerplate code around fragment transactions and back navigation – the component is smart enough to navigate by itself, and to include bundle data at runtime if needed, based on the provided navigation destinations and actions.
  • Gives developers an opportunity to navigate through the view hierarchy, similar to a storyboard in Xcode.

When it comes to passing data through the bundle, the navigation component library comes with a Gradle plugin called Safe Args, which avoids mistakes such as passing random bundles or using the wrong keys to extract data.
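
For instance, navigating with a Safe Args generated directions class might look like this (the fragment, action, and argument names are illustrative):

// Safe Args generates a Directions class per destination with type-safe arguments
val action = HomeFragmentDirections.actionHomeToDetail(recipeId = 42)
findNavController().navigate(action)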

Migrating to the navigation component is pretty straightforward; simply following the steps below would be adequate.

  • Create a navigation graph for separate activities if required.
  • Link separate activities through activity destinations, replacing existing startActivity() calls.
  • In case multiple activities share the same layout, navigation graphs can be combined, replacing navigate calls to the activity destinations with navigation graphs.

Paging

Apps work with enormous sets of data but only need to load a small portion of that data at any given time. This should be a key consideration for a developer, since loading too much at once drains the battery and wastes bandwidth. Jetpack provides a paging library to overcome this challenge by enabling gradual and graceful data loading. Furthermore, it can be integrated with RecyclerView and works with both LiveData and RxJava.

The Paging library consists of the following core elements.

  • PagedList
  • DataSource

PagedList is a collection that has the capability to load data in chunks, asynchronously.

DataSource is the base class for loading snapshots of data into the PagedList. The flow from the data layer to the UI components works roughly as follows.

Assuming the database is your data source, a DataSource loads data into a PagedList, which is handled in a repository as LiveData created by a LivePagedListBuilder. Then, through the ViewModel, the data reaches a PagedListAdapter, which the paging library provides to help present data from the PagedList in a RecyclerView. The PagedListAdapter uses the DiffUtil class to find new data and notifies the view automatically.
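
A condensed sketch of that pipeline, assuming a Room DAO that exposes a DataSource.Factory (all names are illustrative):

// In the DAO: Room can return a DataSource.Factory directly
@Query("SELECT * FROM recipes")
fun allRecipes(): DataSource.Factory<Int, Recipe>

// In the ViewModel: build LiveData<PagedList<Recipe>> with a page size of 20
val recipes: LiveData<PagedList<Recipe>> =
    LivePagedListBuilder(dao.allRecipes(), 20).build()

// In the Activity/Fragment: observe and hand pages to a PagedListAdapter
viewModel.recipes.observe(this, Observer { adapter.submitList(it) })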

Refer to the following links for more details

https://developer.android.com/topic/libraries/architecture/paging/

https://medium.com/@sharmadhiraj.np/android-paging-library-step-by-step-implementation-guide-75417753d9b9

https://medium.com/@Ahmed.AbdElmeged/android-paging-library-with-rxjava-and-rest-api-e5c229fd70ba

Android KTX

Android KTX is another feature that comes with Jetpack, providing a set of Kotlin extensions. The purpose of Android KTX is to make code more concise and readable by reducing the lines of code required. Refer to the following code samples.

Kotlin

sharedPreferences.edit()
    .putBoolean("key", value)
    .apply() 

Kotlin + KTX

sharedPreferences.edit {
    putBoolean("key", value)
} 

Kotlin

Toast.makeText(this,
    R.string.text,
    Toast.LENGTH_SHORT)
.show()

Kotlin + KTX

context.toast(R.string.text)

Kotlin

for (recipe in recipes) print(recipe)

Kotlin+KTX

recipes.forEach {
    print(it)
}

Pretty simple, isn’t it? It’s fun and simple to understand.

WorkManager

Assuming you need to execute a task immediately or at a pre-scheduled time, Jetpack provides an optimal solution called WorkManager. WorkManager is smart enough to execute the task based on the device's API level and the app's state.

If the application wants to run a task in the foreground, WorkManager runs it in a separate thread inside the app's process. If the app is in the background, it will schedule a background task based on the device's capabilities: WorkManager might use JobScheduler, Firebase JobDispatcher, or AlarmManager. Basically, WorkManager has the power to select the best option based on the device's capabilities and execute the appropriate API, reducing the boilerplate code needed to figure out the device's state. A minimal sketch is given below.
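
A minimal sketch of enqueuing a one-off task; UploadWorker and uploadLogs() are hypothetical:

class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        uploadLogs() // the actual background work, assumed to be defined elsewhere
        return Result.success()
    }
}

val request = OneTimeWorkRequestBuilder<UploadWorker>().build()
WorkManager.getInstance().enqueue(request)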

With all the new features mentioned above, it is evident that Jetpack is a great option for developing Android apps. I personally love Jetpack because of the boost in efficiency that it brings and for allowing me to focus more on application logic, reducing boilerplate code writing to a minimum.

How to Build a Simple Static Website with Jekyll

HTML and CSS can be considered the bread and butter of any website. HTML is the standard markup language for creating web pages, and CSS is a language that describes the style of an HTML element. Be it a complex website like Amazon or a simple static one, the information will be displayed to end users as rendered HTML. Whether you are a rockstar developer or a newbie, you might still bang your head against a wall trying to figure out the ideal tech stack and framework to build a website with.

The goal of this article is to help you understand how easy it is to build a simple, blog-aware, static website with Jekyll in no time.

Jekyll is a static site generator written in Ruby by Tom Preston-Werner, GitHub's co-founder. Jekyll is at its best when it comes to personal blogs, portfolios, and static websites. The real beauty of Jekyll is that you can provide the content you want to publish on a website in your favorite markup language (as plain text) and Jekyll will automagically generate static HTML pages for you.

If you already have a Ruby development environment, you can get a simple static website up and running in just four steps. [Ruby development environment install guide]

1. Install Jekyll and bundler. If you have already installed these gems, you can skip this step.

gem install jekyll bundler

2. Create a new project named personal-blog.

jekyll new personal-blog

3. Change into the project directory.

cd personal-blog 

4. Build the project and serve the site using a development server.

bundle exec jekyll serve

Open your favorite web browser and navigate to http://localhost:4000 to view the website you just created. If everything has gone well, you should see the default Jekyll starter page.

Let's take a step back and see exactly what Jekyll has done and the files that were generated for us when we created the new project.

├── 404.html	  # The default 404 error page
├── Gemfile	  # Project related Gem dependencies
├── Gemfile.lock  # Used by Bundler to record installed Gem versions
├── _config.yml	  # The main configuration file of the project
├── _posts/	  # Holds the blog posts
├── _site/        # Holds the generated site
├── about.md	  # The default about page
└── index.md	  # The home page

The auto-generated file structure is pretty straightforward. But if you look at our website, you will notice that it's already styled. That's because Jekyll uses a default theme called minima, which is specified in the _config.yml file. Jekyll comes with an extensive theming system (or layouts, in Jekyll nomenclature) and provides full support for community-maintained templates. The minima theme comes with the Jekyll gem. If you want to customize the look and feel of the site, you need to copy minima into the project directory and make the required changes.
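
For instance, the relevant lines of the generated _config.yml look something like this (the title is whatever you chose for your site):

# _config.yml (excerpt)
title: Personal Blog
theme: minima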

The next challenge is to deploy this website and make it available to public users. When it comes to deployment, you can go ahead with one of the following options:

A. Web Servers – NGINX/Apache
B. AWS S3 for static site hosting
C. GitHub Pages

If you want to go ahead with option A or B, you need to build the project to get the distribution-ready version of the website, which you can do by executing the following command in the project directory.

jekyll build

Compared to options A and B, option C is very straightforward and hassle-free. It does not involve any cost, and you can host your website for free with GitHub Pages. Also, you do not have to build the site each time you make a change; just commit your changes to GitHub and Jekyll will automagically build and publish your website.

Resources

Hosting a Static Website on Amazon S3

GitHub Pages – Websites for you and your projects

Hosting on Github Pages