
HRGPT — a chatbot that answers questions related to human resource policies


Written by Viren Dias – Lead Data Scientist at Calcey

In recent months, ChatGPT and its uncanny ability to understand human language and mimic human responses have created quite the buzz around the Calcey office: it got us thinking about how we could incorporate it and its underlying technology into our products. We thought the best way to figure that out would be to trial it in an internal project, and for that, we needed an appropriate use case.

Our colleagues over at Human Resources (HR) had been spending their valuable time meticulously documenting HR policies in an internal wiki, only for us slackers to be too lazy to read it, and waste even more of their time by querying them directly! This sparked an idea — why not create a chatbot that could leverage the information contained within the HR wiki to answer questions related to HR policies?

What is ChatGPT?

ChatGPT is a large language model (LLM) trained to understand and respond to natural language conversations. As its name would suggest, it has been built on top of a Generative Pre-trained Transformer (GPT), more specifically its latest iteration, GPT-4. 

GPT is a family of LLMs, trained on curated portions of the internet and human feedback, in order to predict the next token in a given document. In simpler terms, given a prompt, the model will attempt to predict what the response should be, using the internet as its source of knowledge. In the case of ChatGPT, the prompt is simply an incomplete chat log, and the model will attempt to predict what the next message should be.

How we adapted ChatGPT to our task

Figure 1: A flowchart of the processes involved in producing a response to a user-supplied query regarding HR policies.

At a high level, the processes involved in adapting ChatGPT to respond to questions related to HR policies can be aggregated into three distinct stages:

  1. A preprocessing stage, where we convert the HR policy documents into a machine-readable format and break them down into congruent sections.
  2. A sorting stage, where we sort the aforementioned sections by their relevance to the query.
  3. A response stage, where we supply the ChatGPT API with the necessary information to evoke an appropriate response.

We briefly discuss each of these stages and their associated engineering challenges below.

Preprocessing the HR policies

The HR policy documents are available in the form of wiki pages, which ChatGPT does not respond well to. Consequently, we needed to convert them into a ChatGPT-friendly format, which we identified iteratively through experimentation. This involved tasks such as: 

  • Stripping HTML tags, 
  • Running optical character recognition (OCR) software on images and infographics,
  • Converting HTML tables into Markdown tables, 
  • Printing hyperlink URLs explicitly alongside their text,
  • Etc.

Figure 2: A visual depiction of the sectioning methodology. Each rectangle represents a separate section.

Additionally, some documents can get quite large and touch on several different HR policies: we needed to break these down into more manageable, congruent sections. We found that the best way to do this was to make use of heading levels and break down each document into nested sections, with each section containing all the sections hierarchically below it.
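The exact preprocessing pipeline depends on the wiki platform, but a minimal sketch of the conversion and sectioning steps, assuming the turndown npm package for the HTML-to-Markdown step (OCR and table handling omitted), could look something like this:

// Sketch of the preprocessing stage (assumes the 'turndown' npm package;
// OCR and table conversion are omitted for brevity).
const TurndownService = require('turndown');

function htmlToMarkdown(html) {
    // Convert the wiki page HTML into Markdown, a format the model handles well
    return new TurndownService().turndown(html);
}

function splitIntoSections(markdown) {
    // Split the document at each Markdown heading; the real implementation
    // nests sections by heading level rather than producing a flat list
    const sections = [];
    let current = [];
    for (const line of markdown.split('\n')) {
        if (/^#{1,6}\s/.test(line) && current.length > 0) {
            sections.push(current.join('\n'));
            current = [];
        }
        current.push(line);
    }
    if (current.length > 0) sections.push(current.join('\n'));
    return sections;
}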

Sorting HR Policies by Relevance

In the interest of computational efficiency and accuracy of responses, we needed to sort the HR policy sections by their relevance to the query. To do this, we made use of text embeddings: a numerical representation of a text. A simple example of this would be tallying the occurrences of each word in a text. The embeddings of two texts can be input into a similarity function to easily determine how similar they are mathematically.

Once we calculated the embeddings of the query and all the HR policy sections, we computed the pairwise similarity. We then sorted the sections by the computed similarity to yield a list of HR policy sections ordered by their relevance to the query.
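As a rough sketch of how this ranking step could be wired up (the embedding model, the use of cosine similarity, and OpenAI's embeddings endpoint are assumptions for illustration, not necessarily what runs in production):

// Sketch: ranking wiki sections by similarity to the query.
// Assumes the 'openai' npm package (v4+) and an OPENAI_API_KEY in the environment.
const OpenAI = require('openai');
const client = new OpenAI();

async function embed(texts) {
    const res = await client.embeddings.create({
        model: 'text-embedding-ada-002',   // assumed embedding model
        input: texts,
    });
    return res.data.map((d) => d.embedding);
}

function cosineSimilarity(a, b) {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function rankSections(query, sections) {
    const [queryEmbedding, ...sectionEmbeddings] = await embed([query, ...sections]);
    return sections
        .map((text, i) => ({ text, score: cosineSimilarity(queryEmbedding, sectionEmbeddings[i]) }))
        .sort((a, b) => b.score - a.score);   // most relevant first
}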

Evoking an appropriate response

To elicit a desired response from ChatGPT, we need to supply it with three pieces of information:

  • The relevant HR policy sections to inform ChatGPT of the HR policies,
  • The user-supplied query and accompanying chat history, to provide context for the query, and
  • A well-engineered prompt to instruct ChatGPT on how to respond using the above data.

The only thing missing is a good prompt. Engineering a good prompt is an iterative process that requires a lot of experimentation. It is not always as simple as providing instructions — ChatGPT has historically had an issue with hallucination, whereby the model incorrectly yet confidently responds to a question it does not know the answer to. However, with the right prompt, this issue can be largely mitigated.

The prompt we settled on was as follows:

“You are a helpful assistant working at Calcey. Answer the questions truthfully and explicitly, providing additional details when available. Use only the information contained within the wiki, and if the answer is not contained in the wiki, ask the user to contact HR.”

Q: What is the reward for introducing a senior software engineer?

A: The reward for introducing a Senior Software Engineer is an iPhone 13 128GB. Please note that certain conditions apply, such as the candidate must not have applied to Calcey before the referral email, and the referee will not join the interview process. The reward will be ready within one month from the date your friend starts working with Calcey. To refer someone, send their resumes to internal-referrals@calcey.com.

(a) An example of a response containing additional details relevant to the query.

Q: How long can I stay in the dorm for?

A: The Calcey Dormitory Facility Policy does not explicitly mention a maximum duration for an employee's stay. If you have specific questions about the duration of your stay, it's best to contact HR for further clarification.

(b) An example of how it responds to a query it should not know the answer to.

Figure 3: Examples of how the prompt influences ChatGPT’s response.

Breaking down the prompt and explaining what each sentence does:

  • “You are a helpful assistant working at Calcey”. This ensures that the model responds politely and from the perspective of a colleague.
  • “Answer the questions truthfully and explicitly, providing additional details when available”. Instructing the model to provide additional details results in the response including links to forms it refers to, contact details for people it refers to, etc. However, this increases the risk of the aforementioned hallucination issue.
  • “Use only the information contained within the wiki, and if the answer is not contained in the wiki, ask the user to contact HR”. This greatly curbs the hallucination issue and provides explicit instructions on how to respond to questions the model does not know the answer to.
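Putting these pieces together, a minimal sketch of the final API call (the model name, the number of sections included, and the helper names are illustrative assumptions) might look like this:

// Sketch: composing the request from the prompt, the top-ranked wiki sections,
// and the chat history. Assumes the 'openai' npm package (v4+).
const OpenAI = require('openai');
const client = new OpenAI();

const SYSTEM_PROMPT =
    'You are a helpful assistant working at Calcey. Answer the questions truthfully ' +
    'and explicitly, providing additional details when available. Use only the ' +
    'information contained within the wiki, and if the answer is not contained in ' +
    'the wiki, ask the user to contact HR.';

async function answerQuery(query, rankedSections, chatHistory) {
    // Keep only the most relevant sections to stay within the context window
    const wiki = rankedSections.slice(0, 3).map((s) => s.text).join('\n\n');

    const response = await client.chat.completions.create({
        model: 'gpt-4',   // assumed model
        messages: [
            { role: 'system', content: SYSTEM_PROMPT },
            { role: 'system', content: 'Wiki:\n' + wiki },
            ...chatHistory,                        // earlier { role, content } messages
            { role: 'user', content: query },
        ],
    });
    return response.choices[0].message.content;
}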

Closing Thoughts

ChatGPT is a very powerful tool that can be molded to suit a variety of use cases. However, since its inner workings are not precisely understood and its responses are stochastic in nature, it can be tricky to instruct it to do exactly what you want it to. As a result, it can require quite a bit of experimentation to get the desired outcome.


React Native Tips: How to optimize assets for better performance


Written by Samitha Nanayakkara – Associate Tech Lead at Calcey

React Native is a favorite of mobile app developers everywhere as it allows you to build multi-platform apps using a single codebase.

However, as the complexity of an app increases, so does its size. Unoptimized code and assets lead to larger apps which consume valuable storage space on the devices they’re installed on. Larger apps also mean slower downloads, which could lead to users abandoning the app altogether out of frustration.
That is why it is vital to optimize assets such as images and animations. Doing so can drastically reduce app size and improve overall performance.

A developer can

  • Optimize the size of images, and
  • Optimize the resolution of images

Optimizing the size of images

PNG images are popular with React Native app developers due to their efficient rendering (compared to SVG) and high quality (compared to JPEG). However, these images can sometimes negatively impact the app’s performance. To address this issue, developers can use ‘TinyPNG’ to optimize and compress their PNG images without noticeably sacrificing quality. TinyPNG is a web service that uses smart lossy compression techniques to reduce the file size of PNG images.

For one of our projects at Calcey, we were able to reduce the size of the production app by 67.43MB solely by optimizing image size. 

For macOS users, a convenient tool called “TinyPNG4Mac” can be installed to optimize an entire folder of images quickly with a simple drag and drop. TinyPNG4Mac can be a helpful tool in streamlining the image optimization process and reducing the size of the PNG images used in a React Native app.

Optimizing the resolution of images

Services such as TinyPNG won’t be very effective if high-res images are being used. 

As app developers, we often include alternative images to provide the best experience to our users. As displays continue to become better, it’s easy to be tempted to include the highest-resolution images in our apps. However, this isn’t always necessary.

Let’s take Android devices as an example. Most modern phones have a pixel density of 400-500 DPI, with the Samsung S22 Ultra having a pixel density of approximately 500 DPI. When an app is about to display an image, the Android operating system automatically determines the device’s pixel density and serves the correct alternative image. In the case of the S22 Ultra with WQHD+ mode enabled, Android will render images specified for the xxxhdpi density.  For example, if the app is about to display ‘my-photo.png’, the OS will select the mdpi, hdpi or any other alternative version of the particular image based on the phone’s pixel density.

But what size should these xxxhdpi images be? According to the Android documentation, xxxhdpi images should be 4x the size of the base images. For example, if the base image size is 48 x 48 pixels (i.e. 1x), the xxxhdpi image should be 192 x 192 pixels (Figure 1). It’s important to remember that designers typically work at a 1x scale because it’s regarded as best practice and makes scaling designs easy. Android’s standard density buckets are:

  • 36 x 36 (0.75x) for low-density (ldpi)
  • 48 x 48 (1.0x baseline) for medium-density (mdpi)
  • 72 x 72 (1.5x) for high-density (hdpi)
  • 96 x 96 (2.0x) for extra-high-density (xhdpi)
  • 144 x 144 (3.0x) for extra-extra-high-density (xxhdpi)
  • 192 x 192 (4.0x) for extra-extra-extra-high-density (xxxhdpi)


Figure 1: Relative sizes for images at different density sizes

So, it is not necessary to include ultra-high-resolution images.

Since this post focuses on React Native, you might wonder if there is a rule of thumb for what the resolution of any image you include should be. According to React Native 0.70 Documentation, it is sufficient to have 1x, 2x, and 3x scaled images for any mobile device. These scales are similar to the scaling factors used by iOS.

Including higher resolution images beyond 3x may not be of much use, as mobile devices will scale the images down to fit their displays. In fact, including higher-resolution images can increase the size of your app unnecessarily, as explained previously.

In creating a lightweight, high-performing app, it’s best to stick with the 1x, 2x, and 3x scaled images. This is especially true if you design your assets at 1x scale, and you can then use the 2x and 3x scales as needed.
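In practice, this just means shipping density-specific files alongside the base image and letting React Native pick the closest match for the device at runtime. For example (file names are placeholders):

// Project contains:
//   img/check.png      (1x, e.g. 48 x 48)
//   img/check@2x.png   (2x, 96 x 96)
//   img/check@3x.png   (3x, 144 x 144)
import React from 'react';
import { Image } from 'react-native';

// React Native automatically selects check.png, check@2x.png, or check@3x.png
// based on the device's pixel density.
export const CheckIcon = () => (
    <Image source={require('./img/check.png')} style={{ width: 48, height: 48 }} />
);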

In a project we worked on at Calcey, we reduced the size of the production app by an additional 20 MB by optimizing our images. Altogether, we reduced the production app size by a whopping 90 MB through image optimization alone. This demonstrates the significant impact image optimization can have on the size and performance of a React Native app.

Additionally, given below are a few other methods that can be used to arrive at a smaller app size.

For Animations: Use Lottie


Lottie is a library for Android, iOS, and the Web. It allows developers to include animations in their apps with ease, and is based on the open-source Bodymovin extension for Adobe After Effects. Lottie uses JSON files to describe the animation data.

Lottie can be a good choice for mobile apps compared to GIF and other animation formats.

  • Small file size
    Lottie animations are typically much smaller than GIFs and other video formats. This can reduce the size of the app and improve its performance.
  • High-quality animations
    Lottie animations are based on vector graphics, meaning they can be scaled to any size without a loss in quality. This is especially useful for creating animations with smooth, clean lines and curves.
  • Customizability
    Lottie allows developers to customize the animations they include in their apps. This is useful in creating animations that match the style and branding of the app.
  • Premium makes things even better
    Lottie’s Premium Service can be used to optimize Lottie files to obtain smaller animation files.
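A basic usage sketch with the lottie-react-native package (the animation file name is just a placeholder) looks something like this:

// Sketch: rendering a Bodymovin/Lottie JSON animation in a React Native component.
// Assumes the 'lottie-react-native' package is installed.
import React from 'react';
import LottieView from 'lottie-react-native';

export const LoadingSpinner = () => (
    <LottieView
        source={require('./assets/loading.json')}   // placeholder animation exported via Bodymovin
        autoPlay
        loop
        style={{ width: 120, height: 120 }}
    />
);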

Analyze the JS Bundle

react-native-bundle-visualizer is a tool that can be used to visualize the bundle size of a React Native app. It provides a visual representation of the size of the app’s JavaScript, assets, and other dependencies, allowing developers to identify areas where they can optimize the size of the app.


There are several reasons why developers might want to use react-native-bundle-visualizer:

  • Identify areas for optimization
    By visualizing the size of the app’s bundle, developers can quickly identify areas where they can optimize the size of the app. This can include optimizing images, removing unused code, and using code-splitting techniques.
  • Monitor changes to the bundle size
    react-native-bundle-visualizer can be used to monitor changes to the bundle size over time, allowing developers to track the impact of their optimization efforts.
  • Improve app performance
    Reducing the size of the app’s bundle can improve app performance, particularly on devices with lower specifications. Using react-native-bundle-visualizer, developers can identify and address problem areas contributing to a larger bundle size.
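Getting started is usually as simple as adding the package as a dev dependency and pointing it at your project (commands shown for npm; exact usage may vary by version):

npm install --save-dev react-native-bundle-visualizer
npx react-native-bundle-visualizer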

Analyze the Android APK file

The Android APK Analyzer is a tool that allows developers to analyze the contents of their Android app packages (APKs) and understand their size and composition. It’s included in Android Studio and can be used to examine the size of the app’s code, resources, and assets, and to identify potential areas for optimization.


Figure 2: Analysis of an APK

One of the main benefits of using the Android APK Analyzer is that it allows developers to identify and optimize the size of their apps. For example, developers can use the tool to identify resources that take up a large amount of space e.g. large images or unused code. They can then take steps to optimize these resources, such as by compressing images or removing unused code.

By following the tips and techniques outlined here, you’ll be well on your way to building lightweight, high-performing React Native apps users love. And remember, every little bit of optimization can make a difference – even reducing the size of your app by just a few megabytes can vastly improve the user experience and make your app stand out in a crowded marketplace. Happy coding!


Developing an app? Here’s how to choose between an in-house and a remote development team


When building a tech company, every founder and CTO needs to decide how they will structure their tech team. The options available are fairly well established.

  • Hire developers that work with you in your office (in-house team)
  • Hire developers who will work from home (remote team)
  • Hire a team of developers provided by an external software development agency. Often such agencies would be off-shore and hence work remotely with you

What’s best for you? The short answer is – it depends. The cost will play an important role in this decision, but there are several non-quantifiable factors you need to consider too. If you’d like to compare the likely cost of an in-house team vs a remote team provided by an agency, we’ve broken down the costs of an in-house dev team in New York here. Here’s everything else you need to consider, other than cost/budgets:

  1. Can you attract the best local talent? 

There is a huge difference between a 10x developer and a journeyman. This holds true in any field, not just software development. But software by nature has strong asymmetric returns. For example, a brilliant developer may singlehandedly create an MVP of a product that goes on to become a billion-dollar company, while a mediocre developer may write a lot of code that simply gets scrapped. The time and effort put in may not be different, but the end results may vary wildly. In software, as in many other fields with asymmetric returns, the best talent isn’t only a fraction better than the mediocre players. What they produce is exponentially better in value.

The question at hand is ‘Are you going to attract the best talent in your locality?’ If you’re based in a tech hub, your chances of acquiring that top talent will likely be low when you compete with giants like Google, Amazon, and Facebook.  But you shouldn’t be discouraged. Rather than sticking to your locality, make the whole world your hiring market. Go remote. 

2. Is speed your priority? 

Studies show that “the average hiring time for a software development engineer in 2017 was 41 days”. On top of recruitment time, a typical team needs to go through a process of team development, i.e. forming, storming, norming, and performing, before it can operate at peak levels. If you’re in a race against time to get to market ahead of the competition, hiring a ‘ready-made team’ from an external agency will help you avoid these obstacles altogether. When time is of the essence, a team from an agency with a prior working history will always reach the ‘performing’ stage faster, and you’ll end up saving months in recruitment time too.

3. Lean and focused teams are in

Auren Hoffman is right when he says that “Almost every company spends over 95% of its time doing what every other company does. And it spends less than 5% of its time on things that are unique to the company. This makes no sense.” It’s a well-known fact that larger teams move slower and are harder to coordinate compared to smaller teams. Simply keeping everyone on the same page (more difficult than you’d expect in the highly dynamic world of technology) becomes an almost impossible task as team size grows. At Calcey, we are huge fans of Jeff Bezos’ ‘two pizza teams’ concept and we organize our software development teams accordingly.

So how can you keep your core team lean and focused? You outsource to external agencies everything other than the few key core competencies that make your business unique. Surprisingly, software development doesn’t fall into this category even at some tech companies. Sometimes, their core competency may very well be product design, branding, and customer relationship management.

4. How specialized is your business domain?

If you’re in a niche, complex business and your software project requires a deep understanding of your domain, then opting for an in-house software development team makes sense. They can learn and absorb your business domain over time rather than forcing you to spend considerable time and energy teaching an outsourced software development team about your business. You would also not be able to directly control churn within your remote development provider’s team and hence may need to repeat this onboarding process again and again if too many experienced hands leave the team.

5. Are you willing to invest in a team?

What’s the type of team that comes to your mind when you think of developing software? Do you imagine a team made up of 3 or 4 developers? Or a team with developers and other supporting roles such as PMs, Software Architects, and QA engineers? Just a couple of developers might do when you’re starting out and only looking to create a scrappy MVP. But once you need to operate at scale, serving a sizable user base and integrating with 3rd party apps in your ecosystem, this bare-bones team will quickly reach its limits. At that point, it would be time to bring in the supporting roles and get your developers, who were previously shipping code with abandon, accustomed to industry-standard development practices. This isn’t always an easy transition, and any technical debt created by your original developers will be exposed and will need to be fixed.


If you’d rather avoid these growing pains or don’t want to invest to create a fully-fledged development team, it makes sense to go with a remote development agency that can provide a fully-fledged team that can manage the full software development life cycle from day one.

6. How much flexibility do you need?

Software products often need to change tech stacks as they mature. Furthermore, development may have to speed up and slow down at different points in time. If you opt for an in-house team it’s worth considering how you will manage such changes. Can your team re-skill when you need to change tech stacks? Will you be able to hire quickly when you need to speed up development and have flexible staffing arrangements in place to ramp down the team when you slow down?

Often this is where an external agency will shine. Their business model revolves around managing staffing requests and serving clients using a variety of technologies. So flexibility and versatility are often baked into their mindset and contracts.

7. You don’t know what you don’t know

When confronting a challenge, your solutions are typically generated by your prior experiences. It’s very likely that there are better solutions out there, but we can only consider a subset of these solutions: those that we know of. So what does this mean? Simply put, the broader your perspective, the better.

Having a broad perspective is easier said than done in the dynamic world of technology. The industry evolves at a breakneck pace, and new frameworks, libraries, and tools are constantly introduced. How many of these latest and greatest tools we are exposed to depends on how often we are faced with novel, fresh challenges. Ideally, a highly motivated developer would keep in touch with all the latest developments, but given the pace of change in the industry, this is nearly impossible. So necessity is a better driver of learning.

What all this means is that an in-house tech team that is embedded in one business domain and a narrow technology stack will often have a narrower perspective and idea set in comparison to an outsourced agency which is more versatile due to the nature of their business.  

Principally, both approaches to software development are beneficial; however, their effectiveness can vary depending on the situation. Remote development comes with the benefit of a broader, more versatile team, while in-house development gives your developers a deeper understanding of the project. On the other hand, in-house teams come with the headache of time spent on recruiting and finding people with the right ambitions to help the company in its core business functions. Remote development teams also tend to offer more staffing flexibility. Which option you go with depends on your own unique circumstances, but whatever it is, choose wisely.

Cover image credits: Unsplash/@wocintechchat


Navigating The Maze Of Tech Stacks


What You Need To Know Before Choosing A Tech Stack For Your App

Image Credits: mindinventory.com

When building an app, deciding on what tech stack to use is perhaps one of the biggest obstacles to overcome. The right tech stack can help provide the user with a great experience, thus helping drive adoption and growth in the early stages of an app’s lifecycle. But if the wrong choice were to be made in selecting a tech stack, the consequences are dire. There is often no going back, and development teams will have no choice but to scrap everything, move to a new stack, and restart development efforts all over again.

There are a few important factors to consider when choosing a tech stack. They are:

  • Current requirements and feature roadmap
  • Budget (especially in the case of startups)
  • Competency of the development team

However, care must be taken to not let the capabilities of the development team override or constrain the feature roadmap.

Next, it is important to pay attention to the proposed architecture of the app. For instance, one can choose to build a native app, a cross-platform app, or a hybrid app. Today, ‘Progressive Web Apps’ are also popular, but we don’t think it is apt to consider them as a distinct application architecture, primarily because they are essentially repackaged web apps.

Let’s now compare the pros and cons of each architecture.

Native Apps

Native apps are specially made and coded for a specific mobile platform in its native programming language, and as such are extremely suitable for processor-intensive and GPU-intensive apps. Native apps make full use of technologies provided by the platform itself, and hence there is minimal chance of running into issues. The development of native apps is also relatively straightforward. Components are provided out of the box, and connecting them to an app is quite simple. 

The most obvious drawback with opting for a native tech stack is that if you decide to build apps for multiple platforms, you also have to build separate versions of the app. Native apps do not allow for code sharing between platforms, and as a result, development times are longer and require a higher investment. By virtue of having two separate codebases, maintenance can also be challenging. Even if a new feature is to be rolled out, your development team will have to build the feature into two different codebases.

  • Technologies available:  Swift (iOS), Kotlin (Android), Objective-C, Java
  • Native apps: Uber, Pinterest, WhatsApp (These apps all make use of extensive functionalities available on the device, hence the need to go with a native tech stack)

Cross Platform

Cross-platform apps can be deployed or published on multiple platforms using a single codebase, instead of having to deploy multiple native apps, one for each platform.

A cross-platform tech stack will allow you to potentially reuse up to 80% of an app’s code across multiple platforms. This is perhaps the biggest advantage of opting for a cross-platform stack. Apart from this, there is also the benefit of being able to quickly render UI elements using native controls, very much like a native app would.

However, the very characteristics which make cross-platform tech stacks attractive can also be their downfall, depending on the envisaged use case. The fact that not all code can be shared necessitates extra, often tedious, platform-specific development. Further, a cross-platform stack may not be as fast as a native stack, and the level to which it can interact with the device is largely dependent on the framework.

  • Technologies available:  React Native, Flutter, Xamarin, NativeScript
  • Cross-platform apps: Uber Eats, Facebook, CitiBank, Instagram

Hybrid Apps

A hybrid app is created as a single app, but for use on multiple platforms such as Android, iOS, and Windows. From a technical standpoint, hybrid apps are a combination of native apps and web apps, which is what allows a single hybrid app to work seamlessly across operating systems.

Hybrid tech stacks allow for a significant degree of code sharing between different platforms. In a boon for developers, hybrid stacks also allow for the core part of an app to be built using web technologies, paving the way for shorter development times. The web app underpinnings of hybrid tech stacks also mean that the core codebase of a hybrid web app can always be updated via a ‘hot code push’, bypassing the formal App Store and Play Store channels.

Apart from lower performance compared to native or cross-platform tech stacks, hybrid tech stacks also suffer from the drawback that not all code can be shared between platforms, so a certain degree of native development becomes mandatory. Performance takes a hit because all in-app interaction is routed through an embedded web browser control, and UI elements are rendered as HTML components instead of native elements. A good example of how this can go wrong comes from Facebook, which in 2012 disastrously bet on an HTML5 stack for its apps; today, Facebook’s apps lean heavily on React Native, a cross-platform tech stack, instead.

  • Technologies available: Ionic, Mobile Angular UI, Bootstrap
  • Hybrid apps: Diesel, MarketWatch, McDonald’s, Sworkit

So Which Tech Stack Is The Best?

There’s no definitive answer to this question, and the decision would always depend on factors such as current requirements, the feature roadmap, budget, etc. as we mentioned earlier. But, what is important is to choose the right stack for the job. A misstep here can often be the difference between success and failure for your app.


Easy API Testing With Postman


Understanding Postman, the app that has become the darling of code testers around the world

Image credits: meshworld.in

Any given app in this day and age may employ a number of different APIs from various services such as Google Analytics, Salesforce CRM, Paypal, Shopify etc. This complex combination of multiple APIs which interact seamlessly with each other through a common application codebase is what has freed us from the need to be bound to our desks. Thanks to APIs, people today can choose to even run entire businesses on the move.

However, while there is no doubt that the task of imparting various functionalities into an app has been made easier thanks to APIs, these very APIs also complicate the job of a Quality Assurance engineer in many ways, the most obvious being that every time the core codebase is modified for any reason, the APIs must also be tested for compatibility with the new code. Naturally, testing several APIs over and over again is quickly going to get tedious.

This is where Postman comes in, to help with the tedious task of API testing. API testing involves testing a collection of APIs and checking whether they meet expectations for functionality, reliability, performance, and security, and return the correct responses.

Postman is an API client which can be used to develop, test, share and document APIs and is currently one of the most popular tools used in API testing. Its features allow code testers to speed up their workflow while reaping the benefits of automation as much as possible. Postman’s sleek user interface is a boon to testers, who don’t have to go through the hassle of writing lots of code to test the functionality of an API.

Postman also has the following features on offer:

Accessibility

Once installed, Postman allows users to create an account which then syncs their files to the cloud. Once complete, users can access their files from any computer which has the Postman application installed.

In addition, it is also possible for users to share collections of testing requests via a unique URL or even by generating a JSON file.

Workspaces & Collections

Postman’s interface is built around workspaces and collections. Think of a workspace as an isolated container within which a tester can store, group, and manage all their code test requests. Workspaces are further divided into Personal and Team workspaces. As their names indicate, personal workspaces are visible only to a user, while team workspaces can be made available to a team. Each team gets one common workspace by default, with the option to create an unlimited number of new workspaces.

Collections are simply a collection of pre-built requests that can be organized into folders, and they can be easily exported and shared with others.

Ability to create Environments

In Postman, environments allow users to run requests and collections against different data sets. For example, users can create different environments: one for development, one for testing, and another for production. In such a scenario, authentication parameters such as usernames and passwords can change from environment to environment. Postman addresses this by allowing users to create, say, a staging environment and assign a staging URL, staging username, and password. These variables can then be passed between requests and tests, allowing users to easily switch between different environments.
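Within request scripts, these environment variables can also be read and written programmatically, for example (variable names here are illustrative):

// Read the base URL configured for the current environment (e.g. staging vs production)
const baseUrl = pm.environment.get('base_url');

// Store a value returned by the API so that subsequent requests can reuse it
pm.environment.set('auth_token', pm.response.json().token);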

Parameterization

Postman allows users to parameterize requests as variables, thus granting users the ability to store frequently used parameters in test requests and scripts. Postman supports 5 different types of variable scopes namely Global, Collection, Environment, Data, and Local.

Scopes can be thought of as different “buckets” in which values reside. If a variable is in multiple “buckets”, the scope with a higher priority wins and the variable gets its value from there. Postman resolves scopes using this hierarchy progressing from broad to narrow scope.

Creation of Tests

It is also possible for users to create custom tests which can be added to each API call. For instance, a test can be created to check whether an API call successfully returns a 200 OK response.
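Such a check is only a couple of lines in the Tests tab:

pm.test('Status code is 200', function () {
    // Fails the test if the API did not return HTTP 200
    pm.response.to.have.status(200);
});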

Postman also contains a very helpful Snippets section which contains a set of pre-written tests which can be deployed with a single click.

Testing each field of a JSON RESTful service manually every time there is a change can be very time-consuming, so the best way to do this is to validate the structure using a schema. Given below are the steps to follow to validate the schema using Postman.

Step 1: Assuming that we already have a JSON response, we start with schema generation. We can use https://jsonschema.net/#/ to generate the schema: copy and paste the JSON document into the JSON Instance field and it will generate the schema for us.

Step 2: After generating the schema, go to the Tests tab in Postman, declare a variable named schema, and paste the generated schema into it as follows:

var schema = {
    // <Insert Schema here>
};

Step 3: After that, we write the following test to perform the validation:

// Validate the response body against the schema using the tv4 library available in Postman's sandbox
pm.test('Schema is valid', function () {
    pm.expect(tv4.validate(pm.response.json(), schema)).to.be.true;
});


Automation Testing

Postman has a complementary command-line interface known as Newman which can be installed separately. Newman can then be used to run tests for multiple iterations.

Consider a situation where there is a need to run a selected collection of written tests automatically, without opening Postman and manually triggering those tests. This is where Newman comes in, thanks to its ability to work with any program that can trigger a command, such as Jenkins or Azure DevOps. For example, with the help of Newman our tests can be integrated with CI, and if any code change is pushed, CI will run the Postman collections, which will in turn help developers obtain quick feedback on how their APIs perform after code changes.

Postman can be used to automate many types of tests including unit tests, functional tests, and integration tests, thus helping to reduce the amount of human error involved.

Newman is also special in that it allows users to deploy collections on computers which may not be running Postman. Collections can be fetched through the CLI of a host computer, by running a few commands.

For the uninitiated, here’s a quick tutorial on how to install Newman:

Note: Installing Newman requires the prior installation of Node.js as well as NPM (Node Package Manager).

  1. Open the command prompt (Terminal for mac)
  2. Type npm install -g newman
    Now Newman is installed in your system.
  3. Export the collection you want to run as a JSON file (for instance, collectionFile.json)
  4. In the command prompt, go to the location of the collection JSON file and run the command
    newman run collectionFile.json
  5. If you want to run the test with environment variables, you can export the environment as a JSON file (for instance, environmentFile.json)
  6. You can run the test with the environment variables using the command
    newman run collectionFile.json -e environmentFile.json

Following are some of the other options that can be used to customize the tests:

-d, --data [file] Specify a data file to use (JSON or CSV)

-g, --global [file] Specify a Postman globals file (JSON)

-n, --iteration-count [number] Define the number of iterations to run

--delay-request [number] Specify a delay (in ms) between requests

--timeout-request [number] Specify a request timeout (in ms) for a request

--bail Stop the runner when a test case fails
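Newman can also be used as a Node.js library, which is handy when wiring it into a CI script. A minimal sketch, reusing the illustrative file names from above:

// Sketch: running a collection programmatically via Newman's Node.js API
const newman = require('newman');

newman.run({
    collection: require('./collectionFile.json'),
    environment: require('./environmentFile.json'),
    reporters: 'cli',
}, function (err, summary) {
    if (err) { throw err; }
    console.log('Run complete. Failed assertions:', summary.run.failures.length);
});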

Easier Debugging

The consoles contained within Postman can be used to debug any errors that may arise. Postman contains two debugging tools. One is the console itself, which records any errors which take place while testing an API. The second is the DevTools console, which helps debug any errors occurring within the Postman app itself. For instance, if Postman crashes while executing a test, the DevTools console is where you would look to diagnose the problem.

Support for Continuous Integration

Through Newman, its open-source command-line companion, Postman supports a wide variety of Continuous Integration (CI) tools such as Jenkins and Bamboo. This helps ensure that development practices are kept consistent across various teams.

 

With so many features on offer to make life easier for code testers, it is not surprising that in the world of code testing, Postman is heralded as the best thing since sliced bread.


Calcey organizes Colombo React Native Meetup


Calcey organized the very first Colombo React Native Meetup last week. Premuditha Perera, one of Calcey’s Software Architects, conducted the session (see his presentation here). As this was the first session, the focus was on the core principles of React, laying the foundation for future sessions that will concentrate on hands-on coding and feature implementation. Future sessions will provide deep dives into both React and React Native, enabling our community to develop both web and mobile applications using React-based technologies.

We had an excellent turnout at the meetup, with a full house of over 250 participants. Experienced developers working in the industry and many university students were among the audience. We hope to see the same level of enthusiasm and attendance at our next meetup!

If you haven’t done so already, join our meetup.com community to ensure that you are notified of our next meetup.


Open Banking For Dummies


Everything you need to know about the newest buzzword everyone in the banking industry is talking about.

Banks, by nature, are extremely protective of the information they hold within their ageing filing cabinets, for obvious reasons. Money is a touchy subject, and people prefer to keep details about their finances private. However, with the rise of the data economy, everyone from banks to central banks is realising that, given how practically every bank has the exact same business model, there is a huge duplication of data which unwittingly takes place. If banks simply began sharing such data with each other, wouldn’t that make banking services much less cumbersome? With easier banking, wouldn’t life be much better?

What Is Open Banking?

In layman’s terms, open banking is all about enabling the sharing of information securely, in a standardised format, so that companies can deliver services more efficiently. Under current banking practices, customers or merchants maintain separate relationships with different financial institutions in order to achieve their financial goals, and third-party services that try to bring these together often resort to screen scraping: the third party creates a mirrored login page which looks and feels similar to a bank’s or credit card issuer’s online login page. The customer enters their login details, passwords and additional security details such as their pet’s name, which the third party can then use to log in as the customer. Once logged into the account as the customer, screen scraping tools copy the available data to an external database, where it can be used outside of the financial institution. This is obviously dangerous, and renders the system extremely vulnerable to man-in-the-middle attacks. Open Banking instead introduces a more consolidated experience for the customer by allowing banks to expose their functionality via APIs, subject to the customer’s explicit consent and in compliance with strict information security requirements imposed by the Financial Conduct Authority of the UK.

The concept of Open Banking has its roots in the United Kingdom. In 2016, the Competition and Markets Authority ordered the nine biggest UK banks to allow licensed startups direct access to their data, right down to the level of current account transactions. Again, account holders must approve any exchange.

When talking about Open Banking, you will often hear ‘PSD2’ being referred to. PSD2 is the European version of Open Banking, and refers to the second Payments Services Directive which modernises European payment regulations, thereby enabling consumers and small businesses to have greater control over their data. There is just one small difference between Open Banking and PSD2. Whilst PSD2 requires banks to open up their data to third parties, Open Banking dictates that they do so in a standard format.

Open Banking is now being spoken about everywhere / Credits: Business Insider

How Will Open Banking benefit customers?

The various ways in which open banking will be used to create new services is anyone’s guess, but there are three distinct areas in which Open Banking is starting to make waves.

Money management

At the moment, customers who maintain accounts with two different banks, have no choice but to look at them separately, because the banks’ systems are resolutely incompatible. Open Banking will allow customers to manage their money from within a single app, which should make things much easier.

Banks and startups are already sensing an opportunity in this space. Dutch bank ING has an app called Yolt, while third party app Money Dashboard provides a similar service in the UK.

The Yolt app / Credits: ING Bank

Lending

When a customer takes out a loan, they are sometimes required to provide details of their finances to ensure that they are ‘credit-worthy’. Open Banking will allow customers to provide such information online – for instance, by giving a lender one-off access to 12 months of income and spending history.

There are services which already do this, but in order to use them, it becomes necessary to hand over your login details – which is not as secure or seamless. Open Banking will also be more accurate, which should help people with what are known as “thin files”. (For instance, if the customer hasn’t worked or been in the country long.)

Payments

The current banking payment infrastructure used around the globe is very much a multi-layered one. For instance, when a purchase is made on Amazon, the retailer contacts an “acquirer”’, such as WorldPay or Global Payments, which gets in touch with Visa or MasterCard to deduct the payment from the customer’s account. Cue much fumbling around with cards and passwords.

By opening up banks’ data, Open Banking makes it possible to pay directly from a bank account – which should be both quicker and (since the various middlemen each charge for their service) cheaper. The bank authenticates the purchase without involving other organisations.

Open Banking will give rise to Banking-as-a-Service (BaaS) / Credits: Bankable

Is it safe?

From a technical point of view, Open Banking is at least as safe as online banking. APIs – the technology used to move the data – are trusted and the law requires account providers to use strong customer authentication, a procedure which allows the payment service provider to verify the identity of both the user and the service.

The key thing to remember is that anyone using an Open Banking service will not need to share their banking login or password with anyone but the bank. This is actually an improvement on existing services, which sometimes require this as a workaround for existing incompatibility.

All in all, Open Banking has the potential to upend the way we bank, disrupting the sector in the same way as media or retail. It could, for instance, enable digital-only banks that manage money automatically via intelligent software. Banking-as-a-Service (BaaS) too, will go mainstream, bringing to life a whole ecosystem of services running on top of an Open Banking layer. Personal finance, now an arcane subject, will become transparent and easy for everyone. Whether this is a dystopian or utopian future depends on one’s perspective – either way, it just appears to be more likely now.


What Is Spooking Casper?

Credits: Travel Wire News

Casper, the Direct-to-Consumer (DTC) mattress company that bills itself as ‘The Sleep Company’, has filed to go public. Founded in 2014, Casper sells mattresses of relatively good quality online. Thanks to savvy marketing and a 100-day risk-free return policy, Casper thrived in its market, going on to become the most well-known DTC mattress company in the US. At first glance, this is good. And it is on the back of this success that Casper is trying to raise funds from the public markets.

So What’s The Problem?

While things may look rosy on the surface, underneath Casper’s hood is a can of worms. This has prompted a slew of commentators, including Forbes magazine, to publish scathing criticisms of Casper’s business model. What are these criticisms, and most importantly, what can other startups learn from Casper’s mistakes? These are the questions we will try to find answers to in this blog post.

Casper has a poor competitive advantage

One of the most often repeated truths in business circles is that a business needs a competitive advantage. In simple terms, a competitive advantage is what allows a firm to perform at a higher level compared to its competitors in the same industry or market. That is why maintaining a competitive advantage becomes important if a firm intends to become profitable and reward its investors.

But for a firm operating in the DTC sector, it becomes very hard to own a competitive advantage.  Your competitors can copy your marketing advantage, your physical product distribution is mostly outsourced, and for existing categories like mattresses, price comparison is easy.

Casper’s initial success spawned hundreds of competitors (literally), who swiftly started copying Casper without much trouble. Fast Company estimates that there are nearly 178 bed-in-a-box companies, who have followed Casper’s path.

Some of Casper’s competitors /Credits: CNBC

“The products that you’re buying — there are many similarities and only some minor differences,” said Seth Basham, an analyst at Wedbush Securities who covers the mattress industry. Profit is hard to come by because the ease of forming an online mattress company makes the market competitive, according to Basham. “Barriers to entry are low, but barriers to profitability are high,” he said. “It doesn’t take that much to design a mattress, a marketing campaign, put up a website, and have one of these big companies like Carpenter do the fulfillment for you,” he said, referring to one of the key mattress manufacturing companies.

Casper has bad unit economics

If someone were to pore through Casper’s S-1, which was filed with the SEC, one thing becomes absolutely clear: Casper has mastered marketing. It has spent a significant amount of capital on promotions such as ‘napmobiles’, a cruise around Manhattan, and a hotline that helped people fall asleep.

All this spending would be okay…if it made sense.

Prof. Scott Galloway of NYU writes about how for every mattress Casper sells, it spends USD 480 on marketing, going on to make a loss of USD 349 per mattress, according to his calculations. And if Casper chooses to grow bigger (which it will have to, in order to satisfy investors), it will have to continue to lose money on every mattress. Basically, Casper’s unit economics don’t look great. Worse yet, it’s hard to imagine they will get better.

Instead of spending money on marketing, Casper can send every customer $300 and still be profitable /Credits: Scott Galloway/No Mercy No Malice

Why?

Selling a durable product tied to housing makes you vulnerable to the economic cycle, and the long replacement cycle of mattresses makes it hard to build brand loyalty. Since mattress replacement cycles stretch into years, Casper has to bombard each customer with marketing for 5 or 10 years till the customer decides to buy a new mattress. This is expensive, and it is not sensible to assume that one can just blast consumers with marketing emails and hope they click “buy” before they click “unsubscribe.”

This is not just a hypothesis. Casper mentions this in its S-1, but a sharp eye is needed to decode this hidden message. Something which Byrne Smith of MAKER clearly has.

From Casper’s S-1:
“From Casper’s beginning through September 30, 2019, we have seen more than 16% of customers who have purchased at least once through our direct-to-consumer channel return to purchase another product. Importantly, 14% of our customers returned within a year of their original purchase.”

Byrne opines that a 16% repurchase rate and a 14% first-year repurchase rate imply that only about 2% of customers buy something new after a year. What this means is that since mattresses have about a 10-year replacement cycle, Casper loses the vast majority of its ongoing customer relationships before the next mattress purchase.

Economics, one. Casper, zero.

Growth Hacks can become poison too (if you are not careful)

When it launched, Casper’s claim to fame was that it offered a 100-day risk-free return option. But returning mattresses is not like returning shoes and dresses. Casper provides information about its return rates in its S-1, but the trend is far from inspiring. Returns were 15.4% of gross sales in 2017, 18.4% in 2018, and 20.4% in the first three quarters of 2019. When you’re shipping a 90-pound package to the customer, and they’re shipping it back, the costs add up quickly. 

Casper’s return policy is a drain on working capital /Credits: Casper

Also, under U.S. law, companies aren’t allowed to sell used mattresses as new. But Casper donates these mattresses to charities instead of shipping them halfway across the country to be refurbished. Again, this might look like a smart business decision at the outset. But think about this. Casper’s free return policy has been replicated by everyone. If all 178 bed-in-a-box companies resort to donating mattresses, the capacity to absorb donated mattresses is going to dry up pretty quickly. While a donation may get you a small tax benefit under the U.S. tax code, the costs associated with manufacturing the mattress will still continue to eat into profits. And therein lies the fault in Casper’s key growth hack: the very thing which got Casper noticed has now become a ticking financial time bomb.

To reiterate, Casper is not a bad company. It’s just a good company stuck in a bad business, as a result of which its entire business model is standing on shaky ground. While it remains to be seen how Casper will claw itself out of this predicament, startup founders everywhere will do well to learn from Casper’s missteps.


Bringing Calm To The Valley: Lessons From Alex Tew


How one man learned to build things that matter, the hard way.

Alex Tew’s name may not ring a bell to Gen-Zers today, but for Millennials and the generations before them, Tew’s story is the stuff movies are made of. Venture capital-funded growth and stock exchange listings didn’t mean anything to Tew. He just wanted to make money, quick, and that’s exactly what he did. At 21, Tew was a millionaire. And it took him only 4 months to get there!

Meet Alex Tew /Credit: Coach.me

Then he became depressed.

Then he made it big, again.

How?

And what lessons can we learn from Tew’s story?

Striking Gold and Post-success Depression

One late night in August 2005, Tew was in his room, wondering how he was going to pay off his student loans. At the time, he had just enrolled in business school at the University of Nottingham. Tew decided to brainstorm cheap things he could sell a million of. He managed to come up with a few ideas, including one for a questionable product called the ‘Gum Slinger’, which was basically a small pouch for used chewing gum.

Then he struck gold: Tew decided to start a web page with a million pixels that could be purchased for $1 apiece.

Today, this idea would have been laughable. But remember, this was 2005, and the internet as we know it today was still in its infancy. MSN Messenger ruled supreme, and internet advertising was a virtual mirror of the Wild West.

Two days and $50 in domain fees later, the Million Dollar Homepage was born.

Tew’s concept was extremely simple. For a minimum of $100, an advertiser could buy a 100-pixel block (10 x 10 grid) and display an image or logo of their choosing, with a hyperlink. The only guideline was that it couldn’t be porn.

Tew managed to successfully sell 4.7k pixels to friends and family, and he used the money to hire a PR agency to draft a press release. The release was picked up by the BBC and The Guardian, and advertisers started buying up pixels on his site.

The evolution of milliondollarhomepage.com

One month in, Tew had raked in $250k and was receiving 65k hits per day. By the end of October, he’d made $500k from more than 1.4k advertisers. Come New Year’s Eve, 999k pixels had been purchased. Tew auctioned off the last 1k on eBay; MillionDollarWeightLoss.com bid $38k, bringing his grand total to $1.04m.

After paying the tax man his fair share, Tew was left with nearly $700k to his name. He promptly dropped out of college and moved to London. Over the next four years, Tew tried to replicate his initial success by launching various different ventures. But it was all in vain.

Lesson 1: Provide Value, Don’t Demand Attention

Success can be like a treadmill. Once you achieve a certain amount of it, there comes an insatiable hunger for more. This can trick people into focusing their energies on creating things whose success entirely depends on virality or fame. Instead, Tew argues that you must focus on building things that actually enhance someone’s quality of life i.e. provide value. “Success can actually be bad and can teach you the wrong things. I was thinking about ideas that would get attention instead of providing value” says Tew.

Unable to replicate the success he had with the Million Dollar Homepage, Tew moved to San Francisco and joined a friend’s startup incubator.

Lesson 2: Look To Your Own Problems To Find Your Next Idea

The four years he spent looking for his next big idea (2006-2010) took a toll on Tew. He didn’t eat or sleep well, and his mental health suffered. A lifelong meditator until then, Tew found himself slowly drifting away from his daily practice. Tew realised that he had to take some corrective action, so he built another website. The result was donothingfor2minutes.com, a simple website with a 2-minute timer that would restart if you moved your cursor. It was Tew’s way of forcing himself to meditate.

The timing was perfect. The rise of the internet and the proliferation of smartphones had brought with them loads of ‘mental clutter’, and people were starting to talk about digital detoxes. Questions were being asked about the long-term impact of social media on mental health, and the practice of mindfulness was having its moment.

According to Google Trends, search interest in mindfulness has continued to grow

Lesson 3: Take Your Time To Build

Tew was in no hurry. He built donothingfor2minutes.com in 2010 and took the next two years to figure out what he was going to do with it. In 2012, he finalised his plan to build a more robust meditation app.

Finding the seed money wasn’t easy, and Tew was laughed out of many meetings. “When you talk about meditation with people who don’t meditate, and who work in tech, it’s so far outside of their world of focus,” he says.

Tew eventually managed to put together $1.5 million and launched Calm, the meditation app.

It didn’t take long for the Calm app to become popular (Credit: Calm App)

Lesson 4: Be Clear About How You Are Going To Monetize

From the outset, Tew was very clear about what his value proposition was going to be and how he was going to monetize his app. To this day, the app’s flagship feature, a 10-minute guided meditation, remains free. Calm makes money by selling premium access to things like ‘Sleep Stories’ (which are basically bedtime stories for adults) and masterclasses on wellness topics.

Calm, Today

Calm’s annual revenue growth is almost a perfect ‘hockey stick’

Today, Calm is valued at more than $1 billion and counts the likes of TPG and Ashton Kutcher among its investors. The app is locked in a two-way battle for domination with Headspace and was named Apple’s App of the Year in 2017. According to App Annie, an app market data firm, Calm is the top-grossing health and fitness app and 20th overall on iOS, while Headspace is the seventh-highest-grossing health and fitness app and 103rd overall. With more than 40 million downloads and more than 1 million paying customers, Calm’s success has been nothing short of extraordinary.

And behind much of this success is one man: Alex Tew (and his life lessons).

OpinionTrends

Data May Be The New Oil, But Don’t Be A Rockefeller


Is there a right way to use your customers’ data?

In a world where data was touted as the ‘new oil’, it was only a matter of time before the debate between privacy and data sharing reached a new crescendo. Starting with Cambridge Analytica, scandal after scandal has kept alive the ever-evolving debate around how our personal data is collected and used.

In short… don’t be Dogbert (Credit: Scott Adams/Dilbert)

To begin with, sharing data does have its merits. For instance:

  • Medical researchers need access to confidential patient data to study diseases and identify cures.
  • Retail chains need consumer data to identify markets that can support new stores while meeting demand.
  • Municipalities need to share data to improve transit systems and public safety.
  • Makers of intelligent connected cars need to exchange and monetize vehicle data while protecting data privacy.

However, when companies and app developers start using this vast pool of data at their disposal to create new revenue streams by essentially commoditizing the user, questions arise about the ethics of such practices. For instance, The Verge revealed that, while struggling to generate revenue after its IPO, Facebook considered selling access to user data. Last year, BuzzFeed News revealed that, of nearly 160,000 free Android apps available on the Google Play Store, nearly 55% tried to extract and share the user’s location with third parties, while 30% accessed the device’s contact list.

In light of all this, it is only natural for users to start worrying about the privacy of their data, prompting governments to crack down hard on firms and developers who misuse personal data. But, as developers, how do we ensure that the data we collect is used for the common good, and not for any nefarious purposes (even by accident)? Where do we draw the line when it comes to data collection practices?

Here is a list of best practices (and common sense) that we advise our clients to follow:

Have a privacy policy

Before you try to collect any data at all, think carefully about why you want to collect customer data, how you intend to use it, and whether you will share it with external parties. Once these basics have been figured out, build on them to formulate a data collection and privacy policy for your company, product, or app. Use simple, clear language (because nobody understands legalese), but run it past your lawyer to make sure everything is in order. Finally, make the policy easily accessible on your website and app.

Be transparent

While the law may shape how you disclose your policies and handle your data, being transparent with your users about how their data is collected, used, and shared is a very good idea; after all, transparency builds trust. Providing users with the power to control the data they share with you is also a giant leap forward. For instance, if you’re developing an app, consider giving users the ability to view, limit, or delete the data they have shared with you. This ensures that whatever data you hold has been collected entirely with the consent of your users.

Designing self-service pages where users can control their data is a huge step towards user privacy and consensual collection. Users can see the data they’ve explicitly provided, the data you’ve gathered in the background based on their usage, and the ways data is currently entering your systems. This encourages users to take an active, considered approach to their own privacy and allows them to refuse specific types of collection with an understanding of how that may affect their access.
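
To make this concrete, here is a minimal sketch of what such self-service endpoints might look like. The routes, field names, and in-memory store are illustrative assumptions rather than a prescription for any particular stack; the point is simply to separate what the user explicitly provided from what was gathered in the background, and to let them inspect and delete the latter.

```python
# Illustrative sketch only: a tiny Flask app exposing self-service data
# endpoints. Routes, field names, and the in-memory store are assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Toy data store keyed by user ID, split into explicitly provided data
# and data derived from background usage monitoring.
USER_DATA = {
    "user-123": {
        "provided": {"email": "jane@example.com", "display_name": "Jane"},
        "derived": {"last_login_country": "LK", "avg_session_minutes": 14},
    }
}

@app.route("/users/<user_id>/data", methods=["GET"])
def view_data(user_id):
    # Show the user everything held about them, in both categories.
    record = USER_DATA.get(user_id)
    if record is None:
        return jsonify({"error": "unknown user"}), 404
    return jsonify(record)

@app.route("/users/<user_id>/data/derived", methods=["DELETE"])
def delete_derived_data(user_id):
    # Let the user erase the data gathered in the background.
    record = USER_DATA.get(user_id)
    if record is None:
        return jsonify({"error": "unknown user"}), 404
    record["derived"] = {}
    return jsonify({"status": "derived data deleted"})
```

In a real product these endpoints would sit behind authentication and write to your actual data stores, but the shape of the interface is the same.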

When given a choice between collecting and correlating data in the background and asking for it explicitly from users, it is usually best to tend towards the latter. While your privacy policy may outline various ways that you may gather data, asking directly will minimize surprises and help build trust. Users may be willing to provide more information when they feel like they control the interaction rather than when it is collected by monitoring behavior, which can feel intrusive.

If you’re domiciled in a locality where GDPR applies, then it goes without saying that almost all of the above are requirements that you must comply with. GDPR is essentially a legal framework which governs how firms can collect and handle user data, while providing greater protection and rights to individuals. The costs of non-compliance with GDPR can be quite high. Smaller offences could result in fines of up to EUR 10 million or two per cent of a firm’s global turnover (whichever is greater). Those with more serious consequences can have fines of up to EUR 20 million or four per cent of a firm’s global turnover (whichever is greater). For more information, see what The Guardian has to say.
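
As a quick back-of-the-envelope illustration of the “whichever is greater” rule (using a made-up turnover figure, not any real company’s):

```python
# Toy illustration of the two GDPR fine caps described above.
def gdpr_fine_cap(global_turnover_eur: float, serious: bool = False) -> float:
    """Return the maximum possible fine for the two GDPR tiers."""
    if serious:
        return max(20_000_000, 0.04 * global_turnover_eur)
    return max(10_000_000, 0.02 * global_turnover_eur)

# For a hypothetical firm with EUR 2 billion in global turnover, the cap
# for a serious offence is 4% of turnover, i.e. EUR 80 million.
print(gdpr_fine_cap(2_000_000_000, serious=True))  # 80000000.0
```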

Build strong safeguards

If you are collecting user data, a data breach can be your worst nightmare. Not only would it be a public-relations disaster, but in a worst-case scenario, it could spell the end of your company or startup. Data breaches can lead to identities being stolen, credit cards being opened in people’s names without their knowledge, and even fraudulent tax returns being filed. If you’re going to collect all this personal data, it’s your responsibility to safeguard it.

To that end, we recommend that you:

  • Back up your data in case your systems crash
  • Ensure there is no plain-text personally identifiable information in your database (encrypt or anonymize it; see the sketch after this list)
  • Run anti-malware and antivirus software and firewalls that protect against data breaches (and keep them up to date)
  • Have an emergency plan in the event of a data breach
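
As a minimal sketch of the “encrypt or anonymize” point above, one simple approach is to pseudonymize direct identifiers with a keyed hash before they ever reach the database, so a leaked table does not expose raw emails. The key handling and record layout below are illustrative assumptions, not a complete data-protection scheme:

```python
# Illustrative sketch: pseudonymize a direct identifier before storage.
# The key and record layout are assumptions for the example only.
import hashlib
import hmac

SECRET_KEY = b"load-this-from-a-secrets-manager"  # never hard-code in real systems

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

raw_record = {"email": "jane@example.com", "plan": "premium", "sessions": 42}

# Store only the pseudonymized identifier alongside non-identifying fields.
stored_record = {
    "user_token": pseudonymize(raw_record["email"]),
    "plan": raw_record["plan"],
    "sessions": raw_record["sessions"],
}
print(stored_record)
```

A keyed hash is irreversible but still lets you match records belonging to the same user; fields you need to read back (rather than merely match) would need proper encryption instead.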

Minimise permissions

When you ask users for permission to access data or services on their phones, make sure you request only permissions that are appropriate for your app, not excessively intrusive ones. For example, if your app is a simple tic-tac-toe game, it makes no sense to ask for access to the device’s camera.

Don’t use code you don’t understand

Developers usually work with a lot of open-source software when building apps, and it is a very common (and good) practice to rely on other people’s code snippets, be it in the form of frameworks or libraries, where relevant. Platforms such as GitHub are a treasure trove of top-notch code snippets, which can often cut development time by a significant amount. But if that code is handling your users’ information inappropriately, it’s your problem. So make a point of checking code before you rely on it.

What are your thoughts on the data privacy vs. data sharing debate? Let us know in the comments below!

Cover image credits: Unsplash