Opinion

Platform as a Service (PaaS) is rising fast above the IT Services horizon

Calcey

Have you ever wondered how efficient software development would be if you could open up your Integrated Development Environment (IDE) and focus on your domain-specific logic and needs right from day one, without spending days architecting and configuring your development environment? Platform as a Service (PaaS) represents just such a zeitgeist in the industry.

Platforms that allow us to immediately code our core business processes with ease, without having to spend too much time configuring environments, have surfaced time and time again in the industry. Groupware platforms like Lotus Notes and customizable SaaS products like Salesforce are historical examples of quick-start, domain-focused development. However, most of these earlier attempts fell short of being completely flexible platforms for bespoke software development.

With today’s advancements in cloud computing technology and the resulting shift towards distributed development over the Internet, we once again have quick-start, domain-focused development knocking on the door – in the form of “Platform as a Service”. This time there seems to be real hope.

Let me briefly explain the basic concept behind PaaS.
There are many “common needs” when developing a web app with transactional support, say for a commercial enterprise. User management, security, concurrency management, scalability, persistence and failover are all common requirements of bespoke development projects. Integrated lifecycle management measures like configuration management, continuous integration, code analysis and unit test cases are usually essential environmental requirements of any sizable development project. Furthermore, in today’s age of shared services on the Internet, one might want to bridge one’s bespoke apps with best-of-breed service infrastructures like OpenID login or Amazon S3 file storage. Perhaps it is also a common requirement (if it isn’t already) to integrate apps with social media like Facebook, passing on certain information to be shared publicly.

Traditionally, we’d have to piece together the requisite development environment “by hand” and architect our project’s code structure ourselves. Most of us who have done this know that it requires a lot of thought and a ton of configuration across the various components of the environment, not to mention figuring out the design for the service integrations.

PaaS is envisioned to address just this problem – it allows us to register over the Internet with a provider, configure a development environment in a simple way – plugging different “cartridges”, like the required technology stack and service integrations, into the environment – and sync one’s local Integrated Development Environment with it (usually via a downloaded plug-in). Presto! We will have on our local machines a ready-for-development project that has all the environmental configurations wired, with the appropriate package structure and requisite files to address all our common infrastructure needs. The software that needs to run locally and remotely – web server, database, lifecycle management tools, the lot – will have been configured and structured in our local code base. We’d just have to write the custom code that governs the business logic and user experience, and check in! At least, that’s the concept in theory.

The best-known early strider in the PaaS direction is of course Google’s App Engine, but more recent examples of cloud-based, “quick-start” integrated development providers include OpenShift, AppFog and Stackato. They all have their pros and cons, and there are many online comparisons [1, 2] available for those who are interested.

Calcey Technologies is a strong proponent of PaaS, having exploited several leading providers like App Engine and AWS to the advantage of our clients.
References:

  1. A Java Developer’s Guide to PaaS: http://www.infoq.com/articles/paas_comparison
  2. Conducting a PaaS Comparison: http://apprenda.com/library/paas/conducting-a-paas-comparison/
How to

What is in a Test Case?

Calcey

As we all know, a Test Case is an important instrument in Software Quality Assurance (SQA). For the benefit of those aspiring to a career in SQA, I’d like to explain why writing good test cases is imperative, and go on to describe the key elements of a good test case template. But before I go into details, let me first define a test case.

A test case is a document that describes an action or an event in a software app, and the expected response to that action based on the given input. It determines whether the features of the application are working to the expectations of the end users. A collection of test cases is called a Test Case Document. A test case consists principally of specifications such as the Test Case ID, Description, Steps, Inputs, Expected Results and Pass/Fail criteria.

So why do we need a Test Case Document?

1. To identify gaps in business requirements at an early stage of development
In SQA practice, it is recommended to start writing test cases in the early stages of the development lifecycle, as doing so can help you identify problems in the requirements or design of an application. For example, if we are unable to figure out a test case (or test cases) for a given requirement, the requirement is probably too broad to be coded. This forces the product owner to go back to the drawing board and refine his or her statement of requirements.

2. To optimize testing effort
Writing test cases requires us to “think through” the entire app upfront from a functional perspective, and ensures we don’t miss out on test scenarios during actual testing. Whilst ad hoc testing is a critical component of any testing exercise, deadlines and last-minute, ill-considered judgments about functional boundaries by newcomers to the team may cause us to skip checking vital functional pathways in an app. This problem is avoided when the test cases are documented. On the other hand, having test cases also keeps us from repeatedly covering the same ground and wasting time; for example, stepping through the login functionality many times in an app that doesn’t have CAPTCHA-secured login. By spelling out exactly the scenarios necessary to ensure 100% functional coverage (I use the word functional here in a broad sense, inclusive of performance, usability and other relevant testing scopes), one can save time.

3. To instill confidence in a release
When a release is tested and a set of test cases is marked as “passed”, “failed” or “not run”, everyone in the team has a clear idea of where the release stands in terms of quality. Stakeholders can make an easy judgment call about whether to push a release to Live or to hold off for bug fixes, even when under pressure due to the business urgency of the release. If one relied on ad hoc testing, one would have to depend on the gut feeling of the testers, and there would be no formal accountability for the quality of a given release.

4. To scope down one’s testing effort easily
Having test cases helps you identify the most critical functional pathways of an app. This gives us the advantage, when under a time constraint, of intelligently scoping down the testing effort to cover the most relevant testing areas for the release.

Types of Test Cases
We can categorize test cases based on their purpose within the SQA lifecycle. Three of the most frequently used test case types are:
1. Functional Test Case – a test case that steps through new functionality in the app
2. Regression Test Case – a test case that is executed to identify the impact of changes on existing functionality
3. Smoke Test Case – a summarized test case or checklist item that is executed to identify whether a build is stable and can be accepted for further testing

How to improve Test Case Management
Due to their simplicity and low cost, most small software companies write test cases in Word or Excel document formats. However, there are many test management tools available in the market, such as Mercury Quality Center and QA Complete, or even open-source tools like TestLink.

What is the Calcey Test Case Template?
Calcey Technologies follows industry standards for testing, and shown below are the essential elements of our Test Case Document template, which any newbie to SQA can study and master.

Each field below is listed with its guideline and, where applicable, an example.

Test Case ID
Guideline: To uniquely identify a test case when referring to it in project communication. Use the prefix/section number given in the Functional Spec document; this will facilitate traceability mapping and help identify any missing test cases. Note: the test case IDs need not be just sequential numbers like 1, 2, 3, 4 etc.
Example: Functional Spec document: 5.1 Login. Test Case document: 5.1.0.1 for the first test case, 5.1.0.2 for the second test case.

Category
Guideline: There are two categories, UI and FUN. UI: UI test cases check the screen layout and the presence of all page elements. FUN: functionality test cases cover all the user actions belonging to a particular function, ensuring that the system works as intended and accepts all the valid inputs it is supposed to accept.

Feature Description
Guideline: This is the test case name; write it in the simple present tense.
1. UI category: Feature/Functionality name.
2. FUN category:
2.1. For valid combinations: Feature/Functionality name-Valid-<User actions if any>, e.g. Add user-Valid-Submit.
2.2. For invalid combinations: Feature/Functionality name-Invalid-<User actions if any>, e.g. Add user-Invalid-Submit.
2.3. For any Cancel/Abort action: Feature/Functionality name-Cancel, e.g. Add user-Cancel.
Try to cover each user action in a separate test case.
Example:
1. Scenario: Verify labels in the Login screen. UI: Login Screen-Labels
2. Scenario: Verify valid user login to the system. FUN: User Login-Valid
3. Scenario: Verify invalid user login to the system. FUN: User Login-Invalid
4. Scenario: Verify the Cancel button in the Login screen. FUN: User Login-Cancel

Negative/Positive Scenario
Guideline: A mandatory field, filled in at the time of writing the test case, to differentiate happy-path scenarios from negative scenarios.

Prerequisite
Guideline: Any activity that must take place prior to executing the test case. If previously executed test cases are used as pre-conditions, always use the cell reference of the corresponding test case. When writing pre-conditions, if a user is involved in the action, state the actor name(s) mentioned in the use case (e.g. System Admin, User). Write the prerequisite in the past tense.
Example: 1. Adobe Reader 8 must be installed. 2. User is logged in to the system.

Test Steps
Guideline: A mandatory item. These are the steps used to execute the test case. Each step in the test procedure has to be numbered (1., 2. and so on) and placed in a new row. Write the steps in the simple present tense.
Example:
1. Click on the Next button
2. Enter a valid security question
3. Enter the address
4. Click on the OK button

Input Data
Guideline: List the data to be entered into the relevant fields in order to execute the test case. Alternatively, you may point to a separate Excel spreadsheet that contains the input data values used for testing. If you do not know the input data, keep it as “<TBD>”. If there is no input data for a particular test case, include “N/A” in the cell. Do not leave this column blank.
Example: Admin User Login = testuser@calcey.com; Asset Name = <TBD>; N/A

Expected Results
Guideline: A mandatory item. For each test step, the predicted outcome should be documented under expected results; without it, the tester may not know whether the test case is a pass or a fail. It is better to use the word “should” when writing expected results, since “should” indicates that something is expected.
Example: 1. Segments should be displayed when the user clicks on the Organize tab. 2. The system should display the error “User Name cannot be left blank”.

Multiple Target Apps
Guideline: Today we often have mobile and web apps that are complementary and represent the same system under test. In such cases, one must clearly mark whether the test case must be repeated for several user interfaces or devices prior to passing or failing.
Example: App1, App3

Automated
Guideline: Indicates whether the test has been automated or is performed manually. You can also give the name of the script for easy traceability.

Status (Passed/Failed)
Guideline: Initially left blank; a mandatory field that has to be filled at the time of executing the test case. All tests executed should be marked as either passed or failed. If a test case cannot be executed for any reason, it should be marked as “Not executable” or “Deferred”, with a comment.

Defect ID
Guideline: A mandatory field that has to be filled at the time of executing the test case. For each test step, any deviation from the predicted outcome should be documented in the defect-tracking tool (such as JIRA), and the Defect ID generated by the tool recorded here.

Build Number
Guideline: A mandatory field that has to be filled at the time of executing the test case, to differentiate the test cases run against different builds.

Use Case Spec Reference
Guideline: Not a mandatory field. Document any other Functional Spec document or Use Case references here.

Comments
Guideline: Any comments to elaborate a situation that cannot be represented via the standard fields.
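
For readers who find a typed definition easier to scan than prose, here is a minimal sketch of the same template expressed as a TypeScript record. It is purely illustrative; the field names simply mirror the template above.

```typescript
// Illustrative only: the test case template above modeled as a typed record.
interface TestCase {
  id: string;                        // e.g. "5.1.0.1", prefixed with the spec section number
  category: 'UI' | 'FUN';
  featureDescription: string;        // e.g. "User Login-Valid", simple present tense
  scenario: 'Positive' | 'Negative';
  prerequisites: string[];           // e.g. ["User is logged in to the system."]
  testSteps: string[];               // numbered steps in the simple present tense
  inputData: string;                 // concrete values, "<TBD>" or "N/A", never blank
  expectedResults: string[];         // phrased with "should"
  multipleTargetApps?: string[];     // e.g. ["App1", "App3"]
  automatedScript?: string;          // script name, if the case is automated
  status?: 'Passed' | 'Failed' | 'Not executable' | 'Deferred';
  defectId?: string;                 // issue key from the defect tracker, e.g. JIRA
  buildNumber?: string;              // build the case was executed against
  useCaseSpecReference?: string;
  comments?: string;
}
```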
Interviews / Life at Calcey

Careers at Calcey, an engineer’s story

Calcey

Rajitha Egodaarachchi is a software engineer working for Calcey Technologies, a Colombo-based offshore software development facility catering to clients in the San Francisco Bay Area. Rajitha has been a fast-track performer and was recently nominated by his managers for promotion to senior software engineer, in recognition of his abilities and dedication. I caught up with Rajitha during his afternoon tea break on Friday, 02-Nov, to learn more about his work experiences and interests.

Sanduni: Welcome to the interview, Rajitha, and congrats on your upcoming promotion. Tell us a bit about yourself.
Rajitha: Thanks Sanduni. Well, I’m a software developer working presently for Calcey Technologies. I’m 24 years old, and a graduate in IT. I’ve been working at Calcey for the past two years.

Sanduni: Which university did you study at, and what subjects did you major in?
Rajitha: I got my degree from Curtin University, Australia, offered offshore through the SLIIT campus in Malabe, back in 2010. It was a “general degree” in Information Technology. The subjects I studied, however, were focused towards software engineering.

Sanduni: Why did you pick software engineering as a career?
Rajitha: I had a passion for this subject from my school days. I got to do a lot of interesting little software projects while at the IT Club of St. Peter’s College, which ultimately paved the path for my working in the software development sector. IT is the buzzword of our time, and whatever we do ends up having an information technology component in it. People literally hang out in cyberspace today, like on Facebook or Twitter, and almost every business we can think of can potentially be on the Internet. So I thought that specializing in this area would make my life interesting. The software industry is booming with new inventions every day.

Sanduni: Indeed. So why did you decide to join Calcey Technologies?
Rajitha: Well, a few companies called me for interviews. As soon as I got into the premises of Calcey, I felt like it had the ideal environment for me to begin my career. I always wanted to join a “not so big” company that is well established in the trade. Calcey is a sort of boutique firm, where senior folks are always available for brainstorming, where there is easy access to resources, including Facebook and YouTube [laughs], and where the salary scales are good. Besides, I saw that we could play games in the evening or even shoot each other with NERF guns! I just loved the “developer-friendly” environment that I was introduced to.

Sanduni: What was the first project you worked on? Tell us what the experience was like.
Rajitha: It certainly was challenging. I landed on a C# project called Vertical Platform (later I got to know it as one of the coolest projects to work on at Calcey). I was a new entrant to the industry… even though I had worked at HSBC previously, I had minimal development experience. So I had to work a lot harder to understand the requirements, the design concepts, and basically everything that is expected of the role of a Software Engineer. There was a ton of stuff to learn, ranging from configuration management using Git, to how to keep my cool under pressure.

Sanduni: How would you describe the work environment at Calcey?
Rajitha: Very appealing. Resources ranging from books to laptops are always available without restriction. We have an ethical, heterogeneous setup, and maintain high standards in terms of industry practices. You can always speak to the management about issues. Plenty of stress-relieving activities are available, like computer games, foosball, carrom or even a small in-house gym. You will find peers always lending a helping hand, as well as experienced seniors mentoring us on new concepts. Everyone’s informal and on a first-name basis. The leads are also straight-talking, and will point out your mistakes openly and often [laughs].

Sanduni: So Rajitha, what are your hobbies and interests? How do you make it all worth it personally?
Rajitha: I play computer games, sleep [laughs], swim, dance and work on personal R&D projects in my free time; I just hang out with friends on weekends. We go pubbing and clubbing once in a while, and we watch all the latest movies!

Sanduni: Great! Ok so do you really like your job? I mean, what improvements do you expect to see in your career in the future?
Rajitha: Yes, I do like my job. The job I currently do is in the profession I wanted to be in, it goes without saying. Moving forward, once I grasp the engineering aspects completely, I would seek to manage projects. Thus I’m looking forward to beginning my postgraduate studies in the coming months. I think it will help with my long-term career.

Sanduni: What was your learning experience like at Calcey itself?
Rajitha: Calcey practices Scrum, the most successful agile project management methodology that I know of, and I’m proud to have adjusted to an agile mindset. I also had to learn Objective-C and iOS development in double-quick time. It’s easy to switch between programming languages here, as there are experts in each domain that you can learn from. SQL, ASP.NET, MVC, iOS and JavaScript are a few areas of expertise that I tapped into, but I am aware that we also use other languages and frameworks like Python on App Engine, or even older technologies like ColdFusion.

Sanduni: What’s your best moment at Calcey? Is there any one particular incident that sticks in your mind?
Rajitha: I’ve nothing in particular to single out, but the zillion birthday parties, farewell parties, outstation trips and hangouts are all equally memorable for me. We are getting ready for a birthday party this evening, as you know…

Sanduni: Okay. Is there any advice you’d like to give to a newbie joining the industry?
Rajitha: Well, being a newbie myself just over two years back, I certainly felt the stern pressure put upon me when working towards deadlines coding complex features. Looking back after two years, the experience that one gains the hard way is the best one could get, and lays the foundation for the long and successful career that awaits. Never be afraid to work hard, and play hard!

Sanduni: Thank you for your time Rajitha –  and good luck!

Opinion

Is there a place for QA Testing in Scrum?

Calcey

With the emergence of the Agile software development zeitgeist at the onset of the 21st century, there occurred an upheaval in how professional competencies were demarcated within the software engineering industry. The established “project roles” and “professional practice groups” within the industry, such as Development, QA Testing, Project Management, Business Analysis and suchlike, were shaken up, with a general tendency towards de-specialization. A software developer was re-packaged as an “all-rounder” and expected to perform well in all departments. Project Managers were attenuated to “Scrum Masters” with a narrower window of responsibility, as compared with the PMs of yore who handled everything from elucidating business requirements to billing clients. The “management” effort was decentralized and distributed throughout a cross-functional team. Many intelligent folk in the industry welcomed this change, as it made developers better aware of the overall business requirements by placing them in direct contact with the client.

One notable early trend in agile product development teams was an aversion to having dedicated “testers” – after all, why would one need them, if one wrote one’s unit tests and tested one’s releases constantly in a continuous integration environment? For some years, agile development startups shunned the hiring of specialized human testers, on the basis that the developers would “perfect” the functionality purely through awareness of business needs and through end-user feedback from the client. The possibility that there might be such a thing as “end-user competency” in individuals, one that doesn’t always accompany programming competency, was completely ignored.

As with any other proposition in the scientific management of work, empirical evidence shapes engineering process. As we all know, today many agile development teams are back to recruiting dedicated testers to perform manual regression testing and a host of other mission-critical tasks. I’d like to detail four important tasks that our dedicated team of testers at Calcey performs, and explain why a realigned tester role is a valuable addition to software development.

1. Usability Testing
Testing the usability of user interfaces – i.e. how the requisite functionality is translated into a user-friendly experience – is the first stage in a project’s lifecycle where Calcey testers get involved. A product owner (or a developer) may quickly wireframe the functionality he or she needs and pass it on to the development team, but this first cut can benefit immensely from usability testing. What the test team does is print out the wireframes, place them before an “ordinary user” (a tester who is not aware of the product) and observe how he or she tries to interact with the wireframes to achieve an objective that is stated upfront. The questions raised and the time taken to achieve the objective are noted, and the eye and hand movements of the user are observed. Thereafter, the wireframe is modified to improve the user experience.

Sometimes, an experienced tester doesn’t actually need to carry out the “usability test”; he or she can simply draw upon past knowledge of “good practices” to redefine a user experience and make it a better one. We have found usability testing input especially useful in the context of developing completely different user interfaces delivering the same functionality across multiple form factors, like websites, iPhones or iPads.

2. Regression Testing
The beauty of Scrum is that it allows a QA team to function alongside the dev team, working more or less in parallel. What we discovered is that the Sprint time-box must accommodate a testing and bug-fixing period, if one is to avoid bug pileup. For example, if a single Sprint is three weeks long, two weeks are allocated for development, and one week is left for testing and bug fixing the Sprint demo release. During the initial two weeks of the Sprint, there is no regression testing, but the tester(s) can prepare for the upcoming release by drawing up simplified test cases on a spreadsheet. They can also continue testing the previous Sprint release, or engage in other critical testing activities such as performance testing or test automation (see below).

There is no hard-and-fast rule, but we find that in our “parallel development and testing” setup, the optimal bandwidth ratio for dedicated test resources is modest: in our experience, a team of four developers benefits from one dedicated tester. The critical success factor is that the tester plays an end-user role – looking upon the whole system as though he or she would have to work with the evolving product for years to come, without worrying about engineering complexity.

3. Performance Testing
Performance testing is a much-discussed and often overcomplicated activity. There are two generic types of performance tests that we set up and conduct during product development initiatives at Calcey. The first is the performance test proper. What we do is set up a reasonable transactional load on the given user interface under test, and record the response times. For example, how long would it take to log in to the system via the login screen and land on the home page, when five users log in at once? We would match our results against the performance expectations for the system provided by the client, or against observed industry norms for different devices and environments. A page change in a native iPad app would be expected to happen within a second, for example, whereas a parameterized search on a web page could be expected to take 3–5 seconds over the Internet.
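
As a rough illustration of the first kind of test, the sketch below (TypeScript, runnable on Node 18+) fires five concurrent logins against a hypothetical /login endpoint and records each response time. The base URL, endpoint and credentials are stand-ins; our real tests are set up in the tooling noted further below.

```typescript
// Sketch: measure response times when N users log in at once.
// The base URL, endpoint and credentials are hypothetical stand-ins.
async function timedLogin(baseUrl: string, user: string): Promise<number> {
  const start = Date.now();
  await fetch(`${baseUrl}/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username: user, password: 'secret' }),
  });
  return Date.now() - start; // elapsed milliseconds for this simulated user
}

async function runLoginLoadTest(baseUrl: string, users: number): Promise<void> {
  // Start all logins together so the server sees a simultaneous burst.
  const timings = await Promise.all(
    Array.from({ length: users }, (_, i) => timedLogin(baseUrl, `user${i}`)),
  );
  console.log(`response times (ms): ${timings.join(', ')}`);
  console.log(`slowest login: ${Math.max(...timings)} ms`);
}

runLoginLoadTest('https://staging.example.com', 5).catch(console.error);
```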

The second type of test we do is a scalability test. Here we gradually scale up the transactional load on a user interface’s functionality, in a ramp fashion, whilst measuring the response times at each increase in load. We’d run such a test on benchmarked hardware resources, and identify the breaking point of the system: the load at which the response time becomes infinite or the application crashes. The evaluation of the results is slightly more complex for a scalability test, as we have to factor in the design of the system and its dependency on hardware bandwidth.

In both of the above cases, the results are fed back to the development team for profiling and implementing performance improvement tweaks to the system. There are several automation tools we use for setting up performance tests, the most common being Apache JMeter for web apps, and Apple’s Instruments for Performance and Behavior Analysis for iOS apps.

4. Test Automation
Another important QA activity we engage in is the maintenance of automated regression test suites for web apps of significant complexity. We write Selenium test scripts embedded in application code (such as C#) to perform the basic operations of the system; for example logging in, searching for products and adding them to a shopping cart, in the case of an ecommerce system. An automated test suite complements unit tests; as most developers know, there are situations where it is not feasible to write unit tests, but it is very easy to “click through” and verify continuity via a Selenium web test. These automated regression tests are a living artifact, and need to be updated as the product requirements evolve. They help to speedily flag breaks in old functionality caused by new releases, and thus save the testers time when deciding whether to accept or reject a build. Writing test scripts also gives the QA team a chance to dig into simple code and keep their logical reasoning abilities sharp.
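
Our production scripts are written in C#, but to give a flavor of what such a “click-through” regression check looks like, here is a minimal equivalent using the selenium-webdriver package for TypeScript; the URL and element locators are hypothetical.

```typescript
import { Builder, By, until } from 'selenium-webdriver';

// Sketch of an automated "User Login-Valid" regression check.
// The URL and element locators are hypothetical stand-ins.
async function loginRegressionTest(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://staging.example.com/login');
    await driver.findElement(By.id('username')).sendKeys('testuser@calcey.com');
    await driver.findElement(By.id('password')).sendKeys('secret');
    await driver.findElement(By.id('login-button')).click();
    // Pass if the home page heading appears within ten seconds.
    await driver.wait(until.elementLocated(By.css('h1.home-title')), 10000);
    console.log('User Login-Valid: Passed');
  } finally {
    await driver.quit(); // always release the browser, pass or fail
  }
}

loginRegressionTest().catch(console.error);
```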

The below diagram summarizes the QA Process we follow at Calcey.

In our experience at Calcey, we find the “third eye” of the tester invaluable to producing quality, bug-free software (the first and second eyes being those of the client and the developer). The tester also acts as a sort of bridge between the developer and the client, challenging both parties to achieve an optimal balance between usability and engineering cost.

How to

How to access native iOS functionality from JavaScript

Calcey

Many of you iOS developers may have come across the need to render HTML within your native iOS app at some point in your mobile app development career. In such cases, have you ever found it necessary to call and execute certain native functions from within your embedded HTML code? For example, how does one print the screen’s contents whilst in an HTML Web View? We came across just such a business requirement recently, and thought it worthwhile to share how we solved the problem.

Before we step into the code, let me first provide some business context around our particular implementation scenario. Our top-level business requirement was to display various types of content – pictures, videos, slide presentations and so forth – in a native iOS app. However, one particular content type to be displayed was a “dynamic web content module”, that is, a package of HTML, CSS and complex JavaScript functions. The exact problem was that these HTML modules had to communicate with the native application and vice versa, whilst running in a Web View. Coding this requirement is not as simple as invoking a method in JavaScript from HTML. Our JS-API Bridge allows the module and the native application to communicate with each other. It works based on the following two features provided by the UIWebView in iOS.

shouldStartLoadWithRequest method of the UIWebView delegate
This method of the UIWebView delegate gets called each time the UIWebView loads a new URL. We can use it to send data from JavaScript running in the web view to our native code. We make a web request from JavaScript with a custom, nonstandard protocol identifier followed by the payload (e.g. nativecall://<native call payload>). Within shouldStartLoadWithRequest, we look for our protocol identifier, extract the data, and return NO to cancel the request.
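
To make this direction concrete, here is a sketch of what the JavaScript side might look like (TypeScript; the nativecall scheme follows the description above, while the action name and payload shape are our own illustrative choices). Navigating a hidden iframe fires the request that the delegate then intercepts and cancels.

```typescript
// JavaScript-to-native direction: serialize a payload and trigger a request
// with the custom scheme; shouldStartLoadWithRequest intercepts it, extracts
// the payload and returns NO, so the web view never actually navigates.
function callNative(action: string, data: Record<string, unknown>): void {
  const payload = encodeURIComponent(JSON.stringify({ action, data }));
  const frame = document.createElement('iframe'); // hidden, so the page is undisturbed
  frame.style.display = 'none';
  frame.src = `nativecall://${payload}`;
  document.body.appendChild(frame);
  setTimeout(() => frame.remove(), 0); // clean up once the request has fired
}

// Hypothetical usage: ask the native app to print the current content.
callNative('print', { title: document.title });
```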

stringByEvaluatingJavaScriptFromString
The stringByEvaluatingJavaScriptFromString method of the UIWebView enables us to evaluate a JavaScript string in the context of the document currently loaded in the web view. We can use this method to send data from the native code to JavaScript.

A working sample is available at: https://bitbucket.org/calceytechnologies/js-ios-bridge/
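
For this reverse direction, the web content only needs to expose a global entry point that the native side invokes through stringByEvaluatingJavaScriptFromString. A minimal sketch follows; the function name and message shape are illustrative, not taken from the published sample.

```typescript
// Native-to-JavaScript direction: a global callback the native code can reach, e.g.
//   [webView stringByEvaluatingJavaScriptFromString:
//       @"onNativeMessage('{\"status\":\"printed\"}')"];
(window as any).onNativeMessage = (json: string): void => {
  const message = JSON.parse(json); // payload handed over by the native app
  console.log('received from native:', message);
};
```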

How to / Life at Calcey

What is in a code review? Here is how Calcey Technologies does it.

Calcey

Code reviews are an important recurring checkpoint in agile software development, and a good engineering practice we follow at Calcey. As most software development teams know, frequent code reviews contain poor code quality, such as inefficiencies in unit-level design and lack of adherence to coding standards. Historically, the practice of code reviews existed in methodologies like RUP as both informal code walkthroughs and the more formal Fagan Inspection. At the onset of the agile revolution, code reviews were re-branded as peer reviews (which actually meant peer code reviews), a necessary ingredient in building stable software in an evolving fashion. The bottom-line justification for the time spent on code reviews is that they are essential if we are to end up with a scalable and extensible piece of software, as opposed to a hack-job that is both unstable (difficult to scale) and impossible to extend later for emerging market needs.

I’d like to outline our approach to code reviews, and how we conduct them. We have a rule of thumb which developers and Scrum masters use to initiate code reviews – any new release to a test environment must be preceded by one. This simple rule gives Scrum masters the flexibility to plan the review, but binds them to conducting it within a given development sprint. Our review setting is that of an informal workshop, where the developer concerned projects the code on screen and walks through sections of it, prompted by the reviewers. The review team consists of an architect and at least one other senior developer from outside the project under review, with competency in the programming language and frameworks concerned if possible. Other members of the project team are welcome to listen in and give their feedback. The Scrum master records the code defects in the task backlog and assigns them to the developer(s) concerned. The duration of a code review session varies between 30 and 90 minutes, depending on the scope of work accomplished during the sprint. We take our time, as faster is not better when it comes to an effective review; we inspect at most 300 lines of uncommented code per hour.

The reviewers keep an eye out for all the typical code vulnerabilities during the review. We begin with readability, style and conventions – there should be no code that an experienced outsider cannot understand after a brief explanation by the developer concerned. If there is, the code is likely to be either poorly structured (design defects) or poorly presented (style defects), or both. Calcey generally follows the industry-accepted coding style conventions for the major programming languages, such as the C# coding conventions from Microsoft. Unit tests are often a good place to assess the stability of the newly implemented functionality, and the obvious presence of stringent unit tests can help reduce the subsequent line-by-line review effort. We then move on to trapping major issues in earnest, checking for algorithmic inaccuracy, resource leakage, exception propagation, race conditions, magic numbers and suchlike. There are several online sources that closely portray the Calcey code reviewer’s mindset, such as this checklist from projectpatterns.org.
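
To illustrate the sort of finding a review turns up, here is a deliberately flawed snippet and its reviewed counterpart (TypeScript for brevity; the business rule is invented for the example):

```typescript
// Before review: a magic number and a swallowed exception.
function parseDiscount(raw: string): number {
  try {
    const value = JSON.parse(raw).discount;
    return value > 0.3 ? 0.3 : value; // why 0.3? undocumented business rule
  } catch {
    return 0; // malformed input silently becomes "no discount"
  }
}

// After review: the constant is named, and bad input is reported, not hidden.
const MAX_DISCOUNT_RATE = 0.3; // cap agreed with the product owner
function parseDiscountReviewed(raw: string): number {
  const value = JSON.parse(raw).discount; // let malformed JSON propagate to the caller
  if (typeof value !== 'number' || value < 0) {
    throw new RangeError(`invalid discount value: ${raw}`);
  }
  return Math.min(value, MAX_DISCOUNT_RATE);
}
```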

One of the biggest benefits of a workshop-style code review is that the authors of the code themselves realize defects and improvements, as a direct result of trying to explain how the code works to reviewers who might not be fully acquainted with the design. In situations where pair programming is not feasible, the code review mitigates the risk of “coding in silos” to a great extent.

Having said this, we also do our best to automate humdrum quality checks. Our .NET-based app development projects are integrated with StyleCop (downloadable from CodePlex) to check for style issues like custom naming conventions or compulsory XML documentation comments. We also advocate enabling Code Analysis in Microsoft Visual Studio to warn us of potential code defects at compile time, from the viewpoint of the Microsoft .NET Framework Design Guidelines. Apple iOS development comes with its own set of code analysis tools – we use Instruments for Performance and Behavior Analysis to profile our code at runtime and identify memory leaks, a common tendency when programming in Objective-C.

Code review metrics such as code coverage and defect count are gathered from the individual reviews by the Scrum masters, and submitted to our principal architect for statistical analysis, strictly to improve the effectiveness of the review process (and not for finger-pointing). Junior developers can hope to learn a lot from well-conducted code reviews, not only about the specific technologies and design principles involved, but also about working together as a team to engineer a quality product. After all, our aim is to perform what Jerry Weinberg, nearly half a century ago, named “egoless programming”.

“The objective is for everyone to find defects, including the author, not to prove the work product has no defects. People exchange work products to review, with the expectation that as authors, they will produce errors, and as reviewers, they will find errors. Everyone ends up learning from their own mistakes and other people’s mistakes.”  – Jerry Weinberg, “The Psychology of Computer Programming”, 1971

Life at Calcey / Opinion

Haven’t yet been able to adapt Scrum to match the ground-realities of your business? Find out how we did it

Calcey

Project management is a crucial weapon in the arsenal of any software development outfit. It’s probably the most-discussed competency in software engineering, judging by the sheer volume of scholarly papers, conceptual models, blog articles and entire schools of thought that have been churned out on the subject over the past two decades. We’ve seen process frameworks like Waterfall, SSADM and RUP come and go, and a shift from centralized delivery responsibility resting on the service provider towards distributed ownership across an extended team inclusive of the client. We live in a world of “Agile” software development today, a zeitgeist of management thinking based on keeping processes to the bare essentials, building products incrementally and eliminating humbug within teams. We have even seen the formal “role” of the project manager (stereotyped as the big, bad bogeyman of the team) disappear within the modern agile paradigm.

Call it what you like – a person, or the collective reasoning within a team – we find that effective project management remains an essential ingredient in “getting the job done”. Moreover, project management success in software development engagements often remains elusive. I’d like to summarize our own successful methodology at Calcey, and go on to explain a few of the deeper lessons we learned through our collective management experience, for the benefit of our future clients.

We follow a project management methodology that is a derivative of Scrum, and which has benefited from long years of practical experience in delivering projects of varying sizes and technical complexities. Our conceptual framework is fairly simple. We agree with our clients to form a single team having joint responsibility for the project, at the early stage of pre-sales negotiation. Whilst in theory we are not supposed to estimate the end-to-end scope of work in Scrum, in practice we have found it impossible to find a client who would agree to an entirely open budget with no indicative calendar timeline for building a product. So an initial ballpark estimate is made. This is purely for purposes of budgeting – to provide the client with a broad feel for the costs involved, and to determine the resource bandwidth to be deployed in order to meet a very approximate calendar schedule. This budget is made against the broad set of features that the product comprises, as understood at the inception of the project. Once a project is contracted, we move forward in earnest to apply our Scrum model.

A Calcey Scrum Master’s life revolves around their project backlog. They manage both the product’s roadmap of features and the specific tasks (or bugs) for the current sprint via an enterprise backlog app such as JIRA, TeamworkPM or Basecamp. JIRA offers the highest flexibility in managing the complete lifecycle of a development project, but both TeamworkPM and Basecamp have proved to be interesting alternatives for managing smaller-scale engagements. In any case, it is not the choice of tool itself that we found important, but rather the diligent use of the backlog as a concept for task management. Handwritten backlogs diligently maintained in the corner of a whiteboard, tagged with the words “don’t erase”, seemed to work better in some situations!

We plan development for a time-boxed sprint, whose duration is usually a fortnight for technologies we are well experienced in, and a month for greenfield technologies or projects of high engineering complexity. The duration is decided at the initial sprint, where we estimate what could be achieved within the budgeted engineering bandwidth. Once decided, we stick to this time-box throughout the lifetime of the project. The outcome of any given sprint is of course a release of working software – working, but not bug-free or complete in functionality. As the sprints progress, the software “emerges” as a viable product for launch. A lot has been said in the industry about the generic form of the Scrum methodology, so I’d like to move on to a few specific lessons we learned at Calcey through our experience. A snapshot of the recurring activities that we practiced is shown below.

The first and biggest lesson for those of us who were new to Scrum was that, unlike in any other methodology, Scrum management is an explicit activity like coding or testing. We’d scan the client horizon as well as our own engineering backyard each morning via the daily stand-up meeting, update our project backlog, and get into action to follow up on the individual tasks that need facilitation. We found that if we have a “living” task backlog that gets updated without fail each day (with dates, milestones etc.), we can use it as the vehicle to drive our work: to psyche up the team, provide expert external assistance or reset client expectations. So the Scrum Masters don’t “go to sleep” outside of sprint planning and the daily stand-up meeting – on the contrary, they work hard each day to facilitate the resolution of issues arising from the daily stand-up.

The effort required for effective sprint planning is not trivial, as we learned through experience. In theory, the estimate given at sprint planning (“I’ll finish task X within the next two weeks”) is considered sacrosanct. This ought to be so, because cascading task “spillovers” into subsequent sprints could buckle the whole paradigm of time-boxed incremental achievement, and sprint velocities could take a nosedive. So we found it worthwhile to invest an entire day in sprint planning. This day is not counted into any given sprint, and its principal goal is to freeze the list of tasks to be completed during the upcoming sprint. A full day provides enough time for the team to mull over the complexities of the tasks and break them down into smaller goals if necessary. The sprint planning meeting itself assumes a sort of “workshop” format, where folks can pop outside for quick R&D and return with better knowledge of the complexities of the work involved. Ultimately, everyone walks away to implement what they consider their own sprint plan, approved by the product owner.

Another significant lesson was overcoming the common problem of the trivialization of testing. Updating automated unit tests and manual test plans, smoke testing, regression testing and bug fixing all take up a considerable percentage of the time needed to implement a given piece of functionality. Moreover, contrary to the idealistic belief amongst agile gurus that all competent software engineers are also competent testers (or ought to be), we find in practice that the eyes of a person with a strong end-user perspective are essential to ensuring a healthy demo at the end of the sprint. So we found it useful to divide a given sprint timeline conceptually into a “new dev” period and a “testing and bug fixing” period, at sprint planning itself. This helped us reduce the otherwise frightening tendency towards “bug pileup” that so often happens in Scrum projects – where new development forges ahead of bug fixing, causing instability in the releases as time goes by.

The use of lifecycle automation tools was an immense help to us, and we consider them part and parcel of our agile methodology. Anything useful – build automation tools like CruiseControl, source repositories like Git, test scripting frameworks like Selenium and code analyzers like FxCop – was absorbed into our development framework.

The single hardest challenge, though, was making sure that the client representatives became an integral part of the team, and that they felt inherently responsible for the incremental development in a hands-on fashion. Success or failure of a given sprint is declared by the product owner immediately after the sprint demo; this is one important reason why the product owner (or his or her competent representative) is part of the team. This helped us prevent the situation where a client determines that the product under development has veered radically off course after (say) a dozen sprints. If such a situation does arise, it basically tells us that the fundamental paradigm of dealing with a complex problem in small increments has not been adopted by the client. There are many ways to convey the message of joint ownership and incremental assessment, and in practice we have found that the most effective is to discuss this very problem upfront, prior to undertaking a new project. We usually stress that the success or failure of each sprint must be determined at the next sprint planning, and that adjustments must be made locally, at the scope of each sprint. These adjustments include “management decisions” like filling skill gaps or increasing the engineering bandwidth.

Let’s face it: software development is not comparable to dam building – a common misconception amongst management types. Although there is definite commonality in the broader values required of the team, like honesty, dedication and professional competency, the fundamental drivers of the work are not the same. Intellectual effort taxes both our left and right brains, with plenty of logical reasoning bootstrapped by flights of inspiration and lateral thinking. This “mind game” of software project management requires a management methodology that fosters creativity whilst compensating for common human failings like poor memory. Scrum is just such a methodology, and it has proved highly effective for us at Calcey.

How to / Life at Calcey

High-performance search via SQL-ElasticSearch hybrid solution

Calcey

One of our recently concluded projects involved building a generic platform for managing multiple B2B online marketplaces catering to different customers within the biotechnology space. An interesting problem we faced when building the “Search” functionality for this platform was how to deal with the massive volumes of products and product specifications that had to be “intelligently” sifted through when rendering a search result to the user.

The platform supported semantically different content types like Products and Articles, and moreover, each product could have a large number of specifications tagged to it. The search functionality required was feature-rich: the results had to be prioritized based on weights attached to different product specification types (color, weight etc.), product localization and other configurable site-specific parameters. This meant we needed significantly complex SQL queries to generate a search result. In addition, we were dealing with large volumes of data: as many as 2.8 million products in a single online marketplace, mapped to over 40 million specification records in the MS SQL Server database. An initial proof of concept using purely SQL queries proved futile; it took around 20 seconds to render a properly weighted search result, in spite of the ample allocation of hardware resources within the hosting environment. We had to go back to the drawing board and rethink our search architecture in order to improve its performance.

We then did some research on ElasticSearch, a schema-free, document-oriented search solution that could be hosted in a cloud environment for scalability. This got us thinking along the lines of a hybrid architecture, where we would distribute the processing of the search’s vast data volume across a cloud deployment of ElasticSearch, whilst running the complex queries for weighting the search results within the usual dedicated MS SQL Server environment. The final solution was a two-piece affair, as depicted in the (simplified) conceptual diagram below.

When a user types a search string on a given product marketplace website and hits the “Search” button, a two-step process is invoked. A summarized database of products and essential specifications is deployed on a cluster of ElasticSearch nodes on the Amazon EC2 cloud, and updated on a daily basis (see “Daily Update Task”). This full-text search server is first queried for the search string via the ElasticSearch web service API, and a “shortlist” of product records is sent to the DAL of the marketplace app as a JSON string, in double-quick time. The details of the search results – i.e. all the product attributes and the application of the specification-based weighting rules – are then generated via regular SQL queries, run on the MS SQL Server database against the shortlisted recordset provided by ElasticSearch.
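
The essence of the two-step flow can be sketched as follows (TypeScript; the host, index and field names are hypothetical, and the SQL step is stubbed out, since the real weighting queries belong to the platform’s data access layer):

```typescript
interface Product { productId: number; name: string; weightedScore: number; }

// Step 2 stand-in: in the real system this runs the heavy, weighted SQL
// against MS SQL Server, restricted to the shortlisted IDs.
async function fetchWeightedDetailsFromSql(ids: number[]): Promise<Product[]> {
  return ids.map(id => ({ productId: id, name: `product ${id}`, weightedScore: 0 }));
}

async function hybridSearch(term: string): Promise<Product[]> {
  // Step 1: ask the ElasticSearch cluster for a small shortlist of matches.
  const response = await fetch('http://search.example.com:9200/products/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      size: 200,                                   // shortlist only, not full detail
      query: { match: { searchable_text: term } },
      _source: ['product_id'],                     // keep the JSON payload small
    }),
  });
  const body = await response.json();
  const shortlist: number[] = body.hits.hits.map(
    (hit: { _source: { product_id: number } }) => hit._source.product_id,
  );

  // Step 2: run the weighted SQL only over the shortlisted records.
  return fetchWeightedDetailsFromSql(shortlist);
}
```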

The actual implementation was a learning experience for us. For example, we initially wired the web app, which was hosted in a data center in San Francisco, to an ElasticSearch solution hosted on Amazon’s cloud servers on the East Coast of the United States. This introduced significant network latency, which was greatly reduced by moving our ElasticSearch solution to the West Coast. We also realized that we could break up the summary recordset returned by ElasticSearch based on pagination criteria, and query the details from the local MS SQL Server to render page-wise search results. This reduced database processing time significantly. Several other minor tweaks were made to the detailed design, such as transferring the complexity of certain bitwise operations, used to determine localization-based priority for products, to the server-side SQL queries. All these improvements were based on thorough performance testing of the search implementation.

The final SQL-ElasticSearch hybrid solution dramatically reduced the waiting time for a given search query, from as much as 20 seconds in the pure SQL solution to under two seconds in the hybrid. This performance improvement was immensely appreciated by our client and the end users of their online B2B marketplaces.

Life at Calcey / Opinion

Learn Smart, Hire Smart

Calcey

What makes an applicant for a software engineering job an attractive candidate for hire? This perennial question is likely to be uppermost in the minds of both employers and their prospective employees. Software engineering is a highly competitive job market where, on the one hand, industry expectations run high, and on the other, hundreds of “education providers” profess to deliver a sound education in computer science. It can get confusing for aspiring software development professionals as to what exactly is required of them to secure that first salaried position… and it can be frustrating for employers when newly hired staff do not perform up to modest expectations, simply because they have not approached the trade with the right mindset.

Let us try to outline the measurable skills of an “ideal” software engineering candidate who presents herself for interview. First and foremost, she’d be very strong in her software engineering concepts. An in-depth knowledge of OOP concepts would be evident. She’d be thorough with the architecture of at least one development platform of choice, and be aware of how the various sub-systems and libraries of the platform work in cohesion. For example, if we consider .NET as the development framework, a savvy software engineer would be aware of the high-level workings of the CLR and the Common Language Infrastructure. The candidate would have the essential background knowledge to develop web applications, such as an understanding of the lifecycle of a web request. She’d be familiar with lifecycle management concepts like continuous integration, unit-test-driven development and configuration management.

A “good” software engineering candidate would know about industry-recognized design patterns, and ideally would have adopted a few patterns into her experimental project code. Conceptual knowledge of widely used design patterns like Singleton, Abstract Factory, Factory Method, Facade and Proxy is essential know-how, since these concepts tune the programmer’s mindset towards leveraging best-of-breed solutions to standard design problems, without reinventing the wheel (and reinventing it rather poorly, perhaps). Whilst generally having a modern, agile, paperless, code-design-based approach to programming, a budding developer must have enough sense to initially whiteboard any non-trivial implementation and seek peer review from her teammates. At an interview, this skill can be displayed through one’s ability to represent a simple design problem as a UML class diagram.
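
To make one of those patterns concrete, a textbook Singleton might look like this (TypeScript; a sketch of the pattern itself, not code from any particular interview):

```typescript
// A textbook Singleton: one shared, lazily created instance.
class ConfigRegistry {
  private static instance: ConfigRegistry | null = null;
  private readonly values = new Map<string, string>();

  private constructor() {} // private: callers cannot construct their own copy

  static getInstance(): ConfigRegistry {
    if (ConfigRegistry.instance === null) {
      ConfigRegistry.instance = new ConfigRegistry(); // created on first use
    }
    return ConfigRegistry.instance;
  }

  set(key: string, value: string): void { this.values.set(key, value); }
  get(key: string): string | undefined { return this.values.get(key); }
}

// Both calls return the same instance, so state set in one place is visible everywhere.
ConfigRegistry.getInstance().set('env', 'staging');
console.log(ConfigRegistry.getInstance().get('env')); // "staging"
```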

In this day and age of vastly scalable non-relational databases and cloud computing, one might be tempted to frown upon knowledge of plain old-fashioned relational database concepts like ER diagramming, SQL, normalization and optimization. However, these concepts are still very much in use in most enterprise applications, and must be studied and understood. Woe to the interview candidate who cannot answer a question like “Why would one sometimes need to de-normalize a database? Can you think of an example?” or “What is the use of an index?”.

Some conceptual knowledge of how people work together to deliver a project to a client is mandatory for beginners. Familiarity with a lightweight team-engagement paradigm like Scrum would be ideal to have.

Successfully facing a test of conceptual knowledge is of course not the only indicator of potential success at an interview. But the concept-savvy candidate is the quintessential software engineer. The era of concept-blind “code monkeys” working principally through trial and error (“copy-paste coding”) ought to be frowned upon by any respectable software engineer or employer. Of course, we are speaking of engineering graduates who have had ample time and guidance to perfect their attitude to programming. The story might be different if one is a job applicant fresh out of high school. In a sense, the evaluation such trainees face is harsher, because employers have to rely on the rather controversial concept of “a high IQ”. It has to be admitted that a good developer will almost certainly have strong innate logical reasoning capacity, usually expressed as a gift for providing algorithmic solutions to real-world problems. This is the sort of test that someone with no formal training in computer science would face at an interview (e.g. “tell us the steps to sort a randomized list of numbers from 1 to 100”).
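
For that classic question, one first-principles answer is selection sort: repeatedly find the smallest remaining number and swap it into place. A sketch (TypeScript):

```typescript
// Selection sort: after pass i, positions 0..i hold the i+1 smallest values in order.
function selectionSort(numbers: number[]): number[] {
  const list = [...numbers]; // work on a copy, leave the caller's array intact
  for (let i = 0; i < list.length - 1; i++) {
    let smallest = i;
    for (let j = i + 1; j < list.length; j++) {
      if (list[j] < list[smallest]) smallest = j;
    }
    [list[i], list[smallest]] = [list[smallest], list[i]];
  }
  return list;
}

// e.g. a shuffled handful of numbers comes back in ascending order
console.log(selectionSort([42, 7, 99, 1, 100, 3])); // [1, 3, 7, 42, 99, 100]
```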

Having said this, possessing a high IQ, being knowledgeable about engineering concepts and being a great team worker still doesn’t complete the picture of a “good” software engineer. At the heart of a successful developer lies an appreciation of the end user’s objectives for the system under development. No matter how “smart” a software developer is at codifying algorithms or grasping new frameworks, she only succeeds when she has met the end user’s expectations. So “well-rounded” developers are also great end users – those who try out new apps as a hobby. Unless you use software freely in your own life, it’s unlikely you know what a user-friendly, defect-free app is. Along with an appreciation for usability comes an appreciation for testing one’s code.

So from the perspective of a job applicant, what does one do to prepare for that first interview? Ask oneself the question: am I aware of the basic concepts? OOP – check. Design patterns – check. Web request lifecycle – check. Logical reasoning and writing of algorithms – check. Configuration management concepts – check. Relational database design – check. And so the list goes on. Freshers preparing for an interview should master the ability to clearly demonstrate their knowledge of these concepts, by giving both textbook metaphors of their use and purpose, and by recounting how they have used or encountered these concepts in past projects (at university or elsewhere, perhaps within common development frameworks themselves).

Let us now look at the other side of the coin: how does a potential employer spot someone with great potential as a software developer? At the heart of making a good hire lies the ability to gauge whether a candidate can reason from first principles, literally thinking on her feet. After making a candidate comfortable, it’s an excellent idea to present a simple real-world problem on the board, and see how far she can get with providing a solution in design or pseudo-code. Give the candidate ample time, support and encouragement. Once the initial jitters are at bay, a good candidate will always make a genuine attempt to answer the problem. A weak candidate will always stall early on algorithm or design problems, for lack of the critical reasoning capacity that is an essential trait of a good software engineer.

It might sound like a harsh reality, but software engineering is not for everyone, just as any other “trade” like art or music or management is not for everyone. However, everyone must be given the opportunity to try their luck at this fascinating trade, and it is likely that those who succeed will be those who approach solving real-world problems logically, from first principles, with great diligence. The sooner aspiring software engineers learn this, the better their career prospects. This is, and must be, a transparent truth between the industry and the thousands of prospective new entrants.

Life at Calcey

Calcey hackathon promotes entrepreneurship and product development

Calcey

Sunday Observer, June 3rd, 2012

Calcey Technologies, a software development services company in Sri Lanka catering to the US market and a full member of AmCham, recently celebrated its 10th year in business in a unique and productive way, by organising an inter-university hackathon.

CEO and founder of Calcey Technologies, Mangala Karunaratne, said that one objective of the hackathon was to promote entrepreneurship and a product development culture among young IT graduates in Sri Lankan universities.

This is important if Sri Lanka’s IT industry is to develop beyond being just an offshore body-shop destination. A hackathon (also known as a hack day, hackfest or codefest) is an event in which computer programmers and others in the software development field, such as graphic designers, interface designers and project managers, collaborate intensively on software-related projects. Hackathons typically last between a day and a week. Some hackathons are intended simply for educational or social purposes, although in many cases the goal is to create usable software, or to improve existing software.

Karunaratne recounted the company’s beginnings and its ten-year road to success in the software industry. “It was a humble beginning: the company was started in an old office belonging to my father in Kularathna Mawatha, Maradana, with only two developers. I had work experience in Silicon Valley as a product manager for Nortel Networks. I then decided to start this business.

“Being an astute entrepreneur at heart, my father encouraged me to use the relationships I had forged in the US and launch an offshore consulting business.

“Today, Calcey Technologies is a multi-million dollar consulting business housed in a four-story, well-furnished building at Seibel Avenue, Kirilapona, with 60 staff,” he said.

“We served many famous names along the way, such as the WikiMedia Foundation, Hoya ConBio, BNBuilders and JiWire Inc. One particular client relationship that blossomed into a great partnership was with Compare Networks, a giant in the biotechnology marketing space. We began by providing BPO services for managing their web content, and ended up developing an entire platform to manage their online B2B marketplaces. We’ve done business with Compare Networks for over five years to date, and we hit it off really well with them,” he said.

“I believe we are a truly innovative company in comparison with most other offshore development companies.

“We have dared to be different by investing some of our profits to incubate products and spin off new ventures, harnessing the creativity and ingenuity of our own staff. Our first product is already online at Xaffo.com, and is a foray into the rapidly expanding space of Social Media Intelligence.

“We hope to do what Google Analytics does for websites, but this time with a social media angle to the metrics provided. It’s early days for Xaffo, and we are keeping our fingers crossed,” Karunaratne said. This all-weekend event took place on May 19 and 20.

The goal of a ‘hackathon’ is to have programmers “hack together” a working piece of software that addresses a real-world product requirement, in double-quick time. The emphasis in a hackathon is on meeting the specified product requirement, using whatever coding shortcuts are available to the programmers.

The technical solution is considered important, but secondary to a speedy delivery of the product.

Six leading Sri Lankan universities participated in the event: the University of Colombo School of Computing (UCSC), the Department of Computer Science and Engineering, University of Moratuwa (CSE), the IT Faculty, University of Moratuwa (Moratuwa IT), the University of Kelaniya, the Sri Lanka Institute of Information Technology (SLIIT) and the Asia Pacific Institute of Information Technology (APIIT).

A team of three undergraduate programmers represented each campus, and they were free to use any technology platform of their choice to get the job done on time. The winners of the hackathon will be announced next week, after evaluation of the work done by the teams.