Opinion

Django, for better-performing websites with rapid development


Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Django lets developers build deep, dynamic, interesting sites in an extremely short time. The framework is designed to let the developer focus on the fun, interesting parts of the job while easing the pain of the repetitive bits. To that end, it provides high-level abstractions of common web development patterns, shortcuts for frequent programming tasks, and clear conventions on how to solve problems. At the same time, Django tries to stay out of the way, letting the developer work outside the scope of the framework as needed.

Django is usually called an MVC framework, and justifiably so: it is heavily influenced by classical MVC, and it is even possible to argue that Django improves on the architectural pattern. In Django, the three core layers are the Model, the View, and the Template. A key advantage of this approach is that the components are loosely coupled.

Having come to know the interesting features of Django, we recently used it on a pilot product development project, called Xaffo, which had demanding needs of a web framework. Xaffo, a cloud-based social media monitoring tool, allows users to analyze the popularity of their brands across the leading social networks.

Why we decided that Xaffo needs a web framework like Django
Xaffo basically deals with large chunks of social media analysis data shuttled between different web services, with huge task lists running in the background that require high performance and scalability. The Xaffo prototype was initially built on Google App Engine, and therefore in Python, and keeping the Python code base was also an essential part of our decision-making.

Meeting performance expectations when handling large sets of data was the challenge that made Django's features relevant and interesting. For handling large numbers of background tasks, Celery (explained below) came in handy, allowing workers to be dynamically added or removed alongside Django.

When handling large sets of data, Django pairs well with MongoDB, giving us easy database connectivity with high performance.

Xaffo is hosted behind the nginx web server with uWSGI, for high performance in serving both static and dynamic content. The combination of this hosting environment and Django proved optimal. Xaffo runs on Amazon EC2 for scalability, and Django turned out to be the perfect web framework to tie all these components together.

Django made it easy to accomplish the tedious tasks that Xaffo demanded, with appealing features such as:

  • Object Relational Mapping
  • Template System
  • URL Resolver
  • Forms
  • Admin Site
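To illustrate a few of these features working together, here is a minimal, hypothetical sketch (the model, view, and URL names are our own invention, not actual Xaffo code; it assumes an installed and configured Django project, with each part living in the usual models.py / views.py / urls.py files):

```python
# Hypothetical sketch only - illustrative names, not the Xaffo code base.
from django.db import models
from django.shortcuts import render
from django.urls import path


class Brand(models.Model):
    # ORM: each attribute maps to a database column; no SQL needed.
    name = models.CharField(max_length=100)
    followers = models.IntegerField(default=0)


def brand_list(request):
    # View: query via the ORM and hand the results to a template.
    brands = Brand.objects.order_by("-followers")
    return render(request, "brands/list.html", {"brands": brands})


# URL resolver: map a URL pattern to the view.
urlpatterns = [
    path("brands/", brand_list, name="brand-list"),
]
```

The template system then renders `brands/list.html` with the queryset, and the same `Brand` model can be registered with the admin site for free CRUD screens.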

In addition to the aforementioned features, Xaffo relies heavily on Django's modularity. We used several third-party Django packages, such as:

  1. Celery
  2. MongoEngine
  3. Flower

Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. The execution units, called tasks, are executed concurrently on a single worker server or multiple worker servers using multiprocessing, Eventlet, or gevent. Tasks can be executed asynchronously (in the background) or synchronously (wait until ready). Celery is used in production systems to process millions of tasks a day.

Celery is used with Xaffo to manage the periodic tasks that are executed to fetch and calculate data gathered from various social network APIs. RabbitMQ is used as the message broker.
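As a sketch of how such periodic fetching might be wired up (the task name, schedule, and broker URL below are illustrative placeholders, not Xaffo's actual configuration; it requires Celery installed and a reachable RabbitMQ broker to actually run):

```python
# Hypothetical periodic-task sketch; names and schedule are illustrative.
from celery import Celery
from celery.schedules import crontab

# Point Celery at RabbitMQ as the message broker.
app = Celery("xaffo", broker="amqp://guest@localhost//")


@app.task
def fetch_brand_stats(brand_id):
    # Call a social network API and store the results (stubbed here).
    ...


# Celery Beat schedule: run the fetch task every 15 minutes.
app.conf.beat_schedule = {
    "fetch-brand-stats-every-15-min": {
        "task": "tasks.fetch_brand_stats",  # module path is illustrative
        "schedule": crontab(minute="*/15"),
        "args": (42,),
    },
}
```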

MongoEngine is an object-document mapper for Django that feels very similar to Django's own ORM. Developers who are already familiar with the Django ORM can interact with MongoDB without having to learn a whole new API.
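A hypothetical document definition shows how familiar the API feels to a Django ORM user (the class and field names are our own illustration, and a running MongoDB instance is assumed):

```python
# Illustrative MongoEngine document; requires mongoengine and MongoDB.
import mongoengine as me

# Register a connection to the "xaffo" database on localhost.
me.connect("xaffo")


class BrandSnapshot(me.Document):
    # Fields read much like Django model fields.
    brand = me.StringField(required=True)
    network = me.StringField(choices=["facebook", "twitter"])
    followers = me.IntField(default=0)


# Query syntax mirrors the Django ORM, e.g.:
# BrandSnapshot.objects(network="twitter").order_by("-followers")
```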
Celery Flower is a tool we use with Xaffo to monitor the periodic tasks executed by Celery. It provides a web interface with information such as task progress, graphs, and statistics.

To wind up, Xaffo was a successful project with Django, proving to us that the Django web framework is highly suitable for projects that require high performance, scalability, and heavy backend processing of large chunks of data. It is also a rapid application development (RAD) framework in this context, and it facilitates meeting tough deadlines.

Django takes away the tedious tasks of the development environment and makes it easier to build better web apps more quickly with less code. We at Calcey recommend it wholeheartedly!

More info:
Django – http://www.djangoproject.com
MongoEngine – http://mongoengine.org/
Celery – http://celeryproject.org/
Flower – http://docs.celeryproject.org/en/latest/userguide/monitoring.html#flower-real-time-celery-web-monito
RabbitMQ – http://www.rabbitmq.com/features.html

Life at Calcey, Opinion

Calcey Technologies adopts a domain-driven design approach

Calcey

Domain-Driven Design (DDD) is an object-oriented approach to designing software based on the business domain, its elements and behaviours, and the relationships between them. It aims to build software systems that are a realization of the underlying business domain, by defining a domain model [1] expressed in the language of business domain experts.

The core idea is that the business domain stakeholders and the technical team must communicate in a ubiquitous language. A domain model can be viewed as a framework from which different solutions can then be rationalized. For example, the domain might be retail, and three different solutions sitting on the domain model for retail sales might be:

  1. An online store for the general public.
  2. An order processing system for the store’s staff.
  3. A special offers app for mobile devices that notifies customers about offers based on their proximity to the store.

The domain model will be defined with the assistance of experts in the retail business, where certain “fundamental” concepts in the retail trade will be built into the model as entities, value objects and aggregates. These entities will reside within a domain layer in the conceptual architecture of the overall system, to be leveraged by upper layers that render end-user functionality.

We recently adopted a domain-driven design approach to build an app for the centralized distribution and control of Multimedia Marketing Content to Sales Staff. The business domain is one of marketing content management, and we sought our client’s expertise in this field to help build a generic domain model.

In marketing content management, the basic concept is that there are market segments (aka business units), and marketing content is associated with these segments. The actual content can be folders or multimedia content items. In domain-driven design, your objective is to create a model of the domain. You need to identify the items (entities) required to accomplish the desired functionality of your application, work out the relationships among the different entities and how they interact, and check whether your client's business goal is achievable using your domain model. You do not need to know how or where the data of your domain will persist, or even whether the data needs to persist at all, while you model the domain.
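For illustration only, the segments-and-content idea above might be sketched as plain objects with no persistence code at all (the class and attribute names are our own, not the client's actual model):

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative domain model for marketing content management:
# market segments own content items. Note there is no database
# code here at all - only domain concepts and behaviour.


@dataclass
class ContentItem:
    title: str
    media_type: str  # e.g. "video", "pdf", "image"


@dataclass
class MarketSegment:
    name: str
    items: List[ContentItem] = field(default_factory=list)

    def publish(self, item: ContentItem) -> None:
        # Domain behaviour lives on the entity itself.
        self.items.append(item)


segment = MarketSegment("Retail Banking")
segment.publish(ContentItem("Q3 campaign teaser", "video"))
```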

This ignorance of the persistence medium keeps your domain model free of any coupling to the persistence layer of the application. It separates the concerns of persistence, and its communication mechanism, from your domain model. As a result, your application is free of coupling to any particular data store and becomes very easy to unit test.

Of course, in a real application you do need a database. But your domain model will have no knowledge of it. All it will know is the existence of a "repository" that will eventually manage your application's persistence concerns.
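A minimal sketch of that idea, with an in-memory repository standing in for a real data store (all names here are illustrative):

```python
from abc import ABC, abstractmethod


# The domain only ever depends on this abstract contract...
class SegmentRepository(ABC):
    @abstractmethod
    def save(self, segment) -> None: ...

    @abstractmethod
    def get(self, name: str): ...


# ...while infrastructure code supplies a concrete store. Swapping this
# in-memory version for a MongoDB- or SQL-backed one leaves the domain
# untouched, which is what makes the model so easy to unit test.
class InMemorySegmentRepository(SegmentRepository):
    def __init__(self):
        self._store = {}

    def save(self, segment) -> None:
        self._store[segment.name] = segment

    def get(self, name: str):
        return self._store.get(name)


class Segment:
    def __init__(self, name):
        self.name = name


repo = InMemorySegmentRepository()
repo.save(Segment("Retail Banking"))
```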

I hope I was able to provide a teeny insight into what DDD is about. Eric Evans popularized the DDD approach by presenting this concept in “Domain-Driven Design: Tackling Complexity in the Heart of Software”. I strongly recommend this excellent book to all budding software architects. Here is a diagram from Evans’ work, describing the key patterns involved.

Let us part with this thought; imagine, once your code becomes readable to those familiar with the business domain, both peer review and knowledge transfer would become a whole lot easier.

  1. Domain-driven architectural style (definition by Microsoft): http://msdn.microsoft.com/en-us/library/ee658117.aspx#DomainModelStyle
  2. Domain-driven design by Eric Evans: http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215
  3. A quick refresher on DDD: http://www.codeproject.com/Articles/339725/Domain-Driven-Design-Clear-Your-Concepts-Before-Yo
Opinion

Platform as a Service (PaaS) is rising fast above the IT Services horizon

Calcey

Have you ever wondered how efficient software development would be, if you could open up your Integrated Development Environment (IDE) and focus on your domain-specific logic and needs, right from day #1? And not bother about architecting and configuring your development environment for days? Platform as a Service (PaaS) represents just such a zeitgeist in the industry.

Platforms that allow us to immediately code our core business processes with ease, without having to spend too much time configuring environments, have surfaced time and time again in the industry. Groupware platforms like Lotus Notes or customizable SaaS products like Salesforce are historical examples of facilitation for quick-start, domain-focused development. However, most of these earlier attempts seem to have fallen short of being completely flexible platforms for bespoke software development.

With today’s advancement in cloud computing technology and the resulting shift towards distributed development over the Internet, we once again have quick-start domain-focused development knocking on the door –  in the form of “Platform as a Service”. This time there seems to be real hope.

Let me briefly explain the basic concept behind PaaS.
There are many "common needs" when developing a web app with transactional support, say for a commercial enterprise. User management, security, concurrency management, scalability, persistence, and failover are all common requirements of bespoke development projects. Integrated lifecycle management measures like configuration management, continuous integration, code analysis, and unit test cases are usually essential environmental requirements of any sizable development project. Furthermore, in today's age of shared services on the Internet, one might want to bridge one's bespoke apps with best-of-breed service infrastructures like OpenID login or Amazon S3 file storage. Perhaps it is also a common requirement (if it isn't already) to integrate apps with social media like Facebook, passing along certain information to be shared publicly.

Traditionally, we’d have to piece together “by hand” the requisite development environment and architect our project’s code structure. Most of us who have done this know that this requires a lot of thought and a ton of configurations in various components of the environment; not to mention figuring out the design for service integrations.

PaaS is envisioned to address just this problem – it allows us to register with a provider over the Internet, configure a development environment in a simple way – plugging different "cartridges", such as the required technology stack and service integrations, into the environment – and sync one's local Integrated Development Environment with it (usually via a downloaded plug-in). Presto! We will have on our local machines a ready-for-development project that has all the environmental configuration wired up, with the appropriate package structure and requisite files to address all our common infrastructure needs. The software that needs to run locally and remotely – web server, database, lifecycle management tools, the lot – will have been configured and structured in our local code base. We'd just have to write the custom code that governs the business logic and user experience, and check in! At least, that's the concept in theory.

The most notable early strider in the PaaS direction is of course Google's AppEngine, but more recent examples of cloud-based, "quick-start" integrated development providers include OpenShift, AppFog, and Stackato. They all have their pros and cons, and there are many online comparisons [1, 2] available for those who are interested.

Calcey Technologies is a strong proponent of PaaS, having exploited several leading providers like AppEngine and AWS to the advantage of our clients.
References:

  1. A Java Developer’s Guide to PaaS: http://www.infoq.com/articles/paas_comparison
  2. Conducting a PaaS Comparison: http://apprenda.com/library/paas/conducting-a-paas-comparison/
How to

What is in a Test Case?

Calcey

As we all know, a Test Case is an important instrument in Software Quality Assurance (SQA). For the benefit of those aspiring for a career in SQA, I’d like to explain why writing good test cases is imperative, and go on to describe the key elements in a good test case template. But before I go into details, let me firstly define a test case.

A test case is a document that describes an action or event in a software app, and the expected response to that action given a particular input. It determines whether the features of the application work to the expectations of the end users. A collection of test cases is called a Test Case Document. A test case principally consists of specifications such as the Test Case ID, Description, Steps, Inputs, Expected Results, and Pass/Fail criteria.

So why do we need a Test Case Document?

1. To identify gaps in business requirements at an early stage of development
In SQA Practice, it is recommended to start writing test cases in the early stages of the development lifecycle, as it can help you identify problems in the requirements or design of an application. For example, if we are unable to figure out a test case (or test cases) for a given requirement, the requirement is probably too broad to be coded. It would force the product owner to go back to the drawing board and refine his or her statement of requirements.

2. To optimize testing effort
Writing test cases requires us to "think through" the entire app upfront from a functional perspective, ensuring we don't miss test scenarios during actual testing. Whilst ad hoc testing is a critical component of any testing exercise, deadlines, and functional boundaries ill thought through by newcomers to the team at the last minute, may cause us to skip vital functional pathways in an app. This problem is avoided when the test cases are documented. On the other hand, having test cases also keeps us from repeatedly covering the same ground and wasting time – for example, stepping through the login functionality many times in an app that doesn't have a CAPTCHA-secured login. By spelling out exactly the scenarios necessary to ensure 100% functional coverage (I use the word functional here in a broad sense, inclusive of performance, usability, and other relevant testing scopes), one can save time.

3. Instill confidence in a release
When a release is tested and a set of test cases are marked as “passed”, “failed” or “not run”, everyone in the team has a clear idea of where the release stands in terms of quality. Stakeholders can make an easy judgment call about whether to push a release to Live, or to hold off for bug fixes, when under pressure due to the business urgency of the release. If one relied on ad hoc testing, one has to depend on the gut feeling of the testers, and there is no formal accountability for the quality of a given release.

4. Easy to scope-down one’s testing effort
Having test cases helps you identify the most critical functional pathways of an app. This gives us the advantage, when under a time constraint, of intelligently scoping down the testing effort to cover the most relevant testing areas for the release.

Types of Test Cases
We can categorize test cases based on their purpose within the SQA lifecycle. Three of the most frequently used test case types are:
1. Functional Test Case –  a test case which steps through new functionality in the app
2. Regression Test Case –  test case that is executed to identify the impact from changes to existing functionalities
3. Smoke Test Case –  a summarized test case or a check list item which is executed to identify whether the build is stable and can be accepted for further testing

How to improve Test Case Management
For reasons of simplicity and cost, most small software companies write test cases in Word or Excel formats. However, there are many test management software tools available in the market, such as Mercury Quality Centre and QA Complete, and even open-source tools like TestLink.

What is the Calcey Test Case Template?
Calcey Technologies follows industry standards for testing; shown below are the essential elements in our Test Case Document template, which any newbie to SQA can study and master. For each field we give a guideline and, where applicable, an example.

Test Case ID
Guideline: Uniquely identifies a test case when referring to it in project communication. Use the prefix/section number given in the Functional Spec document; this facilitates traceability mapping and helps identify any missing test cases. Note: the test case IDs need not be simple sequential numbers like 1, 2, 3, 4.
Example: Functional Spec document: 5.1 Login. Test Case document: 5.1.0.1 for the first test case, 5.1.0.2 for the second test case.

Category
Guideline: Two categories, UI and FUN. UI test cases check the screen layout and the presence of all page elements. FUN (functionality) test cases cover all the user actions belonging to a particular function, ensuring that the system works as intended and accepts all the valid inputs it is supposed to accept.

Feature Description
Guideline: This is the test case name; write it in simple present tense.
1. UI category: Feature/Functionality name.
2. FUN category:
2.1. Valid combinations: Feature/Functionality name-Valid-<user actions if any>. For example: Add user-Valid-Submit.
2.2. Invalid combinations: Feature/Functionality name-Invalid-<user actions if any>. For example: Add user-Invalid-Submit.
2.3. Cancel/abort actions: Feature/Functionality name-Cancel. For example: Add user-Cancel.
Try to cover each user action in a separate test case.
Example:
1. Scenario: Verify labels in the Login screen – UI: Login Screen-Labels
2. Scenario: Verify valid user login to the system – FUN: User Login-Valid
3. Scenario: Verify invalid user login to the system – FUN: User Login-Invalid
4. Scenario: Verify the Cancel button in the Login screen – FUN: User Login-Cancel

Negative/Positive Scenario
Guideline: A mandatory field, filled in at the time of writing the test cases, to differentiate happy-path scenarios from negative scenarios.

Prerequisite
Guideline: Any activity that must take place prior to executing the test case. If previously executed test cases are used as preconditions, always use the cell reference of the corresponding test case. When writing preconditions, if a user is involved in the action, state the actor name(s) mentioned in the use case (e.g. System Admin, User). Write the prerequisite in past tense.
Example: 1. Adobe Reader 8 must be installed. 2. User is logged in to the system.

Test Steps
Guideline: A mandatory item. These are the steps used to execute the test case. Each step in the test procedure is numbered and placed in a new row. Write the steps in simple present tense.
Example:
1. Click on the Next button
2. Enter a valid security question
3. Enter the address
4. Click on the OK button

Input Data
Guideline: List the data to be entered into the relevant fields in order to execute the test case. Alternatively, you may point to a separate Excel spreadsheet that contains the input data values used for testing. If you do not know the input data, enter "<TBD>". If there is no input data for a particular test case, enter "N/A". Do not leave this column blank.
Example: Admin User Login = testuser@calcey.com; Asset Name = <TBD>; N/A

Expected Results
Guideline: A mandatory item. For each test step, the predicted outcome should be documented under expected results; without it, the tester may not know whether the test case is a pass or a fail. Prefer the word "should" when writing expected results, since "should" indicates that something is expected.
Example: 1. Segments should be displayed when the user clicks on the Organize tab. 2. System should display the error "User Name cannot be left blank".

Multiple Target Apps
Guideline: Today we often have mobile and web apps that are complementary and represent the same system under test. In such cases, clearly mark whether the test case must be repeated for several user interfaces or devices prior to passing or failing.
Example: App1, App3

Automated
Guideline: Indicates whether the test has been automated or is performed manually. You can also give the name of the script for easy traceability.

Status (Passed/Failed)
Guideline: Initially left blank; a mandatory field that has to be filled at the time of execution. All tests executed should be marked as either passed or failed. If a test case cannot be executed for any reason, it should be marked as "Not executable" or "Deferred", with a comment.

Defect ID
Guideline: A mandatory field that has to be filled at the time of execution. For each test step, any deviation from the predicted outcome should be documented in the defect-tracking tool (such as JIRA), and the Defect ID generated by the tool recorded here.

Build Number
Guideline: A mandatory field that has to be filled at the time of execution, used to differentiate the test cases run against different builds.

Use Case Spec Reference
Guideline: Not a mandatory field. Document any other Functional Spec document or Use Case references here.

Comments
Guideline: Any comments to elaborate a situation that cannot be represented via the standard fields.
Interviews, Life at Calcey

Careers at Calcey, an engineer’s story

Calcey

Rajitha Egodaarachchi is a software engineer working for Calcey Technologies, a Colombo-based offshore software development facility catering to clients in the San Francisco Bay Area. Rajitha has been a fast-track performer and was recently nominated by his managers for promotion to senior software engineer, in recognition of his abilities and dedication. I caught up with Rajitha during his afternoon tea break on Friday, 2 November, to learn more about his work experiences and interests.

Sanduni: Welcome to the interview, Rajitha, and congrats on your upcoming promotion. Tell us a bit about yourself.
Rajitha: Thanks Sanduni. Well, I’m a software developer working presently for Calcey Technologies. I’m 24 years old, and a graduate in IT. I’ve been working at Calcey for the past two years.

Sanduni: Which university did you study at, and what subjects did you major in?
Rajitha: I got my degree from Curtin University, Australia, offered offshore through the SLIIT campus in Malabe, back in 2010. It was a "general degree" in Information Technology. The subjects I studied, however, were focused towards software engineering.

Sanduni: Why did you pick software engineering as a career?
Rajitha: I had a passion for this subject from my school days. I got to do a lot of interesting little software projects while at the IT Club of St. Peter's College, which ultimately paved the path for my working in the software development sector. IT is the buzzword of our time, and whatever we do ends up having an information technology component in it. People literally hang out in cyberspace today, on Facebook or Twitter, and almost every business we can think of can potentially be on the Internet. So I thought that specializing in this area would make my life interesting. The software industry is booming with new inventions every day.

Sanduni: Indeed. So why did you decide to join Calcey Technologies?
Rajitha: Well, a few companies called me for interviews. As soon as I got into the premises of Calcey, I felt it had the ideal environment for me to begin my career. I always wanted to join a "not so big" company that is well established in the trade. Calcey is a sort of boutique firm, where senior folks are always available for brainstorming, where there is easy access to resources, including Facebook and YouTube [laughs], and where the salary scales are good. Besides, I saw that we could play games in the evening or even shoot each other with NERF guns! I just loved the "developer-friendly" environment that I was introduced to.

Sanduni: What was the first project you worked on? Tell us what the experience was like.
Rajitha: It certainly was challenging. I landed on a C# project called Vertical Platform (later I got to know it as one of the coolest projects to work on at Calcey). I was a new entrant to the industry… even though I was working at HSBC previously, I had minimal development experience. So I had to work a lot harder to understand the requirements, the design concepts, and basically everything that is expected from the role of a Software Engineer. There was a ton of stuff to learn, ranging from configuration management using Git, to how to keep my cool under pressure.

Sanduni: How would you describe the work environment at Calcey?
Rajitha: Very appealing. Resources ranging from books to laptops are always available without restriction. We have an ethical, heterogeneous setup, and maintain high standards in terms of industry practices. You can always speak to the management about issues. Plenty of stress-relieving activities are available, like computer games, foosball, carrom, or even a small in-house gym. You will find peers always lending a helping hand, as well as experienced seniors mentoring us on new concepts. Everyone's informal and on a first-name basis. The leads are also straight-talking, and will point out your mistakes openly and often [laughs].

Sanduni: So Rajitha, what are your hobbies and interests? How do you make it all worth it personally?
Rajitha: I play computer games, sleep [laughs], swim, dance, and work on personal R&D projects in my free time; I just hang out with friends on weekends. We do pub and club once in a while, and we watch all the latest movies!

Sanduni: Great! Ok so do you really like your job? I mean, what improvements do you expect to see in your career in the future?
Rajitha: Yes, I do like my job. The job I currently do is in the profession I wanted to be in, it goes without saying. Moving forward, once I grasp the engineering aspects completely, I would seek to manage projects. Thus I'm looking forward to beginning my postgraduate studies in the coming months. I think it will help my long-term career.

Sanduni: What was your learning experience like at Calcey itself?
Rajitha: Calcey practices Scrum, the most successful agile project management methodology that I know of, and I'm proud to have adjusted to an agile mindset. I also had to learn Objective-C and iOS development in double-quick time. It's easy to switch between programming languages here, as there are experts in the domain that you can learn from. SQL, ASP.NET, MVC, iOS, and JavaScript are a few areas of expertise that I tapped into, but I am aware that we also use other languages and frameworks like Python on AppEngine, or even older technologies like ColdFusion.

Sanduni: What’s your best moment at Calcey? Is there any one particular incident that sticks in your mind?
Rajitha: I’ve nothing in particular to single out, but the zillion birthday parties, farewell parties, trips outstation and hangouts are equally memorable for me. We are getting ready for a birthday party this evening, as you know…

Sanduni: Okay. Is there any advice you’d like to give to a newbie joining the industry?
Rajitha: Well, being a newbie myself just over two years back, I certainly felt the stern pressure put upon me when working towards deadlines, coding complex features. Looking back after two years, the experience one gains the hard way is the best one can get, and lays the foundation for the long and successful career that awaits you. Never be afraid to work hard, and play hard!

Sanduni: Thank you for your time Rajitha –  and good luck!

Opinion

Is there a place for QA Testing in Scrum?

Calcey

With the emergence of the Agile Software Development zeitgeist at the onset of the 21st century, there occurred an upheaval in how professional competencies were demarcated within the software engineering industry. The established "project roles" and "professional practice groups" within the industry – Development, QA Testing, Project Management, Business Analysis and suchlike – went through a shake-up, with a general tendency towards de-specialization. A software developer was re-packaged as an "all-rounder" and expected to perform well in all departments. Project Managers were attenuated to "Scrum Masters" with a narrower window of responsibility, as compared with the PMs of yore who handled everything from elucidating business requirements to billing clients. The "management" effort was decentralized and distributed throughout a cross-functional team. Many intelligent folk in the industry welcomed this change, as it made developers better aware of the overall business requirements by placing them in direct contact with the client.

One notable early trend in agile product development teams was the aversion to having dedicated "testers" – after all, why would one need them, if one wrote one's unit tests and tested one's releases constantly in a continuous integration environment? For some years, agile development startups shunned the hiring of specialized human testers, on the basis that the developers would "perfect" the functionality purely through awareness of business needs and through end-user feedback from the client. The possibility that there might be such a thing as "end-user competency" in individuals, which doesn't always accompany programming competency, was completely ignored.

As with any other proposition in the scientific management of work, empirical evidence shapes engineering process. As we all know, today many agile development teams are back to recruiting dedicated testers to perform manual regression testing and a host of other mission-critical tasks. I'd like to detail four important tasks that our dedicated team of testers at Calcey perform, and explain why a realigned tester role is a valuable addition to software development.

1. Usability Testing
Testing the usability of user interfaces – i.e. how the requisite functionality is translated into a user-friendly experience – is the first stage of a project's lifecycle where Calcey testers get involved. A product owner (or a developer) may quickly wireframe the functionality he or she needs and pass it on to the development team, but this first cut can benefit immensely from usability testing. What the test team does is print out the wireframes, place them before an "ordinary user" (a tester who is not aware of the product), and observe how he or she tries to interact with the wireframes to achieve an objective that is stated upfront. The questions raised and the time taken to achieve the objective are noted, and the eye and hand movements of the user are observed. Thereafter, the wireframe is modified to improve the user experience.

Sometimes an experienced tester doesn't actually need to carry out the "usability test"; he or she can simply draw upon past knowledge of good practices to redefine a user experience and make it better. We have found usability testing input especially useful in the context of developing quite different user interfaces that deliver the same functionality across multiple targets, like websites, iPhones, or iPads.

2. Regression Testing
The beauty of Scrum is that it allows a QA team to function alongside the dev team, working more or less in parallel. What we discovered is that the sprint time-box must accommodate a testing and bug-fixing period if one is to avoid bug pileup. For example, if a single sprint is three weeks long, two weeks are allocated for development, and one week is left for testing and bug-fixing the sprint demo release. During the initial two weeks of the sprint there is no regression testing, but the tester(s) can prepare for the upcoming release by drawing up simplified test cases on a spreadsheet. They can also continue testing the previous sprint's release, or engage in other critical testing activities such as performance testing or test automation (see below).

There is no hard-and-fast rule, but we find that in our "parallel development and testing" setup, the optimal bandwidth for dedicated test resources is modest: in our experience, a team of four developers benefits from one dedicated tester. The critical success factor is that the tester plays an end-user role, looking upon the whole system as though he or she would have to work with the evolving product for years to come, without worrying about engineering complexity.

3. Performance Testing
Performance testing is a much-discussed and often overcomplicated activity. There are two generic types of performance tests that we set up and conduct during product development initiatives at Calcey. One is the performance test proper: we set up a reasonable transactional load on the given user interface under test, and record the response times. For example, how long does it take to log in to the system via the login screen and land on the home page, when five users log in at once? We match our results against the performance expectations for the system provided by the client, or against observed industry norms for different devices and environments. A page change on a native iPad app would be expected to happen within a second, for example, whereas a parameterized search result on a web page could be expected to take 3 to 5 seconds over the Internet.
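Stripped of tooling, this first type of test boils down to timing a batch of simultaneous requests. The Python sketch below shows the shape of such a check; in it, fake_login is a stand-in (a sleep) for the real login request, so the names and timings are illustrative assumptions rather than a real system's. JMeter does the same job with far richer instrumentation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed(fn):
    """Run fn once and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def concurrent_response_times(fn, users=5):
    """Fire `users` copies of fn at once; return each call's duration."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(lambda _: timed(fn), range(users)))

# Stand-in for the real HTTP login request under test.
def fake_login():
    time.sleep(0.05)

times = concurrent_response_times(fake_login, users=5)
worst = max(times)
```

The worst-case duration is then compared against the client's expectation (e.g. "login must complete within a second at five concurrent users").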

The second type of test we do is a scalability test. Here we gradually scale up the transactional load on a user interface's functionality, in a ramp fashion, measuring the response times at each increase in load. We run such a test on benchmarked hardware resources and identify the breaking point of the system: the load at which the response time becomes unacceptably long or the application crashes. The evaluation of the results is slightly more complex for a scalability test, as we have to factor in the design of the system and its dependency on hardware bandwidth.
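The ramp logic itself can be sketched in a few lines. In this hedged Python illustration, toy_measure is a made-up latency model standing in for driving the real system; the doubling schedule and the 5-second ceiling are assumptions for the example.

```python
def find_breaking_point(measure, start_users=1, max_users=1024, ceiling=5.0):
    """Ramp the load by doubling the user count until the measured
    response time exceeds `ceiling`; return (users, response_time) at
    the breaking point, or None if no level breaks within max_users."""
    users = start_users
    while users <= max_users:
        rt = measure(users)
        if rt > ceiling:
            return users, rt
        users *= 2
    return None

# Toy latency model: response time grows linearly with load.
def toy_measure(users):
    return 0.2 * users

result = find_breaking_point(toy_measure, ceiling=5.0)
```

A real `measure` would drive the benchmarked environment at the given concurrency and report the observed response time, and the breaking point would be fed back to the developers for profiling.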

In both of the above cases, the results are fed back to the development team for profiling and implementing performance improvement tweaks to the system. There are several automation tools we use for setting up performance tests, the most common being Apache JMeter for web apps, and Apple’s Instruments for Performance and Behavior Analysis for iOS apps.

4. Test Automation
Another important QA activity we engage in is the maintenance of automated regression test suites for web apps of significant complexity. We write Selenium test scripts embedded in native web code (such as C#) to perform the basic operations of the system: for example, logging in, searching for products and adding them to a shopping cart, in the case of an e-commerce system. An automated test suite complements unit tests; as most developers know, there are situations where it is not feasible to write unit tests, but it is very easy to "click through" and verify continuity via a Selenium web test. These automated regression tests are a living artifact, and need to be updated as the product requirements evolve. They help to speedily flag breaks in old functionality caused by new releases, and thus save the testers time when deciding whether to accept or reject a build. Writing test scripts also gives the QA team a chance to dig into simple code and keep their logical reasoning abilities sharp.
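Our production scripts use Selenium's C# bindings, but the Python bindings follow the same shape. The sketch below is illustrative only: the URL, element ids, credentials and cart text are assumptions, not a real system's, and a real run would pass in an actual WebDriver as shown in the trailing comment.

```python
def login_and_add_to_cart(driver, base_url="https://shop.example.com"):
    """Click-through smoke check: log in, search, add to cart.
    `driver` is any Selenium WebDriver-like object."""
    driver.get(base_url + "/login")
    driver.find_element("id", "username").send_keys("qa-user")
    driver.find_element("id", "password").send_keys("secret")
    driver.find_element("id", "login-button").click()
    driver.find_element("id", "search-box").send_keys("widget")
    driver.find_element("id", "search-button").click()
    driver.find_element("css selector", ".product .add-to-cart").click()
    # Verify continuity: the cart summary should reflect the addition.
    assert "1 item" in driver.find_element("id", "cart-summary").text

# A real run would look like:
#   from selenium import webdriver
#   with webdriver.Chrome() as driver:
#       login_and_add_to_cart(driver)
```

Because the script only depends on the WebDriver interface, it can be pointed at any environment the build is deployed to, which is what makes it useful as a release gate.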

The diagram below summarizes the QA process we follow at Calcey.

In our experience at Calcey, we find the "third eye" of the tester invaluable to producing quality, bug-free software (the first and second eyes being those of the client and the developer). The tester also acts as a bridge between the developer and the client, challenging both parties to achieve an optimal balance between usability and engineering cost.

How to

How to access native iOS functionality from JavaScript

Calcey

Many of you iOS developers may have come across the need to render HTML within your native iOS app, at some point in your mobile app development career. In such cases, have you ever found it necessary to call and execute certain native functions from within your embedded HTML code? For example, how does one print the screen’s contents whilst in an HTML Web View? We came across just such a business requirement recently, and thought it worthwhile to share how we solved this problem.

Before we step into the code, let me first provide some business context for our particular implementation scenario. Our top-level business requirement was to display various types of content, like pictures, videos and slide presentations, in a native iOS app. However, one particular content type to be displayed was a "dynamic web content module", that is, a package of HTML, CSS and complex JavaScript functions. The exact problem was that these HTML modules had to communicate with the native application, and vice versa, whilst running in a web view. Coding this requirement is not as simple as invoking a JavaScript method from HTML, so we built a JS-API bridge that allows the module and the native application to communicate with each other. The bridge relies on the following two facilities provided by the UIWebView in iOS.

shouldStartLoadWithRequest method of the UIWebView delegate
This method of the UIWebView delegate gets called each time the UIWebView loads a new URL, and we can use it to send data from JavaScript running in the web view to our native code. From JavaScript, we make a web request using a custom, non-standard protocol identifier followed by the payload (e.g. nativecall://<native call payload>). Inside shouldStartLoadWithRequest, we look for our protocol identifier, extract the data, and return NO to cancel the request.
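The interception logic itself is language-agnostic; sketched here in Python for clarity (the real implementation lives in the Objective-C delegate method, as in the linked sample), the delegate's check amounts to recognizing the custom scheme, extracting the payload, and cancelling the load. The "print page" payload is an illustrative assumption.

```python
from urllib.parse import unquote, urlparse

NATIVE_SCHEME = "nativecall"

def intercept(url):
    """Return (True, payload) when url carries a native call, meaning
    the delegate should handle it and cancel the load (return NO);
    return (False, None) for ordinary URLs, which load normally."""
    parts = urlparse(url)
    if parts.scheme != NATIVE_SCHEME:
        return False, None
    # Everything after "nativecall://" is the URL-encoded payload.
    payload = unquote(url[len(NATIVE_SCHEME) + 3:])
    return True, payload

handled, payload = intercept("nativecall://print%20page")
```

The extracted payload is then dispatched to the appropriate native function, e.g. the screen-printing routine mentioned earlier.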

stringByEvaluatingJavaScriptFromString
The stringByEvaluatingJavaScriptFromString method of the UIWebView enables us to evaluate a JavaScript string in the context of the currently loaded document in the web view. We can use this method to send data from the native code to JavaScript.
A working sample is available at: https://bitbucket.org/calceytechnologies/js-ios-bridge/


What is in a code review? Here is how Calcey Technologies does it.


Code reviews are an important recurrent gatepost in agile software development, and a good engineering practice we follow at Calcey. As most software development teams know, frequent code reviews contain poor code quality, such as inefficiencies in unit-level design and lack of adherence to coding standards. Historically, the practice of code reviews existed in methodologies like RUP, both as informal code walkthroughs and as the more formal Fagan inspection. At the outset of the agile revolution, code reviews were re-branded as peer reviews (which actually meant peer code reviews), a necessary ingredient of building stable software in an evolving fashion. The bottom-line justification for the time spent on code reviews is that they are essential if we are to end up with a scalable and extensible piece of software, as opposed to a hack job that is both unstable (difficult to scale) and impossible to extend later on for emerging market needs.

I'd like to outline our approach to code reviews and how we conduct them. We have a rule of thumb which developers and Scrum masters use to initiate code reviews: any new release to a test environment must be preceded by one. This simple rule gives Scrum masters the flexibility to plan the review, but binds them to conducting it within a given development sprint. Our review setting is an informal workshop, where the developer concerned projects the code on screen and walks through sections of it at the prompting of the reviewers. The review team consists of an architect and at least one other senior developer from outside the project under review, with competency in the programming language and frameworks concerned if possible. Other members of the project team are welcome to listen in and give their feedback. The Scrum master records the code defects in the task backlog and assigns them to the developer(s) concerned. The duration of a code review session varies from 30 to 90 minutes, depending on the scope of work accomplished during the sprint. We take our time, as faster is not better when it comes to an effective review; we inspect at most 300 lines of uncommented code per hour.

The reviewers keep an eye out for all the typical code vulnerabilities during the review. We begin with readability, style and conventions –  there cannot be code that an experienced outsider cannot understand after a brief explanation by the developer concerned. If there is, the code is likely to be either poorly structured (design defects) or poorly presented (style defects), or both. Calcey generally follows the industry accepted coding style conventions for the major programming languages, such as the C# coding conventions from Microsoft. Unit tests are often a good place to assess the stability of the new functionality implemented, and the obvious presence of stringent unit tests can help reduce the subsequent line-by-line review effort. We’d then move on to trapping major issues in earnest, checking for algorithmic inaccuracy, resource leakage, exception propagation, race conditions, magic numbers and suchlike. There are several online sources that closely portray the Calcey code reviewer’s mindset, such as this checklist from projectpatterns.org.

One of the biggest benefits of a workshop-style code review is that the authors of the code themselves realize defects and improvements, as a direct result of trying to explain how the code works to reviewers who might not be fully acquainted with the design. In situations where pair programming is not feasible, the code review mitigates the risk of “coding in silos”  to a great extent.

Having said this, we also do our best to automate humdrum quality checks. Our .NET-based app development projects are integrated with StyleCop (downloadable from CodePlex) to check for style issues like custom naming conventions or compulsory XML documentation comments. We also advocate enabling Code Analysis in Microsoft Visual Studio to warn us of potential code defects at compile time, from the viewpoint of the Microsoft .NET Framework Design Guidelines. Apple iOS development comes with its own set of code analysis tools; we use Instruments for Performance and Behavior Analysis to profile our code at runtime and identify memory leaks, a common hazard when programming with Objective-C.

Code review metrics such as code coverage and defect count are gathered from the individual reviews by the Scrum masters, and submitted to our principal architect for statistical analysis, strictly to improve the effectiveness of the review process (and not for finger-pointing). Junior developers can hope to learn a lot from well-conducted code reviews, not only about the specific technologies and design principles involved, but also about working together as a team to engineer a quality product. After all, our aim is to practise what Jerry Weinberg named nearly half a century ago "egoless programming".

“The objective is for everyone to find defects, including the author, not to prove the work product has no defects. People exchange work products to review, with the expectation that as authors, they will produce errors, and as reviewers, they will find errors. Everyone ends up learning from their own mistakes and other people’s mistakes.”  – Jerry Weinberg, “The Psychology of Computer Programming”, 1971


Haven't yet been able to adapt Scrum to match the ground realities of your business? Find out how we did it


Project management is a crucial weapon in the arsenal of any software development outfit. It's probably the most-discussed competency in software engineering, judging by the sheer volume of scholarly papers, conceptual models, blog articles and entire schools of thought that have been churned out on the subject over the past two decades. We've seen process frameworks like Waterfall, SSADM and RUP come and go, and a shift from centralized delivery responsibility resting on the service provider towards distributed ownership across an extended team that includes the client. We live in a world of "Agile" software development today, a zeitgeist of management thinking based on keeping processes to the bare essentials, building products incrementally and eliminating humbug within teams. We have even seen the formal "role" of the project manager (stereotyped as the big, bad bogeyman of the team) disappear within the modern agile paradigm.

Call it what you like, a person or the collective reasoning within a team, we find that effective project management remains an essential ingredient of "getting the job done". Yet project management success in software development engagements often remains elusive. I'd like to summarize our own methodology at Calcey, and go on to explain a few of the deeper lessons we learned through our collective management experience, for the benefit of our future clients.

We follow a project management methodology that is a derivative of Scrum, refined through long years of practical experience in delivering projects of varying sizes and technical complexities. Our conceptual framework is fairly simple. At the early stage of pre-sales negotiation, we agree with our clients to form a single team with joint responsibility for the project. Whilst in theory we are not supposed to estimate the end-to-end scope of work in Scrum, in practice we have found it impossible to find a client who would agree to an entirely open budget and no indicative calendar timeline for building a product. So an initial ballpark estimate is made. This is purely for budgeting purposes, to provide the client with a broad feel for the costs involved and to determine the resource bandwidth to be deployed in order to meet a very approximate calendar schedule. The budget is made against the broad set of features that the product comprises, as understood at the inception of the project. Once a project is contracted, we move forward in earnest to apply our Scrum model.

A Calcey Scrum Master’s life revolves around their project backlog. They manage both the product’s roadmap of features and the specific tasks (or bugs) for the current sprint via an enterprise backlog app such as JIRA, TeamworkPM or Basecamp. JIRA offers the highest flexibility in managing the complete life-cycle of a development project, but both TeamworkPM and Basecamp have proved to be interesting alternatives to managing smaller-scale engagements. In any case, it is not the choice of the tool itself that we found important, rather the diligent use of the backlog as a concept for task management that helped us most. Handwritten backlogs diligently maintained in the corner of a whiteboard tagged with the words “don’t erase” seemed to work better in some situations!

We plan development for a time-boxed sprint, whose duration is usually a fortnight for technologies we are well experienced in, and a month for greenfield technologies or projects of high engineering complexity. The duration is decided at the initial sprint, where we estimate what could be achieved within the budgeted engineering bandwidth. Once decided, we stick to this time-box throughout the lifetime of the project. The outcome of any given sprint is, of course, a release of working software: working, but not bug-free or complete in functionality. As the sprints progress, the software "emerges" as a viable product for launch. A lot has been said in the industry about the generic form of the Scrum methodology, so I'd like to move on to a few specific lessons we learned at Calcey through our experience. A snapshot of the recurring activities that we practice is shown below.
The first and biggest lesson for those of us who were new to Scrum was that, unlike in any other methodology, Scrum management is an explicit daily activity, like coding or testing. We scan the client horizon as well as our own engineering backyard each morning via the daily stand-up meeting, update our project backlog, and get into action to follow up on the individual tasks that need facilitation. We found that if we have a "living" task backlog that gets updated without fail each day (with dates, milestones and so on), we can use it as the vehicle to drive our work: to psyche up the team, provide expert external assistance or reset client expectations. So the Scrum masters don't "go to sleep" when not at sprint planning or the daily stand-up meeting; on the contrary, they work hard each day to facilitate the resolution of issues arising from the stand-up.
The effort required for effective sprint planning is not trivial, as we learned through experience. In theory, the estimate given at sprint planning ("I'll finish task X within the next two weeks") is considered sacrosanct. It ought to be, because cascading task "spillovers" into subsequent sprints can buckle the whole paradigm of time-boxed incremental achievement, and sprint velocities can take a nosedive. So we found it worthwhile to invest an entire day in sprint planning. This day is not counted into any given sprint, and its principal goal is to freeze a list of tasks to be completed during the upcoming sprint. A full day gives the team enough time to mull over the complexities of the tasks and break them down into smaller goals if necessary. The sprint planning meeting itself assumes a "workshop" format, where folks can pop outside for some quick R&D and return with better knowledge of the complexities of the work involved. Ultimately, everyone walks away to implement what they consider their own sprint plan, approved by the product owner.
Another significant lesson was overcoming the common problem of trivializing testing. Updating automated unit tests and manual test plans, smoke testing, regression testing and bug fixing all take up a considerable percentage of the time needed to implement a given piece of functionality. Moreover, contrary to the idealistic belief amongst agile gurus that all competent software engineers are also competent testers (or ought to be), we find in practice that the eyes of a person with a strong end-user perspective are essential to ensuring a healthy demo at the end of the sprint. So we found it useful to divide a given sprint timeline conceptually into a "new dev" period and a "testing and bug fixing" period, at sprint planning itself. This helped us reduce the otherwise frightening tendency of "bug pileup" that so often happens in Scrum projects, where new development forges ahead of bug fixing, causing instability in the releases as time goes by.
The use of lifecycle automation tools was an immense help to us, and we consider them part and parcel of our agile methodology. Anything useful, ranging from build automation tools like CruiseControl and source repositories like Git, to test scripting frameworks like Selenium and code analyzers like FxCop, was absorbed into our development framework.
The single hardest challenge, though, was making sure that the client representatives became an integral part of the team, and that they felt inherently responsible for the incremental development in a hands-on fashion. Success or failure of a given sprint is declared by the product owner immediately after the sprint demo; this is one important reason why the product owner (or his or her competent representative) is part of the team. This helped us prevent a situation where a client determines that the product under development has veered radically off course after, say, a dozen sprints. If such a situation does arise, it basically tells us that the fundamental paradigm of dealing with a complex problem in small increments has not been adopted by the client. There are many ways to convey the message of joint ownership and incremental assessment, and in practice we have found the most effective to be discussing this very problem upfront, prior to undertaking a new project. We usually stress that the success or failure of each sprint must be determined at the next sprint planning, and that adjustments must be made locally, at the scope of each sprint. These adjustments include "management decisions" like filling skill gaps or increasing the engineering bandwidth.
Let's face it, software development is not comparable to dam building, a common misconception amongst management types. Although there is definite commonality in the broader values required of the team, like honesty, dedication and professional competency, the fundamental drivers of the work are not the same. Intellectual effort taxes both our left and right brains equally, with plenty of logical reasoning bootstrapped by flights of inspiration and lateral thinking. This "mind game" of software project management requires a methodology that fosters creativity whilst compensating for common human failings like poor memory. Scrum is just such a methodology, and has proved highly effective for us at Calcey.


High-performance search via SQL-ElasticSearch hybrid solution


One of our recently concluded projects involved building a generic platform for managing multiple B2B online marketplaces, catering to different customers within the biotechnology space. An interesting problem we faced when building the "Search" functionality for this platform was how to deal with the massive volumes of products and product specifications that had to be "intelligently" sifted through when rendering a search result to the user.

The platform supported semantically different content types, such as products and articles, and each product could have a large number of specifications tagged to it. The search functionality required was feature-rich: results had to be prioritized based on weights attached to different product specification types (colour, weight etc.), product localization and other configurable site-specific parameters. This meant we needed significantly complex SQL queries to generate a search result. In addition, we were dealing with large volumes of data: as many as 2.8 million products in a single online marketplace, mapped to over 40 million specification records in the MS SQL Server database. An initial proof of concept using purely SQL queries proved futile; it took around 20 seconds to render a properly weighted search result, in spite of an ample allocation of hardware resources within the hosting environment. We had to go back to the drawing board and rethink our search architecture to improve its performance.

We then did some research on ElasticSearch, a schema-free, document-oriented search solution that can be hosted in a cloud environment for scalability. This got us thinking along the lines of a hybrid architecture, where we would distribute the processing of the vast data volume of the search across a cloud deployment of ElasticSearch, whilst running the complex queries that weight the search results within the usual dedicated MS SQL Server environment. The final solution was a two-piece affair, as depicted in the (simplified) conceptual diagram below.

When a user types a search string on a given product marketplace website and hits the "Search" button, a two-step process is invoked. A summarized database of products and essential specifications is deployed on a cluster of ElasticSearch nodes on the Amazon EC2 cloud, and is updated on a daily basis (see "Daily Update Task"). This full-text search server is queried first for the search string, via the ElasticSearch web service API, and a shortlist of product records is sent to the DAL of the marketplace app as a JSON string, in double-quick time. The details of the search results, i.e. all the product attributes and the application of specification-based weighting rules, are then generated via regular SQL queries, run on the MS SQL Server database against the shortlisted recordset provided by ElasticSearch.
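The stitch-up between the two stores can be sketched as follows. This is an illustrative Python sketch, not our production DAL: the table and column names (Products, ProductId) and the id values are assumptions. The two points it shows are that the detail query is parameterized on the ElasticSearch shortlist, and that the SQL rows, which come back in arbitrary order, are re-sorted into ElasticSearch's ranking.

```python
def build_details_query(product_ids):
    """Parameterized detail SQL for the shortlisted ids
    (placeholders rather than string concatenation, to avoid injection)."""
    placeholders = ", ".join("?" for _ in product_ids)
    sql = f"SELECT * FROM Products WHERE ProductId IN ({placeholders})"
    return sql, list(product_ids)

def reorder_by_rank(rows, ranked_ids, key="ProductId"):
    """Put the SQL detail rows back into ElasticSearch's ranking order."""
    rank = {pid: i for i, pid in enumerate(ranked_ids)}
    return sorted(rows, key=lambda row: rank[row[key]])

# Suppose ElasticSearch ranked the shortlist as [42, 7, 19] ...
sql, params = build_details_query([42, 7, 19])
# ... and SQL Server returned the detail rows in arbitrary order.
rows = [{"ProductId": 7}, {"ProductId": 19}, {"ProductId": 42}]
ordered = reorder_by_rank(rows, [42, 7, 19])
```

In the real system the SQL step also applies the specification-based weighting rules, which is why it stays on the dedicated MS SQL Server.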

The actual implementation itself was a learning experience for us. For example, we initially wired the web app, which was hosted in a data center in San Francisco, to an ElasticSearch solution hosted on Amazon's cloud servers on the East Coast of the United States. This introduced significant network latency, which was greatly reduced by moving the ElasticSearch solution to the West Coast. We also realized that we could break up the summary recordset returned by ElasticSearch based on pagination criteria, and query the details from the local MS SQL Server to render page-wise search results; this reduced database processing time significantly. Several other minor tweaks were made to the detailed design, such as moving certain bitwise operations that determine localization-based priority for products into the server-side SQL queries. All these improvements were based on thorough performance testing of the search implementation.
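The pagination tweak is simple but effective: slice the ranked shortlist before ever touching SQL Server, so the detail query only runs for the ids on the requested page. A minimal sketch, with an illustrative page size:

```python
def page_of_ids(ranked_ids, page, page_size=20):
    """Return the slice of the ElasticSearch-ranked shortlist for
    1-based `page`; only these ids go into the detail SQL query."""
    start = (page - 1) * page_size
    return ranked_ids[start:start + page_size]

# Only the first page's 20 ids are sent to SQL Server for detailing.
first_page = page_of_ids(list(range(100)), page=1)
```

The detail cost thus scales with the page size rather than with the full shortlist, which is where much of the processing-time saving came from.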

The final SQL-ElasticSearch hybrid immensely reduced the waiting time for a given search query, from as much as 20 seconds in the pure SQL solution to under two seconds. This performance improvement was greatly appreciated by our client and the end users of their online B2B marketplaces.