37 Epic Software Failures that Mandate the Need for Adequate Software Testing

37 Epic Software Failures

Disaster is an understatement for any brand, organization, or institution that has incurred losses due to a seemingly minuscule yet catastrophic software glitch. While technology and innovative applications have been empowering brands, enterprises have also recorded numerous crippling incidents.

In this rundown of the top software failures of 2016, 2015, and 2014, we take stock of the debacles and glitches that have changed the face of software development and reinforced the role of testing in the overall SDLC process.

This is a list of software glitches and technical issues witnessed by brands and enterprises across diverse industries. Please note that the numbers 1-37 do not in any way signify the high or low impact of a glitch on the brand or enterprise.

  1. Yahoo reports breach

Amongst the most recent data breaches, on September 22, 2016, Yahoo confirmed a breach that exposed about 500 million credentials dating back four years. It is considered among the largest credential leaks of 2016. The company believes this was a state-sponsored breach, executed by an individual acting on behalf of a government, and urged users to change their passwords and security questions. As some relief for users, Yahoo stated that sensitive financial data such as bank account information was not stolen as part of the breach.

Source: Money.cnn.com

  2. Nest thermostat freeze

A software update for the Nest ‘smart’ thermostat (owned by Google) went wrong and literally left users in the cold. The faulty update drained the device’s batteries, causing the temperature to drop. Consequently, customers were unable to heat their homes or use other amenities.

Nest claimed that the fault was due to a December 4.0 firmware update, compounded by related issues such as old air filters or incompatible boilers. It later released a 4.0.1 software update that solved the issue for 99.5% of the affected customers.

Source: Cio-asia.com

  3. HSBC’s major IT outage

In January 2016, HSBC suffered a major IT outage that left millions of bank customers unable to access their online accounts. The bank took almost two days to recover and return to normal functioning.

HSBC’s Chief Operating Officer (COO) said it was the result of a ‘complex technical issue’ within the internal systems.

Source: Cio-asia.com

  4. Prison Break

A glitch that occurred in December 2015 led to over 3,200 US prisoners being released before their scheduled release dates. The software, introduced in 2002, was designed to monitor the behaviour of prisoners. The problem went undetected for about 13 years, and on average prisoners were released almost 49 days early.

Source: Cio-asia.com

  5. HSBC payments glitch


In August 2015, HSBC failed to process about 275,000 individual payments, leaving many people without pay before a long Bank Holiday weekend. The cause was a major failure in the bank’s electronic payment system for its business banking users, which affected the individual payments. Bacs, the system used for payment processing across the UK, later picked up on the problem, labelling it an ‘isolated issue’.

Source: Cio-asia.com

  6. Bloomberg cancels debt issue

In April 2016, Bloomberg’s London office faced a software glitch that took its trading terminals down for two hours. This came at an unfortunate time, when the UK’s Debt Management Office (DMO) was about to auction a series of short-term Treasury bills. Bloomberg later stated that services were restored and that the glitch resulted from a combination of hardware and software failures in the network, which caused excessive network traffic.

Source: Cio-asia.com

  7. RBS payments failure

About 600,000 payments, including wages and benefits, failed to go through RBS accounts overnight in June 2015. The bank’s chief administrative officer described it as a technology fault and gave no further detail on the root cause. In 2012, about 6.5 million RBS customers faced an outage caused by a batch scheduling software glitch, for which the bank was fined £56 million.

Source: Cio-asia.com

  8. Airbus software bug alert

In May 2015, Airbus issued an alert urging urgent checks of its A400M aircraft after a report identified a software bug that had caused a fatal crash in Spain. Prior to this alert, a test flight in Seville had killed four air force crew members and injured two others.

Source: Theguardian.com

  9. UK government’s new online farming payments system gets delayed

In March 2015, the UK government delayed the launch of its £154 million rural payments system, an online service for farmers to apply for Common Agricultural Policy payments from the EU. The service, which was supposed to be up and running by May 2015, was delayed due to integration issues between the portal and the rules-engine software, and was not expected to be fully operational even by 2016.

Source: Computerworlduk.com

  10. Co-op Food’s double charges

In July 2015, Co-operative Food apologized to its customers and promised a refund within 24 hours. The cause was a ‘one-off technical glitch’ in payment processing that resulted in customers being charged twice.

Source: Computerworlduk.com

  11. John Lewis

Mispricing is a common headache for retailers: system glitches can leave outlets offering customers excessively lucrative deals. John Lewis is a recent example – the online retailer’s website suffered a price glitch that erroneously advertised hardware at software prices.

Source: Money.aol.co.uk

  12. Tesco iPad pricing disaster

In March 2012, Apple iPads worth £650 were priced at £49.99 on Tesco’s website. After the glitch was identified, Tesco cancelled the sale and did not honour the orders, leaving customers dissatisfied.

Source: Mycustomer.com

  13. Marks & Spencer 3D TV glitch

In January 2012, 50-inch 3D TVs worth £1,099 went on sale for a mere £199 on the Marks and Spencer website. The company eventually decided to sell the plasma TV sets at the lowered price after facing a customer petition. The online petition, called ‘Marks & Spencer supply our tvs that we paid for’, compelled M&S to honour the orders.

Source: Thisismoney.co.uk

  14. Reebok’s free trainers

In November 2013, trainers worth £100 from sports retailer Reebok could be picked up for free on its online site, with customers charged only for delivery. The company did not honour the orders, but it apologised to customers, refunded the delivery charges, and additionally gave 20% off their next order. The pricing glitch went viral on Facebook and other sports and price-deal forums, where shoppers rushed to grab £99.95 CrossFit Nano Speed footwear for just £8.50 postage.

Source: Theguardian.com

  15. Tennessee County kills System Update worth $1 Million

After two years of labour and an investment worth $1 million, Rutherford County of Tennessee, US, called off a court software system update. Glitches were identified right as the deal took place: problems with the issuance of checks, errors on circuit court dockets, and hidden charges surfaced in the weeks after the system went live.

Source: 99tests.com

  16. Software Security Flaws Revealed in OLA’s Mobile App

Ola, India’s largest taxi aggregator, faced major security flaws in its system. The software bugs detected allowed even basic programmers to enjoy unlimited free rides – at the expense of both Ola and its users. The issue went public when customers exposed the weaknesses in the system. Ola moved to fix the bugs as complaints soared and the brand’s reputation in the marketplace came under threat.

Source: Economictimes.indiatimes.com

  17. Leeds Pathology IT crash

In September 2016, Leeds Teaching Hospitals NHS Trust, one of Europe’s largest teaching trusts, witnessed a pathology IT crash that delayed operations for some 132 patients. Leeds Teaching has a budget of £1 billion and employs over 16,000 staff. It serves 780,000 people in the city and provides specialist care for 5.4 million patients. The outage also affected Bradford Teaching Hospitals NHS Foundation Trust, GP services in Leeds, and a small number of GP services in Bradford.

Now that’s the impact!

Source: Digitalhealth.net

  18. Cisco’s Email Security Appliances glitch

In September 2016, Cisco Systems released a critical security bulletin announcing a vulnerability that could allow remote, unauthenticated users to gain access to its email security appliances. The vulnerability is associated with Cisco’s IronPort AsyncOS operating system. The company also indicated that a workaround exists to block this remote access to the email appliances.

Source: Threatpost.com

  19. Cisco Nexus Switches warning

Cisco again! In October 2016, Cisco Systems released several critical software patches for its Nexus 7000-series switches and its NX-OS software. Cisco’s security advisory declared that both the Nexus 7000 and 7700 series switches were vulnerable. The vulnerabilities allowed remote access that could enable a hacker to execute code on targeted devices. Cisco further stated that the bug (CVE-2016-1453) is the result of “incomplete input validation performed on the size of overlay transport virtualization packet header parameters”.

Source: Threatpost.com

  20. Cyber Attack on Nuclear Power Plant

In October 2016, the head of an international nuclear energy body revealed that disruption at a nuclear power plant during the last several years was caused by a ‘cyber attack’. Yukiya Amano, head of the International Atomic Energy Agency (IAEA), didn’t go into much detail, but did warn of potential attacks in the future.

This shows that disruption of nuclear infrastructure by a cyber attack is no mere ‘Hollywood stunt’!

Source: Threatpost.com

  21. Volkswagen’s ‘Dieselgate’ scandal

In September 2015, the US government, in a dramatic move, ordered Volkswagen to recall about 500,000 cars after learning that the company had deployed sophisticated software to cheat emissions tests, allowing its cars to produce 40 times more emissions than the permitted limit. The Environmental Protection Agency (EPA) accused VW of installing illegal ‘defeat device’ software that substantially reduces nitrogen oxide (NOx) emissions only while the car is undergoing an emissions test. The company admitted the violation and announced a recall.

Source: Theguardian.com

  22. Interlogix Recalls Personal Panic Devices


In October 2016, Interlogix, a manufacturer of wireless personal panic devices, recalled about 67,000 devices that could fail to operate during emergencies. The probable cause of the glitch was that the device could fail to communicate with the security system in an emergency. The remedy was for the manufacturer to replace the devices; consumers could also contact their professional security system installer for free monitoring and, if required, a free replacement.

Source: News.sys-con.com

  23. IRS E-File goes Offline

In February 2016, the IRS suffered a hardware failure. The agency announced that the failure had taken numerous tax processing systems out of service, including the modernized e-file system and a related system. The majority of people trying to file taxes online could not complete the process. The IRS later worked to restore regular operations and get back to routine.

Source: Newyork.cbslocal.com

  24. 911 call outage

In April 2015, emergency services stalled for six hours across seven US states. The outage affected 81 call centers; about 6,000 people who made 911 calls were unable to connect. It was the third major outage of the 911 call system across telecom operators in three years, raising worries among federal regulators about the vulnerability of the country’s emergency response system.

Source: Wsj.com

  25. New York Stock Exchange halts trading

In July 2015, the New York Stock Exchange halted trading due to an undisclosed ‘internal technical issue’; all open orders were cancelled and traders were told they would receive further information later. Responding to the shutdown, the NYSE announced that there had been no cyber breach of the system, and it resumed operations after four hours.

Source: Money.cnn.com

  26. UK government’s online calculator glitch

In December 2015, the UK government found that its online Form E calculator, used to estimate a divorcing spouse’s financial worth, had a fault that produced wrong calculations for thousands of couples who had divorced over the previous 20 months. Though the issue had existed since April 2014, it was noticed only in December 2015. The damage caused is yet to be estimated.

Source: Cio-asia.com


Let’s take a dip into some of the interesting software debacles of 2014

27. Nissan’s recall

Over two years, Nissan recalled over a million cars, thanks to a software glitch in the airbag sensory detectors. The affected cars could not detect whether an adult was seated in the passenger seat and consequently would not inflate the airbags in a crash.

Source: Computerworlduk.com

28. Amazon 1p price glitch

One of the best-known glitches in history is the Amazon 1p price glitch, in which third-party sellers on Amazon saw their products priced at 1p each. While the products were delivered, numerous small retailers had to appeal to customers to return the items.

Source: Computerworlduk.com

29. Screwfix.com glitch

In January 2014, every item in the Screwfix catalogue was priced at £34.99, including items costing as much as £1,599.99. Quick-thinking customers collected goods worth thousands as the news spread across Twitter. Eventually, the website had to close down.

Source: Telegraph.co.uk

30. Flipkart apologizes for Big Billion Day sale fiasco


In October 2014, Flipkart, the India-based e-commerce giant, sent a note to its customers apologizing for the glitches during its Big Billion Day sale. The site encountered a rush it couldn’t manage, resulting in cancelled orders, delayed deliveries, and more. While the sale helped the e-commerce giant garner a billion hits in a day, it was certainly a PR nightmare for the brand.

Source: Livemint.com

31. CA Technologies paid RBS ‘millions’ for role in IT fiasco

In October 2014, CA Technologies paid ‘millions of pounds’ to the Royal Bank of Scotland (RBS) as part of a settlement over RBS’s 2012 IT outage. In 2012, a failed upgrade to CA7 batch processing software by RBS IT staff caused a systems breakdown that affected millions of customers, who were unable to access their accounts or execute any payments.

Source: Computerweekly.com

32. Chaos at UK airports


On December 12, 2014, the UK’s busiest airports were brought to a standstill by a system glitch at the main national air traffic control center in Swanwick. Planes were grounded and passengers delayed. The impact was enormous, with runways closed at Heathrow, one of Europe’s busiest airports. The transport secretary called the failure ‘unacceptable’.

Source: Theguardian.com

33. Toyota Prius recalled over software glitch

In February 2014, Toyota Motor recalled 1.9 million of its newest-generation Prius vehicles worldwide due to a programming error that could cause the cars’ gas-electric hybrid systems to shut down. The automaker said the problem lay in the software settings of the latest Prius generation, first sold in 2009, and could damage transistors in the hybrid system. The fault could turn on warning lights and cause the vehicle to shut down power in a fail-safe mode.

Source: Nytimes.com

34. Heartbleed the Web


In April 2014, the IT world woke up to its worst nightmare: an emergency security advisory from the OpenSSL project warning of a bug dubbed ‘Heartbleed’. The bug allowed an attacker to pull chunks of working memory from any server running the affected software. While an emergency patch was issued, tens of millions of servers remained exposed until patches were installed, leaving everyone running a server in crisis mode. The notorious bug left big names like Yahoo and Imgur, among numerous others, exposed to Heartbleed.

Source: Theverge.com

35. Apple pulls iOS 8 update

In September 2014, Apple faced embarrassment after it had to pull its new iOS software update only a few hours after release, following complaints from iPhone users about calls being blocked after the upgrade. The tech giant withdrew the update after a storm of complaints on Twitter and in Apple user chatrooms. The update had also disabled the feature that let people unlock their phones with fingerprints.

Source: Mirror.co.uk

36. iCloud hack

In August 2014, almost 500 private pictures of celebrities were posted on social channels and sites like Imgur and Reddit. The images were apparently obtained through a breach of Apple’s cloud services suite, iCloud. It was later suggested that a security issue in the iCloud API had enabled access by allowing innumerable password attempts. There have since been reports of similar hacks into iCloud.

Source: Dailymail.co.uk

37. Air India diverts Boeing 787 flight

In an emergency in February 2014, Air India diverted a Boeing 787 to Kuala Lumpur when the pilots noticed a software glitch on a flight from Melbourne to New Delhi. Engineers were flown in from Hong Kong and worked with Air India to fix the glitch. It has been reported that the 787 had been suffering such glitches and that Boeing was aware of them.

Source: Reuters.com

Gallop Solutions has collaborated with the world’s leading and most innovative organizations and brands across diverse industries. Enterprises globally have trusted Gallop’s independent software testing services and expertise for over a decade, achieving speed to market, higher return on investment (ROI), and enhanced quality in their overall QA initiatives. Connect with our experts to bring speed and velocity to your QA practices with the best ideas in the testing space.

Application and software failures dilute a brand’s credibility, built over years. Together, let’s work towards further strengthening your brand’s positioning, integrity, and customer trust by ensuring Quality @ Speed.
The opinions expressed in this blog are author's and don't necessarily represent Gallop's positions, strategies or opinions.

10 Emerging Trends in Software Testing: Predictions for the next decade

The last decade has seen an overwhelming evolution of the software testing industry, giving way to greener pastures. This rapid scale of development keeps not just developers but also testers on tenterhooks, making them continuously strive to upgrade their skills. Businesses today need to be more aware than ever of what is best in terms of performance and security. This disruption has been caused by new technologies, and it is always challenging for testers to overcome the new issues these technologies pose.

2015 saw the acceptance of testing as an early activity in the software development lifecycle. This was predominantly due to the widespread adoption of Agile and DevOps methodologies by organizations across the globe. The goal was to get their apps faster to the market. 2015 also saw an increase in the use of virtualization and service oriented architecture along with cloud computing that led to many testing tool vendors vying for the market space in the testing arena.

This post summarizes the top 10 emerging trends and predictions for the next decade that may change the landscape of software testing, based on our observations, our experience with leading Fortune 500 enterprises, and industry analyst research reports. It is interesting to explore each of these trends and see how enterprises and testing professionals can leverage them, re-strategize, or re-skill themselves.

  1. The Future belongs to Open Source Tools: The next decade (maybe more!) will see a lot of open source tools in action as more and more organizations adopt them for proper implementation of Agile, DevOps, and Test Automation. Support communities for open source tools will only become more involved and active.
  2. Quality @ High Speed is the new mantra: Everyone wants the best products in the fastest possible time. This is making organizations focus on providing the best user experience along with the fastest time to market. The speed is only going to increase (and the quality improve) with the latest technologies and tools at teams' disposal.
  3. Software Development Engineers in Test (SDETs) will be in huge demand: SDETs have existed for almost a decade, but their role was very different from traditional testing roles. By early 2020, almost all testers will need to wear an SDET hat to be successful in Test Automation, which is set to become mainstream.
  4. Agile and DevOps will rule the roost – the TCoE is dead: According to Forrester, organizations are no longer looking at centralized Test Centers of Excellence. Test automation developers are now part of agile teams. The erstwhile testing arena is shifting towards quality engineering, with testing set to become more iterative, progressive, and seamlessly integrated with development.
  5. Digital Transformation is here to stay: With a majority of organizations making a foray into the digital world, digital transformation will require a huge shift of focus towards digital testing. Robust digital assurance strategies will be required to optimize functional testing across channels.
  6. BigData Testing will become really BIG: We are sitting atop an explosive amount of BigData today and need a very strong strategy around BigData Testing. Testing such datasets requires highly analytical tools, techniques, and frameworks, and the area is set to grow big.
  7. IoT: Heralding an era of Connected Devices: With IoT growing in leaps and bounds, more and more customers rely on IoT Testing before using products. If products are not tested, their functionality, security, and effectiveness all come under the scanner. According to an HP study, 70 percent of devices in the Internet of Things are vulnerable to security problems.
  8. DevOps will drive Quality Engineering: The DevOps ideology is based on seamless collaboration and integration between the different departments of an IT organization – developers, quality professionals, and operations professionals. Testing plays a business-critical role, as developers are involved not just in the correctness of their code but also in testing and overall quality engineering. DevOps is thus propelling businesses towards greater deployment speed and quality assurance, helping them realize higher returns on investment and faster time to market in a cost-efficient manner.
  9. Performance Engineering is replacing Performance Testing: To repeat a cliché, "a good user experience is the key to a successful product". Consistent performance across diverse platforms, OSs, and devices defines how much of the market a product can really capture. The need to provide the best experience to users is making organizations change strategy: they are moving away from merely running performance tests towards practising performance engineering.
  10. The best news is that Software Testing Budgets will continue to grow: With such a huge focus on and demand for high-quality products, and with major IT trends such as BigData analytics, cloud technologies, mobility, and virtualization, testing has become more than just a need. This will push organizations to allocate a bigger chunk of their IT budget (around 40%) to software testing and QA.

About Kalyana Rao Konda

Kalyan is the President & Global Head of Gallop Solutions Inc. With 17+ years of experience in IT services, specifically software testing, Kalyan has led large QA teams of 2,000+ people at AppLabs as VP-Delivery, and earlier held QA leadership roles with Virtusa & Baan. He has a rare mix of deep technical understanding and a pragmatic approach to testing services delivery. A strong proponent of the Testing-as-a-Service (TaaS) delivery model, Kalyan is a thought leader with hands-on expertise in building, executing, and maintaining large-scale test automation suites. He has patents pending with the USPTO for ‘iGenerate test Scenario’ and ‘Web Services Validator’, filed last year. Kalyan is a speaker at various testing conferences, including DevOps East, STAREAST, and the Agile Testing Conference in Boston. He was recently awarded the prestigious “40 Under 40 Award 2016” from the Philadelphia Business Journal.

Do not forget to access the On-Demand Webinar on Testing Trends here:


The opinions expressed in this blog are author's and don't necessarily represent Gallop's positions, strategies or opinions.

Getting Started with Risk-Based Testing


What is Risk?

A risk, essentially, is a possible problem: an event that may or may not happen, depending on other variables. In the software testing arena, a risk may be defined as a potential occurrence (usually undesirable, leading to loss) resulting from the presence of an issue or bug in the product. Testing for these unwanted possible events is known as risk-based testing.

Additionally, the definition of risk is incomplete without introducing mitigation and contingency.

  • Mitigation: action taken to reduce the possibility of defects showing up.
  • Contingency: the backup plan of action to be performed in case a risk becomes reality, helping to reduce its impact.

Types of Risks

In theory, there may be innumerable risks. However, the following are the most commonly faced risks in different domains:

Business or Operational Risks

  • Over-dependence on a specific system, subsystem, function, or feature
  • Business-criticality of a feature, function, or subsystem, including the cost of failure

External Risks

  • Security related loopholes
  • Integration failures – of product or website pages
  • Regulatory requirements
  • Failures of functions
  • Performance and Usability related failures

Technical Risks

  • Collocated development teams
  • Complexity of a product

What is Risk-based Testing?

Risk-based testing (RBT) is an organizational principle that helps prioritize testing of a product’s features and functions according to their probability of failure, the importance of the function, and similar factors.

RBT is thus a ranking of tests, and subtests, by functionality. Tools and techniques such as equivalence partitioning, state transition tables, decision tables, boundary-value analysis, path flow testing, and all-pairs testing help assess the most risk-prone areas.
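To make one of these techniques concrete: boundary-value analysis picks test inputs at and just around the edges of each valid range, since off-by-one defects cluster there. A minimal sketch in Python (the `age` field and its 18-65 valid range are hypothetical examples, not from any specific product):

```python
def boundary_values(lo, hi):
    """Classic boundary-value inputs for a valid range [lo, hi]:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_age(age):
    # Hypothetical system under test: accepts ages 18 through 65 inclusive.
    return 18 <= age <= 65

# Exercise the riskiest inputs first: the edges of the valid partition.
for age in boundary_values(18, 65):
    print(f"age={age} -> valid={is_valid_age(age)}")
```

Six targeted inputs per range replace exhaustive testing of every value, which is exactly the risk-driven trade-off RBT formalizes.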

As there usually is not enough time to test the complete functionality of a product, RBT involves testing the functionality that has the highest probability of failure – and thereby the biggest impact.

RBT, to be fully effective, must be started in the initial stages of product development. It involves:

  • Identifying risks to system quality and using them to guide test planning, preparation, and execution.
  • Risk analysis that helps identify opportunities to remove or prevent defects.
  • Mitigation testing (which reduces the possibility of high-impact defects) and contingency testing (which identifies possible workarounds for the defects found).
  • Measuring the effectiveness of finding and removing defects in critical areas.

4 Phases of Risk Based Testing Process

There are four main phases to be kept in mind while executing RBT:

  1. Identify and define all the possible risks for all the functional modules of the application under test (AUT) and assign them to the responsible stakeholders.
  2. Prioritize the tests based on the criticality of the associated risks. Come to an agreement on the prioritization, update the functional requirement document, and share it with the stakeholders.
  3. Plan and define tests according to the requirement prioritization.
  4. Execute tests according to the agreed functional document.
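In practice, the prioritization in steps 1-2 is often reduced to a simple score – likelihood of failure multiplied by business impact – with tests executed in descending score order. A minimal sketch (the module names and the 1-5 rating scales are illustrative assumptions, not a prescribed standard):

```python
# Each entry: (functional module, likelihood of failure 1-5, business impact 1-5).
risks = [
    ("payment processing", 4, 5),
    ("user login",         3, 5),
    ("report export",      2, 2),
    ("profile avatar",     3, 1),
]

def risk_score(entry):
    """Risk score = likelihood x impact; higher means test sooner."""
    _name, likelihood, impact = entry
    return likelihood * impact

# Steps 2-3: rank modules so the riskiest functionality is planned and tested first.
test_order = sorted(risks, key=risk_score, reverse=True)
for module, likelihood, impact in test_order:
    print(f"{module}: risk score {likelihood * impact}")
```

The resulting ranking is the artifact that gets agreed with stakeholders and folded back into the functional requirement document.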

Advantages of Risk Based Testing

  • As all the critical functions of the application are tested, the overall quality of the product improves.
  • Planned prioritization helps take care of business-critical areas, ensuring that even when a risk materializes, the product is not impacted much. That said, remember to test even the low-ranked risks so that they do not become real and cause trouble.
  • Since problem areas are discovered early, preventive measures can start immediately – which ends up saving a lot of time and cost during production.
  • When resources (time or team) are limited, it serves as a negotiating tool for prioritization.
  • Helps make testing a better-planned and organized activity.
  • Continuous monitoring of risks helps keep the complete testing strategy and goals in focus throughout the testing life cycle.
  • Improves customer satisfaction.

That said, the main objective of risk-based testing is to perform testing in accordance with the best practices in risk management. This helps create a product that is properly balanced in terms of quality, features, budget and schedule.

At Gallop, we cover all the bases and ensure that effective testing is performed by the right set of experts. We ensure the best quality for your product and that your customers are happy. Our tool agnostic test automation frameworks ensure accelerated testing so that you get higher productivity and an enviable time to market.

Icon vector designed by Freepik

The opinions expressed in this blog are author's and don't necessarily represent Gallop's positions, strategies or opinions.

Rise of the Software Development Engineer in Test – SDET


“Program testing can be used to show the presence of bugs, but never to show their absence!” – Edsger Dijkstra, Dutch computer scientist.

The industry today has awakened to the fact that testing is actually more important than programming. Testing apps requires a bigger budget for tools and resources than programming, and every organization is hunting for the best possible talent. Earlier, software developers wrote code and testers checked it for quality. That division of labour won’t suffice today.

Software Development Engineers in Test (SDETs) are skilled professionals who are adept in both QA Engineering and Software Development.

Though it’s great to have someone with skills and expertise that are in high demand, it also creates a bit of confusion as to what really comprises the duties and responsibilities of SDETs. A common ambiguity concerns the difference between an SDET and a QA Engineer.

SDETs vs QA Engineers

While the roles of SDETs may seem very similar to those of QA Automation Engineers, with most of the tools and language expertise required being the same (Selenium, Java, and Jenkins), there are certain clear differences between the two roles.

An SDET, in layman’s terms, is a developer who works as part of the test team instead of the product development team. In essence, SDETs are responsible not only for writing code but for testing it as well; they continuously write, test, and fix the code they write. Their roles and responsibilities are based on the Agile lifecycle model. SDETs usually are professionals with very strong analytical, technical, and problem-solving skills.

On the other hand, QA Engineers are testers who do not need to have any programming experience as they usually are not exposed to the code. This clearly creates a demarcation between the roles and responsibilities of SDETs and QA Engineers.

SDETs: The Need

With the need and importance of software testing accepted across the globe, what, when, and how to test are areas that have never stopped evolving.

Most of the products and apps today require end-to-end Test Automation – especially in the areas of Functional, Performance, and Security Testing. SDETs, with their dual abilities in code development as well as performing tests (such as those listed), are a great fit in today’s digital age. They help improve Code Quality by performing strict and detailed source code reviews along with checking the Testability of the code.

Armed with specialized testing knowledge of multiple tools, techniques, best practices, and processes, SDETs have become a crucial part of development ecosystems. Based on their development experience, knowledge of technical architecture and design, and their programming skills, SDETs write code to test the code written by developers. In addition, they write unit tests and perform white box testing.

Following is a list of a few tasks that SDETs are usually responsible for:

  • Building robust, scalable, and high quality test automation solutions for functional, regression and performance testing
  • Developing code for quality automation and ensuring extensive unit test coverage of the code
  • Building, customizing, deploying, and managing the environment/ test automation frameworks
  • Checking for product scalability, reliability, consistency, and performance
  • Participating in design and architectural discussions
  • Performing high-class debugging
  • Preparing Test Reports

In essence, SDETs are Customer Advocates who influence product design by understanding end user expectations. While functional and automation testers will always be required, SDETs may prove to be the all-rounder that most organizations are looking for.
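The kind of “code that tests code” an SDET produces often looks like a plain unit test kept right next to the implementation. The following is a hypothetical Python sketch; the function and its business rule are invented for illustration.

```python
# Illustrative SDET-style unit test: the function under test and its checks
# live side by side. The discount rule is a hypothetical example.

def apply_discount(price, percent):
    """Return price reduced by percent; rejects invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Happy path
    assert apply_discount(100.0, 10) == 90.0
    # Boundary: zero discount leaves the price unchanged
    assert apply_discount(50.0, 0) == 50.0
    # Negative path: invalid input must be rejected, not silently accepted
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
```

Note the negative-path check: an SDET’s review of testability typically insists that invalid inputs are exercised, not just the happy path.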

Gallop Solutions has a decade of expertise as an independent testing services provider. Contact Gallop’s team of testing experts for your testing requirements.



Testing Metrics – What Gets Measured, Gets Done!


“What gets measured gets done. What gets measured and fed back gets done well. What gets rewarded gets repeated.”

The pithy statement above relates very well to the need for and importance of testing metrics. When an organization can clearly and explicitly define the testing metrics it requires, analyze them properly, and use the analysis to fix existing issues, it will invariably be treading the right path towards success and growth.

In normal parlance, most organizations follow the well-trodden path of plan, do, check, act (better known as PDCA) when beginning any new venture or project.

As per WhatIs.com, “PDCA (plan-do-check-act, sometimes seen as plan-do-check-adjust) is a repetitive four-stage model for continuous improvement (CI) in business process management. The PDCA model is also known as the Deming circle/cycle/wheel, Shewhart cycle, control circle/cycle, or plan–do–study–act (PDSA).”

Plan: In terms of a product/software development/testing lifecycle, Planning refers to defining and laying out specific Business Goals and gaining a thorough understanding of the need for the planned application. At a later stage, this also includes testing the product, collecting statistical data, identifying and ascertaining the root causes of the issues being faced, and planning for fixing them.

Do: This is the stage where organizations define and decide upon the multiple measurement variables and metrics. These metrics help understand the effectiveness of the product and measure its quality. This stage also involves developing and implementing solutions for the identified issues.

Check: This stage is used to analyse why a product is behaving the way it is, and to compare the data before and after a fix has been made. This stage requires you to document the observations, inform the team about any changes to the process, and recommend changes that need to be made.

Act: As the name implies, this stage involves taking Corrective Actions and fixing the product to come up with a quality product.

As seen above, one of the most important links between all four PDCA activities is metrics and their measurement. The question to ask is – what should we measure, and when should we start measuring?

What sort of metrics need to be collected and analysed must be decided in the planning phase. A few metrics that matter – especially when testing a product for quality – are as follows:

  • User Story Coverage
  • Planned vs Done Story Cards
  • Test Automation Coverage
  • Automated Test Effectiveness
  • Mean Time Between Failures (MTBF)
  • Mean Time To Repair (MTTR)
  • Overall Equipment Effectiveness (OEE)
  • Defect Rejection Rate
  • Production Defect Leakage
  • Defect Severity Index
  • Defects by Sprint
  • Deployment Lead Time

Unless we specify what needs to be measured, we will never really be able to give it due attention and complete it to the best of our abilities. Measuring the correct variable, analysing it correctly, and then performing the required tests will lead to the creation of better, more reliable, and more robust products.

“Not everything that counts can be counted, and not everything that can be counted counts.” ~ Albert Einstein

That said, what is even more important than measuring metrics is the follow up – what is the actual improvement we have made in the product based on our findings. Identify your top-most, business-critical priorities, analyse the metrics against them, and then make the fixes required.

Zooming in on the top-most priorities and then working against the related metrics will help an organization provide better quality products, reduce go-to-market time, and hence improve ROI.

Experts at Gallop can help you understand what needs to be measured and tested to get the optimum outputs. Get in touch with us for meeting all your testing requirements.



5 Drivers of Differentiation which you must test


Risk-free businesses don’t exist – not even in the wildest of fantasies. If you have a business to run, it will have risks involved, and to survive – and thrive – you will need to face and overcome those risks. The next generation of the products and services industry will focus heavily on Social, Mobile, Analytics, Cloud and, more importantly, the Internet of Things and Virtual Reality.

Let us try to understand a couple of these, which are already contributing to an organization’s growth, sustenance and differentiation. Subjecting them to intense, planned testing is what is needed to ensure sustained differentiation.

  1. Mobile Apps: There’s hardly any industry left today that is untouched by the presence or use of mobiles. The risks this involves are obvious – all features of all products must function seamlessly across all types of mobile devices on all platforms at all times. One can only imagine the kind of risk involved in keeping all these activities running smoothly.
  2. Digital Transformation: The need to go digital and have a global digital imprint is making organizations resort to a lot of online activities that leave a big scope for hackers to make hay. More so, in the hands of the wrong people, digital security-related records of governments can pose a big security threat.
  3. Big Data & Analytics: A lot of companies are making a foray into the world of big data and analytics today. This is helping to generate a lot of statistical data that can be utilized for creating a better standard of life. Just imagine what may happen if the data collected – or analyzed – is incorrect, and the same data is implemented. Big risk, right?
  4. Automation: Automate everything seems to be today’s buzzword. Newer and better versions of automation tools and automated products are hitting the market every day. As long as the tools and products behave in the manner expected, it’s great. But what if the automation tool goes wrong and actually breaks a product? Effort will be lost, time will be lost, and go-to-market and ROI delayed.
  5. Internet of Things: With the increasing adoption of IoT there will be a tremendous opportunity for IoT testing (devices and software) in 2015 and years to come. Technology analysts Gartner suggest that IoT is currently at the ‘peak of inflated expectations’. IoT has also been identified as one of the emerging technologies in IT as noted in Gartner’s IT Hype Cycle.

So how do we create a risk mitigation plan for dealing with the challenges that come packaged with running any of the businesses listed above? Having a specialized team of experienced testing professionals, with experience in each of these testing domains, will ensure you leverage industry best practices in Quality Assurance.

In essence, if organizations use either in-house talent, or approach specialized independent testing service providers for performing thorough testing of apps, tools, or devices, they can take care of most of the risks.

Get in touch with us at Gallop to get further guidance on getting your products tested for a better ROI.



Testing is a Process, not just a Phase


Should Testing just be another Phase?

We all accept that Testing forms an important part of the software development life cycle (SDLC). However, the reason a lot of organizations fail is that they segregate testing as a single unit – a phase. When Testing is treated as just another ‘phase’, the implementation of this business-critical task suffers. Organizations club all sorts of tests together and try to test their product for all possible areas at the fag-end of the development cycle. This is naturally impacted by the looming deadlines of the product launch and other pressures, due to which the testing is not fool-proof. A product that has not been tested fully will naturally not be robust; there will always be doubt regarding its security, performance, and functionality. So what can these organizations do to improve their testing process?

The answer is actually fairly simple.

Instead of overburdening the tester with testing for completeness just before launching the product, plan your test-related activities. While planning the product development phases, identify the types of tests you need to execute for your product. Then allocate specific time and resources for performing the tests related to each phase. This also helps verify and validate the product, and drastically reduces the number of bugs that may otherwise be found later. Performing tests specific to a phase saves developers a lot of time and effort, as the issues found are far easier to fix.

In essence, if we treat Testing as a process that supports the entire development process – instead of just a single phase – we can ensure products that are far more dependable and robust.

What to Test under each phase?


Based on the common experience shared across industries, following are a few tests that can be executed under the different phases:

  1. Requirements Gathering phase: The main focus of this phase is to gather the business requirements, such as who will use what type of data and in what manner. Thus, it is only common sense to test and confirm (read: validate) the basic requirements for the product before actually starting development. This ensures that we create what we initially planned to create – and not something else that emerged along the way due to unclear and ambiguous requirements. The output of this phase is a Requirement Specification document that acts as the guideline for product development.
  2. Design phase: The Design phase uses the Requirement Specification document to prepare the layout of the system and software design. If a comprehensive, end-to-end test plan and strategy is thought out and implemented in this phase, it will help build a stable system architecture. Reviewing the design for testability will also help establish what to test in the product, and how.
  3. Development phase: As the name suggests, the development phase involves the actual writing of code for the different modules. As a lot of code is generated in this phase that covers implementation of different features, it makes sense to test the features being developed. It also is a good time to implement regression testing on the code generated so far to verify that the software being developed performs correctly even if it is modified or interfaced with other software.
  4. Deployment phase: The deployment phase usually has two sub-phases – Beta-deployment, and Final Deployment. In the Beta-deployment phase issues are caught before a product is launched to the market. This is the time when you can implement tests related to product usage analytics, Real User Monitoring, & Automated smoke tests. Based on the results of these tests, and the other issues reported, the development team will make the final changes before the final deployment of the product.

As seen above, if Testing is implemented across all the phases of SDLC, the end result will be a product that is stable, reliable, and supports features and functions that will grab the attention of the end user. It is no surprise that the more a product is liked and appreciated, the better ROI an organization gets.



Non-Functional Testing – Quality is more than Validating Features


The term non-functional testing is commonly used to refer to testing the features not specific to functions. This includes testing for performance, usability, efficiency, security, the breaking point of the software, and many more.

In essence, most of these tests help us understand the quality and reliability of the product. Quality is a term that has widespread implications, and when thought about in the context of non-functional testing, it covers the following major areas:

  • Functionality (quality in terms of Features, Business Processes, and Integrations)
  • Security (quality in terms of Application, Data, Network, and Compliance)
  • Performance (quality in terms of Speed, Resource Consumption, Scalability, and Sizing)
  • Usability (quality in terms of Navigation, Aesthetics, Flexibility, A/B Testing, and Documentation)

Quality in terms of Functionality

Even though there’s a whole range of tests available for testing the Functionality of a product in terms of how to use it, in the context of non-functional testing, functionality relates to the ease of use of the product, the experience of the consumer, and how happy someone is while using it.

Apart from this, the seamless performance of all the processes also indicates that a lot of thought and planning has been put into the product to make it achieve the required levels of quality. This quality experience is what will put your product on top of the buyer’s list – thus ensuring higher ROI for yourself.

Quality in terms of Security

The higher the level of Security, the better the quality of a product. With so much personal and business-critical data at stake in this highly digitized world, thorough, end-to-end quality testing of areas such as Application, Data, Network, and Compliance is imperative.

Not checking the quality of security in any of the aforementioned areas may cause huge (at times unrecoverable) losses in terms of finances, credibility, etc. – and in many cases may result in severe security threats and legal complications, as the impact of a security loss may be felt on human lives across the globe too.

Quality in terms of Performance

On a different front, the quality of software with regard to its Speed, Resource Consumption, and Scalability matters a lot when trying to retain customers and attract new ones. The digital world is getting smaller, with a lot of apps being used on mobile platforms, tablets, and laptops – and customers, with the huge array of choices before them, naturally try to buy the products that give them the best returns for their cost. They need tools that run the fastest, incur the lowest data and power consumption, and are capable of working on any platform, OS, or device – all the while providing every feature seamlessly.
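A performance check of this kind can be sketched as a simple timing assertion in Python. The operation under test, the repeat count, and the 10 ms budget below are all illustrative assumptions, not values from any real system.

```python
# Minimal performance-check sketch: assert an operation stays within a
# (deliberately generous, illustrative) time budget.

import time

def lookup(table, key):
    """Operation under test: a dict lookup standing in for a real query."""
    return table.get(key)

def time_operation(fn, *args, repeats=1000):
    """Average wall-clock seconds per call, measured over several repeats."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

table = {i: i * i for i in range(10_000)}
avg = time_operation(lookup, table, 1234)
assert avg < 0.01  # each call must stay well under the 10 ms budget
```

In a real suite, the budget would come from a service-level objective rather than a hard-coded constant, and the measurement would run on controlled hardware.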

Quality in terms of Usability

Quality in terms of Usability refers to the ease a customer enjoys while navigating through the various features of the product, and how easy the product is to learn the first time it is used. It covers the comfort level a customer has with the aesthetics of the product – whether the colours used on the UI are too jazzy, or whether the whole look and feel has been created with what most users may enjoy in mind. It also covers whether all the test cases documented earlier have been tested for practical use and cover all possible scenarios. Finally, it means that customers enjoyed using the product so much that they want to re-use it – and can quickly accomplish the desired tasks without having to re-learn the basics.

Thus, it can be safely deduced that, in terms of non-functional testing, the overall quality of a product will always help you gain (and retain) more customers than focussing only on testing its features will.

Gallop’s teams have successfully performed Non-Functional Testing for our numerous clients. Get in touch with us if you are looking to test Non-Functional Aspects of your business applications and we will be glad to do a free assessment for you.


Applying Emotional Intelligence to Testing


“IQ gets you hired, but EQ gets you promoted.” Anonymous

So, what is emotional intelligence (EI), also referred to as emotional quotient (EQ)?

Emotional intelligence refers to the understanding of our own emotions, how to use this knowledge to deal effectively with people and problems to reduce anger and hostility, and create an atmosphere of collaboration to produce positive energy.

The concept of EI owes its immense popularity to New York Times science journalist, Harvard psychologist, and author Daniel Goleman who wrote the 1995 best-seller, ‘Emotional Intelligence: Why It Can Matter More Than IQ’.

How is software testing an emotional activity?

Before we answer this, let us try to analyze the basic behaviour of testers.

According to an older definition of Testing: “To tell somebody that he or she is wrong is called criticism. To do so officially is called testing.”

Testers essentially are people who like to disrupt and break things so as to find out the true quality of a product. They have an inherent need to question things, are persistent in their questioning, and at the same time are also pessimists by nature. Their basic mindset is that unless they criticize, the quality of the product will not improve.

The biggest joy of their life is when they break a product – or when they find a Bug.

Now, you may wonder what sort of a connection EI can have with the world of testing. The answer – a huge connection. To understand the link, let us try to understand EI a bit more.

People who have a high level of EI are self-aware (know what they feel and why), self-managed (know how best to manage negative emotions, create positivity), socially aware (know the nuances of how to interact with others), and good at relationship management. Such a person also knows how to deal with anger, control negative thoughts, process constructive criticism, and deal with conflict.

All of the above qualities could well be listed as pre-requisites for anyone who wishes to join the testing community at any level. It is only natural in our professional lives as testers to face and undergo a wide range of similar emotions. EI acts as a tool that guides us in identifying and responding to these (especially the negative) emotions in the best possible manner – one that is not counter-productive.

Testing professionals, apart from the inferences from the plethora of tools available today, also trust their intuition a lot. The test results so achieved may lead to any of the regular emotions that we face, such as amusement, anger, frustration, etc.

Professionals in the Testing arena also regularly face highly strung scenarios, as they work in very uncertain environments wherein tests may pass or fail. A lot of times, it falls upon the shoulders of testers to report build failures, and to point out that code is not written per the best standards. On the other hand, when an already overloaded testing team is told that its time or resources are being halved, it leads to further frustration and creates an atmosphere of animosity.

Naturally, these activities at times lead to fracturing of egos and unnecessary debates and endless hours of unproductive meetings. These heated reactions, per EI terminology, are a result of letting the “amygdala hijack” take control.

The amygdala is a small, almond-shaped structure deep within the brain that is associated with the human “fight or flight” response. It usually is what makes us burst out emotionally well before we have really thought things through. Controlling the amygdala hijack is a major aspect of EI.

A good EI level helps us control the immediate reactions and makes us more empathetic towards our colleagues, and helps us try to understand why they may be reacting in a particular fashion. This genuinely helps create an amicable and friendly atmosphere in any organization – more so where DevOps culture is the need of the hour.

A mature EI level will always help us offer criticism in a manner that is conducive to mutual growth rather than just pointing fingers. For example, a person with such EI levels will always try to give inputs in private, or in a mail where no one else is Cc’d. Handled this way, it will surely bring down the number of conflicts.

If you are a professional tester low on EI, don’t fret – there’s good news. Unlike IQ, EI levels can be substantially improved at any stage of life. Work on controlling the amygdala hijacks and see your rank rise in the world of Testing.

Reference: Emotional Intelligence in Software Testing


Testing Trading Systems the Right Way


Trading systems include trading platforms and trading applications. While trading platforms comprise the software through which investors and traders can open, close, and manage market positions, trading applications are usually multi-product, cater to multiple users, and consist of end-to-end functionality that processes enormous volumes of traffic at extreme speeds.

The key to creating and managing effective trading systems is to ensure high-volume, low-latency throughput. As traders and money managers invent new strategies and new methods of electronic price discovery, the volume of trades and the amount of market data to be managed increase continuously. This generates huge network traffic from the orders being placed, and an exponential growth rate for market data. Typically, trading systems employ a set of complex rules within their matching engines to match buy and sell orders, on top of handling cancel and replace requests.

Recent innovations such as automated trading, the need for 24-hour continuous trading, market fragmentation, and changes in underlying technologies such as algorithmic trading have further added to trading system complexity. Moreover, as business opportunities continue to change in today’s rapidly evolving marketplace, data flow and processing loads keep increasing. In the world of trading, time is literally money. All these reasons have made testing inevitable for trading systems. Now, let us see some of the other reasons why testing is needed for trading applications.

Why is Testing needed for Trading Systems?

As trading systems need to process vast amounts of data in real time, data accuracy is crucial to avoid huge losses in terms of money and reputation. In addition, stock markets have inherent complexity in terms of business flows and business rules, and testing plays a critical role in ensuring effective business delivery. A well-tested trading application is far less likely to fail in real time and gives the client an edge in their options and purchases.

Let us see some of the challenges involved with trading systems.

Key challenges involved with Trading Systems

  • Handling of the major business challenges, like developing complex trading scenarios that truly reflect real time trading
  • Difficulty in building and maintaining of the domain competence
  • Handling of technical challenges that arise due to complex scenarios and interface gateways
  • Challenges that arise due to multiple systems that work through several interfaces and gateways
  • Management of multiple APIs
  • Performance issues due to latency levels and handling of SLAs
  • Existence of legacy applications in which testing is difficult
  • Network performance
  • Existence of regulatory compliance issues
  • 3rd party application issues that adversely affect the existing trading systems

Other challenges include changes in business models, rules, and regulatory requirements, and the introduction of new products.

Testing Trading Platforms

The major types of testing for trading systems include functional, interface, security, and performance testing. These testing types play an important role as they evaluate the speed, functionality, security, and overall performance of the trading system.

Functional testing of trading applications involves smoke tests, and unit test cases need to be created for each functionality. A smoke test should be implemented to cover the complexity of multiple, real-time order transactions. Regression testing must also be taken up on a continual basis to ensure that existing functionalities are not affected when a new functionality is introduced into the trading system.

Interface testing is one of the key tests conducted to ensure the quality of the software. In trading applications, interface testing focuses on the data accuracy needs of the system and the functioning of the interfaces, and the test environment should closely resemble the real-time scenario. Testing needs to be performed around the network and interfaces, as the loss incurred by stock exchanges due to data leakage could be huge.

Security testing includes threat analysis and vulnerability analysis, with threats identified through security code reviews. Performance testing covers the main trading application, its subsystems, and the interfaces connecting those subsystems.
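The functional smoke test described above can be illustrated with a deliberately simplified toy matching engine in Python; a real engine handles price-time priority, partial fills, and cancels, none of which this hypothetical sketch attempts.

```python
# Toy matching-engine smoke test. The engine is a deliberately simplified
# stand-in: orders are (price, quantity) tuples and trades fill at the
# sell price whenever the buy price meets or exceeds it.

def match(buys, sells):
    """Match buy orders against sell orders; return the executed trades."""
    trades = []
    for bprice, bqty in buys:
        for i, (sprice, sqty) in enumerate(sells):
            if bqty > 0 and sqty > 0 and bprice >= sprice:
                qty = min(bqty, sqty)
                trades.append({"price": sprice, "qty": qty})
                sells[i] = (sprice, sqty - qty)  # reduce remaining sell quantity
                bqty -= qty
    return trades

# Smoke test: one crossing order pair must produce exactly one trade.
trades = match(buys=[(101.0, 10)], sells=[(100.5, 10)])
assert trades == [{"price": 100.5, "qty": 10}]
```

Even this small check catches the worst class of matching-engine defect: a crossing order pair that fails to trade, or a non-crossing pair that does.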

Backtesting refers to applying a trading system to historical data and verifying how it behaves during the specified time period; this type of testing evaluates simple ideas quickly. Forward performance testing, also known as paper trading, provides traders with another set of out-of-sample data on which to evaluate the system – it is essentially a simulation of actual trading. Thus, positive results and good performance can be obtained with effective testing performed on trading systems.
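The backtesting idea can be sketched minimally in Python: replay a simple moving-average rule over historical prices and tally the resulting profit. The price series, window, and trading rule below are illustrative assumptions, not a recommended strategy.

```python
# Minimal backtesting sketch: a moving-average rule replayed over
# hypothetical historical prices. Illustrative only.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def backtest(prices, window=3):
    """Buy when price rises above its moving average, sell when it falls
    below. Returns total profit per unit traded over the series."""
    position = None  # entry price while holding, else None
    profit = 0.0
    for i in range(window, len(prices)):
        ma = moving_average(prices[:i], window)
        price = prices[i]
        if position is None and price > ma:
            position = price            # enter long
        elif position is not None and price < ma:
            profit += price - position  # exit long
            position = None
    if position is not None:
        profit += prices[-1] - position  # close any open position at the end
    return profit

prices = [10, 10, 10, 12, 15, 20, 25]
assert backtest(prices, window=3) == 13.0  # enters at 12, closes at 25
```

Note the key property of backtesting preserved here: at step `i` the rule only sees `prices[:i]`, never future data, which is what keeps the evaluation honest.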

Gallop Solutions has a decade of expertise as an independent testing services provider. Contact Gallop’s team of testing experts to know more about the testing of your trading applications.
