Extreme Programming Methodology
4.1 Testing at a glance
Testing is essential to the success of any project, in traditional project management as well as in the Extreme Programming methodology. Testing always means comparing: it requires an item to be tested and terms of reference with which the item must comply. Testing satisfies the need for information about the difference between the item and the requirements. The International Organization for Standardization (ISO) describes testing in the following terms: 'Technical operation that consists of the determination of one or more characteristics of a given product, process or service according to a specified procedure' (ISO/IEC, 1991).
Testing provides insight into the difference between the actual status and the required status of an item. Since quality can be defined as 'meeting the requirements', testing therefore results in recommendations on quality. It consequently provides insight into the risks that will be incurred if lower quality is accepted; this is the principal objective of testing. Testing is one of the detective measures of a quality system. It is related to reviewing, simulating, inspecting, auditing, examining, desk-checking, walkthroughs, etc. The various detective measures are divided into two groups: evaluating and testing.
In traditional project management we define testing as the systematic, planned attempt to find errors in the implemented software. Contrast this definition with another commonly used one, which says that "testing is the process of demonstrating that errors are not present". The explicit goal of testing is to demonstrate the presence of faults, not their absence.
In Extreme Programming, on the other hand, programmers are expected to write the test code first, before writing the application code. When we create our tests first, we will find it much easier and faster to create our code. The combined time it takes to create a test and to create the code that makes it pass is about the same as coding it up straight away, but if we already have the tests we do not need to create them after the code. This saves us some time now and a lot later.
That is why, for saving time and keeping our code organized, the testing techniques of Extreme Programming compare favourably with traditional testing processes.
4.2 Extreme programming testing activities with respect to Traditional project management testing
Extreme Programming is a deliberate and disciplined approach to software development. XP succeeds because it stresses customer satisfaction: the methodology is designed to deliver the software the customer needs when it is needed. One of the basic principles of XP is that the software is developed in small increments, and each increment must pass a unit test before the next change is made. In many cases the unit test is written by the developers before the code itself, and these incremental tests are carried out by the two-person coding team. The basic point is that programmers are very good at testing their code at the unit level, but weaker when asked to verify it at the system level, so there should be a responsible tester who examines the code at a level higher than the unit; this role lies more in the realm of a manager responsible for testing than of an ordinary tester.

In traditional programming there are fault detection techniques that assist in finding faults in systems but do not try to recover from the failures those faults cause. In general, fault detection techniques are applied during development, but in some cases they are also used after the release of the system. The fault detection approaches include unit testing and integration testing, which is the activity of finding faults when testing the individually tested components together, for example the subsystems described in the subsystem decomposition, while executing the use cases and scenarios from the Requirements Analysis Document (RAD).
Another approach, system testing, tests all the components together, seen as a single system, to identify errors with respect to the scenarios from the problem statement and the requirements and design goals identified in the analysis and system design, respectively. Functional testing tests the requirements from the RAD and, if available, from the user manual. Performance testing checks the non-functional requirements and additional design goals. Both functional and non-functional testing are done by the developers. Acceptance testing and installation testing check the requirements against the project agreement and should be done by the client, if necessary supported by the developers.
XP suggests that doing testing after completing the project is completely backwards. As programmers, we should write tests before we write code, and then write just enough code to get the tests to pass. Following this method helps us keep our system as simple as possible.
4.2.1 Writing tests first or Code the Unit Test First
When we create our tests first, before the code, we will find it much easier and faster to create our code. The combined time it takes to create a unit test and to create the code that makes it pass is about the same as coding it up straight away. But if we already have the unit tests we do not need to create them after the code, saving some time now and a lot later.
Making a unit test helps a coder really consider what needs to be done. Requirements are nailed down firmly by tests; there can be no misunderstanding of a specification written in the form of executable code. We also have immediate feedback while we work. It is often not clear when a programmer has finished all the necessary functionality, and scope creep can easily occur as extensions and error conditions are considered. If we create our unit tests first, we know when we are done: the unit tests all run.

There is also a benefit to system design. It is often very hard to unit test some software systems. These systems are typically built code first and tested second, often by a different team entirely. By making tests first, our design is influenced by a desire to test everything of value to our customer, and the design reflects this by being easier to test.

There is a rhythm to developing software unit-test first. We create one test to define some small aspect of the problem at hand. Then we create the simplest code that will make that test pass. Then we create a second test, and add to the code just created to make this new test pass, but no more: not until we have a third test. We continue until there is nothing left to test. The code we create in this way is simple and concise, implementing only the features we wanted. Other developers can easily see how to use the new code by browsing the tests, and input whose results are undefined will be clearly absent from the test suite.
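The rhythm described above can be sketched with Python's standard unittest module. The Stack class and its tests are hypothetical names chosen for illustration, not from the text; in practice the class starts empty and grows one test at a time.

```python
import unittest

# Just enough code to make the tests below pass, and no more.
class Stack:
    def __init__(self):
        self._items = []

    def is_empty(self):
        return len(self._items) == 0

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Each test was written first and failed until the code above existed.
class StackTest(unittest.TestCase):
    def test_new_stack_is_empty(self):           # first test: one small aspect
        self.assertTrue(Stack().is_empty())

    def test_push_then_pop_returns_item(self):   # second test: drives the next feature
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)
        self.assertTrue(s.is_empty())

if __name__ == "__main__":
    unittest.main(exit=False)
```

Browsing the two test methods is enough to see how the class is meant to be used, which is exactly the documentation benefit the text describes.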
For the last fifty years, traditional testing has been viewed as something that gets done toward the end of a project. But testing is no longer a post-development phase: it comprises a series of activities that must be carried out from an early stage of development. A test plan must be prepared at the same time as the functional specifications, in which the tests to be carried out by various people are planned and harmonized. Preparation for testing begins immediately after the functional specifications have been agreed.
4.2.2 All code must have unit tests
Units are the smallest blocks of software, and unit testing is the process of validating such small blocks of a complex system well before testing the subsystems or the system as a whole. Unit tests are one of the cornerstones of Extreme Programming (XP), but XP-style unit testing takes a slightly different approach. First, we should create or download a unit test framework so we can build automated unit test suites. Second, we should test all classes in the system, though we usually ignore trivial getter and setter methods. And we must create our tests first, before the code.
Unit tests are released into the code repository along with the code they test. Code without tests may not be released; if a unit test is found to be missing, it must be created at that time. The biggest obstacle to dedicating time to unit tests is a fast-approaching deadline, but over the life cycle of a project an automated test can easily save a hundred times its creation cost by finding and guarding against bugs. The harder a test is to write, the more we need it, because the greater the savings will be: automated unit tests offer a payback far greater than their cost of creation. Another common misapprehension is that unit tests can be written in the last three months of the project. Without unit tests, development drags on and eats up those last three months and then some. Even when the time is available, good unit test suites take time to evolve, because finding all the problems that can occur takes time. In order to have a complete unit test suite when we need it, we must begin creating the tests today.
Unit tests enable collective ownership. When we create unit tests we guard our functionality from being accidentally harmed. Requiring all code to pass all unit tests before it can be released guarantees that all functionality always works properly, and individual code ownership becomes unnecessary when all classes are guarded by unit tests. Unit tests enable refactoring as well: after each small change, the unit tests verify that a change in form did not introduce a change in functionality. Maintaining a single universal unit test suite for validation and regression testing enables frequent integration. We can integrate any recent changes quickly and then run our own latest version of the test suite; when a test fails, our latest version is not compatible with the team's latest version. Fixing small problems every few hours takes less time than fixing one huge chunk of problems just before the deadline. With automated unit tests it is easy to merge a set of changes with the latest released version and release again in a short time.
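How unit tests guard a refactoring can be shown with a small sketch; the function and its refactored form are hypothetical examples, not from the text. The same suite runs against both versions, so a change in form cannot silently become a change in functionality.

```python
import unittest

def total_price(quantities, unit_prices):
    """Original version: explicit loop."""
    total = 0
    for q, p in zip(quantities, unit_prices):
        total += q * p
    return total

def total_price_refactored(quantities, unit_prices):
    """Refactored version: same behaviour, more concise."""
    return sum(q * p for q, p in zip(quantities, unit_prices))

class TotalPriceTest(unittest.TestCase):
    # One suite guards both versions of the function.
    def check(self, fn):
        self.assertEqual(fn([], []), 0)
        self.assertEqual(fn([2, 3], [10, 1]), 23)

    def test_original_version(self):
        self.check(total_price)

    def test_refactored_version(self):
        self.check(total_price_refactored)

if __name__ == "__main__":
    unittest.main(exit=False)
```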
The traditional software testing approach also uses unit tests, but with some limitations. Unit testing only tests the functionality of the units themselves; it will not catch integration errors or system-level errors, so it must be done in conjunction with other testing activities. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show their absence.
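The limitation can be made concrete with a sketch of two hypothetical units that each pass their own unit tests yet fail when integrated: one reports a distance in kilometres while the other expects miles. Only a test above the unit level exposes the mismatch.

```python
def trip_distance_km():
    """Unit A: distance of a fixed trip, in kilometres."""
    return 100.0

def fuel_cost(distance_miles, price_per_mile=0.5):
    """Unit B: cost of a trip, expecting the distance in miles."""
    return distance_miles * price_per_mile

# Unit tests: each unit is correct in isolation.
assert trip_distance_km() == 100.0
assert fuel_cost(10) == 5.0

# Integration: kilometres are silently passed where miles are expected.
integrated_cost = fuel_cost(trip_distance_km())   # 50.0, wrong
correct_cost = fuel_cost(100.0 / 1.609344)        # about 31.07

# Only an integration- or system-level check catches the defect.
assert abs(integrated_cost - correct_cost) > 1.0
```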
4.2.3 All code must pass all unit tests before it can be released
Unit tests are one of the important parts of Extreme Programming (XP), but XP-style unit testing is a little different. First we should create or download a unit test framework so we can build automated unit test suites. Second we should test all classes in the system, and we should create our tests first, before the code. All code has to pass all unit tests before it can be released. Though a unit test gives a strict, written contract that a piece of code must satisfy, traditional project management does not insist on this rule: it admits many techniques, so there are several software testing methodologies to choose from.
4.2.4 When a bug is found tests are created
A software bug is the most common term used to describe an error, mistake, flaw, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. A program with a large number of bugs, or with bugs that seriously interfere with its functionality, is said to be buggy. In Extreme Programming, when a bug is found, tests are created to guard against it coming back. A bug in the software requires an acceptance test to be written to guard against it. Creating the acceptance test first, before debugging, helps customers concisely define the problem and communicate it to the programmers. The failing test lets programmers focus their efforts and know when the problem is fixed. Given a failed acceptance test, developers then create unit tests to show the defect from a more source-code-specific point of view, and failing unit tests give immediate feedback to the development effort. In traditional project management, too, most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.
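A minimal sketch of this practice, with a hypothetical function and bug: the regression test is written first, from the bug report, fails against the old code, and then guards against the defect coming back once the fix is in place.

```python
import unittest

def average(values):
    """Reported (hypothetical) bug: average([]) crashed with ZeroDivisionError."""
    if not values:                 # the fix, added after the test below failed
        return 0.0
    return sum(values) / len(values)

class AverageRegressionTest(unittest.TestCase):
    def test_empty_list_does_not_crash(self):
        # Written first, from the bug report; it failed against the old
        # code and now guards against the defect returning.
        self.assertEqual(average([]), 0.0)

    def test_normal_case_still_works(self):
        self.assertEqual(average([2, 4]), 3.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```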
4.2.5 Acceptance tests are run often and the score is published
Acceptance testing is a term used in Extreme Programming for the functional testing of a user story by the software development team during the implementation phase. The customer specifies scenarios to test to show that a user story has been correctly implemented. A story can have one or more acceptance tests, whatever it takes to make sure the functionality works. Acceptance tests are black-box system tests; each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and for reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release, and a user story is not considered complete until it has passed its acceptance tests. In traditional project management there are three ways the client evaluates a system during acceptance testing. In a benchmark test, the client prepares a set of test cases that represent typical conditions under which the system should operate. Another kind of acceptance testing is used in reengineering projects, when the new system replaces an existing system. In competitor testing, the new system is pitted against an existing system or competitor product. In shadow testing, a form of comparison testing, the new and legacy systems are run in parallel and their outputs are compared.
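A sketch of black-box acceptance tests for a hypothetical user story ("the customer can add items to a cart and see the correct total"); the ShoppingCart class and test names are illustrative assumptions. Only the system's public interface is exercised, and the resulting score is what gets published for the customer to review.

```python
class ShoppingCart:
    """The system under test (illustrative)."""
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# Each acceptance test expresses one expected result from the system.
def acceptance_empty_cart_totals_zero():
    return ShoppingCart().total() == 0

def acceptance_two_items_total_correctly():
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    return cart.total() == 14.00

def run_and_score(tests):
    """Run every acceptance test; return (passed, total) for publishing."""
    passed = sum(1 for test in tests if test())
    return passed, len(tests)

passed, total = run_and_score([acceptance_empty_cart_totals_zero,
                               acceptance_two_items_total_correctly])
print(f"acceptance score: {passed}/{total}")
```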
4.3 Traditional project management testing activities with respect to extreme programming
There are some steps followed by traditional software testing and sometimes by extreme programming testing. Software testing is a critical phase of the software development cycle, and software testing procedures are fundamental to the success of the testing phase. Software remains in a continuous state of change, which is why software testing, whether manual or automated, is so vital to a software product's success. In traditional project management, documented procedures for testing software must be in place before testing begins; in extreme programming testing this is not necessary. Software testing procedures must encompass all aspects of the testing process. It is very important that the procedures identify the people who will be involved in the testing process, their availability for the duration of the testing cycle, and the skill set of each team member. To keep the software cycle on track, the testing procedure must also set out a firm testing schedule, including the dates of important milestones. To be effective and useful, software testing procedures need much more information still: they must define guidelines for creating test cases, and they must define the hardware and software resources needed to keep the testing process on track and the length of time each will be needed. If the resources are unavailable, it will not be possible to meet project deadlines.
4.3.1 Developing Test Plan
A test plan is a document detailing a systematic approach to testing a system such as a machine or a piece of software. A test plan mainly contains the final workflows and is very important for the success of software testing. It includes many necessary things, such as the test cases, special instructions for the tests, the necessary software testing tools and hardware, and a contingency plan.
IEEE 829 test plan structure
IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in defined stages of software testing, each stage potentially producing its own separate type of document.
- Test plan identifier
- Test items
- Features to be tested
- Features not to be tested
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Staffing and training needs
- Risks and contingencies
- Approvals
These planning techniques are mostly followed by traditional project management testing; extreme programming testing does not always support them.
4.3.2 Developing Test Cases & Test Data Requirements
The major documents in traditional software testing are the test cases, which map to the functional requirements of the software in detail. A test case describes the input, action, or event and the expected response, in order to determine whether every feature of a software application is working properly. A test case also has specific fields such as a test case identifier, a case name, the conditions of the test, objectives, input data requirements, and the expected result. Developing test cases can help find problems in the design of an application or in its requirements, which is why traditional project management testing always suggests preparing test cases early in the development cycle. In extreme programming, by contrast, formal test cases of this kind are not necessary.
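One way to picture such a test case is as a structured record with the fields the text lists (identifier, name, conditions, input, expected result). The field names, the runner, and the login() stub below are illustrative assumptions, not from any particular standard.

```python
test_case = {
    "id": "TC-001",
    "name": "login rejects a wrong password",
    "preconditions": "user 'alice' exists",
    "input": {"user": "alice", "password": "wrong"},
    "expected_result": "access denied",
}

def run_test_case(case, system_under_test):
    """Feed the case's input to the system and compare with the expectation."""
    actual = system_under_test(**case["input"])
    return {"id": case["id"], "passed": actual == case["expected_result"]}

def login(user, password):
    """A stub system under test, for illustration only."""
    return "access granted" if password == "secret" else "access denied"

result = run_test_case(test_case, login)
print(result)
```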
4.3.3 Setting up Test Environment
Test environment preparation proceeds in parallel with the development of the test cases. The necessary software, supporting hardware, simulators, and so on are installed, and environment initialization for the software to be tested, such as setting flags, breakpoints, and data, is performed. Both traditional project management testing and extreme programming testing use the same approach.
4.3.4 Performing Test and Recording Test Data
Various testing methods are involved in executing the test cases. Test data are the data that result from testing; they are recorded for later analysis. To analyze whether the code matches the specification we use unit testing. To ensure that the defined functions perform as the business requires we use component testing. To check that a new application will work successfully without impact on other operational systems we use integration testing. To measure whether performance, stress, and volume are at acceptable levels we use system testing. For the manageability of the system we use operational testing, and to judge whether the system is user friendly we use usability testing. Recording test data is essential both for traditional project management and for extreme programming, because these data are very important for future analysis.
4.3.5 Analyzing Test Results
Test data may clearly indicate the success or failure of the software's functionality. This step is necessary in traditional project management techniques, but from the extreme programming perspective it is largely redundant, because every test is made in parallel with writing the code: we can find out whether there are any bugs, and analyze the results, while we write the code. Analysis of system performance usually involves data manipulation and statistics.
4.3.6 Approving or Rejecting the Software System
Software testing, like software development, is a continuous process. A software testing project hardly ever ends with approving or rejecting the system after a few iterations; testing results normally point to areas for bug fixing and enhancement. A test project is complete when the software system is eventually approved or rejected. In extreme programming, however, system testing is a continuous process that runs in parallel with coding, so there is little chance of rejecting the software system at the end.
1. Martin Pol, Ruud Teunissen, Erik van Veenendaal, Software Testing: A Guide to the TMap Approach, 2002
2. Bernd Bruegge, Allen H. Dutoit, Object-Oriented Software Engineering: Conquering Complex and Changing Systems, 2000