Standard glossary of terms used in Software Testing

Version 2.0 (d.d. December 2nd, 2007)

Produced by the 'Glossary Working Party'
International Software Testing Qualifications Board

Editor: Erik van Veenendaal (The Netherlands)

Copyright Notice
This document may be copied in its entirety, or extracts made, if the source is acknowledged.
Contributors

Rex Black (USA), Sigrid Eldh (Sweden), Isabel Evans (UK), Dorothy Graham (UK), Julian Harty (UK), David Hayman (UK), Juha Itkonen (Finland), Vipul Kocher (India), Fernando Lamas de Oliveira (Portugal), Tilo Linz (Germany), Peter Morgan (UK), Thomas Müller (Switzerland), Avi Ofer (Israel), Dale Perry (USA), Horst Pohlmann (Germany), Meile Posthuma (The Netherlands), Erkki Pöyhönen (Finland), Maaret Pyhäjärvi (Finland), Andy Redwood (UK), Stuart Reid (UK), Piet de Roo (The Netherlands), Steve Sampson (UK), Shane Saunders (UK), Hans Schaefer (Norway), Jurriën Seubers (The Netherlands), Dave Sherratt (UK), Mike Smith (UK), Andreas Spillner (Germany), Richard Taylor (UK), Geoff Thompson (UK), Stephanie Ulrich (Germany), Matti Vuori (Finland), Gearrel Welvaart (The Netherlands), Pete Williams (UK)
Change History

Version 1.3, d.d. May 31st, 2007

New terms added:
- action word driven testing
- bug tracking tool
- coverage measurement tool
- modelling tool
- monkey testing
- scripted testing
- specification-based technique
- stress testing tool
- structure-based technique
- unit test framework
- white box technique

Terms changed:
- basic block
- control flow graph
- defect management tool
- independence of testing
- project risk
- risk-based testing
- test comparator
- test process

Version 2.0, d.d. December 2nd, 2007

New terms added:
- attack
- buffer
- buffer overflow
- bug taxonomy
- classification tree
- control flow analysis
- continuous representation
- cost of quality
- defect based technique
- defect based test design technique
- defect taxonomy
- error seeding tool
- Failure Mode, Effect and Criticality Analysis (FMECA)
- false-fail result
- false-pass result
- false-negative result
- false-positive result
- fault attack
- fault seeding
- fault seeding tool
- hazard analysis
- hyperlink
- hyperlink tool
- load profile
- operational acceptance testing
- operational profile
- orthogonal array
- orthogonal array testing
- pairwise testing
- performance profiling
- pointer
- procedure testing
- process improvement
- production acceptance testing
- qualification
- reliability growth model
- retrospective meeting
- risk level
- risk type
- root cause analysis
- safety critical system
- software attack
- Software Failure Mode and Effect Analysis (SFMEA)
- Software Failure Mode Effect and Criticality Analysis (SFMECA)
- Software Fault Tree Analysis (SFTA)
- software life cycle
- staged representation
- system of systems
- test design
- test estimation
- test implementation
- Test Maturity Model Integration (TMMi)
- test progress report
- test rig
- test schedule
- test session
- wild pointer

Terms changed:
- bebugging
- error seeding
- Failure Mode and Effect Analysis (FMEA)
- Fault Tree Analysis (FTA)
- modified multiple condition testing
- process cycle test
- root cause
- specification-based technique
- stress testing
- test charter
Table of Contents

Foreword .......................................................... 6
1. Introduction ................................................... 6
2. Scope .......................................................... 6
3. Arrangement .................................................... 6
4. Normative references ........................................... 7
5. Trademarks ..................................................... 7
6. Definitions .................................................... 7
A ................................................................. 7
B ................................................................. 9
C ................................................................ 10
D ................................................................ 14
E ................................................................ 17
F ................................................................ 18
G ................................................................ 20
H ................................................................ 20
I ................................................................ 20
K ................................................................ 22
L ................................................................ 22
M ................................................................ 23
N ................................................................ 24
O ................................................................ 25
P ................................................................ 25
R ................................................................ 28
S ................................................................ 30
T ................................................................ 34
U ................................................................ 39
V ................................................................ 40
W ................................................................ 40
Annex A (Informative) ............................................ 41
Annex B (Method of commenting on this glossary) .................. 43
Foreword

In compiling this glossary the working party has sought the views and comments of as broad a spectrum of opinion as possible in industry, commerce and government bodies and organizations, with the aim of producing an international testing standard which would gain acceptance in as wide a field as possible. Total agreement will rarely, if ever, be achieved in compiling a document of this nature. Contributions to this glossary have been received from the testing communities in Australia, Belgium, Finland, Germany, India, Israel, The Netherlands, Norway, Portugal, Sweden, Switzerland, United Kingdom, and USA.

Many (software) testers have used BS 7925-1 since its original publication in 1998. It has also served as a major reference for the Information Systems Examination Board (ISEB) qualification at both Foundation and Practitioner level. The standard was initially developed with a bias towards component testing, but, since its publication, many comments and proposals for new definitions have been submitted to both improve and expand the standard to cover a wider range of software testing. In this new version of the testing glossary many of these suggested updates have been incorporated. It will be used as a reference document for the International Software Testing Qualification Board (ISTQB) software testing qualification scheme.

1. Introduction

Much time and effort is wasted both within and between industry, commerce, government and professional and academic institutions when ambiguities arise as a result of the inability to differentiate adequately between such terms as 'statement coverage' and 'decision coverage', or 'test suite', 'test specification' and 'test plan', and similar terms which form an interface between various sectors of society. Moreover, the professional or technical use of these terms is often at variance with the different meanings attributed to them.

2. Scope

This document presents concepts, terms and definitions designed to aid communication in (software) testing and related disciplines.

3. Arrangement

The glossary has been arranged in a single section of definitions ordered alphabetically. Some terms are preferred to other synonymous ones, in which case the definition of the preferred term appears, with the synonymous ones referring to it. For example, structural testing refers to white box testing. For synonyms, the "See" indicator is used. "See also" cross-references are also used. They assist the user to quickly navigate to the right index term. "See also" cross-references are constructed for relationships such as a broader term to a narrower term, and overlapping meaning between two terms.
4. Normative references

At the time of publication, the edition indicated was valid. All standards are subject to revision, and parties to agreements based upon this Standard are encouraged to investigate the possibility of applying the most recent edition of the standards listed below. Members of IEC and ISO maintain registers of currently valid International Standards.

- BS 7925-2:1998. Software Component Testing.
- DO-178B:1992. Software Considerations in Airborne Systems and Equipment Certification, Requirements and Technical Concepts for Aviation (RTCA SC167).
- IEEE 610.12:1990. Standard Glossary of Software Engineering Terminology.
- IEEE 829:1998. Standard for Software Test Documentation.
- IEEE 1008:1993. Standard for Software Unit Testing.
- IEEE 1012:2004. Standard for Verification and Validation Plans.
- IEEE 1028:1997. Standard for Software Reviews and Audits.
- IEEE 1044:1993. Standard Classification for Software Anomalies.
- IEEE 1219:1998. Software Maintenance.
- ISO/IEC 2382-1:1993. Data processing – Vocabulary – Part 1: Fundamental terms.
- ISO 9000:2005. Quality Management Systems – Fundamentals and Vocabulary.
- ISO/IEC 9126-1:2001. Software Engineering – Software Product Quality – Part 1: Quality characteristics and sub-characteristics.
- ISO/IEC 12207:1995. Information Technology – Software Life Cycle Processes.
- ISO/IEC 14598-1:1999. Information Technology – Software Product Evaluation – Part 1: General Overview.

5. Trademarks

In this document the following trademarks are used:
- CMM and CMMI are registered trademarks of Carnegie Mellon University
- TMap, TPA and TPI are registered trademarks of Sogeti Nederland BV
- TMM is a registered servicemark of Illinois Institute of Technology
- TMMi is a registered trademark of the TMMi Foundation

6. Definitions

A

abstract test case: See high level test case.

acceptance: See acceptance testing.

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]
accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.

action word driven testing: See keyword driven testing.

actual outcome: See actual result.

actual result: The behavior produced/observed when a component or system is tested.

ad hoc review: See informal review.

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability.

agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.

algorithm test [TMap]: See branch testing.

alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability.

analyzer: See static analyzer.

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also bug, defect, deviation, error, fault, failure, incident, problem.

arc testing: See branch testing.

attack: Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.

attractiveness: The capability of the software product to be attractive to the user. [ISO 9126] See also usability.

audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]
automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]

B

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]

basic block: A sequence of one or more consecutive executable statements containing no branches. Note: A node in a control flow graph represents a basic block.
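As an informal illustration (a sketch, not itself a glossary definition), the comments in the following C fragment mark the basic blocks of a small hypothetical function; the if-then-else is the construct that separates them:

#include <stdio.h>

int classify(int x) {
    int result = 0;    /* basic block 1: straight-line entry code     */
    if (x < 0) {       /* end of block 1: evaluation of the condition */
        result = -1;   /* basic block 2: the 'then' path              */
    } else {
        result = 1;    /* basic block 3: the 'else' path              */
    }
    return result;     /* basic block 4: the join point after the if  */
}

int main(void) {
    /* One negative and one positive input together execute both
       alternative paths, i.e. both block 2 and block 3. */
    printf("%d %d\n", classify(-5), classify(5));
    return 0;
}

Each node of the function's control flow graph corresponds to one of the numbered blocks. A test suite containing only classify(-5) would leave block 3 unexercised; both calls together exercise every branch.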
basis test set: A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

bebugging: See fault seeding. [Abbott]

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]

bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

best practice: A superior method or innovative practice that contributes to the improved performance of an organization in a given context, usually recognized as 'best' by other peer organizations.

beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing.

black-box technique: See black box test design technique.

black-box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.
bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

boundary value analysis: A black box test design technique in which test cases are designed based on boundary values. See also boundary value.

boundary value coverage: The percentage of boundary values that have been exercised by a test suite.

boundary value testing: See boundary value analysis.
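As an informal illustration (a sketch, not itself a glossary definition), consider a hypothetical component that accepts ages from 18 to 65 inclusive. Boundary value analysis selects each edge of this equivalence partition plus the value at the smallest incremental distance on either side of it:

#include <assert.h>

/* Hypothetical component under test: valid ages are 18..65 inclusive. */
static int is_valid_age(int age) {
    return age >= 18 && age <= 65;
}

int main(void) {
    assert(!is_valid_age(17)); /* just below the lower boundary */
    assert( is_valid_age(18)); /* the lower boundary itself     */
    assert( is_valid_age(65)); /* the upper boundary itself     */
    assert(!is_valid_age(66)); /* just above the upper boundary */
    return 0;                  /* all four boundary checks pass */
}

Exercising these four values gives 100% boundary value coverage for the partition [18, 65].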
branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.

branch condition: See condition.

branch condition combination coverage: See multiple condition coverage.

branch condition combination testing: See multiple condition testing.

branch condition coverage: See condition coverage.

branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

branch testing: A white box test design technique in which test cases are designed to execute branches.

buffer: A device or storage area used to store data temporarily for differences in rates of data flow, time or occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data. [IEEE 610]

buffer overflow: A memory access defect due to the attempt by a process to store data beyond the boundaries of a fixed length buffer, resulting in overwriting of adjacent memory areas or the raising of an overflow exception. See also buffer.

bug: See defect.

bug report: See defect report.

bug taxonomy: See defect taxonomy.

bug tracking tool: See defect management tool.

business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

C

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance. [CMM] See also Capability Maturity Model Integration (CMMI).

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing