63 Cards in this set
- Front
- Back
• Requirement
|
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document.
|
• Quality
|
The degree to which a component, system or process meets specified requirements and / or user / customer needs and expectations.
|
• Risk
|
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
|
• Error-Mistake
|
A human action that produces an incorrect result.
|
• Failure
|
A deviation of the component or system from its expected delivery, service or result.
|
• Defect-Bug-Fault
|
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
|
• Incident
|
Any event occurring that requires investigation.
|
• What is Testing?
|
The process consisting of the test planning and control, analysis and design, implementation and execution, evaluation and closure activities, integrated into the product development lifecycle.
|
• Debugging
|
The process of finding, analyzing and removing the causes of failures (defects) in software.
|
• Test Case
|
A set of input values, execution pre-conditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
|
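To make the Test Case definition above concrete, here is a minimal sketch in Python (pytest-style) of a single test case with a pre-condition, input values, an expected result and a post-condition check. The BankAccount class and its deposit method are assumptions invented purely for illustration, not part of any card in this set.

```python
class BankAccount:
    """Hypothetical system under test (assumed for illustration only)."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        return self.balance


def test_deposit_increases_balance():
    # Pre-condition: an account exists with a known starting balance.
    account = BankAccount(balance=100)

    # Input values: a deposit of 50.
    new_balance = account.deposit(50)

    # Expected result: the method reports the updated balance.
    assert new_balance == 150

    # Post-condition: the stored account state reflects the deposit.
    assert account.balance == 150
```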
• Test Process
|
The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting and test closure activities.
|
• Testware
|
Artifacts produced during the test process required to plan, design and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
|
• Test Plan
|
A document describing the scope, approach, resources and schedule of intended test activities. It defines amongst other test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
|
• Test Basis
|
All documents from which the requirements of a component or system can be inferred. The documentation on which test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
|
• Test Condition
|
A testable aspect of a component or system identified as a basis for testing.
|
• Test Data
|
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
|
• Test Suite
|
A set of several test cases for a component or system under test, where the post condition of one test is often used as the pre-condition for the next one.
|
• Test Coverage
|
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
|
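As a worked example of the percentage in the Test Coverage definition, the short sketch below computes statement coverage from assumed counts; the numbers are made up for illustration.

```python
total_statements = 120     # assumed number of coverage items in the code
executed_statements = 90   # assumed number exercised by the test suite

# Coverage = exercised coverage items / total coverage items, as a percentage.
coverage = 100 * executed_statements / total_statements
print(f"Statement coverage: {coverage:.1f}%")  # -> Statement coverage: 75.0%
```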
• Test Execution
|
The process of running a test on the component or system under test, producing actual results.
|
• Test Procedure Specification
|
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
|
• Test Log
|
A chronological record of relevant details about the execution of tests.
|
• Re-testing-Confirmation testing
|
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
|
• Regression Testing
|
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of changes made. It is performed when the software or its environment is changed.
|
• Exit criteria
|
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of the exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
|
• Negative testing
|
Testing a component or system in a way for which it was not intended to be used.
|
• Verification
|
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
|
• Validation
|
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
|
• V-Model
|
A sequential development lifecycle shaped like a V: the left-hand side descends from System Design through Subsystem Design to Component Design, and the right-hand side ascends through Component Verification and Subsystem Verification to System Verification, with each design level paired with its corresponding verification level.
|
• Iterative Development Model
|
A development lifecycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
|
• Incremental Development Model
|
A development lifecycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this lifecycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.
|
• Iterative Incremental Development Model
|
A development lifecycle that combines the two models above: functionality is delivered in prioritized increments, and each increment is developed through repeated iterations that refine the growing product.
|
• Test Level
|
A specific instantiation of a test process.
|
• Component Testing
|
The testing of individual software components.
|
• Driver
|
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
|
• Stub
|
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
|
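To illustrate the Driver and Stub definitions above, the sketch below tests a hypothetical PaymentService that normally calls an external payment gateway: a stub replaces the called gateway component with canned behaviour, and a small driver function takes care of controlling and calling the component under test. All names are assumptions made for this example.

```python
class PaymentService:
    """Component under test; depends on a gateway component that it calls."""
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount):
        return "OK" if self.gateway.charge(amount) else "DECLINED"


class GatewayStub:
    """Stub: special-purpose replacement for the called gateway component."""
    def charge(self, amount):
        return amount <= 100  # canned rule: approve small amounts only


def run_payment_tests():
    """Driver: controls and calls the component under test."""
    service = PaymentService(gateway=GatewayStub())
    assert service.pay(50) == "OK"
    assert service.pay(500) == "DECLINED"
    print("payment tests passed")


if __name__ == "__main__":
    run_payment_tests()
```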
• Bottom-up Integration Process
|
An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.
|
• Top-down Integration Process
|
An incremental approach to integration testing where the component at the top of the hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
|
• Integration Testing
|
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
|
• System Testing
|
The process of testing an integrated system to verify that it meets specified requirements.
|
• Acceptance Testing
|
Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
|
• Smoke/Sanity/confidence testing
|
A test suite that covers the main functionality of a component or system to determine whether it works properly before planned testing begins.
|
• Negative/invalid/dirty testing
|
Testing a component or system in a way for which it was not intended to be used.
|
• Alpha testing
|
Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
|
• Beta-Field testing
|
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
|
• Robustness Testing
|
Testing to determine the degree to which a component or system can operate as intended despite the presence of hardware or software faults.
|
• Test Environment
|
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
|
• Functional Requirement
|
A requirement that specifies a function that a component or system must perform.
|
• Non-Functional Requirement
|
A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
|
• Test Type
|
A group of test activities aimed at testing a component or system focused on a specific test objective.
|
• Black-box Testing
|
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
|
• Performance Testing
|
The process of testing to determine the performance of a software product.
|
• Load Testing
|
A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
|
• Stress Testing
|
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
|
• Structural Testing- White-box testing
|
Testing based on an analysis of the internal structure of the component or system.
|
• Static testing
|
Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis.
|
• Dynamic testing
|
Testing that involves the execution of the software of a component or system.
|
• Traceability
|
The ability to identify related items in documentation and software, such as requirements with associated tests.
|
• Black box techniques
|
Equivalence partitioning, Boundary value analysis, State transition testing, Decision table testing, Use case testing
|
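As a sketch of two of the black-box techniques listed above, the example below derives test values for a hypothetical rule "ages 18 to 65 inclusive are eligible": equivalence partitioning picks one representative per partition, and boundary value analysis adds the values at and just outside each boundary. The is_eligible function is assumed purely for illustration.

```python
def is_eligible(age):
    """Hypothetical specification: ages 18..65 inclusive are eligible."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition
# (below the range, inside the range, above the range).
partition_cases = [(10, False), (40, True), (70, False)]

# Boundary value analysis: values at and just outside each boundary.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

for age, expected in partition_cases + boundary_cases:
    assert is_eligible(age) == expected, f"unexpected result for age {age}"
print("all derived black-box cases passed")
```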
• White box techniques
|
Statement testing, Branch testing, Condition testing, Path testing
|
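As a sketch of the difference between statement testing and branch testing from the list above, consider the hypothetical function below: a single negative input already executes every statement, but branch testing additionally requires an input that takes the False outcome of the decision.

```python
def absolute(value):
    """Hypothetical code under test with one decision."""
    if value < 0:
        value = -value  # only reached on the True branch of the decision
    return value

# Statement testing: one negative input executes every statement.
assert absolute(-5) == 5

# Branch testing: the False outcome of the decision also needs a test,
# which statement coverage alone would not demand.
assert absolute(3) == 3
print("structure-based tests passed")
```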
• False-negative result
|
A test result which fails to identify the presence of a defect that is actually present in the test object.
|
• False-positive result
|
A test result in which a defect is reported although no such defect actually exists in the test object.
|
• Reliability
|
The degree to which a component or system performs specified functions under specified conditions for a specified period of time.
|
• Gray Box Testing
|
Testing based on partial knowledge of the product's internal code structure or programming logic.
|