Monday, June 28, 2010

Test Plan and Test Strategy

Test Plan: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.

Inputs for this process:
•Approved Test Strategy Document.
•Test tools, or automated test tools, if applicable.
•Previously developed scripts, if applicable.
•Test documentation problems uncovered as a result of testing.
•A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.
Outputs for this process:
•Approved documents of test scenarios, test cases, test conditions and test data.
•Reports of software design issues, given to software developers for correction.

Test Strategy
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:
•A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
•A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
•Testing methodology. This is based on known standards.
•Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
•Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
•An approved and signed-off test strategy document and test plan, including test cases.
•Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Functional and Non-Functional Testing

Functional Testing: Functional testing verifies a particular part of the application, or the complete application, against its functional requirements, without being concerned with how the system delivers the result.

There are various types of functional testing, such as:
1.Unit testing refers to tests that verify the functionality of a specific section of code (a minimal example follows this list).

2.Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design.

3.System Testing tests a completely integrated system to verify that it meets its requirements.

4.System Integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
 
5.Regression testing focuses on finding defects after a major code change has occurred.

6.Acceptance Testing: formal testing conducted to determine whether the system satisfies its acceptance criteria and to enable the user or customer to decide whether or not to accept it.

7. Alpha Testing: simulated or actual operational testing performed at the developer's site, typically by potential users or an independent test team, before the product is released to external customers.

8. Beta Testing: operational testing by potential and/or existing users or customers at external sites not otherwise involved with the developers, often used as a form of external acceptance testing for off-the-shelf software.
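To make the unit-testing item above concrete, here is a minimal sketch using Python's standard unittest module; the apply_discount function and its rules are invented purely for illustration.

import unittest


def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()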

Non-functional testing: It verifies characteristics of the software that are not tied to a specific function, such as performance, usability, reliability and security, i.e. how well the system works rather than what it does. It also covers how the application under test behaves when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-handling routines.

There are various types of non-functional testing, such as:
1.Performance Testing: checks how the system behaves under a particular workload, e.g. response time and throughput (see the sketch after this list).

2.Stability testing: checks whether the software can run continuously over an acceptably long period without failing or degrading.

3.Usability Testing: checks how easily users can learn and operate the application.

4.Security Testing: checks that the application protects programs and data against unauthorized access, whether accidental or deliberate.

5.Destructive testing: deliberately attempts to make the application fail, in order to check its robustness and error handling.

6.Internationalization and localization Testing: checks that the application can be adapted to different languages and regions and behaves correctly for each locale.
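As a minimal illustration of the performance-testing idea above, the sketch below measures the response time of a call and checks it against a budget; the search_catalogue function, its simulated delay and the 0.5 second threshold are all invented for the example.

import time


def search_catalogue(term):
    # Stand-in for the real operation whose response time we care about.
    time.sleep(0.1)
    return ["result for " + term]


def test_search_meets_response_time_budget():
    started = time.perf_counter()
    search_catalogue("shoes")
    elapsed = time.perf_counter() - started
    # Hypothetical non-functional requirement: respond within 0.5 seconds.
    assert elapsed < 0.5, "search took %.3f s, budget is 0.5 s" % elapsed


test_search_meets_response_time_budget()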

Saturday, June 19, 2010

A Few Important Glossary Terms

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.

Audit: An independent evaluation of software products or processes to ascertain compliance to  standards,  guidelines,  specifications,  and/or  procedures  based  on  objective  criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured.


Audit trail: A path by which the original input to a process (e.g. data) can be traced back through the  process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.

Bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the  user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
 
Blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up testing: An incremental approach to integration testing where the lowest level components  are  tested  first,  and  then  used  to  facilitate  the  testing  of  higher  level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

Boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values. See also boundary value.


Boundary value coverage: The percentage of boundary values that have been exercised by a test suite.

Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance.

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an  effective product development and maintenance process. The Capability Maturity Model Integration covers best-practices for planning, engineering and managing product development and maintenance

Concurrency testing: Testing to determine how the occurrence of two or more activities within  the  same  interval  of  time,  achieved  either  by  interleaving  the  activities  or  by simultaneous execution, is handled by the component or system.

Condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

Condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
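A small sketch of what this definition implies, using an invented two-condition decision: the two test cases below make each single condition evaluate both True and False (100% condition coverage) even though the decision itself only ever evaluates to False.

def ready_to_ship(paid, in_stock):
    # Decision with two conditions.
    if paid and in_stock:
        return "ship"
    return "hold"


# Condition coverage: each condition is True in one test and False in the other.
assert ready_to_ship(True, False) == "hold"   # paid=True,  in_stock=False
assert ready_to_ship(False, True) == "hold"   # paid=False, in_stock=True
# Note: the decision (paid and in_stock) is False in both cases, so these two
# tests do not achieve 100% decision coverage.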

Condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

Condition determination testing: A white box test design technique in which test cases are designed  to  execute  single  condition  outcomes  that  independently  affect  a  decision outcome.

Condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

Condition outcome: The evaluation of a condition to True or False.

Configuration management: A discipline applying technical and administrative direction and surveillance to:  identify and document the functional and physical characteristics of a configuration  item,  control  changes  to  those  characteristics,  record  and  report  change processing and implementation status, and verify compliance with specified requirements.

Configuration management tool: A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items

Conversion testing: Testing of software used to convert data from existing systems for use in replacement systems

Cost of quality: The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs

COTS: Acronym for Commercial Off-The-Shelf software

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

Coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed


Cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine) [After McCabe]
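A worked example of the formula, using an invented two-branch function: its control flow graph has 4 nodes, 4 edges and 1 connected part, so the cyclomatic complexity is 4 - 4 + 2*1 = 2, matching the two independent paths through the code.

def sign(x):
    # Control flow graph: decision node, 'then' node, 'else' node, exit node.
    if x >= 0:          # decision
        result = 1      # then-branch
    else:
        result = -1     # else-branch
    return result       # exit

# L (edges) = 4, N (nodes) = 4, P (connected parts) = 1
# Cyclomatic complexity = L - N + 2P = 4 - 4 + 2 = 2,
# which equals the number of independent paths: x >= 0 and x < 0.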


Data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to  support the application of test execution tools such as capture/playback tools.
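A minimal sketch of the idea in Python: the test input and expected results live in a table (here a list of tuples standing in for a spreadsheet), and one control loop executes every row; the login function and its rules are invented for the example.

def login_allowed(username, password):
    # Invented component under test: very simple credential check.
    return username == "admin" and password == "s3cret"


# Test input and expected results kept as data, not as separate scripts.
test_table = [
    ("admin", "s3cret", True),
    ("admin", "wrong",  False),
    ("guest", "s3cret", False),
    ("",      "",       False),
]

for username, password, expected in test_table:
    actual = login_allowed(username, password)
    assert actual == expected, "row (%r, %r): got %r" % (username, password, actual)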

Data flow testing: A white box test design technique in which test cases are designed to execute definition and use pairs of variables

Decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.

Decision  condition  coverage:  The  percentage  of  all  condition  outcomes  and  decision outcomes that  have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

Decision condition  testing:  A  white  box  test  design  technique  in  which  test  cases  are designed to execute condition outcomes and decision outcomes.

Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100%  decision coverage implies both 100% branch coverage and 100% statement coverage.

Decision outcome: The result of a decision (which therefore determines the branches to be taken).

Decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

Decision table testing: A black box test design technique in which test cases are designed to execute the  combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table.
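A small sketch of executing the combinations from a decision table: each row lists the causes (conditions) and the expected effect (action); the loan-approval rules here are invented purely to illustrate the technique.

def loan_decision(good_credit, has_income):
    # Invented business rule under test.
    if good_credit and has_income:
        return "approve"
    if good_credit or has_income:
        return "refer"
    return "reject"


# Decision table: every combination of causes with its expected effect.
decision_table = [
    # good_credit, has_income, expected action
    (True,  True,  "approve"),
    (True,  False, "refer"),
    (False, True,  "refer"),
    (False, False, "reject"),
]

for good_credit, has_income, expected in decision_table:
    assert loan_decision(good_credit, has_income) == expected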

Decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.

Defect: A flaw in a component or system that can cause the component or system to fail to perform its  required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect based test design technique: A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy.

Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of- code, number of classes or function points).

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
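A quick arithmetic sketch of defect density and DDP; all numbers are invented for illustration.

# Defect density: defects found divided by the size of the component.
defects_found = 30
size_kloc = 15.0                      # thousand lines of code
defect_density = defects_found / size_kloc
print(defect_density)                 # 2.0 defects per KLOC

# Defect Detection Percentage for a test phase (e.g. system test):
found_in_system_test = 40
found_afterwards = 10                 # e.g. found in acceptance test or live use
ddp = 100.0 * found_in_system_test / (found_in_system_test + found_afterwards)
print(ddp)                            # 80.0 %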

Defect management: The process of recognizing, investigating, taking action and disposing of defects. It  involves recording defects, classifying them and identifying the impact. [After IEEE 1044]


Defect management tool: A tool that facilitates the recording and status tracking of defects and  changes.  They  often  have  workflow-oriented  facilities  to  track  and  control  the allocation, correction and  re-testing of defects and provide reporting facilities. See also incident management tool.

Defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]

Defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

Defect taxonomy: A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects


Desk checking: Testing of software or specification by manual simulation of its execution.

Development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]

Domain: The set from which valid input and/or output values can be selected.

Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]

Dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]


Elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

Entry point: The first executable statement within a component

Equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

Equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

Equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.

Error: A human action that produces an incorrect result. [After IEEE 610]

Error guessing:  A  test  design  technique  where  the  experience  of  the  tester  is  used  to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a  process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]


Fail: A test is deemed to fail if its actual result does not match its expected result.

Failure: Deviation of the component or system from its expected delivery, service or result. [After Fenton]

Failure mode: The physical or functional manifestation of a failure. For example, a system in failure  mode  may  be  characterized  by  slow  operation,  incorrect  outputs,  or  complete termination of execution. [IEEE 610]

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and  analysis  of  identifying  possible  modes  of  failure  and  attempting  to  prevent  their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).

Failure Mode, Effect and Criticality Analysis (FMECA): An extension of FMEA, as in addition to the  basic FMEA, it includes a criticality analysis, which is used to chart the probability  of  failure  modes  against  the  severity  of  their  consequences.  The  result highlights failure modes with relatively high  probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value. See also Failure Mode and Effect Analysis (FMEA).

Failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g.  failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

False-fail result: A test result in which a defect is reported although no such defect actually exists in the test object.


Fault seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]

Fault seeding tool: A tool for seeding (i.e. intentionally inserting) faults in a component or system.

Fault tolerance: The capability of the software product to maintain a specified level of performance  in  cases  of  software  faults  (defects)  or  of  infringement  of  its  specified interface. [ISO 9126]

Fault Tree Analysis (FTA): A technique used to analyze the causes of faults (defects). The technique  visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to disclose

Frozen test basis: A test basis document that can only be amended by a formal change control process.

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an  information   system.  The  measurement  is  independent  of  the  technology.  This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

Functional integration: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing.

Functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]

Functional test design technique: Procedure to derive and/or select test cases based on an analysis  of  the  specification  of  the  functionality  of  a  component  or  system  without reference to its internal structure.

Functional testing: Testing based on an analysis of the specification of the functionality of a component or system.

Horizontal traceability: The tracing of requirements for a test level through the layers of test documentation  (e.g. test plan, test design specification, test case specification and test procedure specification or test script).


Impact analysis: The assessment of change to the layers of development documentation, test documentation  and  components,  in  order  to  implement  a  given  change  to  specified requirements.

Incident: Any event occurring that requires investigation. [After IEEE 1008]

Incident logging: Recording the details of any incident that occurred, e.g. during testing.

Incident management: The process of recognizing, investigating, taking action and disposing of incidents. It  involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]


Incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

Independence of testing: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

Informal review: A review not based on a formal (documented) procedure.

Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g.  violations of development standards and non-conformance to higher level documentation.  The  most  formal  review  technique  and  therefore  always  based  on  a documented procedure. [After IEEE 610, IEEE 1028]

Iterative development model: A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.

Keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
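A minimal sketch of the keyword-driven idea: the test is expressed as data rows containing a keyword plus arguments, and a small interpreter maps each keyword to a supporting function; the keywords, the tiny in-memory "application" and its behaviour are all invented for the example.

# Invented application state used by the supporting scripts.
app = {"logged_in": False, "basket": []}

# Supporting scripts: one small function per keyword.
def do_login(user):
    app["logged_in"] = (user == "admin")

def do_add_item(item):
    app["basket"].append(item)

def check_basket_size(expected):
    assert len(app["basket"]) == int(expected), app["basket"]

keywords = {
    "login": do_login,
    "add_item": do_add_item,
    "check_basket_size": check_basket_size,
}

# Test data: keyword plus argument, as it might be read from a data file.
test_rows = [
    ("login", "admin"),
    ("add_item", "book"),
    ("add_item", "pen"),
    ("check_basket_size", "2"),
]

# Control script: interprets each row by dispatching to the matching function.
for keyword, argument in test_rows:
    keywords[keyword](argument)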

LCSAJ: A  Linear  Code  Sequence  And  Jump,  consisting  of  the  following  three  items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable  statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.

LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.

Management review: A systematic evaluation of software acquisition, supply, development, operation,  or  maintenance  process,  performed  by  or  on  behalf  of  management  that monitors progress, determines the status of plans and schedules, confirms requirements and their  system  allocation,  or  evaluates  the  effectiveness  of  management  approaches  to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

Mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test  suite can discriminate the program from slight variants (mutants) of the program.

Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

Path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

Peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Performance: The degree to which a system or component accomplishes its designated functions  within  given constraints regarding processing time and throughput rate. [After IEEE 610]

Priority: The level of (business) importance assigned to an item, e.g. defect.

Probe effect: The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example performance may be slightly worse when performance testing tools are being used.

Project: A project is a unique set of coordinated and controlled activities with start and finish dates  undertaken  to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]

Project risk: A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc.

Pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.

Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

Quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]

Quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of  the  quality  policy  and  quality  objectives,  quality  planning,  quality  control,  quality assurance and quality improvement. [ISO 9000]

Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is  performed when the software or its environment is changed.

Release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]

Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that  must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]

Requirements-based testing: An approach to testing in which test cases are designed based on test  objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Requirements  management  tool:  A  tool  that  supports  the  recording  of  requirements, requirements   attributes   (e.g.  priority,   knowledge   responsible)   and   annotation,   and facilitates    traceability    through    layers    of    requirements    and    requirements    change management.  Some  requirements  management  tools  also  provide  facilities  for  static analysis, such as consistency checking and violations to pre-defined requirements rules.

Requirements phase:  The  period  of  time  in  the  software  life  cycle  during  which  the requirements for a software product are defined and documented. [IEEE 610]

Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Retrospective meeting: A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

Reviewer: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

Risk  analysis:  The  process  of  assessing  identified  risks  to  estimate  their  impact  and probability of occurrence (likelihood).

Risk-based testing: An approach to testing to reduce the level of product risks and inform stakeholders on  their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.

Risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

Risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

Risk level: The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g. high, medium, low) or quantitatively.

Risk  management:  Systematic  application  of  procedures  and  practices  to  the  tasks  of identifying, analyzing, prioritizing, and controlling risk

Risk type: A specific category of risk related to the type of testing that can mitigate (control) that  category.  For  example  the  risk  of  user-interactions  being  misunderstood  can  be mitigated by usability testing.

Root cause: A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]

Root cause analysis: An analysis technique aimed at identifying the root causes of defects. By directing  corrective measures at root causes, it is hoped that the likelihood of defect recurrence will be minimized.

Safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO9126]

Scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

Scalability testing: Testing to determine the scalability of the software product.

Scribe: The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

Security: Attributes of software products that bear on its ability to prevent unauthorized access,  whether  accidental  or  deliberate,  to  programs  and  data.  [ISO  9126] 

Severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]

Site acceptance testing: Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.

Software life cycle: The period of time that begins when a software product is conceived and ends when the  software is no longer available for use. The software life cycle typically includes a concept phase,  requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.

Software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

Specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]

State diagram: A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

State table: A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

State transition: A transition between two states of a component or system.

State transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.

Statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.

Statement coverage: The percentage of executable statements that have been exercised by a test suite.

Statement testing: A white box test design technique in which test cases are designed to execute statements.

Static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610]

Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
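A small sketch of a stub in this sense: the component under test (an invented order handler) calls a payment gateway that is not available in the test environment, so a skeletal replacement returns a canned answer.

class PaymentGatewayStub:
    """Skeletal replacement for the real payment gateway called by the code under test."""
    def charge(self, amount):
        # Canned response; the real gateway would contact an external service.
        return {"status": "approved", "amount": amount}


def place_order(amount, gateway):
    # Component under test: depends on a called component (the gateway).
    response = gateway.charge(amount)
    return "confirmed" if response["status"] == "approved" else "failed"


# The stub lets us test place_order without the real gateway being present.
assert place_order(49.99, PaymentGatewayStub()) == "confirmed"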

Suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]

System  of  systems:  Multiple  heterogeneous,  distributed  systems  that  are  embedded  in networks at multiple levels and in multiple domains interconnected addressing large-scale inter-disciplinary common problems and purposes.

System  integration  testing:  Testing  the  integration  of  systems  and  packages;  testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028]

Test: A set of one or more test cases. [IEEE 829]

Test approach: The implementation of the test strategy for a specific project. It typically includes the  decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

Test  automation:  The  use  of  software  to  perform  or  support  test  activities,  e.g.  test management, test design, test execution and results checking.

Test basis: All documents from which the requirements of a component or system can be inferred. The  documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]

Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the  Capability Maturity Model (CMM), that describes the key elements of an effective test process.

Test Maturity Model Integrated (TMMi): A five level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), that describes the key elements of an effective test process.

Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

Test planning: The activity of establishing or updating a test plan.

Test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

Test Point Analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]

Test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

Test process:  The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

Test progress report: A document summarizing testing activities and results, produced at regular intervals,  to report progress of testing activities against a baseline (such as the original  test  plan)  and  to  communicate  risks  and  alternatives  requiring  a  decision  to management.

Test reproducibility: An attribute of a test indicating whether the same results are produced each time the test is executed.

Test schedule: A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.

Test script: Commonly used to refer to a test procedure specification, especially an automated one.

Test session: An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress

Test  specification:  A  document  that  consists  of  a  test  design  specification,  test  case specification and/or test procedure specification.

Test strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. See also integration testing.

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

V-model: A  framework  to  describe  the  software  development  life  cycle  activities  from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]

Variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.

Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]

Vertical  traceability:  The  tracing  of  requirements  through  the  layers  of  development documentation to components.

Volume testing: Testing where the system is subjected to large volumes of data.

Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]

White-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

White-box testing: Testing based on an analysis of the internal structure of the component or system.

Wide Band  Delphi:  An  expert  based  test  estimation  technique  that  aims  at  making  an accurate estimation using the collective wisdom of the team members.

Wild pointer: A pointer that references a location that is out of scope for that pointer or that does not exist.

Friday, June 18, 2010

Data Defect

The term data defect is generally used when wrong data ends up being written somewhere, caused by a network problem or by an error in a data mining algorithm. A network problem here means an issue in data transmission over the network. In other words, if you are using a web browser (which acts as a client) and request something by typing the intended URL of a particular web server, and a transmission error or an application design error causes the wrong request to reach the server, the resulting defect in the data is known as a data defect.

The difference between the actual data and the expected data is called a data defect.

In terms of testing, consider a scenario where your company makes a health care application for different mobile platforms such as iPhone, Palm, PPC, BlackBerry and Android, and you provide updated data to your users on a weekly or fortnightly basis. The data is generated on the server, and you sync the application on all the mobile platforms to verify that the updated data arrives correctly on every device. If three of the five platforms receive the correct data but on the remaining two the data is incomplete (some content is missing or words are truncated), that is a data defect: the actual data on the device does not match the expected data.
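A small sketch of the kind of check described above: the content generated on the server is compared with what each platform actually received after sync; the platform names and strings are invented for illustration.

expected = "Take the tablet twice daily with water."

synced_content = {
    "iPhone":     "Take the tablet twice daily with water.",
    "Android":    "Take the tablet twice daily with water.",
    "BlackBerry": "Take the tablet twice daily with water.",
    "Palm":       "Take the tablet twice dai",          # truncated content
    "PPC":        "",                                    # content missing
}

# Any mismatch between expected and actual data is reported as a data defect.
for platform, actual in synced_content.items():
    if actual != expected:
        print("Data defect on %s: expected %r, got %r" % (platform, expected, actual))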

Wednesday, June 16, 2010

Generation of Wireless Telephone Technology:

First-generation wireless telephone technology or 1G:
It uses analog telecommunications standards. Digital signaling is used to connect the radio towers (which listen to the handsets) to the rest of the telephone system, but the voice itself during a call is only modulated to a higher frequency, typically 150 MHz and up, in 1G.

The main difference between two succeeding mobile telephone systems, 1G and 2G, is that the radio signals that 1G networks use are analog, while 2G networks are digital.

Second-generation wireless telephone technology or 2G: 2G introduced data services for mobile, starting with SMS text messages.
2G technologies can be divided into TDMA-based and CDMA-based standards depending on the type of multiplexing used.

2.5G (GPRS):
GPRS could provide data rates from 56 kbit/s up to 115 kbit/s. It can be used for services such as Wireless Application Protocol (WAP) access, Multimedia Messaging Service (MMS), and for Internet communication services such as email and World Wide Web access. GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user actually is utilizing the capacity or is in an idle state.

2.75G (EDGE):
GPRS networks evolved to EDGE networks with the introduction of 8PSK encoding. Enhanced Data rates for GSM Evolution (EDGE), Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC) is a backward-compatible digital mobile phone technology that allows improved data transmission rates, as an extension on top of standard GSM.

3rd Generation wireless telephone technology or 3G:

The main technological difference that distinguishes 3G technology from 2G technology is the use of packet-switching rather than circuit-switching for data transmission.
3G allows simultaneous use of speech and data services and higher data rates. Today's 3G systems can in practice offer up to 14.0 Mbit/s on the downlink and 5.8 Mbit/s on the uplink.
With the help of 3G, a mobile user can now access applications such as Mobile TV, video on demand, video conferencing, tele-medicine and location-based services.

Fourth generation of cellular wireless standards or 4G:
A 4G system is expected to provide a comprehensive and secure all-IP based solution where facilities such as IP telephony, ultra-broadband Internet access, gaming services and streamed multimedia may be provided to users.

Tuesday, June 15, 2010

Interview Questions for Mobile Application Testing

Hi All,

These are a few questions an interviewer may ask when you interview for a Mobile Application Tester position:

1.What is your approach to testing, or where do you start your testing?
2.What is the architecture of the application you test?
3.What stages does an application go through from scratch to production (i.e. the stages of application development)?
4.What is the GSM and GPRS architecture?
5.What is the MMS architecture?
6.What do you know about the latest devices available for your technology, e.g. the latest iPhone from Apple, the latest BlackBerry, Android and Palm/PPC devices?
7.What is the difference between the current device and the latest one, or what new features does the latest device carry?
8.What technologies are used on the front end and back end of your application?
9.At what frequencies do Wi-Fi and GPRS operate, and when both are working simultaneously, what is the data transfer rate?
10.Which processor is in the device you used for application testing?

Monday, June 14, 2010

ISTQB Advance level sample paper 1

Examination Question 1

You have recently been employed by a software development company as a Test Manager. Your first active role within the company is to manage a small test team during the development of a new software product. You have been made aware of the negative feedback provided by customers of similar developed products, and so your aim is to improve this situation.

1)You have been brought on-board this project at an early stage, even before any requirements have been formally agreed. Explain the benefits of this from a test perspective.

2)The Project Manager has asked you about Test Strategies, so provide him with a written description of exactly what a Test Strategy is. Included in your description should be a brief summary of test phases that your team may provide during this project development.

3)As a Test Manager, a common requirement of you is to produce various other types of Test Management documentation. Give a brief overview of the following typical examples:

a.    Test Policy
b.    Project Test Plan
c.    Phases Test Plan

4)List the other types of processes that can influence, or be influenced by the ‘Test Process’.


Examination Question 2

A software company is developing an update to its existing product. The update contains some fixes to existing faults. The end customer, who already has the existing product installed at their premises, has expressed concern over the effect that an update might have on their system.


1)The customer has asked you (the Test Manager) to provide them with some confidence that the update will not adversely affect their current system.

Specifically, the customer would like to know in detail which method you will use to ensure that any previously existing functionality will not be affected by the update.

Also, an explanation of the method you will use to ensure that the faulty functionality has now been fixed.


2)The Project Manager would like to know details on how you will log any problems you find with the software whilst testing. He specifically requires the following information:
An example of a typical incident report. This should include headings and explanations of the type of information to include.

An explanation of the IEEE Std. 1044-1993 standard including its steps.

Examination Question 3

As a founding member of a start-up company's software development department, you have the responsibility to employ individuals to make up a dedicated software testing team.

1)Provide examples of the types of skills that you would expect from test team members.

2)Briefly describe what is meant by ‘Test Team’ dynamics. Also list the common test team roles including a brief description of each.

3)Provide a description of the relationship between Developers and Testers. Include common misunderstandings and also your suggestions to avoid them.

Examination Question 4

You have recently been employed by a software development company with a view to improve certain aspects of their testing process. The software they develop is situated on large network backbone routers (i.e. embedded). This causes issues with developers with regards to testing, as they rarely have the opportunity to physically test it for real, or even in a simulated environment. This results in the software being handed-over to the systems testers with a lack of confidence in the software that they have developed.

1)It has been suggested that ‘reviews’ could be useful in this type of situation. Provide a summary of what reviews actually are.

2)Provide a description of each type of known review, and any relevant benefits or hindrances to the situation detailed within the given scenario.

3)Describe a basic structure for a typical review, including who should attend and their roles within the review process

4)Provide some suggestions/guidelines to ensure that all future reviews are successful.


Permission levels in Testlinks

TestLink is built with six different permission levels. These permission levels are as follows:
Guest: A guest only has permission to view test cases and project metrics.
Tester: A tester outside of the company who only has permission to run the tests allotted to them. (initially tester)
Senior Tester: A senior tester can view, create, edit, and delete test cases as well as execute them, but lacks the permissions to manage test plans, manage products, create milestones, or assign rights. (initially tester)
Leader: A lead has all of the same permissions as a senior tester but also gains the ability to manage test plans, assign rights, create milestones, and manage keywords.
Admin: An admin has all of the same permissions as a lead but gains the ability to manage products.
Test designer: A user who can fully work with the Test Specification and Requirements.


Sunday, June 13, 2010

Principles of Testing


Reliability:
The probability that software will not cause the failure of a system for a specified time under specified conditions
Software with faults may be reliable, if the faults are in code that is rarely used
Reliability is in the view of the user
Reliability measures must take account of how the software will be used by its users.

Exhaustive Testing:
Exhaustive testing of all program paths is usually impossible
Exhaustive testing of all inputs is also impossible
If we could do exhaustive testing, most tests would be duplicates that tell us nothing
We need to select tests that are effective at finding faults


Effectiveness and Efficiency:
A test that exercises the software in ways that we know will work proves nothing
Effective tests: tests that are designed to catch specific faults
Efficient tests: tests that have the best chance of detecting faults.

How much testing is enough?
Objective coverage measures can be used:
standards may impose a level of testing
test design techniques give an objective target
But all too often, time is the limiting factor so we may have to rely on a consensus view to ensure we do at least the most important tests.

Test Exit Criteria:
Measurable criteria for test completion, for example
all tests run successfully
all faults found are fixed and re-tested
coverage target (set and) met
time (or cost) limit exceeded
Coverage items defined in terms of
requirements, conditions, business transactions
code statements, branches.

Psychology of Testing
A successful test is one that locates a fault
If finding faults is the tester’s aim:
positive motivation - finding faults is good
constructive - perceived as improving quality
testers create 'tough' tests, not easy ones
If we are effective at finding faults, and then can't find any, we can be confident the system works.

Re-testing:
If we run a test that detects a fault we can get the fault corrected
We then repeat the test to ensure the fault has been properly fixed
This is called re-testing

Regression:
When a software fault is fixed, we need to check that, in changing the faulty code, the developer has not introduced new faults.
There is a 50% chance of introducing regression faults
Regression tests tell us whether new faults have been introduced
i.e. whether the system still works after a change to the code or environment has been made.
When environments are changed, we might also regression test.

Regression Testing:
"Testing to ensure a change has not caused faults in unchanged parts of the system“
Not necessarily a separate stage
Regression testing most important during maintenance activities
Regression tests are usually the first to be considered for automation.

Prioritization of Testing:
First principle: to make sure the most important tests are included in test plans
Second principle: to make sure the most important tests are executed
Most important tests are those that:
address most serious risks
cover critical features
have the best chance of detecting faults.
Criteria for prioritizing tests:
critical
complex
error-prone.




Saturday, June 12, 2010

Boundary Value Analysis and Equivalence Partitioning

Equivalence Partitioning

Equivalence Partitioning is a Black Box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

An ideal test case single-handedly uncovers a class of errors (e.g. incorrect processing of all character data) that might otherwise require many cases to be executed before the general error is observed.

Equivalence Partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed.
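As a minimal sketch of the idea, assume an input field that accepts ages from 18 to 60: the input domain splits into three partitions (below the range, inside it, above it), and one representative value is picked from each. The age rule is invented purely for illustration.

def is_valid_age(age):
    # Invented rule: valid applicants are 18 to 60 inclusive.
    return 18 <= age <= 60


# One representative test case per equivalence partition.
partitions = {
    "below the valid range": (10, False),
    "inside the valid range": (35, True),
    "above the valid range": (70, False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name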

Boundary Value Analysis

For reasons that are not completely clear, a greater number of errors tend to occur at the boundaries of the input domain than in the "center". It is for this reason that boundary value analysis (BVA) has been developed as a testing technique.

BVA leads to a selection of test cases that exercise bounding values, and it complements EP. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. And rather than focusing only on input conditions, BVA derives test cases from the output domain as well.
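Continuing the same invented 18-to-60 example used for equivalence partitioning above, a BVA sketch selects test cases at the edges of each partition rather than arbitrary members of it.

def is_valid_age(age):
    # Same invented rule as in the equivalence-partitioning sketch above.
    return 18 <= age <= 60


# Boundary values: the edges of the valid range and the values just outside them.
boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, value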


PURPOSE OF TESTING

Beizer’s Attitudinal Progression model:
 Phase 0 - Testing is debugging
 Phase 1 - Prove Software works
 Phase 2 - Prove Software does not work
 Phase 3 - Reduce risk of failure
 Phase 4 - A state of mind

Phase 0 :
-Purpose of Testing is debugging
-No QA, No quality
-Dominant thinking in early 70s
-Still a problem in many organizations


Phase 1:
- Prove that software works
- Dominant thinking in late 70s
- Impossible to prove
- Corrupted / Illogical process
- Best achieved by not testing at all

Phase 2:
- Prove that software does not work
- Negative role of tester
- Book keeper / Auditor
- Never ending cycle of tests ?
- When do we stop ?

Phase 3:
-Accepting the principles of statistical Quality Control
-Perception of risk reduction
-Adequate confidence in the product

Phase  4:
-A state of mind
-Software does not need much testing now
-Quality Assurance in large measure
-Impact on productivity is high

All above phases are cumulative


Types of Defects

  • Inadequate or incorrect component description (Documentation)
  • Errors in grammar, spelling and specification language used (Syntax)
  • An error in the specification of the functions of a component (Functionality)
  • An error in the communication between software components (Interface)
  • An error in the internal data specification (Data)
  • An error in the procedural logic (Logic)
  • An error in communication with external data (Input/Output)
  • Not meeting the required efficiency of execution (Performance)
  • A deviation from procedural or representational standards (Standards)

Testlink- Test case Management Tool

TestLink is a web-based test management tool. It is a free, open-source project released under the GPL license.

The tool includes reporting and requirements tracking and cooperates with well-known bug tracking systems.

You can find various details about Testlink on http://testlinktool.yolasite.com/



Friday, June 11, 2010

Android Devices in India

These are the Android devices available in India:

HTC Magic
HTC Hero    
HTC Tattoo    
Motorola Milestone    
Sony Ericsson Xperia X10    
LG GW620    
Acer A1 Liquid    
Samsung Galaxy i7500    
Samsung Galaxy Spica i5700
HTC Legend

Android Platform

Android is a mobile phone operating system developed by Google. Android is unique because Google is actively developing the platform but giving it away for free to hardware manufacturers and phone carriers who want to use Android on their devices.

Android is a mobile operating system using the Linux kernel. It was initially developed by Android Inc., a firm later purchased by Google, and subsequently by the Open Handset Alliance.
It allows developers to write managed code in the Java language, controlling the device via Google-developed Java libraries.
The unveiling of the Android distribution on 5 November 2007 was announced together with the founding of the Open Handset Alliance.

Available Versions:
1.5 (Cupcake)
On 30 April 2009, the official 1.5 (Cupcake) update for Android was released
1.6 (Donut)
On 15 September 2009, the 1.6 (Donut) SDK was released
2.0/2.1 (Eclair)
On 26 October 2009 the 2.0 (Eclair) SDK was released


Features:
Handset layouts: The platform is adaptable to larger, VGA displays and includes 2D and 3D graphics libraries.
Storage: The SQLite database engine is used for data storage purposes.
Connectivity: Android supports connectivity technologies including GSM/EDGE, CDMA, EV-DO, UMTS, Bluetooth, and Wi-Fi.
Messaging: SMS and MMS are available forms of messaging, including threaded text messaging.
Web browser: The web browser available in Android is based on the open-source WebKit application framework.
Java support: Software written in Java can be compiled and executed in the Dalvik virtual machine.
Media support: Android supports the following audio/video/still media formats: H.263, H.264 (in 3GP or MP4 container), MPEG-4 SP, AMR, AMR-WB (in 3GP container), AAC, HE-AAC (in MP4 or 3GP container), MP3, MIDI, OGG Vorbis, WAV, JPEG, PNG, GIF, BMP.
Development environment: Includes a device emulator, tools for debugging, memory and performance profiling, and a plugin for the Eclipse IDE.
Market: Android Market is a catalog of applications that can be downloaded and installed to target hardware over the air, without the use of a PC.
Multi-touch: Android has native support for multi-touch, which is available in newer handsets such as the HTC Hero.

Architecture:


Applications: the built-in applications include an email client, an SMS program, calendar, maps, browser, contacts, and others. All applications are written in the Java programming language.
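For illustration only (this example is not part of the original post and the class name is hypothetical), a minimal application component in Java looks like this:

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// A minimal, hypothetical Android application written in Java.
// Activities such as this one make up the "Applications" layer described above.
public class HelloActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView label = new TextView(this);
        label.setText("Hello, Android!");
        setContentView(label); // show a view built in code (no XML layout needed)
    }
}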

Application Framework: developers have full access to the same framework APIs used by the core applications. The architecture is designed to simplify the reuse of components: any application can publish its capabilities and any other application can then make use of those capabilities (subject to security rules enforced by the framework). This same mechanism allows components to be replaced by the user.

Libraries: Android includes a set of C/C++ libraries used by various components of the Android system. These capabilities are exposed to developers through the Android application framework; some of them are the System C library (a C standard library implementation), media libraries, graphics and 3D libraries, SQLite, and others.
Android Runtime: Android includes a set of core libraries that provide most of the functionality available in the core libraries of the Java language. Every Android application runs in its own process, with its own instance of the Dalvik virtual machine. Dalvik has been written so that a device can run multiple VMs efficiently. Dalvik executes files in the Dalvik Executable (.dex) format, which is optimized for a minimal memory footprint. The virtual machine is register-based and runs classes compiled by the Java compiler that have been transformed into the .dex format by the included “dx” tool.

Kernel – Linux: Android depends on Linux version 2.6 for core system services such as security, memory management, process management, the network stack and the driver model. The kernel also acts as an abstraction layer between the hardware and the rest of the software stack.

Palm OS

Overview of Palm Operating systems

1) Garnet OS expands the solid foundation of Palm OS 5 by incorporating standard support for a broad range of screen resolutions and expanded support for wireless connections including Bluetooth®. It also includes enhanced multimedia capabilities, a suite of robust security options and support for a broad set of languages.

2) Palm® webOS™ is Palm's next-generation operating system. Designed around an incredibly fast and beautiful user experience and optimized for the multi-tasking user, webOS integrates the power of a window-based operating system with the simplicity of a browser. Applications are built using standard web technologies and languages, but have access to device-based services and data.


The Pre is the first Palm device to use webOS, the Linux-based platform that replaces Palm's previous Palm OS. Developed from scratch for use in mobile phones—whereas Palm OS was originally designed for PDAs—webOS is capable of supporting built-in first party applications, as well as third party applications.


Features of Garnet OS:
1) Every Garnet OS handheld comes with the famously easy-to-use Palm software suite, including calendar, address book, alarm clock, memo pad, calculator, and email.
2) If you want to communicate with others, or pull information from the web, there is a wide array of options for Garnet OS. Wired modems are available for many models. Add-ons support the 802.11b and Bluetooth wireless network standards. Some Garnet OS systems include mobile phones supporting the CDMA and GSM networks. Or you can use infrared or Bluetooth to let your Garnet OS handheld communicate through a mobile phone.
3) Garnet OS handhelds are designed to communicate with a PC. Software programs included with many Garnet OS systems work with Microsoft® Word™, Excel™ and PowerPoint™ files. You can also exchange information with Microsoft Outlook™ and other popular PC-based information management programs. The Garnet OS HotSync software also lets you make backup copies of your handheld information. With the press of a single button, your handheld's information is automatically backed up on the PC, so it can be restored if the handheld is ever lost or broken.
4) Beaming


Features of webOS:
1) Over-the-air software updates
Keep up with the latest software for your Palm webOS phone. Receive a notification when there's an update to install, or check to see if one's available, all without connecting to your computer.
2) webOS includes a feature called Synergy that integrates information from many sources. webOS allows a user to sign in to accounts on Gmail, Yahoo!, Facebook, LinkedIn, and Microsoft Outlook (via Exchange ActiveSync). Contacts from all sources are then integrated into a single list. For example, if you have info about one person in many different places, your phone shows it all in a single entry for that person. All of your info remains in its original location online.
3) The device makes use of the cloud-based services model, but uses no desktop sync client (in the style of Palm's HotSync synchronization method).


Palm has announced that the Pre will be capable of "seamlessly" synchronizing with Apple's iTunes via its Media Sync feature
The Pre will be available with high-speed connectivity on either EVDO Rev. A or UMTS HSDPA, depending on location. The Pre also includes 802.11b/g WiFi and Bluetooth 2.1+EDR with support for A2DP stereo headsets. A-GPS with support for turn-by-turn navigation is also included.


Formats supported on Palm OS and Web OS

Palm OS:
• OS: Garnet
• Networks: Quad band (850/900/1800/1900) GSM/GPRS/EDGE Class 10
• Connectivity: EVDO, 1xRTT, Bluetooth, Infrared, USB
• Memory Card: microSD up to 4 GB
• Browser: Blazer Mobile Web Browser
• Files supported: .exe, .mp3, .mp4, .m4g, .wmv, .jpg, .gif
• Device Sync: HotSync


Palm Web OS:
• OS: Web OS
• Networks: CDMA version – dual band CDMA2000 EV-DO Rev. A 800/1900 MHz; GSM version – quad band GSM 850/900/1800/1900 MHz GPRS/EDGE and tri band UMTS 850/1900/2100 MHz HSDPA
• Connectivity: Wi-Fi (802.11b/g), Bluetooth 2.1+EDR, Micro USB, AGPS
• Memory Card: No slot available
• Browser: Pocket IE
• Files supported: .ipk, .mp3, .mp4, .m4g, .wmv, .jpg, .gif
• Device Sync: Palm Desktop, Microsoft Outlook, IBM Lotus Notes, iTunes

Other Tools supported by Web OS

Installing Eclipse and the Palm Plug-In
Emulator
Command Line Tools
Debugger


How to install Web OS:

Install the Palm® webOS™ SDK on Windows®
 1. Install Java
-Download and install the latest version of Java.
-To verify that Java is installed, go to the Command Prompt and type:
-java -version
-If Java is installed, Java version information appears.
-Download Java

2. Install VirtualBox 3.0.10
-The Palm emulator is built on VirtualBox, virtual machine software that you can download free from Sun Microsystems. VirtualBox is required before installing the Palm webOS SDK.

Download Virtualbox

3. Install the Palm webOS SDK for Windows
-Download the Windows SDK.
-Ensure VirtualBox is not running before starting the Palm SDK    Installer.
-Double-click the Palm SDK Installer file.
-Download SDK
-Windows 64-bit
-Windows Vista & Windows 7 only

4. Verify the SDK Installation
- Start the Palm emulator.
- Click OK to dismiss the dialog boxes.
- Create or choose a directory to use as your development workspace.
- Open a Command Prompt window, and then type:
        palm-generate
- To verify that the tools are installed:
  - If help information appears, the tools are correctly installed.
  - If palm-generate is not recognized as a command, the tools are not correctly installed.
  - If java is not recognized as a command, Java is not correctly installed.
- Exit both the Command Prompt window and the emulator.
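As a side note, and purely as an illustration rather than part of Palm's instructions, the Java portion of this check can also be scripted. The class below is hypothetical; it simply runs the same java -version command that step 1 uses and prints whatever the installed JVM reports.

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical helper mirroring the manual check: run "java -version"
// and echo the version banner that the installed JVM prints.
public class JavaVersionCheck {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("java", "-version")
                .redirectErrorStream(true) // "java -version" writes to stderr
                .start();
        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = out.readLine()) != null) {
            System.out.println(line);
        }
        p.waitFor();
    }
}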

How to uninstall Web OS:

Uninstall the emulator and virtual machine:
- Run VirtualBox.
- Close the emulator (select Power off the machine).
- In VirtualBox, right-click (Windows and Linux) or control-click (Mac) the Palm Emulator, and select Delete.
- Select File > Virtual Media Manager.
- Click the Hard Disks tab, and delete Palm Emulator.vmdk.
- Click the CD/DVD Images tab, and delete palm_emulator_sdk_XX.iso.
- Delete the Palm Emulator folder:
  - Windows: C:\Program Files\Palm\SDK\share\emulator

Uninstall the SDK from a Windows system:
- Select Start > Control Panel > Add or Remove Programs.
- Choose Palm SDK and click Remove.

iPhone 4

These are new features in iPhone 4:

• Front-facing video chat camera
• Improved regular back-camera (the lens is quite noticeably larger than the iPhone 3GS)
• Camera flash
• Micro-SIM instead of standard SIM (like the iPad)
• Improved display. It's unclear if it's the 960x640 display thrown around before—it certainly looks like it, with the "Connect to iTunes" screen displaying much higher resolution than on a 3GS.
• What looks to be a secondary mic for noise cancellation, at the top, next to the headphone jack
• Split buttons for volume
• Power, mute, and volume buttons are all metallic

For more details you can go to http://www.apple.com/iphone/features/

ISTQB Books for Certification

For foundation Level:

Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans and Rex Black, together with the ISTQB Glossary version 2 and a few sample (dump) papers, is sufficient for clearing the exam.

For Advance Level:

1) Advanced Software Testing Vol.1 - by Rex Black for Test Analyst and Technical Test Analyst

2) The Software Test Engineer's Handbook: A Study Guide for the ISTQB Test Analyst and Technical Analyst Advanced Level Certificates (Rockynook Computing) (Paperback) by Graham Bath & Judy McKay".

3) Advanced Software Testing Vol.2 - by Rex Black for Test Manager


How to decide between Technical Test Analyst and Test Analyst: if you are working as a black-box tester, I would suggest you go for Test Analyst; if you are working as a white-box tester and are responsible for characteristics such as portability and reliability, then go for Technical Test Analyst.

ISTQB Foundation Level Paper 12

1.Name the test tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect.

a.Configuration management tool
b.Debugging tool
c.Unit test framework tool
d.Stress testing tool


2. Which of the following tools offer support more appropriate for developers?

a.Static Analysis tools
b.Coverage measurement tools
c.Test Comparators
d.Modeling tools


3. True or false, coverage measurement tools apply to all test activities over the entire software life cycle.

a.True
b.False


4. Identify an objective of a pilot project:

a.Assessment of organizational maturity, strengths and weaknesses
b.Defining usage guidelines
c.Identification of opportunities for an improved test process supported by tools
d.Learn more detail about the tool


5. A gradual implementation with initial filters to exclude some messages
is an effective approach for what type of testing tools?

a.Test management tools
b.Static analysis tools
c.Performance tools
d.Test execution tools



6. Select the testing tool(s) that may have special considerations for use:

A.Dynamic Analysis
B.Test execution
C.Static Analysis
D.Monitoring

a. A, B & C
b. C & D
c. B & C
d. all of the above


7. Identify one or more of the potential benefit(s) of using tools:

A.Objective assessment
B.Capture tests by recording the actions of a manual tester
C.Replacement for test design and/or manual testing
D.Purchasing or leasing a tool

a.A & B
b.A only
c.D
d.None of the above


8. This test tool simulates the environment in which the test object will run:

a.Dynamic analysis tool
b.Monitoring tool
c.Coverage management tool
d.Test harness/unit test framework tool


9. Which tool(s) need to interface with other tools or spreadsheets in order to produce information in the best format for an organization?

a.Monitoring tool
b.Test management tool
c.Test comparators
d.Performance testing tool


10. Requirements management tools ________.

a.check for consistency
b.offer quantitative analysis (metrics) related to the tests
c.aid in understanding the code
d.may accelerate and improve the development process



11. Which characteristic identifies a tool that supports performance and monitoring?

a.Can calculate metrics from the code
b.May facilitate the testing of components or part of a system by simulating the environment in which that test object will run.
c.Often based on automated repetitive execution of tests, controlled by parameters
d.None of the above


12. Which of the following answers reflect characteristics of test management tools?:

A.Logging of test results and generation of progress reports
B.Improve the efficiency of testing activities by automating repetitive tasks.
C.Independent version control or interface with an external configuration management tool
D.Assignment of actions to people (e.g. fix or confirmation test)

a. B & D
b. A, B & D
c. A & C
d. B, C & D


13. Performance testing tools need someone with expertise in performance testing to help design the tests and interpret the results

a.True
b.False


14. This is done in a small-scale pilot project, making it possible to minimize impact if major hurdles are found:

a.Deployment of the test tool
b.Data-driven approach
c.Proof-of-concept
d.None of the above


15. Which testing tool supports developers, testers &/or quality assurance personnel in finding defects before dynamic testing is conducted?

a.Test Data Preparation tool
b.Modeling tool
c.Static analysis tool
d.Configuration Management tool


16. Success factors for the deployment of the tool within an organization include:

a.Rolling out the tool to the rest of the organization incrementally
b.Defining usage guidelines, implementing a way to learn lessons from tool use.
c.Adapting and improving processes to fit with the use of the tool
d.Evaluation against clear requirements and objective criteria


17. The probe effect is the consequence of what type of testing tool?

a.Intrusive
b.Performance
c.Inclusive
d.Functional


18. Which scripting technique uses a more generic script that can read the test data and perform the same test with different data?

a.Timing approach
b.Test execution approach
c.Data-driven approach
d.Keyword-driven approach


19. Identify the testing tool that may also be referred to as a capture playback tool

a.Test harness/unit test framework
b.Test execution
c.Coverage measurement
d.Security
e.a & b


20. Mercury QuickTest Professional, as a software testing tool, could be classified under which of the following:

a.Static Analysis tools
b.Test Data preparation tools
c.Test Execution tools
d.Heuristic tools