Laboratory Equipment Qualification and System Validation
The objective of any chemical analytical
measurement is to get consistent, reliable and accurate data. Proper
functioning and performance of analytical instruments and computer
systems play a major role in achieving this goal. Therefore,
analytical instrument qualification (AIQ) and computer system
validation (CSV) should be part of any good analytical practice.
There is a second aspect to why validation and qualification are
important, and this is equally important for those working in a
regulated and in an accredited environment. Even though frequently
not directly spelled out in regulations and official guidelines,
such as Good Laboratory Practice (GLP), Good Clinical Practice (GCP)
and Good Manufacturing Practice (GMP), or in quality standards, such
as the International Organization for Standardization (ISO) Standard
17025, validation and qualification are usually required. This is
confirmed by typical statements such as this one that appears in the
U.S. cGMP (current Good Manufacturing Practice) regulations (1):
“Equipment shall be routinely calibrated, inspected and checked
according to a written program to ensure proper performance” or by
the more general requirement “Equipment should be suitable for its
intended use”. Although there were many discussions about the
approach for qualification of analytical instruments in the 1990s and
the early years of this century, this changed when the USP
published the final version of chapter <1058>, entitled
Analytical Instrument Qualification (2).
Following a literature and regulatory
overview, this primer will provide information on the entire
qualification and validation process from planning, writing
specifications as well as vendor qualification to installation,
initial and on-going operation. Topics covered include:
- Literature overview with milestones on instrument
qualification and system validation in laboratories.
- Overview on regulations and quality standards with impact on
analytical instrument qualification.
- Qualification of equipment hardware, for example, a
spectrometer or liquid chromatograph.
- Validation of analytical computerized systems.
- Implementing USP chapter <1058>.
Special focus is placed on getting a good understanding of and
implementing USP chapter <1058>. After an introduction to the
chapter’s approach for instrument qualification, this primer will
lead you through individual qualification phases and give
recommendations for implementation. The primer will not only help
readers understand the instrument qualification process, but will also
offer templates and examples that make qualification easy to implement.
Because of the nature and size of this primer, all the details of
operational qualification and system validation cannot be given. For
more details, please refer to reference articles and text books
(3-5). Exact procedures and test parameters very much depend on the
type of instrument and applications. Details on recommendations and
services can be obtained from instrument vendors. Although the
primer has recommendations for validation of standard commercial
computerized analytical systems without the need for major
customization, it does not give details on validation of complex
systems, such as Laboratory Information Management Systems (LIMS),
or on validation activities during software development but refers
to further literature (7-11).
Due to their importance, equipment
qualification issues have been addressed by several organizations.
Before 1990 the regulatory focus of instrument and computer
validation was primarily on manufacturing equipment, which changed
after 1990. In response to industry task forces, regulatory agencies
published guidance documents that helped the industry to better
understand regulatory requirements. In addition, private authors
published reference books with practical recommendations for
implementation. Key milestones include:
- The Pharmaceutical Analysis Science Group
(UK) developed a position paper on the qualification of
analytical equipment (6).
This paper was a benchmark because it introduced the 4Q model
with design qualification (DQ), installation qualification (IQ),
operational qualification (OQ) and performance qualification
(PQ) for analytical equipment qualification.
- The Laboratory of the Government Chemist (LGC) and Eurachem-UK,
developed a guidance document with definitions and step-by-step
instructions for equipment qualification (3).
- The United States Food and Drug Administration developed
principles of software validation (7).
- The Good Automated Manufacturing Practices Forum (GAMP)
developed guidelines for computer validation in 2001 (8) and in
2008 (9). These guides have been specifically developed for
computer systems in general, and because of their importance
have also been used for validation of laboratory systems.
- GAMP also published a Good Practice Guide for Validation of
Laboratory Systems (10). It recommends validation activities and
procedures for seven different instrument categories.
- Huber authored two validation reference books for the
analytical laboratory (5, 11). The first one covers all
validation aspects of an analytical laboratory including
equipment, analytical methods, reference compounds and people
qualification. The second one covers the validation of
computerized and networked systems in analytical laboratories.
- The Parenteral Drug Association (PDA) developed a technical
paper on the validation of laboratory data acquisition systems.
- Coombes authored a book on laboratory systems validation
testing and practice (4). The term laboratory systems validation
(LSV) was used to make a distinction from computer system
validation (CSV) and equipment qualification (EQ).
- Chan and colleagues published the book Analytical Method
Validation and Instrument Performance Verification (13). This
book has several chapters with practical recommendations for
instrument qualification and performance verification.
- PIC/S developed a Good Practice Guide for Using Computers in
a GxP Environment (14). This document has been written by
inspectors for inspectors as a guide to inspecting computerized systems.
All these guidelines and publications follow a couple of common principles:
- Qualification of equipment and validation of computer
systems are not one time events. They start with the definition
of the product or project and setting user requirement
specifications and cover the vendor selection process,
installation, initial operation, ongoing use and change control.
- All publications refer to some kind of life cycle model with
a formal change control procedure being an important part of the
life cycle.
Different models have been suggested for different kinds of
instruments. For example, the 4Q model as described by Freeman (6)
and Bedson (3) has been recommended for users
of commercial instruments without significant customization by the
user. The V-model as recommended by GAMP4 (8) is suitable for
software development as well as for users of commercial instruments
with customization by the user.
A major breakthrough came when USP released its general chapter
on analytical instrument qualification (2). The major benefit of the
chapter was that it formalized the 4Q model and clarified some
issues that have been frequently discussed before, for example, that
an instrument’s firmware does not need separate qualification, but
should be qualified as part of the instrument hardware.
Terminology: Validation vs. Qualification
An agreement on terminology is of utmost importance for a common
understanding of validation and qualification. The author has
frequently noted at validation symposia that different speakers used
different terms for the same thing and the same terms for different
things. The most frequent questions arose about the words validation and
qualification. USP has recognized this and addressed it in a
paragraph on the first page of chapter <1058>. The word
qualification relates to instruments that can be individual modules
and also systems, for example, a complete HPLC system comprised of a
sampling system, a pump, a column compartment and a detector.
Checking the baseline noise of a detector and comparing the results
with previously defined specifications would be an example for
qualification. Qualification is done independently from a specific
application or sample. Typically the type of specifications can be
found in the vendor’s product specification sheet.
The word validation relates to applications, processes and
methods. For example, for method validation we specify the limit of
quantitation or limit of detection of our sample compounds. Such
specifications can only be verified with a complete system and
accessories such as the right chromatographic column, calibration
standards and SOPs for running the test.
Unfortunately, validation and qualification are frequently used
interchangeably. For example, for software and computer systems the
term validation is always used, even though according to the
previous definition qualification should be used. Therefore, in the
context of software and computer systems, this primer will also use
the term validation.
The FDA and other agencies do not really care what users call it,
validation or qualification. The inspector’s question will always
be: how did you make sure that the data are accurate? As long as
there is a good answer, for example through validation of systems
and methods, it is of secondary importance what users call it.
However, agreement on terminology is of utmost importance within a
company, so that everybody has the same understanding of
qualification and validation. Therefore, terminology and the exact
meaning should be documented in a glossary.
Components of Analytical Data Quality
There were many discussions about the need for instrument
qualification in analytical laboratories before the release of USP
<1058>, taking into account that there are several other components
of data quality, for example, method validation, system suitability
testing and the analysis of quality control samples.
USP started the chapter by giving good reasons why instrument
qualification is important. Figure 1 illustrates the different
components of data quality with analytical instrument qualification
as the foundation at the bottom.
Figure 1. Components of analytical data quality
Whether you validate methods or systems, verify the suitability
of a system for its intended use or analyze quality control samples,
you should always qualify the instrument first. It is the basis of
all other components. It is the collection of documented evidence to
demonstrate that an instrument suitably performs for its intended
purpose and that it is properly maintained and calibrated. If the
instrument is not well qualified, weeks can be spent validating an
HPLC method without success until a determination is made that the
HPLC detector did not meet specifications for linearity or baseline noise.
After you have qualified the instrument, you validate analytical
methods on qualified instruments. This should prove that the method
works as intended, independently of any specific instrument. If you
want to use the method on instruments from different vendors, you
should also validate the method on those instruments.
Then you can combine any specific instrument with a specific
method and run system-suitability tests. This ensures that the
complete system meets the analyst’s expectations under the specific
conditions of the tests.
The highest level of testing is the analysis of quality control
samples. You analyze standards or samples with known amounts and
compare the results with the correct amounts. Again, a prerequisite
for this to work is the use of qualified instruments and validated methods.
2. Regulations and Quality Standards
Qualification of instruments and validation of systems are
requirements of the FDA and equivalent international regulations.
Missing or inadequate qualification can cause regulatory actions, such as
shipment stops of drugs and APIs. The rationale behind this is that
analytical test results obtained with no or inadequately qualified
instruments can be wrong. Because of the importance of compliance,
we dedicate this chapter to regulations and quality standards. The
purposes of the regulations and standards are listed together with
their requirements related to equipment and computer systems.
Regulations are quite static and typically don’t change for
several years. More dynamic than regulations are inspection and
enforcement practices. Information can be found in the FDA’s
inspection documents such as warning letters, establishment
inspection reports (EIR) and 483 form inspectional observations.
Highly important are FDA warning letters. They are sent to companies
in case of serious regulatory violations. Companies are expected to
respond within 15 days. If there is no response or if the response
is inadequate, the FDA will take further actions which may cause
delay of new product approvals, import alerts and denials, or
product recalls. Since March 2003 warning letters are reviewed by
higher-level FDA officials and reflect the FDA’s current thinking.
Warning letters are published on the FDA website.
The only problem is that there are thousands of them and they
mostly relate to marketing and labeling, so it is difficult to find
the ones that are of interest to laboratories. Sites
exist that only publish warning letters related to GxP issues. For
example, http://www.fdawarningletter.com has many quotes related to the
qualification of instruments and validation of computer systems.
Good Laboratory Practice Regulations
Good laboratory practice (GLP) regulations deal with the
organization, processes and conditions under which preclinical
laboratory studies are planned, performed, monitored, recorded and
reported. GLP regulations are intended to promote the quality and validity
of study data. GLP regulations were first proposed by the U.S. FDA
in November 1976, and final regulations were coded as Part 58 of
Chapter 21 of the Code of Federal Regulations in 1979
(15). The Organization for Economic Cooperation and
Development (OECD) published the principles of Good Laboratory
Practice in the Testing of Chemicals in 1982
(16), which has since been updated
(17) and incorporated by OECD member countries. In the
meantime most industrial countries and some developing countries
have their own GLPs.
All GLP regulations include chapters on equipment design,
calibration and maintenance, for example, U.S. GLP regulations,
Sections 58.61 and 58.63(15):
- Automatic, mechanical, or electronic equipment used in the
generation, measurement, or assessment of data shall be of
appropriate design and adequate capacity to function according
to the protocol and shall be suitably located for operation,
inspection, cleaning, and maintenance.
- Equipment used for generation, measurement, or assessment of
data shall be adequately tested, calibrated, and/or standardized.
- Written standard operating procedures shall set forth in
sufficient detail the methods, materials, and schedules to be
used in routine inspection, cleaning, maintenance, testing,
calibration, and/or standardization of equipment and shall
specify remedial action to be taken in the event of failure or
malfunction of equipment.
- Written records shall be maintained of all inspection,
maintenance, testing, calibrating, and/or standardizing operations.
The GLP principles of the OECD include similar but shorter
sections on equipment(17):
- The apparatus used for the generation of data and for
controlling environmental factors relevant to the study should
be suitably located and of appropriate design and adequate capacity.
- Apparatus and materials used in a study should be
periodically inspected, cleaned, maintained, and calibrated
according to Standard Operating Procedures. Records of
procedures should be maintained.
Current Good Manufacturing Practice Regulations
Good Manufacturing Practice (GMP) regulates manufacturing and its
associated quality control. GMP regulations have been developed to
ensure that medicinal (pharmaceutical) products are consistently
produced and controlled according to the quality standards
appropriate to their intended use. In the United States, the
regulations are called Current Good Manufacturing Practices (CGMP)
to account for the fact that the regulations are dynamic rather than
static. They are defined in Title 21 of the U.S. Code of Federal
Regulations, 21 CFR 210 - Current Good Manufacturing Practice for
Drugs, General and 21 CFR 211 - Current Good Manufacturing Practice
for Finished Pharmaceuticals(1). Drugs marketed in the United States
must first receive FDA approval and must be manufactured in
accordance with the U.S. cGMP regulations. Because of this, FDA
regulations have set an international benchmark for pharmaceutical manufacturing.
In Europe, local GMP regulations exist in many countries. These
are based on the EU directive: Good Manufacturing Practice for
Medicinal Products in the European Community (18). This EU GMP is
necessary to permit free trade in medicinal products between the
member countries. Regulations in the EU allow the marketing of a new
drug in the member countries with the acquisition of just a single
marketing approval. The intention of the EU GMP is to establish a
minimum manufacturing standard for all member countries.
Like GLP, all cGMP regulations include chapters on equipment
design, calibration and maintenance, for example, the U.S. cGMP
regulations for finished pharmaceuticals, Sections 211.160(b) and 211.68 (1):
- Laboratory controls shall include the calibration of
instruments, apparatus, gauges, and recording devices at
suitable intervals in accordance with an established written
program containing specific directions, schedules, limits for
accuracy and precision, and provisions for remedial action in
the event accuracy and/or precision limits are not met.
Instruments, apparatus, gauges, and recording devices not
meeting established specifications shall not be used.
- Automatic, mechanical, or electronic equipment or other
types of equipment, including computers, or related systems that
will perform a function satisfactorily, may be used in the
manufacture, processing, packing, and holding of a drug product.
If such equipment is so used, it shall be routinely calibrated,
inspected, or checked according to a written program designed to
assure proper performance. Written records of those calibration
checks and inspections shall be maintained.
International Conference on Harmonization (ICH)
The International Conference on Harmonization of Technical
Requirements for Registration of Pharmaceuticals for Human Use (ICH)
brings together the regulatory authorities of Europe, Japan and the
United States and experts from the pharmaceutical industries in the
three regions to discuss scientific and technical aspects of product registration.
The purpose is to make recommendations on ways to achieve greater
harmonization in the interpretation and application of technical
guidelines and requirements for product registration in order to
reduce or obviate the need to duplicate the testing carried out
during the research and development of new medicines.
ICH publishes guidelines that are either signed into law by
member countries, for example, in Europe or recommended as
guidelines by national authorities, e.g., by the US FDA.
Examples for such guidelines are:
- Stability Testing (Q1A)
- Validation of Analytical Procedures (Q2)
- Impurities in New Drug Substances (Q3A)
- GMP Guide for Active Pharmaceutical Ingredients (Q7)
- Quality Risk Management (Q9)
One of the most important ICH documents is the GMP Guide for
Active Pharmaceutical Ingredients (19). In contrast to other official
documents, Q7 has very specific requirements for equipment and
computer systems in chapters 5.3 and 5.4:
- Equipment calibrations should be performed using standards
traceable to certified standards, if existing.
- Records of these calibrations should be maintained.
- The current calibration status of critical equipment should
be known and verifiable.
- Instruments that do not meet calibration criteria should not be used.
- Deviations from approved standards of calibration on
critical instruments should be investigated to determine if
these could have had an impact on the quality of the
intermediate(s) or API(s) manufactured using this equipment
since the last successful calibration.
- GMP related computerized systems should be validated. The
depth and scope of validation depends on the diversity,
complexity and criticality of the computerized application.
- Appropriate installation qualification and operational
qualification should demonstrate the suitability of computer
hardware and software to perform assigned tasks.
Pharmaceutical Inspection Co-operation Scheme (PIC/S)
PIC/S' mission is "to lead the international development,
implementation and maintenance of harmonized Good Manufacturing
Practice (GMP) standards and quality systems of inspectorates in the
field of medicinal products".
This is to be achieved by developing and promoting harmonized GMP
standards and guidance documents; training competent authorities, in
particular inspectors; assessing (and reassessing) inspectorates;
and facilitating the co-operation and networking for competent
authorities and international organizations. As of October 2008
there are 34 participating authorities in PIC/S and some more have
applied for membership, for example the U.S. FDA.
The most relevant PIC/S document related to this primer is the
Good Practice Guide: Using Computers in GxP Environments(14). The
guidance document is intended to provide a logical explanation of
the basic requirements for the implementation, validation and
operation of computerized systems. Recommendations are documented in
chapters 4.6 and 4.8:
- Apart from user acceptance testing (OQ) against the
functional specification, the regulated user also has
responsibility for the performance qualification (PQ) of the system.
- The validation documentation should cover all the steps of
the lifecycle with appropriate methods for measurement and
reporting, (e.g. assessment reports and details of quality and
test measures), as required.
- Regulated users should be able to justify and defend their
standards, protocols, acceptance criteria, procedures and
records in the light of their own documented risk and complexity
assessments.
ISO/IEC 17025
ISO/IEC 17025 is the most relevant ISO Standard for chemical
laboratories(20). It specifies the general requirements for the
competence to carry out tests and/or calibrations. The standard is
widely used as a quality system in environmental, food, chemical and
clinical testing laboratories. It is used to assess laboratories
that seek accreditation status.
The standard has many requirements related to the subject of this
primer. The most important ones can be found in chapter 5.5.
- Calibration programs shall be established for key quantities
or values of the instruments, where these properties have a
significant effect on the results.
- Before being placed into service, equipment (including that
used for sampling) shall be calibrated or checked to establish
that it meets the laboratory's specification requirements and
complies with the relevant standard specifications. It shall be
checked and/or calibrated before use.
- Each item of equipment and its software used for testing and
calibration and significant to the result shall, when
practicable, be uniquely identified.
- Equipment that has been subjected to overloading or
mishandling, gives suspect results, or has been shown to be
defective or outside specified limits, shall be taken out of service.
21 CFR Part 11 – FDA’s Regulation on Electronic Records and Electronic Signatures
In 1997 the United States Food and Drug Administration (FDA)
issued a regulation that provides criteria for acceptance by the FDA
of electronic records, electronic signatures and handwritten
signatures (21). With this regulation, entitled Rule 21 CFR Part 11,
electronic records can be equivalent to paper records and
handwritten signatures. The rule applies to all industry segments
regulated by the FDA, which include Good Laboratory Practice (GLP),
Good Clinical Practice (GCP) and current Good Manufacturing Practice (cGMP).
Part 11 requires computer systems used in FDA regulated
environments to be validated. Section 11.10 (a) states:
- Computer systems should be validated to ensure accuracy,
reliability and consistent intended performance.
There is no further instruction on how computer systems should be validated.
Learning from Regulations and Quality Standards
As we have seen in this chapter, all important regulations and
ISO 17025 have one or more chapters on equipment and computers. The
wording and the level of detail differ. For example, the words
calibration and qualification are used interchangeably. Despite
different terminology, the message is always the same: instruments
and computer systems should be suitable for their intended use.
This means users should:
- Define the intended use, meaning write specifications.
- Formally assess the vendor’s quality system.
- Formally document installation. ICH Q7 calls this installation
qualification (IQ).
- Test the instrument in the user’s environment for functional
specifications. ICH and PIC/S call this operational qualification (OQ).
- Verify ongoing performance through preventive maintenance and
ongoing system tests.
- Keep instruments under change control to ensure that the
validated state is maintained after changes.
3. Qualification of Analytical Instruments
Equipment qualification and validation of computerized systems
cover the entire life of a product. They start when somebody has a
need for a specific product and end when the equipment is retired.
For computer systems, validation ends when all records on the
system have been migrated to a new one and verified for accuracy and
completeness. Because of the length of time and
complexity the process has been broken down into shorter phases, so
called lifecycle phases. Several lifecycle models have been
described for qualification and validation. The most common ones are
the V-model and the 4Q model. The V-model includes code development and code
testing for software, which is important when validation also covers
software development. For the purpose of this primer, where we deal
with commercially available instruments and systems, we have
selected the 4Q model with the phases design qualification (DQ),
installation qualification (IQ), operational qualification (OQ) and
performance qualification (PQ). The process is illustrated in figure 2.
Figure 2. Qualification phases 4Q model
In the DQ phase user requirements are compared with the vendor’s
specification. In addition, users conduct an assessment of the
vendor. In the installation qualification, the selected user’s
environment is checked to determine whether it meets the vendor’s
environmental specifications. The instrument is installed according to the vendor’s
recommendations and correct installation is verified and documented.
Operational qualification checks if the instrument conforms to the
functional specifications, as defined in the DQ phase. Performance
qualification verifies that the complete system works for selected
applications. Preventive maintenance activities and controlled
changes also are part of this phase. All activities are defined in a
validation or qualification plan and results are documented in a
summary report. Figure 3 illustrates the timeline for the four
qualification phases.
Figure 3. Qualification timeline
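As a compact illustration only, the following Python sketch maps the four phases described above to a few typical activities. The phase names follow USP <1058>; the activity lists are illustrative assumptions, not a prescribed plan.

```python
# Minimal sketch of the 4Q qualification phases with typical activities.
# Phase names follow USP <1058>; the activities are illustrative only.
QUALIFICATION_PHASES = {
    "DQ": ["Define user requirements", "Compare with vendor specifications", "Assess vendor"],
    "IQ": ["Prepare installation site", "Verify shipment and installation", "Record firmware revision"],
    "OQ": ["Test against functional specifications", "Document expected vs. actual results"],
    "PQ": ["Run system suitability / QC tests", "Preventive maintenance", "Change control"],
}

def print_plan():
    """Print a simple qualification plan outline derived from the phase map."""
    for phase, activities in QUALIFICATION_PHASES.items():
        print(phase)
        for activity in activities:
            print(f"  - {activity}")

if __name__ == "__main__":
    print_plan()
```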
Qualification activities should be described in a master plan.
The plan documents a company’s approach for specific activities, for
example, how to qualify analytical instruments, how to assess
vendors or what to test for commercial computer systems. A master
plan serves two purposes: when implemented right, it ensures
consistent and efficient implementation of equipment qualifications,
and it answers an inspector’s question about a company’s approach to
instrument qualification and system validation. A validation master
plan is also officially required by Annex 15(22) to the European GMP
directive: “All validation activities should be planned. The key
elements of a validation program should be clearly defined and
documented in a Validation Master Plan (VMP) or equivalent
documents”. FDA regulations and guidelines do not specifically
require a validation master plan. However, inspectors want to know
what the company’s approach towards validation is. The qualification
master plan is an ideal tool to communicate this approach both
internally and to inspectors. In case there are any questions as to
why things have been done or not done, the master plan should
provide the answers.
Within an organization a
validation master plan can be developed for:
- the entire company at a corporate level
- multiple or single sites
- system categories
The master plan is a framework for individual project plans and
should be written at the highest level possible. This ensures
consistent implementation across an organization.
Equipment and computer validation master plans should include:
- Introduction with the scope of the plan, e.g., sites and systems
- Responsibilities, e.g., user departments, QA, IT
- Related documents, e.g., risk master plan
- Products/processes to be validated and/or qualified
- Qualification/validation approach
- Risk assessment
- Steps for equipment qualification and computer system
validation with examples on type and extent of testing
- Vendor assessment
- Handling existing systems
- Change Control procedures and templates
- Instrument obsolescence and removal
- Training plans (system operation, GMP)
- Templates and references to SOPs
For each individual project a validation project plan should be
developed. This plan is derived from the validation master plan.
Figure 4 shows the link between the master plan and project plan.
Ideally master plans are developed at a corporate level. Project
plans are written in departments specifically for an instrument or
system. Depending on the size, structure and geographic distribution
of the organization, there may also be a site- or country-specific
master plan that is derived from the corporate master plan but has been customized
according to specific circumstances and requirements of that site.
Figure 4. Link between master plan and project plans
The project plan outlines what is to be done in order to get a
specific system into compliance. For inspectors it is a first
indication of the control a laboratory has over a specific
instrument or system, and it also gives a first impression of the
quality of the associated documentation.
For simple equipment qualification a template in table form can
be used to outline planned activities. A template example is shown
in Figure 5. The left column can be the same for all instruments in
the same category, which makes the whole qualification process very efficient.
Figure 5. Template for instrument qualification
Design Qualification (DQ)
“Design qualification (DQ) is the documented collection of
activities that define the functional and operational specifications
of the instrument and criteria for selection of the vendor, based on
the intended purpose of the instrument” (2).
Design qualification is a shared responsibility between the
vendor and the user of an instrument.
The vendor’s responsibilities are to:
- Design, develop and manufacture instruments in a quality
- Develop functional and operational product specifications.
- Provide information on how software and instruments are
validated during development and supported during the entire
life of the products.
- Allow user audits, if required, and share approaches for
development and testing.
The user’s responsibilities are to:
- Describe the analysis problem and the selection of the technique.
- Describe the intended use of the equipment.
- Describe the intended environment (including computer systems).
- Select and document the functional and performance
specifications (technical, environmental, safety).
- Select and assess the vendor.
DQ should ensure that instruments have all the necessary
functions and performance criteria that will enable them to be
successfully implemented for the intended application and to meet
business requirements. Errors in DQ can have a tremendous technical
and business impact, and therefore a sufficient amount of time and
resources should be invested in the DQ phase. For example, setting
wrong operational specifications for an HPLC system can
substantially increase the workload for OQ testing, and selecting a
vendor with insufficient support capability can decrease instrument
up-time with a negative business impact.
Figure 6 shows a template that can be used to document design
qualification. User requirements for an HPLC system should not only
have a section to define chromatographic functions and performance,
but also sections for physical requirements, construction
requirements and requirements concerning the vendor. A physical requirement could be that all
modules should have the same dimensions to allow stackability for
optimal use of the lab’s bench space. An example of a construction
requirement is the accessibility of the detector lamp and flow cell
from the front of the instrument for easy maintenance.
Figure 6. Template for design qualification
Figure 7 shows an example of selected functional and performance
specifications of an HPLC system. The user defines his/her
requirement specifications and compares them with the vendor’s
specifications. To set the functional and performance
specifications, the vendor’s specification sheets can be used as
guidelines. However, it is not recommended to simply copy the
vendor’s specifications, because compliance to the functional and
performance specifications must be verified later in the process
during operational qualification and also when re-qualifying the
instrument at a later time. Specifying too many functions and
setting the values too stringently will significantly increase the
workload for OQ. For example, if a company has a need for an
isocratic HPLC system, but plans to purchase a gradient system for
future use, only an isocratic system should be formally specified
for regulatory purposes. This means that, as long as the instrument is
not used for gradient runs, no gradient tests need to be conducted.
Later on, when the system is used for gradient analysis, the
specifications should be changed through a change control procedure.
Figure 7. Selected HPLC specifications for design qualification
The specifications should be set so that there is a high
likelihood that the instrument conforms to them, not only during
initial OQ but also during requalification, for example, a year
later. Otherwise users may be expected to initiate an investigation
to determine if the non-qualified instrument could have had a
negative impact on the quality of the product. For example, these
possibilities are expressed in ICH Q7 (19):
“Deviations from approved standards of calibration on critical
instruments should be investigated to determine if these could have
had an impact on the quality of the intermediate(s) or API(s)
manufactured using this equipment since the last successful calibration.”
Vendors of analytical instruments should be qualified through a
formal process. The objective is to ensure that vendors provide high
quality products and can give adequate support. For basic equipment,
such as pH-meters or a balance, this can be a single page statement
describing why the vendor XY has been selected. Certification for a
recognized quality system is sufficient for simple instruments. The
formal assessment statement should be supported by the quality
systems certificate. Figure 8 shows a template with examples to
document vendor assessment criteria for analytical instruments.
Figure 8. Selected criteria for vendor assessment
For more complex systems, especially for critical computer systems
such as chromatographic data systems, a more detailed assessment is
recommended. Depending on the complexity and criticality of the
system, this can be a mail audit, a third-party audit or a direct
audit by the user firm.
The purpose of the vendor assessment is to ensure that products
are designed, developed and manufactured in a documented quality
environment. The assessment should also verify that the vendor
provides the right services and can maintain the instrument through
phone and on-site support.
Installation Qualification (IQ)
“Installation qualification (IQ) is the documented collection of
activities necessary to establish that an instrument is delivered as
designed and specified, is properly installed in the selected
environment, and that this environment is suitable for the instrument.” (2)
Responsibility for IQ lies with the user but activities should be
supported and can be carried out by the vendor. For example, before
the instrument arrives, the vendor should provide the user with
environmental specifications so that the user can prepare the
installation site accordingly.
Tasks performed for IQ include:
- Prepare the laboratory facility according to the vendor's specifications.
- Control and record environmental conditions, if critical.
For example, temperature and humidity.
- Compare equipment received with the purchase order
(including accessories and spare parts).
- Check equipment for any damage.
- Verify that the instrument conforms with physical and
construction requirements, as specified by the user.
- Check documentation for completeness (operating manuals,
maintenance instructions, standard operating procedures for
testing, safety and validation certificates).
- Install hardware (instrument, fittings and tubing for fluid
connections, columns in HPLC and GC, power cables, data flow and
instrument control cables).
- Switch on the instruments and ensure that all modules power
up and perform an electronic self-test.
- List equipment manuals and SOPs.
- Record firmware revision. Prepare an installation report.
- Enter instrument data into an inventory data base.
- Prepare, review and sign formal IQ documentation.
Figure 9 shows a template with selected examples that can be used
to document completeness of shipment. Figure 10 shows an example of
how to check if construction requirements such as stackability and
accessibility of flow cells are met.
Figure 9. Template and examples to document completeness of shipment
Figure 10. Verification of construction requirements
All instruments should be entered into the IQ protocol and/or
into a database. An example of this documentation is shown in figure
11. The IQ documents should be updated whenever there is a change
made to any entry in the IQ documents. Examples of changes are a
firmware revision and the location of the instrument within a
building or site.
Figure 11. Equipment documentation for IQ
Testing for Installation Qualification
Installation testing should verify that the instrument hardware and
software are properly installed. It does not verify that the
instrument conforms to the functional and performance specifications;
this is done later in the OQ phase. For individual modules, testing
is limited to performing and documenting the instrument's self-diagnostics
when it is switched on.
For systems comprised of multiple modules, correct connection
between the modules should be verified. For a modular analytical
system, this can be easily achieved by running a test sample and
comparing the output with a reference plot. An example of test
specifications and results is shown in figure 12.
Figure 12. Verification of correct system
installation for IQ
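As a rough illustration of such a holistic installation check, the sketch below compares measured retention times from a test-sample run with reference values and tolerances. Peak names, values and tolerances are hypothetical; real acceptance limits would come from the vendor's reference chromatogram (compare figure 12).

```python
# Minimal sketch: verify correct system installation by comparing a test-sample
# run against reference retention times (hypothetical values only).
REFERENCE = {  # peak name -> (expected retention time in min, tolerance in min)
    "uracil": (1.20, 0.10),
    "caffeine": (3.45, 0.15),
    "anisole": (6.80, 0.20),
}

def check_installation(measured: dict) -> bool:
    """Return True if every measured retention time is within its tolerance."""
    ok = True
    for peak, (expected, tol) in REFERENCE.items():
        actual = measured.get(peak)
        if actual is None or abs(actual - expected) > tol:
            print(f"FAIL {peak}: expected {expected} +/- {tol} min, got {actual}")
            ok = False
        else:
            print(f"PASS {peak}: {actual} min")
    return ok

if __name__ == "__main__":
    check_installation({"uracil": 1.24, "caffeine": 3.50, "anisole": 6.95})
```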
Operational Qualification (OQ)
“Operational qualification (OQ) is the documented collection of
activities necessary to demonstrate that an instrument will function
according to its operational specification in the selected environment.” (2)
Emphasis should be placed on “in the selected environment”. Testing
of instrument hardware at the user’s site is required because
instrument characteristics can change when shipped from the vendor
to the user, for example through mechanical vibration.
The most frequently asked questions related to OQ testing are:
what should be tested, which are the acceptance criteria, and who
should perform the tests? USP answers all the questions in a single
sentence: “Users, or their qualified designees, should perform these
tests to verify that the instrument meets manufacturer or user
specifications in the user’s environment. Designees could be, for
example, vendor representatives.”
If a system is comprised of several modules, it is recommended to
perform system tests (holistic testing), rather than performing
tests module by module (modular testing). Individual module tests
should be performed as part of the diagnosis if the system fails.
USP does not give a detailed answer on what exactly should be
tested: “The extent of testing that an instrument undergoes depends
on its intended applications. Therefore, no specific OQ tests for
any instrument or application are offered in this chapter”.
Our recommendation is to look at the vendor’s test procedures as
a starting point and to only make adjustments if there is a specific
reason. If a laboratory uses the same type of instruments from
different vendors, it is more efficient to use the same test
procedures for all instruments than to use different ones for
different vendor instruments. We also recommend using the same test
procedure for a specific instrument throughout the company,
independent from the location. This allows comparing instrument
performance across the company and facilitates exchange of
instruments and analytical methods.
The frequency of OQ depends on the type of instrument, on the
stability of the performance characteristics, but also on the
specified acceptance criteria. In general, the time intervals should
be selected so that the probability is high that all parameters are
still within the operational specifications. Otherwise, analytical
results obtained with that particular instrument are questionable.
Here the importance of proper selection of the procedures and
acceptance limits becomes very apparent. For example, if the
baseline noise of a UV/Visible detector is set to the lowest
possible limit as specified by the vendor, the lamp will have to be
changed more frequently than when the limit is set a factor of 5 higher.
Inspectors expect OQ tests to be quantitative. This means that
the test protocol should include expected results and actual
results. Figure 13 includes an example for recording of test results
of a balance. The header includes three control weights and
acceptable limits for the weight. The daily protocol records actual
weights and the name and signature of the test person.
Figure 13. OQ test example
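To illustrate the idea of a quantitative OQ record like the one in figure 13, the sketch below compares actual balance readings with acceptance limits for three control weights. All weights, limits and readings are hypothetical examples, not recommended values.

```python
# Minimal sketch of a quantitative OQ check for a balance: control weights with
# acceptance limits, actual readings compared against them (hypothetical numbers).
CONTROL_WEIGHTS = [
    # (nominal in g, lower limit in g, upper limit in g)
    (1.0000, 0.9998, 1.0002),
    (10.000, 9.998, 10.002),
    (100.00, 99.99, 100.01),
]

def evaluate(readings):
    """Compare actual readings with the acceptance limits; return pass/fail per weight."""
    results = []
    for (nominal, low, high), actual in zip(CONTROL_WEIGHTS, readings):
        results.append((nominal, actual, low <= actual <= high))
    return results

if __name__ == "__main__":
    for nominal, actual, passed in evaluate([1.0001, 10.001, 100.02]):
        print(f"{nominal} g: measured {actual} g -> {'PASS' if passed else 'FAIL'}")
```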
Performance Qualification (PQ)
“Performance qualification (PQ) is the documented collection of
activities necessary to demonstrate that an instrument consistently
performs according to the specifications defined by the user, and is
appropriate for the intended use.”(2)
Here emphasis is placed on the word ‘consistently’. Important for
consistent instrument performance are regular preventive
maintenance, making changes to a system in a controlled manner and
regular testing. The PQ test frequency is much higher than for OQ.
Another difference is that PQ should always be performed under
conditions that are similar to routine sample analysis. For a
chromatograph system this means using the same column, the same
analysis conditions and the same or similar test compounds.
PQ should be performed on a daily basis or whenever the
instrument is used. The test frequency depends on the criticality of
the tests, on the ruggedness of the instrument and on everything in
the system that may contribute to the reliability of analysis
results. For a liquid chromatograph, this may be the chromatographic
column or a detector’s lamp.
In practice, PQ testing can mean system suitability testing or
the analysis of quality control samples. This is supported by USP
<1058>: “Some system suitability tests or quality control checks
that are performed concurrently with the test samples can be used to
demonstrate that an instrument is performing suitably.“ For system
suitability testing critical system performance characteristics are
measured and compared with documented, preset limits. For example, a
well characterized standard can be injected 5 or 6 times and the
standard deviation of amounts is then compared with a predefined
value. If the limit of detection and/or quantitation is critical,
the lamp’s intensity profile or the baseline noise should be tested.
For chromatographic equipment SST tests are recommended in USP
chapter <621> (23).
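As a minimal sketch of the system suitability check described above, the code below computes the relative standard deviation of replicate standard injections and compares it with a predefined limit. The limit and the injection amounts are hypothetical examples.

```python
# Minimal sketch of a system suitability check: a standard is injected several
# times and the relative standard deviation of the measured amounts is compared
# with a predefined limit. All values are hypothetical.
from statistics import mean, stdev

RSD_LIMIT_PERCENT = 2.0  # example acceptance limit, not a recommendation

def suitability(amounts):
    """Return (rsd_percent, passed) for a series of replicate standard injections."""
    rsd = 100.0 * stdev(amounts) / mean(amounts)
    return rsd, rsd <= RSD_LIMIT_PERCENT

if __name__ == "__main__":
    rsd, passed = suitability([99.8, 100.4, 100.1, 99.6, 100.3, 100.0])
    print(f"RSD = {rsd:.2f} % -> {'PASS' if passed else 'FAIL'}")
```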
For ongoing quality control checks samples with known amounts are
interspersed among actual samples at intervals determined by the
total number of samples, the stability of the system and the
specified precision. The advantage of this procedure is that
quantitative system performance is measured more or less
concurrently with sample analyses under conditions that are very
close to the actual application. Figure 14 shows a template with
examples for a PQ test protocol.
Figure 14. Documentation of PQ tests
(Preventive) Maintenance and Repair
Analytical instruments should be well maintained to ensure proper
ongoing performance. Procedures should be in place for regular
preventive maintenance of hardware to detect and fix problems before
they can have a negative impact on analytical data. The procedure should define:
- The maintenance to be done.
- When it is to be done.
- What should be re-qualified after maintenance is done. For
example, a PQ test should always be performed after instrument maintenance.
- How to document maintenance activities.
Instruments should be labeled with the dates of the last and the next
scheduled maintenance.
Planned maintenance activities should follow a documented
instrument maintenance plan. Some vendors offer maintenance
contracts with services for preventive maintenance at scheduled time
intervals. A set of diagnostic procedures is performed and critical
parts are replaced to ensure ongoing reliable system uptime.
Unplanned activities that are necessary in addition to the
planned activities should be formally requested by the user of the
instrument or by the person who is responsible for the instrument.
An example of a request form is shown in figure 15.
Figure 15. Request form for unplanned maintenance
The reason for the requested maintenance should be entered as
well as a priority. All maintenance activities should be documented
in the instrument’s logbook. A template with examples is shown in figure 16.
Figure 16. Maintenance logs
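As a simple data-structure sketch of a logbook entry like those in figure 16, the snippet below collects the fields discussed above. The field names and the example values are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of an instrument logbook entry; field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class MaintenanceLogEntry:
    instrument_id: str
    entry_date: date
    activity: str           # e.g. "Replaced UV detector lamp"
    planned: bool           # planned vs. unplanned maintenance
    performed_by: str
    requalification: str    # e.g. "PQ test performed, passed"

if __name__ == "__main__":
    entry = MaintenanceLogEntry("HPLC-007", date(2009, 3, 2),
                                "Replaced UV detector lamp", False,
                                "J. Smith", "PQ test performed, passed")
    print(entry)
```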
Defective instruments should be either removed from the
laboratory area or clearly labeled as being defective. Procedures
should be available for most common problems such as defective UV
detector lamps. Procedures should also include information on whether and
what type of requalification is required. Uncommon problems, for
example, an HPLC pump that becomes defective without any obvious reason,
should be handled through a special procedure that guides users of
instruments through the repair process and reinstallation. In this
case the impact of the failure on previously generated data should be evaluated.
Change Control
Analytical instruments and systems go through many changes during
their lifetime. New hardware modules may be added to enhance
functionality, for example, an automated sampling system replaces a
manual one for unattended operation. Vendors may change the firmware
to a new revision to remove software errors or application software
may be upgraded to be compatible with a new operating system. Or a
complete system is moved to a newly designed laboratory. Some
changes are also initiated when new technologies are introduced, for
example, a standard HPLC pump is replaced by a rapid resolution pump
for higher sample throughput.
Any changes to instrument hardware, firmware and software should
follow written procedures and be documented. Requests for changes
should be submitted by users and authorized by the user’s supervisor
or department manager and by QA. Before any change request is
approved, business benefits should be compared with the risks a
change may bring. USP chapter <1058> states: “Implementing changes
may not always benefit users. Users should therefore adopt changes
they deem useful or necessary and should also assess the effects of
changes to determine what, if any, requalification is required”.
USP also recommends following the same 4Q model for changes as
for initial qualifications. This means:
- Specifications should be updated, for example in case a new
automated sampling system replaces a manual one.
- IQ documents should be updated, if a new firmware revision
is installed. Installation documents should also be updated when
a system is moved to a new laboratory.
- OQ documents with new test cases and test protocols should
be added if the software is upgraded with new functionality.
- PQ tests need to be updated to verify ongoing system
suitability of a new rapid resolution HPLC pump.
Before any change is approved and implemented, a thorough
evaluation should be made as to whether OQ tests should be repeated. Depending
on the change, an instrument may need no, partial or full retesting of
its operational specifications.
4. Validation of Software and Computer Systems
Validation of software and computer systems follows the same
principle as the qualification of instrument hardware. USP <1058>
has a short chapter on software validation. Software is divided into
three categories:
- Firmware integrated as chips into instrument hardware for
control through local user interface.
- Software for instrument control, data acquisition, and data
processing. An example would be a chromatography data system.
- Standalone software, for example a Laboratory Information
Management System (LIMS) package.
Most valuable is the statement about firmware: “Firmware is
considered as a component of hardware of the instrument itself.
Indeed the qualification of hardware is not possible without
operating its firmware. Thus when the hardware is qualified at the
user’s site, the integrated firmware is also essentially qualified.
No separate on-site qualification of the firmware is needed.” The
chapter further recommends recording the firmware version as part of
IQ and keeping it under change control.
For software categories two and three the chapter refers to the
4Q activities and recommends the FDA guide on software validation
for more detail (7).
In general, the effort to validate a computer system is higher
than for instrument hardware. Depending on what it is, the costs for
software validation and computer system validation can be 50% or
more of the costs for the software itself, with an increasing trend.
The main reason is that software offers more and more functionality.
All software functions with high impact on drug or API quality
should be validated. This does not mean correct functionality should
always be tested in the user’s laboratory, but as a minimum, all
functions should be specified and the need for testing should be assessed.
This chapter will go into more detail on what is important for
validation of software and computer systems. We will follow the same
4Q Lifecycle model as for instrument hardware. The main focus is on
relatively small and less complex software and computer systems. As
a model we will use a chromatographic data system (CDS) with no or
little customization. For more complex systems, for systems with
high level of customization and for any software development
activities, we refer to literature references with more detail, for
example, references 7-12.
Master and Project Planning
Software and computer system validation should be well planned. A
computer system validation master plan should not only describe
validation approaches but should also have an appendix with a list
of all computer systems used in a laboratory. Typically, inspectors
ask for a list when inspecting data that have been generated by a
computer system. The list should uniquely identify all computer
systems. It should include a short description of the system and
information on the location, the application and whether the system
is used in regulated areas. Inspectors also will ask for the risk
category of the system. The risk categories can be, for example,
high, medium or low. The categories should have been determined
following a documented process and should be justified. Criteria are
complexity and impact of the system on (drug) product quality. Most
likely the inspector will focus during the inspection on systems
that have been classified as high risk.
Figure 17 shows a template with examples on how to document a
computer system inventory. The list should also include information
on the state of validation. Non-validated systems should have a
timeframe for system validation. Now the importance of the risk
category becomes obvious: Non-validated systems classified as high
risk must not be used for any regulated work.
Figure 17. Template for computer system inventory
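To illustrate how such an inventory can be used, the sketch below flags high-risk systems that are not yet validated, following the rule stated above that these must not be used for regulated work. The entries, field names and risk labels are hypothetical examples.

```python
# Minimal sketch of a computer system inventory (compare figure 17).
# Entries and field names are hypothetical examples.
SYSTEMS = [
    {"id": "CDS-01", "description": "Chromatography data system", "location": "QC lab 1",
     "regulated_use": True, "risk": "high", "validated": True},
    {"id": "LIMS-01", "description": "LIMS", "location": "Site A",
     "regulated_use": True, "risk": "high", "validated": False},
    {"id": "BAL-SW-02", "description": "Balance software", "location": "QC lab 2",
     "regulated_use": True, "risk": "low", "validated": False},
]

def blocked_systems(systems):
    """High-risk, non-validated systems must not be used for regulated work."""
    return [s["id"] for s in systems
            if s["regulated_use"] and s["risk"] == "high" and not s["validated"]]

if __name__ == "__main__":
    print("Do not use for regulated work:", blocked_systems(SYSTEMS))
```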
The content items of project plans for computer systems are
similar to those for equipment hardware. However, because of the higher
complexity and higher validation effort, the document is longer. For
practical reasons table templates will not work well. Longer project
plans are written in text form and a hyperlinked table of contents
will help to find individual sections. Project plans should have a
section on risk assessment. It should describe how risk assessment
is planned and documented and what risk levels mean for the extent of testing.
Requirement specifications of software and computer systems
should be linked to test cases in some kind of traceability matrix.
This can be a table on its own or the link can be built into the
requirements table. A template with examples is shown in figure 18.
Each specification should have a unique ID code. Criticality of the
function can be defined as high, medium or low. Most important
questions to ask are: what happens if the function does not work at
all or if it produces wrong results? In the next column the test
priority is documented. Criticality plays a major role, but so does the
question of how the user’s environment or the user, for example
through a wrong user configuration, can influence correct functioning.
Figure 18. Example for requirement specifications of a
chromatographic data system
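As a minimal sketch of the traceability idea described above, the snippet below links requirement IDs to test cases and reports any high-criticality requirement without a linked test. The IDs, requirement texts and test case names are hypothetical examples.

```python
# Minimal sketch of a requirements/test traceability matrix (compare figure 18).
# IDs, criticality levels and test case names are hypothetical.
REQUIREMENTS = [
    {"id": "REQ-001", "text": "Acquire data from up to 4 instruments",
     "criticality": "high", "test_priority": "high", "test_cases": ["TC-010"]},
    {"id": "REQ-002", "text": "Electronic audit trail for all data changes",
     "criticality": "high", "test_priority": "high", "test_cases": ["TC-020", "TC-021"]},
    {"id": "REQ-003", "text": "Export reports to PDF",
     "criticality": "low", "test_priority": "low", "test_cases": []},
]

def untested_critical(requirements):
    """Return IDs of high-criticality requirements without any linked test case."""
    return [r["id"] for r in requirements
            if r["criticality"] == "high" and not r["test_cases"]]

if __name__ == "__main__":
    print("High-criticality requirements without tests:", untested_critical(REQUIREMENTS))
```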
Requirements for a CDS should not only cover the ability
to run a chromatographic analysis, but also other requirements
that are mainly related to system and data security and data
integrity. Such requirements are stated in FDA’s regulation for
electronic records and signatures: 21 CFR Part 11 (21) and in Annex
11 to the European GMPs (24). Very important is the electronic audit
trail function. Selected specifications for audit trail
functionality are shown in figure 19.
Figure 19. Selected specifications for electronic audit trail
A thorough vendor assessment is even more important for computer
systems than it is for instrument hardware. When instrument hardware
arrives in a laboratory it can be physically inspected for any
damage and specifications can be fully tested so users can get a
good impression of the quality. This is not so easy with software.
DVD covers always look very nice but they say nothing about the
quality of the product. Also, it is most likely impossible for a user
to test all functions of a complex commercial computer system.
Errors may not even become obvious during initial use, but only
later when certain functions are executed together.
During vendor assessment, the user should verify that the
software has been designed, developed and validated during and at
the end of development. The vendor’s capability and practice to
support the user before and during installation and as long as the
software is used should also be checked.
Figure 20 lists different assessment methodologies. Costs for the
assessment increase from 1 to 5. Vendor audits are most expensive
but could make sense when a company plans to purchase complex
computer systems for multiple laboratories or sites. The final
decision on the methodology should be based on risk assessment.
Criteria are vendor risk and product risk.
Figure 20. Methodologies for vendor assessment
Criteria for product risk are:
- System complexity
- Number of systems to be purchased
- Maturity of the system
- Influence on other systems
- Impact on (drug) product quality
- Impact on business continuity
- Level of customization
When users purchase software such as CDS they need support from a
specific vendor for a lengthy time to ensure retrieval and
readability of data for several years. Therefore, the future outlook
of a company and the ability to support data is important. The way
to make an assessment is to look at how long older data can be
supported by the current system. This in combination with the size
of the company and the position of the company in the target market
are good indications to assess the vendor risk. The selection
decision for a specific vendor should be justified and documented.
Key points for IQ of computer systems are to verify correct
software installation and to document all computer hardware,
software and configuration settings as the initial baseline
configuration. Recommended steps for installation of computer
systems are:
- Install the software on the computer following the manufacturer's
instructions.
- Verify correct software installation to make sure that all
files are correctly installed. Software with MD5-based checksum
routines is a good tool for this (a minimal sketch follows this list).
- Make a back-up of the software.
- Configure peripherals like printers and instrument modules.
- Identify and make a list with a description of all hardware,
operating system software and application software.
Identification of software should include the version number.
- Make system drawings, where appropriate.
- For networked systems: check communication with the network.
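As a minimal sketch of the checksum verification step mentioned above, the script below compares installed files against a vendor-supplied MD5 manifest. The manifest file name and its one-line-per-file format are assumptions for illustration; where the vendor provides its own installation verification utility, that should be used instead.

# Minimal sketch: verify installed files against a vendor-supplied MD5 manifest.
# The manifest format (one "md5_checksum relative/path" line per file) and the
# file names are assumptions; many vendors ship their own verification tools.
import hashlib
from pathlib import Path

def md5_of(path):
    """Return the MD5 hex digest of a file, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_installation(install_dir, manifest_file):
    """Compare each installed file's checksum with its manifest entry."""
    failures = []
    root = Path(install_dir)
    for line in Path(manifest_file).read_text().splitlines():
        if not line.strip():
            continue
        expected, rel_path = line.split(maxsplit=1)
        target = root / rel_path
        if not target.exists():
            failures.append("MISSING: " + rel_path)
        elif md5_of(target) != expected:
            failures.append("CHECKSUM MISMATCH: " + rel_path)
    return failures

if __name__ == "__main__":
    problems = verify_installation("C:/CDS", "cds_manifest.md5")
    print("Installation verified" if not problems else "\n".join(problems))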
As part of the installation process computer systems should be
well documented with:
- Computer hardware, e.g., manufacturer, model.
- Computer firmware, e.g., revision number.
- Operating system: vendor, product identifier and version.
- Application software: vendor, product identifier and version.
- Hardware peripherals, e.g., printers, CD-ROMs.
- Network hardware, firmware, software and cables.
- Documentation, e.g., operating manuals and specifications.
Information should be entered into a database and should be
readily available when contacting vendors to report a problem during
operation. Figure 21 shows a template with examples of such IQ
documentation.
Figure 21. Computer system documentation for IQ
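As a minimal sketch of how the baseline configuration could be captured as a structured record (the field names and example values are illustrative only, not prescribed by USP <1058>):

# Minimal sketch of a baseline configuration record kept as structured data.
# Field names and example values are illustrative, not prescribed by USP <1058>.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SystemConfiguration:
    system_id: str
    computer_hardware: str        # manufacturer and model
    firmware_revision: str
    operating_system: str         # vendor, product identifier and version
    application_software: str     # vendor, product identifier and version
    peripherals: list = field(default_factory=list)
    network_components: list = field(default_factory=list)

if __name__ == "__main__":
    baseline = SystemConfiguration(
        system_id="CDS-LAB1-001",
        computer_hardware="ExampleCorp Model X200",
        firmware_revision="1.02",
        operating_system="ExampleOS 10.0, build 19045",
        application_software="ExampleCDS 3.1",
        peripherals=["Laser printer", "HPLC pump module"],
        network_components=["Switch port 12, Cat6 cable"],
    )
    # Print the record as JSON so it can be stored in the equipment database.
    print(json.dumps(asdict(baseline), indent=2))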
Testing software and computer systems can be a complex task.
The extent of testing should be based on a justified and documented
risk assessment. The required effort mainly depends on:
- the criticality of the system for (drug) product quality
- the complexity of the system
- information on what has been tested by the vendor and the
related test environment
- the level of customization and configuration.
The most extensive tests are necessary if the system has been
developed for a specific user. In this case the user should test all
functions. For commercial off-the-shelf systems that come with a
validation certificate, only those functions should be tested that
are highly critical for the operation or that can be influenced by
the environment. An example is data acquisition from analytical
instruments over relatively long distances at high acquisition rates.
Specific user configurations should be documented and tested; for
example, correct settings of network IP addresses should be verified
through connectivity testing.
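A minimal sketch of such a connectivity test is shown below. The instrument names, IP addresses and TCP port are hypothetical placeholders; use the values documented during IQ.

# Minimal sketch of a connectivity check for configured instrument IP addresses.
# The instrument names, addresses and port are hypothetical placeholders.
import socket

INSTRUMENT_ADDRESSES = {
    "HPLC-01": ("192.168.10.21", 9100),
    "HPLC-02": ("192.168.10.22", 9100),
}

def check_connectivity(timeout_s=3.0):
    """Try to open a TCP connection to each configured instrument and report the outcome."""
    for name, (host, port) in INSTRUMENT_ADDRESSES.items():
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                print(f"{name}: connection to {host}:{port} OK")
        except OSError as exc:
            print(f"{name}: connection to {host}:{port} FAILED ({exc})")

if __name__ == "__main__":
    check_connectivity()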
When computer systems can control and obtain data from multiple
analytical instruments, tests should be conducted with a high number
of instruments transmitting data. The example in figure 22
illustrates a system that, according to its specifications, can
control four instruments. Correct functioning of the system should be
verified with all four instruments connected and delivering data at
high acquisition rates.
Figure 22. Example for high load testing
Test results should be formally documented. Figure 23 shows a
template and examples for a test protocol. It consists of three
parts. The header describes the test system, test objective and the
specification. The step-by-step
test procedure, expected results and actual results are documented
in the middle. The test protocol also has a column to document
evidence of testing. This can be a printout, a screen capture or a
handwritten record of visual observations. The lower part documents
the names of the test engineer and the reviewer and provides space
for their signatures.
Figure 23. Template for testing
PQ should be designed to challenge a complete system’s
performance. For a computerized analytical system this can mean, for
example, running system suitability testing, where critical key
system performance characteristics are measured and compared with
documented, preset limits.
PQ activities for CDS can include:
- A complete system test to prove that the application works
as intended. This can mean running a system suitability test or
analyzing a well characterized sample through the system and
comparing the results with results previously obtained.
- Regression testing: reprocessing data files and comparing
the results with previous results.
- Regular removal of temporary files.
- Regular virus scan.
- Auditing computer systems.
It is most efficient to use software for automated regression
testing. The software runs typical data sets through a series of
applications and calculates and stores the final results using
processing parameters defined by the user. During regression testing
the data are processed again and the results are compared with the
previously recorded results. Normally these tests take no more than
five minutes, but they give assurance that the key functions of the
system work as intended.
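A minimal sketch of the comparison step of such a regression test is given below. It assumes that results can be exported as two-column CSV files (parameter name and value); the reprocessing itself is done by the CDS and is outside the scope of the sketch.

# Minimal sketch of the comparison step of an automated regression test.
# Assumes reference and reprocessed results were exported as two-column CSV
# files (parameter name, numeric value); file names are illustrative.
import csv

TOLERANCE = 1e-6  # acceptable relative deviation between old and new results

def load_results(path):
    """Read a two-column CSV (parameter name, value) into a dictionary."""
    with open(path, newline="") as f:
        return {row[0]: float(row[1]) for row in csv.reader(f) if row}

def compare(reference_csv, reprocessed_csv):
    """Return a list of parameters whose reprocessed value deviates from the reference."""
    reference = load_results(reference_csv)
    reprocessed = load_results(reprocessed_csv)
    deviations = []
    for name, ref_value in reference.items():
        new_value = reprocessed.get(name)
        if new_value is None:
            deviations.append(f"{name}: missing in reprocessed results")
        elif abs(new_value - ref_value) > TOLERANCE * max(abs(ref_value), 1.0):
            deviations.append(f"{name}: {ref_value} -> {new_value}")
    return deviations

if __name__ == "__main__":
    differences = compare("reference_results.csv", "reprocessed_results.csv")
    print("Regression test PASSED" if not differences else "\n".join(differences))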
Configuration Management and Change Control
The purpose of configuration management is to be aware of the
lifetime composition of the system from planning to retirement. The
initial or baseline configuration of a system has been documented as
part of IQ.
Any changes to specifications, programming codes or the initial
set up of computer hardware should follow written change control
procedures and be documented. Changes may be initiated because
errors have been found in the program or because additional or
different software functions or hardware may be desirable. Requests
for changes should be submitted by users and authorized by the
user’s supervisor or department manager.
Figure 24 shows a form that can be used to request changes. The
form should include information on the priority and on business
benefits versus costs for additional validation tasks. This
information is important to assess if the change has a business
advantage and should be approved or rejected.
Figure 24. Example for change request form
The form should also include information about whether regulatory
notification is required and the roll back plan. A roll back plan is like a
contingency plan that becomes effective when a change introduces an error, which
causes the system to fail. The roll back plan ensures that the system can be
brought back to the last working system configuration.
At the end of validation, a summary report should be developed.
This should be a mirror of the validation project plan. It should be
organized in such a way that it has all the elements and follows the
outline of the validation plan. This makes it easy to check if all
plan items have been completed successfully. Deviations should be
documented, if there are any, together with corrective actions
and/or work around solutions. The report should include a statement
that the instrument or system is qualified or validated. After the
statement and the report have been signed by management, the product
can be released for operation.
Typically, the validation plan and the report are the first
documents inspectors want to see when they inspect a validation
project. If everything is well organized and documented, it may well
be that, after looking at both documents, inspectors get such a good
impression of the validation work that they will focus on other areas.
Validation of Existing/Legacy Systems
It frequently happens that existing instruments and systems are
not formally validated if they are not used in a regulated
environment. Sometimes these systems are called legacy systems. They
should be validated if they will be used in a regulated environment,
a process called retrospective validation. Inspectors expect the
same documented evidence that the system is suitable for its
intended use as for new systems.
We recommend following the same 4Q model for validation as for
new systems. The main difference is in the DQ phase. Most likely
there is not much information from the vendor available and vendors
cannot be assessed. There is also no need to develop requirement
specifications from scratch. The big advantage of an existing system
is that there is a lot of information from past use, and the
functions actually used are well known.
The most important task for an existing system is to document the
system functions used along with any comments about problems with
the functions. The system should be fully documented for IQ, just
like a new system.
OQ and PQ testing should focus on functions that caused problems
in the past. After successful OQ and PQ testing, a summary report
should be developed and signed by management. This means the system
can be released for use in a regulated environment.
Validation of Spreadsheet Applications
Spreadsheets are widely used in laboratories for data capture,
data evaluation and report generation. For example, they can be used
to correlate data from a single sample analyzed on different
instruments and to obtain long-term statistical information for a
single sample type. The processes may be automated, for example,
enabling the analytical data to be transferred, evaluated and
reported automatically. In all of these programs, analytical data
are converted using mathematical formulae.
Today the understanding is that the core programs themselves, such
as MS Excel, do not have to be validated by the user. What should be
validated are the custom calculations and program steps written by
the laboratory. There should be documentation on what the application
program, written by the user as an add-on to the core software, is
supposed to do, who defined and entered the formulae, and what the
formulae are.
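As a minimal sketch of such an independent check, the script below recalculates a typical spreadsheet result (mean, standard deviation and %RSD of replicate results) from a known test data set so it can be compared with the spreadsheet output during functional testing. The calculation is only an example; substitute the formulae actually documented for the spreadsheet under test.

# Minimal sketch: independently recalculate a documented spreadsheet result.
# The calculation (mean, standard deviation, %RSD) is an illustrative example.
import statistics

def expected_results(replicates):
    """Calculate the values the spreadsheet is specified to report for a data set."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample standard deviation (n-1)
    return {"mean": mean, "sd": sd, "rsd_percent": 100.0 * sd / mean}

if __name__ == "__main__":
    # Known test data set that is also entered into the spreadsheet during testing.
    test_data = [99.8, 100.2, 100.1, 99.9, 100.0]
    for name, value in expected_results(test_data).items():
        print(f"{name}: {value:.4f}")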
Development and validation of spreadsheets should follow a
standard operating procedure. Recommended steps are:
- A user drafts a proposal for a new spreadsheet. The proposal
should include a description of the problem that the spreadsheet
should solve, how it is handled now and how the spreadsheet can
improve the process.
- The system owner writes a project plan.
- The system owner collects input from anticipated users and
writes the requirement specifications.
- The programmer defines and documents required functions.
Functions are reviewed by users.
- The programmer develops design specifications, for example,
which formulas are used and the location of input/output cells.
For complex spreadsheets and for spreadsheets with VBA scripts,
the design specifications are reviewed by peers of the programmer.
- The programmer develops the worksheet and creates functional
tests. The code is reviewed by peers of the programmer
(structural testing) for spreadsheets with VBA scripts.
- The programmer writes a user manual. The system owner
develops a test protocol for users.
- Users load the spreadsheet onto their computer.
- Users test the spreadsheet and document the results.
5. Implementing USP Chapter <1058>
USP <1058> is the authoritative guide for analytical instrument
qualification. As a chapter with a number above 1000, it is generally
not mandated, and alternative approaches are possible. Nevertheless,
we recommend implementing it in FDA-regulated environments for
several reasons.
- The chapter is mandated if any USP monographs require using
qualified instruments for a specific analysis.
- FDA inspectors expect instruments to be qualified when used
for regulated testing.
- The applied 4Q qualification model has been well established
for over 10 years and many laboratories are familiar with it.
- The model is applicable to all types of instruments ranging
from simple devices to complex systems.
- The model is flexible and allows laboratories to define test
procedures and acceptance criteria according to the instrument's
intended use.
Because of the importance of the chapter and its advantages we
want to dedicate this last primer chapter to recommendations for
implementation of USP <1058>.
<1058> Instrument Groups
Analytical laboratories typically include a set of tools ranging
from simple nitrogen evaporators to complex automated instruments.
Depending on the complexity, the qualification efforts vary. The
concept is always the same, but the extent of testing and the
required amount of documentation will change. For example, a very
simple instrument may only need one or two minutes for physical
inspection and making a tick mark in a check list, while more
complex systems can easily take several days for full validation.
Because of the large variety of instruments, with different
qualification and documentation requirements, it can be very
complicated if each type of instrument is handled differently. To
simplify the process, USP recommends dividing all instruments into
three groups A, B, and C and to define for each group a specific set
of qualification tasks.
The standard lists examples for each group but at the same time
makes it clear that the categories are not only instrument specific
but also application specific. Examples for all three groups are
shown in figure 25.
Figure 25. Instrument groups A, B, and C
Group A includes standard equipment with no measurement capability. Examples
are nitrogen evaporators, magnetic stirrers, vortex mixers and centrifuges.
Group B includes standard equipment and instruments providing measured values,
for example, a balance. This group also includes equipment controlling physical
parameters, such as temperature, pressure or flow. Examples are water baths and
ovens. Group C includes instruments and computerized analytical systems.
Examples are computerized IR-spectrometers, HPLCs and mass spectrometers.
Allocating Instruments into Instrument Groups
USP recommends dividing analytical instruments into groups, but
does not include a matrix with instruments allocated to groups. On
the other hand, such a matrix is of utmost importance for a company;
otherwise discussions about the right allocation will start over and
over again whenever an instrument has to be qualified. Our
recommendations are:
- Develop a list with all analytical instruments and allocate
all of them into groups A, B or C.
- Develop a list of procedures for each group that should be
available and used when qualifying instruments.
- Develop a list of tasks for each group that should be
executed when qualifying instruments.
Within a company the lists, procedures and tasks for groups A, B,
and C should be developed at the highest possible level; and
preferably there should only be one set available. Having a
harmonized approach reduces subjectivity in qualification; it is
not only very efficient but also ensures consistency. We would
suggest putting examples of instrument categories and applications
in an equipment validation master plan. A harmonized approach is
also advantageous for external audits or inspections, especially
when several laboratories are inspected by the same inspector in the
same time frame.
Procedures and Qualification Protocols for the Three Groups
The number of procedures required increases from groups A to C.
Each company should have a document that specifies which type of
procedures should be developed. A template with examples is shown in
Figure 26. The point here is not to follow the examples exactly, and
this list does not originate from the USP, but it is very important
to have such a list available within an organization.
Figure 26. Recommended procedures for groups A, B and C
The number of documents increases from A to C. An operating
instruction and a procedure for reporting problems are enough for
group A devices. Group B requires additional procedures for
qualification, change control and preventive maintenance and repair.
Additional procedures for group C are specific to computer systems,
for example, back-up, security, and system administration.
A company should also provide information on which qualification
steps should be executed. An example is shown in figure 27. Some
recommendations are from the USP chapter. For example, it states for
group A: “The manufacturers specification of basic functionality is
accepted as user requirements. Conformance of group A equipment with
user requirements may be verified and documented through visual
inspection”. This means a simple checklist can be enough for group A
equipment.
Figure 27. Recommended deliverables for groups A, B and C
The differences between B and C are mainly in the areas of vendor
qualification and risk assessment. For group B we only document the
vendor's quality system and keep the certificate as a record. For
computerized systems in group C we should have a vendor assessment
program. Risk assessment is also recommended for C. The number of
required documents for B and C does not vary much. However, the size
and format of the documents will be different. For example, a
qualification plan for group B can be documented on a one or two-page
template, whereas for group C this could easily be a 20-page text
document.
Responsibilities, Communication and Training
Implementation of USP <1058> should be communicated to everybody
in the organization who is involved in qualification and validation
of instruments and systems. People should receive training on the
USP chapter, on why the company decided to implement the chapter and
what it means for day-to-day operation. The training should be
documented and supervisors should follow-up to verify effectiveness.
Vendors should also be informed about the implementation of USP
<1058>, and they should be advised to study the chapter and follow up
to fulfill vendor requirements. USP <1058> has a section on roles and
responsibilities for users, the quality assurance unit, and
developers, manufacturers and vendors.
Users of analytical equipment have the ultimate responsibility
for instrument operations and data quality. It is an FDA GMP
requirement that analysts must sign the analytical test result and
therefore also have the ultimate responsibility to make sure
instruments and computer systems are qualified and validated. Users
should be adequately trained in the instrument’s use. Training can
be provided by anybody who is proven to be competent, for example by
vendor representatives, 3rd parties or internal resources.
The fact that users have ultimate responsibilities for instrument
qualification does not mean that they have to conduct all
qualification activities. For example, IQ and OQ can be delegated to
the instrument vendors or to a 3rd party organization. On the other
hand, PQ should be performed by users because the tests are
application specific and require a good knowledge of the
application. An advantage of using vendors for IQ/OQ is that they have
all the necessary experience and procedures and even more
importantly, they bring along calibrated tools that are required for
the qualification. Vendors with worldwide presence typically also
offer qualification services around the globe. This is important for
companies operating in multinational environments. Whoever is doing
the qualification work should be trained and training certificates
should be filed with the qualification documents.
The role of the Quality Assurance unit is the same as for any
other regulated activity. QA personnel are responsible for assuring
that the qualification process meets compliance requirements and
conforms to internal procedures. QA personnel should also train or
advise user groups on regulations and lead or help with the vendor
assessment.
Developers, Manufacturers and Vendors
Developers and manufacturers are responsible for the design of
the instrument or software program and for providing specifications
to the user. They should validate processes used in development and
manufacturing as well as during the entire support period.
Manufacturers should allow user audits and share validation
processes, test procedures and test results with regulated users.
Manufacturers and vendors should also notify all users about
hardware and software defects discovered after a product’s release.
Furthermore, manufacturers or vendors should provide user training,
installation and qualification support and repair services.
References
- U.S. FDA, Title 21 of the U.S. Code of Federal Regulations:
21 CFR 211, Current good manufacturing practice for finished
pharmaceuticals
- United States Pharmacopeia, Chapter <1058>, Analytical
Instrument Qualification, Rockville, USA, 2008
- P.Bedson and M.Sargent, The development and application of
guidance on equipment qualification of analytical instruments,
Accreditation and Quality Assurance, 1 (6), 265-274, 1996
- P. Coombes, Laboratory Systems Validation Testing and
Practice, DHI Publishing, LTD, Raleigh, USA 2002
- L. Huber, Validation and Qualification in Analytical
Laboratories, Interpharm, Informa Healthcare, New York, USA,
1998, Second revision 2007
- M. Freeman, M.Leng, D.Morrison and R.P.Munden from the UK
Pharmaceutical Analytical Sciences Group (PASG), Position Paper
on the qualification of analytical equipment, Pharm. Techn.
Europe, 40-46, November 1995
- United States Food and Drug Administration (FDA), General
Principal of Software Validation: Final Guidance for Industry
and FDA Staff, Rockville, MD, Jan 2002
- GAMP Good Automated Manufacturing Practice, Guide for
Validation of Automated Systems, Version 4, 2001
- GAMP Good Automated Manufacturing Practice, A Risk-based
Approach for Compliant GxP Computerized Systems, Version 5: 2008
- GAMP Good Practice Guide for Validation of Laboratory
Computerized Systems
- L. Huber, Validation of Computerized Analytical and
Networked Systems, Interpharm, Englewood, CO, USA, April 2002
- Parenteral Drug Association (PDA), Validation and
qualification of computerized laboratory data acquisition
systems (LDAS), Technical paper 31, 2000
- C.C.Chan, H. Lam, Y.C.Lee, X.M. Zhang, Analytical Method
Validation and Instrument Performance Verification, Wiley
Interscience, Hoboken USA, 2004
- Pharmaceutical Inspection Convention Scheme (PIC/S), Good
practices for Computerised Systems in Regulated ‘GxP’ Environments
- U.S. FDA GLP, Good laboratory practice regulations for
non-clinical studies, Final rule, U.S. FDA, Rockville, Md., USA,
Title 21 CFR, Part 58, 1979
- Organization of Economic Co-operation and Development, Good
laboratory practice in the testing of chemicals, final report of
the Group of Experts on Good Laboratory Practice, 1982 (out of print)
- Organization of Economic Co-operation and Development, The
OECD principles of good laboratory practice, Series on
principles of good laboratory practice and compliance
monitoring, number 1, GLP consensus document environment
monograph No. 45, Paris, 1998
- Commission of the European Communities, The rules governing
medicinal products in the European Union, Volume 4, Good
manufacturing practices: Medicinal products for human and
veterinary use, 2003
- ICH Q7A, Good Manufacturing Practice Guidance for Active
Pharmaceutical Ingredients
- ISO/IEC 17025, General requirements for the competence of
testing and calibration laboratories
- U.S. FDA, Title 21 of the U.S. Code of Federal Regulations:
21 CFR 11 "Electronic Records; Electronic Signatures;
- Qualification and validation, Annex 15 to the EU Guide to
Good Manufacturing Practice, 2001
- Computerised Systems, Annex 11 to the EU Guide to Good
Manufacturing Practice
- U.S. FDA, ORA Laboratory Procedure: Volume II - Equipment, ORA-LAB.5.5