Why the categories matter
GAMP 5 categories help regulated teams scale validation effort to the actual risk and complexity of a computerized system. They drive how much specification, testing, supplier evidence, and ongoing change assessment a system needs across its lifecycle.
ISPE published GAMP 5 in 2008 and refreshed the framework as GAMP 5 Second Edition in 2022. The Second Edition reinforces critical thinking and risk-based effort allocation, in line with FDA's Computer Software Assurance (CSA) draft guidance.
One important note: GAMP Category 2 (firmware) was retired in 2008 when the original GAMP 5 was issued. The current categories are 1, 3, 4, and 5.
The GAMP 5 categories
Each category implies a different validation strategy. The table below summarizes scope, validation depth, and typical examples.
| Category | Description | Typical validation depth | Examples |
|---|---|---|---|
| Category 1 | Infrastructure software. Established platform components used to host or run other applications. | Qualification of the infrastructure environment; reliance on supplier evidence; minimal application-level testing. | Operating systems, databases, network monitoring tools, virtualization platforms. |
| Category 3 | Non-configured products. Commercial off-the-shelf software used as supplied, with no business-process configuration. | Risk-based functional testing of the intended use; supplier assessment; documented requirements. | Standard firmware, basic instrument software, simple COTS tools used out-of-the-box. |
| Category 4 | Configured products. Commercial software that is configured to fit a specific business process without writing custom code. | Configuration specification, requirements traceability, risk-based functional and integration testing, change control on configuration. | Most modern eQMS, LIMS, ERP, MES, and document-management platforms. |
| Category 5 | Custom applications. Bespoke software written for a specific use, or substantial custom code on top of a configured product. | Full lifecycle controls: design specifications, code review, unit testing, system testing, performance testing, traceability throughout. | In-house data analytics tools, custom integrations, bespoke clinical or laboratory workflows. |
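The table above can be restated as a simple lookup, which some teams find useful when scoping a validation plan. This is an illustrative sketch only: the category names and deliverable lists summarize the table for planning purposes and are not an authoritative mapping.

```python
# Illustrative sketch: GAMP 5 software categories mapped to the typical
# validation deliverables summarized in the table above. Category 2 is
# retired, so it is deliberately absent.
GAMP_CATEGORIES = {
    1: {
        "name": "Infrastructure software",
        "deliverables": ["infrastructure qualification", "supplier evidence"],
    },
    3: {
        "name": "Non-configured product",
        "deliverables": ["documented requirements", "supplier assessment",
                         "risk-based functional testing"],
    },
    4: {
        "name": "Configured product",
        "deliverables": ["configuration specification", "traceability matrix",
                         "risk-based functional and integration testing",
                         "configuration change control"],
    },
    5: {
        "name": "Custom application",
        "deliverables": ["design specifications", "code review", "unit testing",
                         "system testing", "performance testing",
                         "end-to-end traceability"],
    },
}

def deliverables_for(category: int) -> list[str]:
    """Return the typical deliverable list for a current GAMP 5 category."""
    if category not in GAMP_CATEGORIES:
        raise ValueError(f"Category {category} is not a current GAMP 5 category")
    return GAMP_CATEGORIES[category]["deliverables"]
```

Note how the deliverable list grows with category number: a Category 5 custom layer demands more lifecycle evidence than the Category 4 platform it sits on.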
How categories affect eQMS evaluation
Most modern eQMS platforms are evaluated as Category 4 (configured products). They ship as commercial software with configuration applied to match the customer's process: workflow steps, role permissions, document templates, training paths, and review cycles.
That categorization has practical consequences for buyers:
- Validation effort focuses on intended use and configuration, not the underlying platform code.
- Configuration specification (CS) becomes a primary validation artefact.
- Supplier evidence (development controls, release notes, supplier audit) must be available for the customer to leverage.
- Change control after go-live should distinguish vendor releases from customer configuration changes.
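The last point, separating vendor releases from customer configuration changes, can be sketched as a simple routing rule. The change taxonomy and route names below are hypothetical; a real eQMS change-control procedure defines its own.

```python
# Hypothetical sketch: routing a post-go-live change to the right
# assessment path based on where the change originates.
from dataclasses import dataclass

@dataclass
class Change:
    description: str
    origin: str  # "vendor" or "customer" (illustrative taxonomy)

def assessment_path(change: Change) -> str:
    """Pick the impact-assessment route for a post-go-live change."""
    if change.origin == "vendor":
        # Vendor release: review release notes against the validated state,
        # then regression-test the affected intended-use functions.
        return "vendor release impact assessment"
    if change.origin == "customer":
        # Configuration change: update the configuration specification,
        # trace to requirements, and test the changed configuration.
        return "configuration change control"
    raise ValueError(f"Unknown change origin: {change.origin}")
```

The value of the split is that each route pulls in different artefacts: vendor releases lean on supplier evidence, while configuration changes update the customer's own configuration specification and traceability.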
Buyers occasionally encounter Category 5 work when they add custom integrations or in-house extensions on top of a Category 4 platform. Those custom layers carry a higher validation burden than the base platform itself. Note: the categorization of a specific product should be confirmed with the supplier as part of vendor assessment.
What buyers should ask vendors
- How is the platform categorized under GAMP 5, and what evidence supports that categorization?
- Which validation artefacts are shipped (URS template, RA template, IQ/OQ/PQ protocols, traceability matrix, ATS, Part 11 checklist)?
- How are configuration and customer-specific changes documented, traced, and tested?
- How are vendor releases assessed for impact on the customer's validated state?
- Where do supplier-side controls end and customer-side controls begin?
For commercial selection, pair this guide with FDA validated eQMS and how to evaluate an eQMS.
Relationship to CSA and Annex 11
GAMP 5 is the framework most teams use to translate regulatory expectations (FDA Part 11, EU GMP Annex 11, PIC/S Annex 11) into a sized validation effort. FDA's Computer Software Assurance draft guidance reinforces the same principle: critical thinking and risk-based effort, not exhaustive scripted testing for every requirement.
Read more in CSV vs CSA in pharma and the Annex 11 validation playbook.
