Response
Root Cause
The AI model integration tests for version 1.1.0.0 were executed using a local client script interacting with our Preproduction cloud environment. While the software version under test was documented, the explicit hardware, software, and configuration specifications of both the client execution environment and the cloud infrastructure hosting the API were inadvertently omitted from the final Verification Test Report.
Correction
We have updated the Software Test Report (R-TF-012-035-Software-Test-Report.mdx) to explicitly define the "System Under Test" (AWS EC2 Preproduction infrastructure, DynamoDB tables, and Docker container baseline) and the "Test Execution Environment" (Local Client hardware, OS, and Python configuration) used to generate the results in master.csv.
Furthermore, to ensure reproducibility, the updated report now explicitly documents:
- Test Software Versioning: The custom integration script has been placed under strict version control, referencing the exact GitHub repository and commit SHA used during execution.
- Test Data Versioning: The test dataset has been frozen as an immutable snapshot within our versioned S3 evidence bucket (v1.1.0.0).
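As an illustration of the kind of environment record now captured alongside the results, the sketch below collects the client-side details in a machine-readable form. The function name, field names, and S3 path pattern are illustrative placeholders, not the mandated template; the commit SHA and dataset URI are supplied from the version-controlled sources described above.

```python
import json
import platform
import sys

def capture_execution_environment(commit_sha: str, dataset_uri: str) -> dict:
    """Collect a minimal record of the client execution environment.

    `commit_sha` and `dataset_uri` come from version control and the
    frozen S3 snapshot; the structure here is a hypothetical example.
    """
    return {
        "os": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
        "python_version": sys.version.split()[0],
        "test_script_commit": commit_sha,
        "test_dataset_snapshot": dataset_uri,
    }

record = capture_execution_environment(
    commit_sha="<commit SHA from version control>",
    dataset_uri="s3://<evidence-bucket>/v1.1.0.0/",
)
print(json.dumps(record, indent=2))
```

A record of this shape can be archived next to the test results so that each evidence file carries its own execution context.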
The execution results previously referred to as master.csv during the audit have been formally archived in the QMS under the controlled nomenclature ai-models-integration-tests.csv, as referenced in Annex D.
Preventive Action
The Software Test Plan (R-TF-012-033-Software-Test-Plan.mdx) has been updated to mandate that all future verification campaigns explicitly record both the client execution environment and the remote infrastructure baseline in the test report and evidence manifest.
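One way to enforce such a mandate mechanically is a pre-run check that refuses to proceed if the evidence manifest omits a required field. The field names below are hypothetical illustrations, not the controlled template defined in the test plan:

```python
# Illustrative pre-run manifest check; field names are hypothetical.
REQUIRED_MANIFEST_FIELDS = {
    "system_under_test",           # remote infrastructure baseline
    "test_execution_environment",  # client hardware, OS, Python config
    "test_software_commit",        # integration-script commit SHA
    "test_data_snapshot",          # frozen dataset URI
}

def missing_manifest_fields(manifest: dict) -> list[str]:
    """Return the mandated fields absent from the supplied manifest."""
    return sorted(REQUIRED_MANIFEST_FIELDS - manifest.keys())

# Example: a manifest recording only the remote baseline fails the check.
missing = missing_manifest_fields({"system_under_test": "AWS EC2 Preproduction"})
print(missing)
```

Wiring such a check into the test harness would turn the documentation requirement into a hard gate rather than a manual review step.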