Variability of Test Results

Tests on any material produce results with a degree of scatter. In particular, tests on geological materials – natural products such as drainage gravel – produce highly variable results. In the case of hydraulic conductivity (permeability), the scatter can cover an order of magnitude.

Perhaps because geosynthetics are manufactured products, it is less well recognised that test results on them are also variable. The purpose of this note is to illustrate the variability of common geosynthetics tests.

Because of the nature of ABG’s business, this note uses in-plane flow capacity to illustrate the variability of test results; it should be remembered that reliable, accurate measurement of the flow capacity of natural drainage materials such as gravels is notoriously difficult. The note discusses only the inherent variability of the test itself. It is vital to recognise that permitted variations in test procedures can produce substantially different results: in the specific case of in-plane flow, the boundary conditions (hard or soft platens), the hydraulic gradient and the water temperature can all substantially alter the reported flow values, and apparently similar international procedures can also produce differing results. Every laboratory, as part of its accreditation, has to produce an ‘uncertainty budget’: an itemised table of the components that contribute to the uncertainty in measurement results, which identifies, quantifies and characterises each independent variable.

Information sources

Historically, values for material properties were published without any indication of whether these were nominal values, average values, minimum values or some other “representative” value. As the geosynthetics industry matured, and in response to a more formal treatment of uncertainty in other aspects of geotechnical engineering, European manufacturers began to stipulate average values and a tolerance on those values. Even at that early stage the issue of variability of test results was recognised, with at least one major manufacturer noting the greater variability of hydraulic parameters “due to the variations between test laboratories”.

There is now more quantitative data on the subject. As new Standards have been introduced, validation exercises have been undertaken to assess the variability of the test methods. As certain aspects of the industry have come under greater regulatory control, a number of accreditation programs have been introduced. The data used to illustrate this note is taken from the Geosynthetic Accreditation Institute – Laboratory Accreditation Program (GAI-LAP), which publishes summaries of inter-laboratory comparison testing for a wide range of geosynthetics tests. GAI is a division of the Geosynthetic Institute (GSI) and, as it is a US organisation, the tests are largely to ASTM procedures. Table 1 (below) outlines the variations in a selection of the tests related to ABG’s business.

The exact meanings of Repeatability and Reproducibility are defined below. However, as a preliminary comment, it is worth highlighting the GSI view that “Well behaved tests are those with an Uncertainty less than 10”. In other words, none of the tests in Table 1 would be described by GSI as “well behaved”. They are, of course, tests on which the choice of geosynthetics is routinely based. There is no reason to suggest that European procedures are any less uncertain.

The Uncertainty values in the survey cover 105 tests and generally range up to about 50%, although there is one test with an Uncertainty of 110%.

Table 1: Variations in selected laboratory test results

Notes:
1. Although the test standards compared by the GAI-LAP were ASTM tests, there is no reason to expect that EN ISO tests would be any less variable.

Repeatability Limit

The Repeatability Standard Deviation, given the symbol Sr, is the standard deviation of results obtained under Repeatability Conditions, i.e. using the same method on identical test items in the same laboratory by the same operator using the same equipment within a short interval of time. The Repeatability Limit, given the symbol r, is defined as 2.8 x Sr. The difference between two tests carried out under Repeatability Conditions should be less than r.

In simple terms, “Repeatability” is about exactly the same test being done twice. The definition of the Repeatability Limit means that, using the data from the GAI-LAP tests, the results of two “duplicate” tests of the in-plane water flow capacity of a material should be expected to differ by up to 19% and could differ by 53% without indicating that the samples differed.
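
To put numbers to this (using the in-plane flow figures quoted in this note, i.e. a Repeatability Standard Deviation of around 0.19, or 19% of the mean value), the Repeatability Limit is:

r = 2.8 x Sr = 2.8 x 0.19 ≈ 0.53

In other words, two nominally identical tests in the same laboratory can legitimately return flow values differing by roughly half the mean value before the difference points to anything other than test scatter.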

Reproducibility Conditions

Conditions in which test results are obtained using the same method on identical test items in different laboratories, with different operators using different equipment. A formal definition of Reproducibility is “precision under reproducibility conditions”.

Reproducibility Limit

The Reproducibility Standard Deviation, given the symbol SR, is the standard deviation of results obtained under Reproducibility Conditions. The Reproducibility Limit, given the symbol R, is defined as 2.8 x SR. The difference between two tests carried out under Reproducibility Conditions should be less than R.

In simple terms, “Reproducibility” is about identical material being tested in the same way at two different test laboratories. Using the same in-plane flow example from the GAI-LAP results, but now looking at the Reproducibility figures, the results of two tests of the in-plane water flow capacity of a material should be expected to differ by up to 32% and could differ by 90% without indicating that the samples differed.
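
Again putting numbers to this, with the Reproducibility Standard Deviation of around 0.32 (32% of the mean) implied by the figures above:

R = 2.8 x SR = 2.8 x 0.32 ≈ 0.90

so the same material tested at two different laboratories could return in-plane flow values differing by up to about 90% of the mean without either result being “wrong”.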

Implications for Design and Construction Practice

The first and perhaps most important implication is that all parties need to be aware of the degree of variability in test results. It is important to recognise that this is variability in test results: variability in the product itself is only one component, and the variability inherent in the test procedure is another.

Consider the complexity of the in-plane flow test. The test involves: cutting a test specimen to a precise size; inserting that specimen in a test rig in such a way that the volume of leakage past the specimen is acceptably small; applying a defined confining pressure through soft platen foam which is within a tolerance range for compressibility; avoiding obstruction of the inlet and outlet to the specimen; feeding water with an acceptable level of dissolved gas, at an acceptable temperature and at a defined pressure gradient, through the specimen in such a way as to remove trapped air; and then measuring the flow rate to an appropriate accuracy.

This is not an exhaustive list of the factors which may influence the result, but it is sufficient to indicate the complexity. There is, of course, also variability in the products themselves, between test rigs and between operators. All of this suggests that the Repeatability Standard Deviation of 0.19 measured for in-plane flow testing is a matter for congratulation rather than criticism! Even so, the variability is much lower than that experienced with natural drainage materials.

This variability must be accommodated in the design and construction of projects using geosynthetics. In particular, it may be unwise to give excessive weight to a single test result from however reputable a source. Designers, installers, inspectors and regulators should all recognise that a single “dissenting” test result may be “correct” but it may also be simply a reflection of the variability of the testing. Multiple “dissenting” results from the same reputable source may simply reflect the limits on reproducibility (between test houses) of the procedure.

The majority of geosynthetics tests are defined as Index Tests, in that the property tested is used for manufacturers’ quality control and not for design (e.g. mass per unit area). Other tests are defined as Performance Tests, in that the property tested can be used in design (e.g. in-plane flow to EN ISO 12958, which is equivalent to transmissivity to ASTM D4716 with soft foam rubber platens). Designs must be based on minimum values, i.e. the mean minus a tolerance (where the tolerance is defined as two standard deviations), and must acknowledge that a small proportion of test results (2.5%) will be expected to fall outside that tolerance. For more information ABG can provide a technical paper on this topic (see the Tech Note on the normal distribution). A good design will also apply several reduction factors to account for effects such as long-term creep and biological, chemical and installation damage. Finally, the design will include a global factor of safety to cover unknowns in the design method and the input values.
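
As a purely illustrative sketch (the figures below are hypothetical and are not taken from any ABG datasheet), the approach described above might run as follows for a drainage geocomposite with a mean tested flow capacity of 1.0 l/(m·s) and a standard deviation of 0.1 l/(m·s):

minimum (characteristic) value = mean - 2 x standard deviation = 1.0 - (2 x 0.1) = 0.8 l/(m·s)
allowable long-term value = 0.8 / (reduction factors for creep, chemical/biological effects and installation damage, say 1.4 x 1.1 x 1.1) ≈ 0.47 l/(m·s)

The design would then compare this allowable value with the required flow and apply a global factor of safety to cover unknowns in the design method and input values.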

ABG Service

Testing is a routine part of the ABG quality control process, and ABG will assist designers and contractors with site-specific testing.

Additional information