EPA methanol hazard assessment less health protective

The Environmental Protection Agency (EPA) IRIS program has issued its draft Toxicological Review of Methanol for public comment and external peer review. The document assesses the noncancer health effects of chronic methanol exposure, and updates the health-based limits for both oral/ingestion (reference dose, RfD) and inhalation (reference concentration, RfC) exposure. The updated RfD and RfC values are more permissive (less health protective) by 5-fold and 10-fold, respectively, compared with the 2011 draft assessment that was never finalized.

(EPA defines a chronic RfD or RfC as an estimate, with uncertainty spanning perhaps an order of magnitude, of a continuous exposure that is likely to be without an appreciable risk of deleterious effects during a lifetime.)

These new RfD and RfC values are based on the same scientific studies as the 2011 draft values: studies of reduced brain weight and skeletal malformations in rats exposed to methanol during fetal development. So why are they less health protective? Because EPA adjusted the data using physiologically based pharmacokinetic (PBPK) modeling – mathematical models that predict how a chemical moves through the body: how it is absorbed by different tissues and organs, how it is metabolized, and the rate at which it is excreted. PBPK models help risk assessors compare differences between animals and humans in chemical absorption, distribution, metabolism, and excretion (ADME), so that more accurate predictions of toxicity can be made when extrapolating from animal data to human risk.
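To make the ADME idea concrete, here is a toy one-compartment pharmacokinetic model in Python. This is only an illustration of the general concept, with made-up parameter values; EPA's PBPK models track many tissue compartments and metabolic pathways and are far more detailed.

```python
# Illustrative only: a toy one-compartment pharmacokinetic model with
# hypothetical parameter values, NOT EPA's PBPK model. It captures the
# basic ADME idea: the dose is absorbed into the body at one rate and
# eliminated (metabolized/excreted) at another.

import math

def blood_concentration(dose_mg, vd_l, ka, ke, t_hr):
    """Blood concentration (mg/L) at time t_hr after an oral dose,
    assuming first-order absorption (ka, per hr) and elimination
    (ke, per hr) into a single compartment of volume vd_l (liters)."""
    return (dose_mg / vd_l) * (ka / (ka - ke)) * (
        math.exp(-ke * t_hr) - math.exp(-ka * t_hr)
    )

# Hypothetical inputs, not measured methanol values:
print(round(blood_concentration(500, 42, 1.0, 0.2, 2.0), 2))  # ~7.96 mg/L
```

A PBPK model chains many such compartments (liver, brain, fat, blood) together, which is what allows an internal dose metric in an animal to be translated into an equivalent human exposure.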

In EPA’s own words, for the 2013 assessment, “Specifically, new EPA models were developed or modified from existing models, to allow for the estimation of monkey and rat internal dose metrics. A human model was also developed to extrapolate those internal metrics to inhalation and oral exposure concentrations that would result in the same internal dose in humans (human equivalent concentrations [HECs] and human equivalent doses [HEDs]).” Clear as mud?

Oral reference dose (RfD):

| Year of assessment | Point of Departure | Health endpoint | Uncertainty factor | RfD |
| --- | --- | --- | --- | --- |
|  | 500 mg/kg-day (NOAEL) | reduced brain weight | UF of 1000 | 0.5 mg/kg-day |
| 2011 draft | oral POD 38.6 mg/kg-day (BMDL) | decreases in brain weight in male rats | UF of 100 | 0.4 mg/kg-day |
| 2013 draft | 43.1 mg/L | formation of extra cervical ribs | UF of 100 | 2 mg/kg-day (0.43 mg/L before PBPK) |

Inhalation reference concentration (RfC):

| Year of assessment | Point of Departure | Health endpoint | Uncertainty factor | RfC |
| --- | --- | --- | --- | --- |
| 2011 draft | 182 mg/m3 | decreases in brain weight | UF of 100 | 2 mg/m3 |
| 2013 draft | 858 mg-hr/L | decreased brain weight | UF of 100 | 20 mg/m3 (8.58 mg-hr/L before PBPK) |
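The arithmetic connecting these values is simple division: the reference value is the point of departure divided by the uncertainty factor. For the 2013 draft, the result of that division is an internal dose metric (blood methanol), which PBPK modeling then converts into a human-equivalent external dose. A short Python sketch using the numbers above:

```python
# Reference value = point of departure (POD) / uncertainty factor (UF).
# The numbers come from the tables above; for the 2013 draft, the quotient
# is an internal dose metric that PBPK modeling then converts to a
# human-equivalent external dose.

def reference_value(pod, uf):
    return pod / uf

print(reference_value(38.6, 100))  # 0.386 -> 2011 oral RfD, rounded to 0.4 mg/kg-day
print(reference_value(43.1, 100))  # 0.431 mg/L internal metric before PBPK (oral)
print(reference_value(858, 100))   # 8.58 mg-hr/L internal metric before PBPK (inhalation)
```

The PBPK conversion step is where the 2013 values become less health protective, which is why the modeling deserves close scrutiny.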

I’m not a modeling expert – I cannot evaluate how EPA has used PBPK modeling to weaken the RfC and RfD values for methanol noncancer health hazards. Like most non-modeling folks, I’m left to trust EPA scientists, EPA managers, and the external peer reviewers (see the appendices that accompany the 2013 assessment to read the reviewers’ comments and the Agency response). Unfortunately, I have trust issues. So, below I make some suggestions to EPA and other federal agencies about how the models they use can be made more understandable to the inquiring and engaged public, stakeholders, reporters, and others.

EPA should use models to fill data gaps when setting protective regulations, not to overturn observations from laboratory or epidemiological studies. For example, if multiple well-conducted workplace epidemiology studies show that formaldehyde is linked to leukemia, don’t dismiss the evidence because a mathematical model does not predict leukemia (yes, industry has argued this).

Results from PBPK and other mathematical models are similar to a critical review of the overall scientific literature, in that they incorporate the results of many studies to generate an overall summary of the data. As such, models can be highly subjective, depending on the bias of the sponsor and any financial interests it may have in the resulting regulations; agreement among models developed by different sectors should therefore be considered. Scientific journals have recognized this reality, and many have strict guidelines against allowing financially interested parties to write scientific review papers.

Thorough documentation should be provided of the underlying assumptions used to build the model framework and to define the model’s parameters. Model parameters are terms in the model that are fixed during a model run or simulation, but can be changed in different runs to conduct sensitivity analyses or to calibrate the model. Parameters can be quantities estimated from sample data to characterize a statistical population, or known mathematical constants. For example, a pharmacokinetic model will build in assumptions about physiological quantities such as breathing rate, heart rate, body size, and diet composition. While average values for these physiological parameters may be built into the model, in fact they may differ widely among people of different ages, life stages, genetic backgrounds, and health status. Broad assumptions about genetic differences and variations in enzymatic activity may be reasonable, but may not adequately reflect sensitive populations of interest. While a model may reflect an average person of average age and average weight, the Agency is obligated to regulate toxic agents in food and drinking water so as to protect the most vulnerable members of the population. Only with explicit documentation of the assumptions built into the model framework can quantitative estimates be made of how well a model captures sensitive individuals.
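As a concrete illustration of why parameter assumptions matter, the sketch below (with invented values, not taken from any EPA model) shows how the same external dose produces a very different internal dose when a single physiological parameter, metabolic clearance, departs from the built-in "average":

```python
# Hypothetical illustration: the same external dose rate yields different
# internal doses when one physiological parameter (clearance) varies, e.g.
# due to reduced enzyme activity. Values are invented for illustration.

def steady_state_conc(dose_rate_mg_hr, clearance_l_hr):
    """Steady-state blood concentration (mg/L) = dose rate / clearance."""
    return dose_rate_mg_hr / clearance_l_hr

average_adult = steady_state_conc(10, clearance_l_hr=5.0)
slow_metabolizer = steady_state_conc(10, clearance_l_hr=1.25)

print(average_adult, slow_metabolizer)  # 2.0 8.0 -> a 4-fold higher internal dose
```

A model calibrated only to the "average adult" row would miss the 4-fold higher internal dose in the sensitive individual, which is exactly the kind of gap that explicit documentation of assumptions would expose.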

Any known limitations in the model should be stated. For example, the model may be limited to characterization of a single toxic agent or family of agents, but may be wholly inadequate for modeling synergistic effects of multiple chemicals. While a particular physiological model may include detailed algorithms for some metabolic pathways, other known metabolic pathways may not be included in the model framework. Such limitations, when known, must be explicitly documented.

Appropriate uses, and inappropriate uses, of the model should be stated. For example, while a model of a watershed may be designed to capture still water such as a reservoir, it may be inadequate to model flowing water such as a nearby river or stream. A list of appropriate uses accompanying each model would aid EPA in the appropriate application of models in the regulatory decision making process, and would aid the public in assessing the confidence in a regulatory decision under specific conditions or for specific exposed populations.

Results of uncertainty and sensitivity analyses and validation tests should be provided. Uncertainty analysis investigates the effects of database uncertainties, uncertainty associated with model parameter assumptions, uncertainty regarding appropriate application of the model, and other potential sources of error in the model. Sensitivity analysis examines how strongly model outputs respond to changes in individual inputs. Variability results from the inherent randomness of certain input parameters, such as fluctuations in seasonal conditions or genetic variation among populations. Model corroboration describes all the methods, both qualitative and quantitative, for evaluating the degree to which a model corresponds to reality. We can learn from the EPA IRIS Draft Trichloroethylene Health Risk Assessment (2001), where two different PBPK models yielded risk estimates for lung tumors that differed by 15-fold.
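A common way to carry out such an uncertainty analysis is Monte Carlo simulation: sample an uncertain parameter many times and report the distribution of model outputs rather than a single "average" answer. A minimal sketch, with hypothetical numbers:

```python
# Minimal Monte Carlo uncertainty analysis. One model parameter (clearance)
# is treated as uncertain and sampled repeatedly; the output is then a
# distribution of predicted blood concentrations. All values are
# hypothetical, chosen only to illustrate the method.

import math
import random

random.seed(1)  # reproducible sampling

def steady_state_conc(dose_rate_mg_hr, clearance_l_hr):
    """Steady-state blood concentration (mg/L) = dose rate / clearance."""
    return dose_rate_mg_hr / clearance_l_hr

# Clearance assumed lognormal: median 5 L/hr, geometric std dev ~1.65
concentrations = sorted(
    steady_state_conc(10, random.lognormvariate(math.log(5.0), 0.5))
    for _ in range(10_000)
)

median = concentrations[len(concentrations) // 2]
p95 = concentrations[int(0.95 * len(concentrations))]
print(f"median ~ {median:.1f} mg/L, 95th percentile ~ {p95:.1f} mg/L")
```

Reporting the upper percentiles, not just the median, is what lets a risk assessment speak to sensitive individuals rather than the average person.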

Proprietary models must be extremely well documented, if used. EPA may not rely on a proprietary model without providing substantial detail about its built-in assumptions and calculation methodologies. As the D.C. Circuit has held, “EPA has undoubted power to use predictive models so long as it explains the assumptions and methodology used in preparing the model and provides a complete analytic defense should the model be challenged.” Appalachian Power Co. v. EPA, 249 F.3d 1032, 1052 (D.C. Cir. 2001). In so doing, EPA can keep proprietary data confidential, but must provide the public with enough information about the underlying facts supporting its decision to show that it has engaged in reasoned decision making.[i]

All models are wrong, but some are useful. The practical question is how wrong do they have to be before they are no longer useful. (quoted and paraphrased from G.E.P. Box and N.R. Draper, 1987)

However, given the need to protect public health and the environment in the face of uncertainty, inadequate data, and wrong or less-wrong models, EPA should heed the wise words of Dr. Lorenzo Tomatis, former Director of the International Agency for Research on Cancer (IARC), who warned that, “In the absence of absolute certainty, rarely if ever reached in biology, it is essential to adopt an attitude of responsible caution, in line with the principles of primary prevention, the only one that may prevent unlimited experimentation on the entire human species”.

(More documentation, and information on how the public can comment on the methanol assessment, is here.)

[i] See NRDC v. Thomas, 805 F.2d 410, 418 n.13 (D.C. Cir. 1986) (rejecting challenge to confidential data where EPA “combine[d] the data from the confidential reports . . . and plot[ted] them on a graph that was made part of the public record . . . then discussed the plotted data at some length”).  

About the Author

Jennifer Sass

Senior Scientist, Health program
