PEER Methodology

EEG Collection and Z Scores

The first step of the assessment is to collect 21-channel, awake, eyes-closed, digital electroencephalographic (EEG) recordings from subjects who have either washed out their medications for five half-lives or who are currently medication free. The results are reviewed in raw form by an electroencephalographer to ensure that there are no abnormalities that would prevent the data from being processed through the referenced-EEG database. The EEG is then screened to remove any “artifacts” that may exist in the EEG record, such as muscle twitches, eye blinks, and periods of drowsiness.
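As an illustrative sketch only (PEER’s actual screening criteria are not specified here; the epoch length and 100 µV cut-off below are assumptions), automated artifact screening can be approximated by rejecting fixed-length epochs whose amplitude exceeds a threshold:

```python
import numpy as np

def screen_artifacts(eeg, fs, epoch_sec=2.0, amp_uv=100.0):
    """Split a single-channel EEG trace (in microvolts) into fixed-length
    epochs and keep only those whose peak absolute amplitude stays below
    the rejection threshold (high-amplitude excursions are typical of
    eye blinks and muscle twitches)."""
    n = int(epoch_sec * fs)
    epochs = [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]
    return [e for e in epochs if np.max(np.abs(e)) < amp_uv]
```

In practice, as the text notes, artifact review also relies on visual inspection by an electroencephalographer; automated thresholding only supplements that reading.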

Neurometric analysis involves computation of a series of measures that mathematically describe the EEG. These measures are then compared with a database of “normal” EEGs. There are approximately 1,200 measures derived from the EEG component wavelengths and amplitudes. These measures fall into four main categories: power, coherence, symmetry, and frequency. Power is the sum of the amplitudes of the wavelengths in each band, computed on an absolute and relative basis; relative power indicates the percentage of total power in each band. Coherence measures the synchronization of electrical activity between two channels; in mathematical terms, coherence is the phase shift between similar wavelengths at the two channels. Symmetry measures the ratio of power between a symmetrical pair of electrodes, and, lastly, frequency measures the average frequency of the EEG component wavelengths within each band.
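These quantities can be illustrated in code. The sketch below is not the PEER implementation; the band edges, the Welch spectral estimator, and the function names are conventional assumptions:

```python
import numpy as np
from scipy.signal import welch, coherence

# Conventional band edges in Hz (assumed, not PEER's actual definitions)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 25)}

def band_powers(x, fs):
    """Absolute power per band (area under the power spectral density)
    and relative power (each band's share of the total)."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    df = f[1] - f[0]
    absolute = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        absolute[name] = float(np.sum(pxx[mask]) * df)
    total = sum(absolute.values())
    relative = {name: p / total for name, p in absolute.items()}
    return absolute, relative

def channel_coherence(x, y, fs, lo, hi):
    """Mean magnitude-squared coherence between two channels in a band:
    1.0 means perfectly synchronized activity, 0.0 means none."""
    f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f < hi)
    return float(np.mean(cxy[mask]))
```

For example, a recording dominated by a 10 Hz rhythm would show its largest relative power in the alpha band, and a channel is perfectly coherent with itself.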

Most neurometric features are highly non-Gaussian in their characteristics. For this reason, they are log-transformed to make the distribution more normal (Gaussian) in nature. Many quantitative EEG features also vary consistently with age. To account for the difference between the age of the patient and the age of the subjects in the normative database, these quantitative EEG features are age-regressed using a linear regression equation to yield a “standard-age” quantitative EEG feature. The comparison of the actual values of the neurometric variables with the norms is expressed as a Z score, which is defined as:

Z = (observed value – normative mean) / standard deviation
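The log transform, age regression, and Z score can be combined in a short sketch. The regression coefficients here stand in for values fitted on the normative database, and the function names are illustrative:

```python
import numpy as np

def z_score(observed, norm_mean, norm_sd):
    """Z = (observed value - normative mean) / standard deviation."""
    return (observed - norm_mean) / norm_sd

def age_adjusted_log_z(feature, age, slope, intercept, resid_sd):
    """Log-transform a quantitative EEG feature, subtract the linear
    age trend fitted on the normative database, and express the
    residual as a Z score against the normative residual spread."""
    predicted = slope * age + intercept      # "standard-age" expectation
    return z_score(np.log(feature), predicted, resid_sd)
```

A feature falling exactly on the normative age trend yields Z = 0; a value one normative standard deviation above it yields Z = 1.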

Development of Pattern Variables

Neurometric analysis outputs approximately 2,400 variables (known as univariables) that describe the EEG. To make these data usable, referenced-EEG transforms them into a smaller set of multivariables (or pattern variables). These pattern variables preserve the information contained in the set of quantitative EEG univariables while retaining some degree of physical interpretation. As such, the data are not simply “mined” to come up with combinations of variables that are indicative of one state or another; instead, variables are combined according to anatomical location. In some cases, factor analysis is employed to give greater weight to those univariables that preserve the largest amount of the total information of all the univariables in an anatomic group. In other cases, the univariables in an anatomic group are combined in a nonlinear fashion to increase the separation of observed clusters within the data. At present there are 74 pattern variables.
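The factor-analysis weighting described above can be sketched as taking the leading eigenvector of the normative covariance within an anatomic group, so that univariables carrying the most shared variance receive the largest weights. This is an assumed, simplified stand-in for the actual PEER procedure:

```python
import numpy as np

def factor_weights(normative):
    """Leading-eigenvector weights from the covariance of the normative
    samples of one anatomic group's univariables (rows = subjects,
    columns = univariables)."""
    cov = np.cov(normative, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    w = vecs[:, -1]              # eigenvector of the largest eigenvalue
    if w.sum() < 0:              # fix the arbitrary sign convention
        w = -w
    return w / np.linalg.norm(w)

def pattern_variable(univariables, weights):
    """One pattern variable = weighted combination of the Z-scored
    univariables in one anatomic group."""
    return float(np.dot(weights, univariables))
```

Strongly correlated univariables in a group end up with similar, same-sign weights, so the pattern variable summarizes their shared information in a single number.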

Correlation of Pattern Variables with Patient Outcomes

The referenced-EEG variables for historical subjects with known positive and negative clinical outcomes to various psychotropic medications are examined in order to develop a model that allows the prospective determination of a patient’s likely responsivity to these medications. The variables are examined by stratifying the distribution according to the individual medication responsivities represented. Before this apparent relationship is utilized, the appropriateness of the pattern variables is checked: tests of skewness and kurtosis are conducted for each pattern variable to ensure that the original variable distribution is Gaussian. Having ensured a Gaussian distribution, statistical methods can be applied to determine whether the pattern variable value for the current test subject belongs to the distribution represented by a particular medication or to the distribution defined by some other group (the rest of the population). This procedure is done for all medications represented in the database and for all of the pattern variables that serve as indicators for those medications. The weightings are then averaged to calculate a “score” for each medication.
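One way to read this procedure is as a two-Gaussian membership test averaged over a medication’s indicator variables. The sketch below assumes equal priors, known distribution parameters, and illustrative tolerances for the Gaussianity screen; it is not the actual PEER scoring model:

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def is_near_gaussian(sample, skew_tol=1.0, kurt_tol=1.0):
    """Crude Gaussianity screen: sample skewness and excess kurtosis
    must both be near zero (tolerances are illustrative)."""
    return abs(skew(sample)) < skew_tol and abs(kurtosis(sample)) < kurt_tol

def membership_weight(value, resp_mean, resp_sd, rest_mean, rest_sd):
    """Posterior probability (equal priors) that `value` was drawn from
    the responder distribution rather than the rest-of-population
    distribution, both modeled as Gaussians."""
    p_resp = norm.pdf(value, resp_mean, resp_sd)
    p_rest = norm.pdf(value, rest_mean, rest_sd)
    return p_resp / (p_resp + p_rest)

def medication_score(values, params):
    """Average the membership weights over all pattern variables that
    serve as indicators for one medication."""
    return float(np.mean([membership_weight(v, *p) for v, p in zip(values, params)]))
```

A pattern-variable value lying near the responder mean and far from the rest-of-population mean contributes a weight well above 0.5, pulling the averaged score toward “responder.”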

Calibration against Patient Records

The final step is to calibrate this score against actual patient records to determine what level of score translates into a specified likelihood of response to the medication. For purposes of communication, three levels of responsiveness were created for the PEER Online service. The first is “sensitive” or “S”; this level indicates that the medication, or group of medications, produced a positive outcome to treatment in 80% or more of cases. “Intermediate” or “I”, the second level, indicates that a response was seen in more than 35% but fewer than 80% of cases. The third level, “resistive” or “R”, indicates that a response to the medication is seen in fewer than 35% of cases. In other terms, if we formulate Ho (the null hypothesis) such that Ho is true when the patient is not actually responsive to the medication, then the model is calibrated to allow for a type I error rate of no more than 20% in the region indicated as “S” and a type II error rate of no more than 35% in the region designated as “R”.
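The three levels map directly to the response-rate bands given above; a minimal sketch:

```python
def responsiveness_level(response_rate):
    """Map an observed response rate to the PEER reporting levels:
    "S" (sensitive)    - positive outcome in 80% or more of cases
    "I" (intermediate) - response in more than 35% but fewer than 80%
    "R" (resistive)    - response in fewer than 35% of cases
    """
    if response_rate >= 0.80:
        return "S"
    if response_rate > 0.35:
        return "I"
    return "R"
```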

To calibrate the report generator model against these standards, the outcomes database is queried for all subjects’ responses that were not used in the construction of the actual model. This dataset is known as the validation sample. It is then divided into two subsets: the tuning sample and the final validation sample. To complete the model development, the scoring model is run against the tuning sample and the resulting distribution of scores is compared against the known responses. Thresholds for scores are then empirically set to implement the standards of S, I, and R described above, which are common in such medical reports as, for example, antibiotic sensitivity results. Final validation of the model is made by running the process, complete with the thresholds that were set, against the final validation sample. To preserve the fully prospective nature of this validation, no adjustment of the model parameters, including the thresholds, is made after this process. If the results of this “run” meet the specifications established by the previous clinical correlations, the model is ready to be used for new patients.
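The threshold-setting step on the tuning sample can be sketched as a greedy search for score cut-offs that satisfy the S and R response-rate standards. This is an assumed illustration, not PEER’s actual calibration; a production procedure would additionally enforce non-overlapping regions and confidence bounds:

```python
def calibrate_thresholds(scores, responded, s_rate=0.80, r_rate=0.35):
    """On the tuning sample, grow the "S" region downward from the top
    of the score range while its response rate stays >= s_rate, and the
    "R" region upward from the bottom while its response rate stays
    < r_rate. Returns the (upper, lower) score cut-offs found."""
    pairs = sorted(zip(scores, responded), reverse=True)
    upper = lower = None
    hits = 0
    for i, (s, r) in enumerate(pairs, 1):      # top of the range downward
        hits += r
        if hits / i >= s_rate:
            upper = s
        else:
            break
    hits = 0
    for i, (s, r) in enumerate(reversed(pairs), 1):  # bottom upward
        hits += r
        if hits / i < r_rate:
            lower = s
        else:
            break
    return upper, lower
```

By construction, subjects scoring at or above the upper cut-off responded in at least 80% of tuning-sample cases, and those at or below the lower cut-off in fewer than 35%.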

The PEER methodology does not take the patient’s diagnosis into account when offering objective data on any specific medication. Treatment-response research has shown, and industry experience corroborates, that diagnosis is often an unreliable predictor of the treatment most likely to succeed for the individual patient. This is one of the fundamental improvements that correlating shared quantitative EEG features with long-term clinical outcomes brings to the practice of psychiatry.

This process can provide objective neurophysiologic data that helps clinicians avoid the unnecessary risk of trial-and-error psychopharmacology and improves the efficiency of treatment, thus reducing suffering and medical costs. The report is unique to each patient’s quantitative EEG features.