Estimating Probability

The Logistic Response Function

Our goal in using logistic regression is to estimate the probability that an instance belongs to a class of interest. For example, we may use it to identify which loan applications are likely to default based on historical experience. We need an approach that meets the following two objectives:

  1. We need to relate one or more predictor variables to the outcome variable.

  2. The function of those predictor variables must produce a value in the range 0 <= P <= 1 so that it can be used as an estimate of a probability.

Without some modification, the standard MLR equation does not meet these objectives, but a function called the logistic response function (LRF) does.

Figure 10.3: The Logistic Response Function

By using this function, we can plug any value of z, from negative infinity to positive infinity, into the LRF and it will produce a value in the range 0 <= P <= 1. So, if we can fit predictor information into a function that produces a value for z, we can use that function to predict z, and from z we can calculate probabilities. The chart below shows how different values of z along the horizontal axis are related to values of P on the vertical axis.

Figure 10.4: Relationship between z and P

When z is zero, P is 0.50. When the value of z is positive, P is greater than 0.50. And when z is negative, the value of P is less than 0.50.
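
To make this mapping concrete, here is a minimal Python sketch of the LRF, assuming the standard form P = 1 / (1 + e^(-z)); the z values below are arbitrary illustrations.

```python
import math

def logistic_response(z):
    """Logistic response function: maps any real z to a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

# Arbitrary z values ranging from very negative to very positive
for z in [-10, -2, 0, 2, 10]:
    print(f"z = {z:>3}: P = {logistic_response(z):.4f}")
```

Running this shows P near 0 for very negative z, exactly 0.5 at z = 0, and P near 1 for very positive z, matching the chart above.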

As shown in equation (1) below, the z in the logistic response function is referred to as the logit. The logit represents the combined contribution of all the predictor variables used in the model. Therefore, by substituting in a function for z that captures the contributions of all the predictor variables, we can solve for the probability.

Figure 10.5: Logit and the LRF

As shown in equation (2), z can be set equal to the traditional additive multiple linear regression equation, which includes an intercept and betas (regression coefficients) that are estimated to reflect how a one-unit change in each input variable contributes to z. As with fitting an MLR model, if we have a known value of z and known values of the x variables for each instance, we can estimate the regression coefficients.

Since logit and the regression equation are equal, we can substitute the regression equation in for z as shown in equation (3).
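
For reference, the three equations discussed above take the following standard forms (written here in LaTeX notation; the figures above contain the original renderings):

```latex
\begin{align*}
P &= \frac{1}{1 + e^{-z}} && \text{(1)} \\
z &= \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_q x_q && \text{(2)} \\
P &= \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \dots + \beta_q x_q)}} && \text{(3)}
\end{align*}
```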

Steps for Estimating P

Here are the steps for estimating P. These steps are performed for each unique combination of predictor variable classes in the sample data.

Step 1

Calculate the odds that the response variable is a success by counting the number of successes and failures (i.e., Odds = successes : failures). Then convert the odds to single-number form by dividing successes by failures.
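
For example, if a hypothetical group contained 6 successes and 4 failures (made-up counts purely for illustration), the odds would be 6 : 4, or 6 / 4 = 1.5 in single-number form.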

Step 2

Calculate logit, aka z, by taking the natural log of the odds. In Microsoft Excel, the LN() function returns the natural log, so the value of logit can be calculated as z = LN(Odds).
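
A minimal Python equivalent of Steps 1 and 2, using the same made-up counts from the example above:

```python
import math

successes, failures = 6, 4    # hypothetical counts for one group
odds = successes / failures   # odds in single-number form: 1.5
z = math.log(odds)            # logit; equivalent to Excel's LN(Odds)
print(odds, round(z, 3))      # 1.5 0.405
```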

The example below demonstrates Steps 1 and 2 for a situation involving the probability of an individual having a second heart attack based on whether they received anger management training.

Figure 10.6: Heart Attack

At this point, we have a z value for each unique combination of values for predictor variables.

Step 3

Using equation (2) above, estimate the regression coefficients (betas). This is like replacing y with z in the classic regression equation. Since we have z and the values of the predictor variables, we can use data mining software to estimate the values of the coefficients.
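
As a sketch of what the software does in this step (not the specific tool used in the course), here is one way to estimate the betas in Python with ordinary least squares, assuming we already have a z value and predictor values for each unique combination; all numbers are hypothetical:

```python
import numpy as np

# Hypothetical data: one binary predictor (x1) and the logit computed for each group
X = np.array([[0.0],
              [1.0]])             # predictor value for each unique combination
z = np.array([0.405, -0.847])     # made-up logits from Steps 1 and 2

# Add a column of ones for the intercept, then solve for [beta0, beta1]
X_design = np.column_stack([np.ones(len(X)), X])
betas, *_ = np.linalg.lstsq(X_design, z, rcond=None)
print(betas)                      # approximately [ 0.405 -1.252 ]
```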

Step 4

Having calculated the beta coefficients in Step 3, we are now able to use those coefficients in the equation. We can plug in specific values of x1 through xq (i.e., predictor variables) into the regression equation and solve for the z value associated with those values of the predictor variables.
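
Continuing the hypothetical numbers above, Step 4 is just the intercept plus each coefficient times its predictor value:

```python
beta0, beta1 = 0.405, -1.252   # made-up coefficients from Step 3
x1 = 1.0                       # new instance's value for the predictor
z = beta0 + beta1 * x1         # logit for this instance
print(round(z, 3))             # -0.847
```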

Step 5

Having calculated z in the previous step, we can plug z into the LRF to calculate the probability of success. 
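
Step 5 then plugs that z into the LRF; a small sketch using the same hypothetical value:

```python
import math

z = -0.847                    # logit from Step 4 (hypothetical)
p = 1 / (1 + math.exp(-z))    # logistic response function
print(round(p, 3))            # about 0.3
```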

The following video walks through this process.

Step 6

Convert P to a classification by using a cutoff value. The default cutoff is 0.5, but we can change this if needed. If the estimated P is greater than or equal to the cutoff, classify the instance as a success; otherwise, classify it as a failure.
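
A sketch of Step 6 with the default 0.5 cutoff; the cutoff is a tunable choice, not a fixed rule:

```python
def classify(p, cutoff=0.5):
    """Convert an estimated probability into a class label."""
    return "success" if p >= cutoff else "failure"

print(classify(0.3))               # failure
print(classify(0.3, cutoff=0.25))  # success (a lower cutoff flags more cases)
```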

The image below shows these steps applied to an example of estimating P.

Figure 10.7: Heart Attack