
SL2: Analysis of Prostate Cancer dataset - linear regression model


Aim of the analysis

We want to examine the correlation between the level of prostate-specific antigen (PSA) and a number of clinical parameters in men who are about to receive a radical prostatectomy.
# data analysis and wrangling
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# import random as rnd

# visualization
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline

# machine learning
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error


import statsmodels.api as sm

from IPython.display import Image # to visualize images
from tabulate import tabulate # to create tables

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
/kaggle/input/prostate-data/prostate.data
/kaggle/input/prostate-data/tab.png
/kaggle/input/prostate-data/tab2.png

1. Open the webpage of the book “The Elements of Statistical Learning”, go to the “Data” section and download the info and data files for the dataset called Prostate

2. Open the file `prostate.info.txt`

  • Hint: please, refer also to Section 3.2.1 (page 49) of the book “The Elements of Statistical Learning” to gather this information

  • How many predictors are present in the dataset? What are those names?

    There are 8 predictors:
    1. lcavol (log cancer volume)
    2. lweight (log prostate weight)
    3. age
    4. lbph (log of the amount of benign prostatic hyperplasia)
    5. svi (seminal vesicle invasion)
    6. lcp (log of capsular penetration)
    7. gleason (Gleason score)
    8. pgg45 (percent of Gleason scores 4 or 5)

  • How many responses are present in the dataset? What are their names?

    There is one response: lpsa (log of prostate-specific antigen).

  • How did the authors split the dataset in training and test set?

    They randomly split the dataset (97 observations, in prostate.data) into a training set of size 67 and a test set of size 30.
    The file prostate.data contains a column train with values T/F indicating whether an observation is used for training (T) or for testing (F).

3. Open the file `prostate.data` by a text editor or a spreadsheet and have a quick look at the data

  • How many observations are present?

    There are 97 observations in total.

  • Which is the symbol used to separate the columns?

    The columns are separated by the tab character (\t).

4. Open Kaggle, generate a new kernel and give it the name “SL_EX2_ProstateCancer_Surname”

5. Add the dataset `prostate.data` to the kernel

  • Hint: See the Add Dataset button on the right
  • Hint: use import option “Convert tabular files to csv”

6. Run the first cell of the kernel to check if the data file is present in folder ../input

7. Add to the first cell new lines to load the following libraries: seaborn, matplotlib.pyplot, sklearn.linear_model.LinearRegression

Tip: We also import pandas.

8. Add a Markdown cell on top of the notebook, copy and paste in it the text of this exercise and provide in the same cell the answers to the questions that you get step-by-step.

9. Load the Prostate Cancer dataset into a Pandas DataFrame variable called "data"

  • How can you tell Python to use the right separator between columns?

    I need to specify sep='\t' in the read_csv method in order to load the dataset correctly.

Data acquisition #

# Load the Prostate Cancer dataset
data = pd.read_csv('../input/prostate-data/prostate.data',sep='\t')
# data.info() # to check if it is correct

10. Display the number of rows and columns of variable data

num_rows, num_columns = data.shape

print(f"The number of rows is {num_rows} and the number of columns is {num_columns}.")
The number of rows is 97 and the number of columns is 11.

11. Show the first 5 rows of the dataset

data.head(5)

|   | Unnamed: 0 | lcavol | lweight | age | lbph | svi | lcp | gleason | pgg45 | lpsa | train |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | -0.579818 | 2.769459 | 50 | -1.386294 | 0 | -1.386294 | 6 | 0 | -0.430783 | T |
| 1 | 2 | -0.994252 | 3.319626 | 58 | -1.386294 | 0 | -1.386294 | 6 | 0 | -0.162519 | T |
| 2 | 3 | -0.510826 | 2.691243 | 74 | -1.386294 | 0 | -1.386294 | 7 | 20 | -0.162519 | T |
| 3 | 4 | -1.203973 | 3.282789 | 58 | -1.386294 | 0 | -1.386294 | 6 | 0 | -0.162519 | T |
| 4 | 5 | 0.751416 | 3.432373 | 62 | -1.386294 | 0 | -1.386294 | 6 | 0 | 0.371564 | T |

12. Remove the first column of the dataset which contains observation indices

print("* Before to drop the first column:")
data.info()
#data1 = data1.drop(columns='Unnamed: 0')
#data1 = data1.drop(labels=['Unnamed: 0'],axis=1)

print("\n* After having dropped the first column:")
data = data.drop(data.columns[0],axis=1) # without specifying the name of the variable (axis=0 indicates rows, axis=1 indicates columns)
data.info()

data['train'].value_counts()
* Before to drop the first column:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 97 entries, 0 to 96
Data columns (total 11 columns):
 #   Column      Non-Null Count  Dtype  
---  ------      --------------  -----  
 0   Unnamed: 0  97 non-null     int64  
 1   lcavol      97 non-null     float64
 2   lweight     97 non-null     float64
 3   age         97 non-null     int64  
 4   lbph        97 non-null     float64
 5   svi         97 non-null     int64  
 6   lcp         97 non-null     float64
 7   gleason     97 non-null     int64  
 8   pgg45       97 non-null     int64  
 9   lpsa        97 non-null     float64
 10  train       97 non-null     object 
dtypes: float64(5), int64(5), object(1)
memory usage: 8.5+ KB


* After dropping the first column:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 97 entries, 0 to 96
Data columns (total 10 columns):
 #   Column   Non-Null Count  Dtype  
---  ------   --------------  -----  
 0   lcavol   97 non-null     float64
 1   lweight  97 non-null     float64
 2   age      97 non-null     int64  
 3   lbph     97 non-null     float64
 4   svi      97 non-null     int64  
 5   lcp      97 non-null     float64
 6   gleason  97 non-null     int64  
 7   pgg45    97 non-null     int64  
 8   lpsa     97 non-null     float64
 9   train    97 non-null     object 
dtypes: float64(5), int64(4), object(1)
memory usage: 7.7+ KB


T    67
F    30
Name: train, dtype: int64
Warning: Be careful not to run the above cell twice, otherwise it will drop another column.
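A re-run-safe alternative (a minimal sketch, assuming the index column kept the name Unnamed: 0 from the CSV import) is to drop that column by name and ignore the error if it is already gone:

# Idempotent version: errors='ignore' makes the cell safe to re-run
data = data.drop(columns=['Unnamed: 0'], errors='ignore')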

13. Save column train in a new variable called "train" and having type `Series` (the Pandas data structure used to represent DataFrame columns), then drop the column train from the data DataFrame

# Save "train" column in a Pandas Series variable
train = data['train']
# train = pd.Series(data['train'])

# Drop "train" variable from data
data = data.drop(columns=['train'])
Warning: Be careful not to run the above cell twice; in that case you would have already dropped 'train'.

14. Save column lpsa in a new variable called "lpsa" and having type `Series` (the Pandas data structure used to represent DataFrame columns), then drop the column lpsa from the data DataFrame and save the result in a new DataFrame called predictors

  • How many predictors are available?

    There are 8 predictors available for each one of the 97 observations.

# Save "lpsa" column in a Pandas Series variable
lpsa = data['lpsa']
# lpsa = pd.Series(data['lpsa'])

# Drop "train" variable from data
data = data.drop(columns=['lpsa'])
predictors = data
predictors.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 97 entries, 0 to 96
Data columns (total 8 columns):
 #   Column   Non-Null Count  Dtype  
---  ------   --------------  -----  
 0   lcavol   97 non-null     float64
 1   lweight  97 non-null     float64
 2   age      97 non-null     int64  
 3   lbph     97 non-null     float64
 4   svi      97 non-null     int64  
 5   lcp      97 non-null     float64
 6   gleason  97 non-null     int64  
 7   pgg45    97 non-null     int64  
dtypes: float64(4), int64(4)
memory usage: 6.2 KB

15. Check the presence of missing values in the `data` variable

Since all the columns are numerical, we do not need to distinguish between numerical and categorical variables.
  • How many missing values are there? In which columns?

    We have no missing values.

  • Which types do the variables have?

    lcavol, lweight, lbph and lcp are float64, while age, svi, gleason and pgg45 are int64.

print("For each variable in data, we have no missing values:\n\n",data.isna().sum())
For each variable in data, we have no missing values:

lcavol     0
lweight    0
age        0
lbph       0
svi        0
lcp        0
gleason    0
pgg45      0
dtype: int64

16. Show histograms of all variables in a single figure

  • Use argument figsize to enlarge the figure if needed
predictors.hist(grid=True, figsize=(20,8), layout=(2,4)) # DataFrame.hist creates its own figure

plt.suptitle("Not Standardized data", fontsize=25)
plt.tight_layout()
plt.show()

png

17. Show the basic statistics (min, max, mean, quartiles, etc. for each variable) in data

data.describe()

|       | lcavol | lweight | age | lbph | svi | lcp | gleason | pgg45 |
|-------|--------|---------|-----|------|-----|-----|---------|-------|
| count | 97.000000 | 97.000000 | 97.000000 | 97.000000 | 97.000000 | 97.000000 | 97.000000 | 97.000000 |
| mean  | 1.350010 | 3.628943 | 63.865979 | 0.100356 | 0.216495 | -0.179366 | 6.752577 | 24.381443 |
| std   | 1.178625 | 0.428411 | 7.445117 | 1.450807 | 0.413995 | 1.398250 | 0.722134 | 28.204035 |
| min   | -1.347074 | 2.374906 | 41.000000 | -1.386294 | 0.000000 | -1.386294 | 6.000000 | 0.000000 |
| 25%   | 0.512824 | 3.375880 | 60.000000 | -1.386294 | 0.000000 | -1.386294 | 6.000000 | 0.000000 |
| 50%   | 1.446919 | 3.623007 | 65.000000 | 0.300105 | 0.000000 | -0.798508 | 7.000000 | 15.000000 |
| 75%   | 2.127041 | 3.876396 | 68.000000 | 1.558145 | 0.000000 | 1.178655 | 7.000000 | 40.000000 |
| max   | 3.821004 | 4.780383 | 79.000000 | 2.326302 | 1.000000 | 2.904165 | 9.000000 | 100.000000 |

18. Generate a new DataFrame called dataTrain and containing only the rows of data in which the train variable has value “T”

  • Hint: use the loc attribute of DataFrame to access a group of rows and columns by label(s) or boolean arrays
  • How many rows and columns does dataTrain have?
dataTrain = data.loc[train == 'T'] # Obviously, len(idx)==len(dataTrain) is True!

# # Alternative way:
# # 1. Get the indexes corresponding to train ==  'T'
# idxTrain = train.loc[train == 'T'].index.tolist() 
# # 2. Access to interesting rows with .iloc()
# dataTrain = data.iloc[idxTrain]

print(f"dataTrain contains {dataTrain.shape[0]} rows and {dataTrain.shape[1]} columns.")
dataTrain contains 67 rows and 8 columns.

19. Generate a new DataFrame called dataTest and containing only the rows of data in which the train variable has value “F”

  • How many rows and columns does dataTest have?
dataTest = data.loc[train == 'F']

print(f"dataTest contains {dataTest.shape[0]} rows and {dataTest.shape[1]} columns.")
dataTest contains 30 rows and 8 columns.

20. Generate a new Series called lpsaTrain and containing only the values of variable lpsa in which the train variable has value “T”

  • How many values does lpsaTrain have?
# Create a new Series variable
lpsaTrain = lpsa.loc[train == 'T']

# # Another way to define it
# data_all = pd.read_csv('../input/prostate-data/prostate.data',sep='\t')
# lpsaTrain = data_all.loc[train == 'T']['lpsa']

# # To check if it is correct:
# idxTrain = train.loc[train == 'T'].index.tolist()
# lpsaTrain == lpsa.iloc[idxTrain]

print(f"lpsaTrain has {lpsaTrain.shape[0]} values.")
lpsaTrain has 67 values.

21. Generate a new Series called lpsaTest and containing only the values of variable lpsa in which the train variable has value “F”

  • How many values does lpsaTest have?
lpsaTest = lpsa.loc[train == 'F']

# # To check if it is correct: 
# len(lpsaTest) == len(data)-len(lpsaTrain)

print(f"lpsaTrain has {lpsaTest.shape[0]} values.")
lpsaTrain has 30 values.

22. Show the correlation matrix among all the variables in dataTrain

  • Hint: use the correct method in DataFrame
  • Hint: check if the values in the matrix correspond to those in Table 3.1 of the book
# Create correlation matrix
corrM = dataTrain.corr().round(decimals = 3) # As in the book, I plot values up to 3 decimals

# Display only the lower diagonal correlation matrix
lowerM = np.tril(np.ones(corrM.shape), k=-1) # Lower matrix of ones. (for k=0 I include also the main diagonal)
cond = lowerM.astype(bool) # Boolean mask: True below the main diagonal, False elsewhere
corrM = corrM.where(cond, other='') # .where() replaces values with other where the condition is False.

corrM

|         | lcavol | lweight | age | lbph | svi | lcp | gleason | pgg45 |
|---------|--------|---------|-----|------|-----|-----|---------|-------|
| lcavol  |        |         |     |      |     |     |         |       |
| lweight | 0.3    |         |     |      |     |     |         |       |
| age     | 0.286  | 0.317   |     |      |     |     |         |       |
| lbph    | 0.063  | 0.437   | 0.287 |    |     |     |         |       |
| svi     | 0.593  | 0.181   | 0.129 | -0.139 |  |     |         |       |
| lcp     | 0.692  | 0.157   | 0.173 | -0.089 | 0.671 |  |        |       |
| gleason | 0.426  | 0.024   | 0.366 | 0.033 | 0.307 | 0.476 |     |      |
| pgg45   | 0.483  | 0.074   | 0.276 | -0.03 | 0.481 | 0.663 | 0.757 |    |
# We can compare the above correlation matrix with the one in the book:
Image("../input/prostate-data/tab.png")

png

23. Drop the column lpsa from the `dataTrain` DataFrame and save the result in a new DataFrame called `predictorsTrain`

Warning: I cannot drop lpsa from dataTrain because I have already done it!

In fact:

  • at step 14. I dropped lpsa from data
  • at step 18. I created dataTrain from data, by selecting certain rows. So at this step dataTrain does not contain lpsa.
# predictorsTrain = dataTrain.drop(columns=['lpsa']) # not needed: lpsa was already dropped at step 14

predictorsTrain = dataTrain

24. Drop the column `lpsa` from the dataTest DataFrame and save the result in a new DataFrame called `predictorsTest`

dataTest.columns.tolist()

predictorsTest = dataTest
Warning: I cannot drop lpsa from dataTest because I have already done it!

In fact:

  • at step 14. I dropped lpsa from data
  • at step 19. I created dataTest from data, by selecting certain rows. So at this step dataTest does not contain lpsa.

25. Generate a new DataFrame called `predictorsTrain_std` and containing the standardized variables of DataFrame `predictorsTrain`

  • Hint: compute the mean of each column and save them in variable predictorsTrainMeans
  • Hint: compute the standard deviation of each column and save them in variable predictorsTrainStds
  • Hint: compute the standardization of each variable by the formula: \[\frac{predictorsTrain - predictorsTrainMeans}{predictorsTrainStds}\]
predictorsTrainMeans = predictorsTrain.mean()
predictorsTrainStds = predictorsTrain.std()
predictorsTrain_std = (predictorsTrain - predictorsTrainMeans)/predictorsTrainStds # standardized variables of predictorsTrain

predictorsTrain_std

# Standardizing makes it easier to compare scores, even if those scores were measured on different scales.
# It also makes it easier to read results from regression analysis and ensures that all variables contribute to a scale when added together.

|    | lcavol | lweight | age | lbph | svi | lcp | gleason | pgg45 |
|----|--------|---------|-----|------|-----|-----|---------|-------|
| 0  | -1.523680 | -1.797414 | -1.965590 | -0.995955 | -0.533063 | -0.836769 | -1.031712 | -0.896487 |
| 1  | -1.857204 | -0.643057 | -0.899238 | -0.995955 | -0.533063 | -0.836769 | -1.031712 | -0.896487 |
| 2  | -1.468157 | -1.961526 | 1.233468 | -0.995955 | -0.533063 | -0.836769 | 0.378996 | -0.213934 |
| 3  | -2.025981 | -0.720349 | -0.899238 | -0.995955 | -0.533063 | -0.836769 | -1.031712 | -0.896487 |
| 4  | -0.452342 | -0.406493 | -0.366061 | -0.995955 | -0.533063 | -0.836769 | -1.031712 | -0.896487 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 90 | 1.555621 | 0.998130 | 0.433703 | -0.995955 | -0.533063 | -0.836769 | -1.031712 | -0.896487 |
| 91 | 0.981346 | 0.107969 | -0.499355 | 0.872223 | 1.847952 | -0.836769 | 0.378996 | -0.384573 |
| 92 | 1.220657 | 0.525153 | 0.433703 | -0.995955 | 1.847952 | 1.096538 | 0.378996 | 1.151171 |
| 93 | 2.017972 | 0.568193 | -2.765355 | -0.995955 | 1.847952 | 1.701433 | 0.378996 | 0.468618 |
| 95 | 1.262743 | 0.310118 | 0.433703 | 1.015748 | 1.847952 | 1.265298 | 0.378996 | 1.833724 |

67 rows × 8 columns

26. Show the histogram of each variables of predictorsTrain_std in a single figure

  • Use argument figsize to enlarge the figure if needed
  • Hint: which kind of difference can you see in the histograms?
print("Now all the variables are centered at 0 and they variance equal to 1. So we can compare them in a better way.")
fig = plt.figure()

predictorsTrain_std.hist(grid=True, figsize=(20,8), layout = (2,4))

plt.suptitle("Standardized data", fontsize=25)
fig.tight_layout()

plt.show()
Now all the variables are centered at 0 and they variance equal to 1. So we can compare them in a better way.



<Figure size 432x288 with 0 Axes>

png

Linear Regression #

27. Generate a linear regression model using `predictorsTrain_std` as independent variables (predictors) and `lpsaTrain` as dependent variable (response)

  • Hint: find a function for linear regression model learning in sklearn (fit)

  • How do you set parameter fit_intercept? Why?

    The parameter fit_intercept specifies whether to calculate the intercept for this model:

    • If False, the y-intercept is set to 0 (the line is forced through the origin);
    • if True, the y-intercept is determined by the fit (the line is allowed to cross the y-axis where it best fits the data).

    By default fit_intercept is True, which is what we want here.

  • How do you set parameter normalize? Why? Can this parameter be used to simplify the generation of the predictor matrix?

    If fit_intercept = False, then the parameter normalize is ignored. If normalize = True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm.

    By default normalize = False.

    We have already standardized our variables, so we set it as False.
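Note: recent scikit-learn releases have removed the normalize argument from LinearRegression. The preprocessing we did by hand can instead be expressed as a pipeline; this is only a minimal sketch (it assumes sklearn.preprocessing.StandardScaler and sklearn.pipeline.make_pipeline, and reuses the raw predictorsTrain and lpsaTrain defined above), not what the rest of the notebook uses:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Standardize the raw (non-standardized) training predictors, then fit the linear model.
# Predictions match those of LinearRegression fitted on the manually standardized data;
# coefficients may differ by a tiny constant factor because StandardScaler divides by the
# population std (ddof=0) while pandas .std() uses ddof=1.
pipe = make_pipeline(StandardScaler(), LinearRegression())
pipe.fit(predictorsTrain, lpsaTrain)
print(pipe.named_steps['linearregression'].coef_)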

# Create X_test: standardize the test predictors
# predictorsTestMeans = predictorsTest.mean()
# predictorsTestStds = predictorsTest.std()
# predictorsTest_std = (predictorsTest - predictorsTestMeans)/predictorsTestStds # standardizing with test-set statistics (not recommended)
predictorsTest_std = (predictorsTest - predictorsTrainMeans)/predictorsTrainStds # standardized variables of predictorsTest, reusing the TRAINING means and stds (the better way)
# Prepare the independent and dependent variables for the model

# Independent variables
X_train = predictorsTrain_std
X_test = predictorsTest_std

# Dependent variable
y_train = lpsaTrain
y_test = lpsaTest
linreg = LinearRegression() # we don't need to specify args, because the default ones are already good for us
linreg.fit(X_train,y_train)

# by default: fit_intercept = True and normalize = False
# This setting is good because we want to compute the intercept and we don't need to normalize X because we have already done it

LinearRegression()

Difference in setting up fit_intercept in LinearRegression() #

# Difference in setting up fit_intercept

lr_fi_true = LinearRegression(fit_intercept=True)
lr_fi_false = LinearRegression(fit_intercept=False)

lr_fi_true.fit(X_train, y_train)
lr_fi_false.fit(X_train, y_train)

print('Intercept when fit_intercept=True : {:.5f}'.format(lr_fi_true.intercept_))
print('Intercept when fit_intercept=False : {:.5f}'.format(lr_fi_false.intercept_))

# FIGURE
# SOURCE: https://stackoverflow.com/questions/46779605/in-the-linearregression-method-in-sklearn-what-exactly-is-the-fit-intercept-par

# fig properties
row = 2
col = 4
width = 20
height = 8

# initialize the figure
fig, axes = plt.subplots(row, col,figsize=(width,height))

for ax,variable in zip(axes.flatten(),X_train.columns.tolist()):
    ax.scatter(X_train[variable],y_train, label='Actual points')
    
    ax.grid(color='grey', linestyle='-', linewidth=0.5)
    
    idx = X_train.columns.get_loc(variable) # get corresponding column index to access the right coeff
    
    lr_fi_true_yhat = np.dot(X_train[variable], lr_fi_true.coef_[idx]) + lr_fi_true.intercept_
    lr_fi_false_yhat = np.dot(X_train[variable], lr_fi_false.coef_[idx]) + lr_fi_false.intercept_
    
    ax.plot(X_train[variable], lr_fi_true_yhat, 'g--', label='fit_intercept=True')
    ax.plot(X_train[variable], lr_fi_false_yhat, 'r-', label='fit_intercept=False')

fig.tight_layout()

plt.show() # force the plot to be shown after the prints
Intercept when fit_intercept=True : 2.45235
Intercept when fit_intercept=False : 0.00000

png

lr_fi_true = LinearRegression(fit_intercept=True)
lr_fi_true.fit(X_train, y_train)
print(lr_fi_true.coef_,"\n",lr_fi_true.intercept_,"\n\n")

lr_fi_false = LinearRegression(fit_intercept=False)
lr_fi_false.fit(X_train, y_train)
print(lr_fi_true.coef_,"\n",lr_fi_false.intercept_)
[ 0.71640701  0.2926424  -0.14254963  0.2120076   0.30961953 -0.28900562
 -0.02091352  0.27734595] 
 2.4523450850746262 


[ 0.71640701  0.2926424  -0.14254963  0.2120076   0.30961953 -0.28900562
 -0.02091352  0.27734595] 
 0.0

28. Show the parameters of the linear regression model computed above. Compare the parameters with those shown in Table 3.2 of the book (page 50)

col = ['Term','Coefficient'] # headers

intercept_val = np.array([linreg.intercept_]).round(2)
coeff_val = linreg.coef_.round(2)
intercept_label = np.array(['Intercept'])
coeff_label = X_train.columns.tolist()

values = np.concatenate((intercept_val, coeff_val), axis=0)
labels = np.concatenate((intercept_label, coeff_label), axis=0)

table = np.column_stack((labels, values))

print(tabulate(table, headers=col, tablefmt='fancy_grid'))
╒═══════════╤═══════════════╕
│ Term      │   Coefficient │
╞═══════════╪═══════════════╡
│ Intercept │          2.45 │
├───────────┼───────────────┤
│ lcavol    │          0.72 │
├───────────┼───────────────┤
│ lweight   │          0.29 │
├───────────┼───────────────┤
│ age       │         -0.14 │
├───────────┼───────────────┤
│ lbph      │          0.21 │
├───────────┼───────────────┤
│ svi       │          0.31 │
├───────────┼───────────────┤
│ lcp       │         -0.29 │
├───────────┼───────────────┤
│ gleason   │         -0.02 │
├───────────┼───────────────┤
│ pgg45     │          0.28 │
╘═══════════╧═══════════════╛
# We can compare the above coefficients with Table 3.2 of the book:
Image("../input/prostate-data/tab2.png")

png

29. Compute the coefficient of determination of the prediction

By coefficient of determination we mean \(R^{2}\).

y_predicted = linreg.predict(X_test)
y_predicted
array([1.96903844, 1.16995577, 1.26117929, 1.88375914, 2.54431886,
       1.93275402, 2.04233571, 1.83091625, 1.99115929, 1.32347076,
       2.93843111, 2.20314404, 2.166421  , 2.79456237, 2.67466879,
       2.18057291, 2.40211068, 3.02351576, 3.21122283, 1.38441459,
       3.41751878, 3.70741749, 2.54118337, 2.72969658, 2.64055575,
       3.48060024, 3.17136269, 3.2923494 , 3.11889686, 3.76383999])
score = r2_score(y_test,y_predicted) # goodness of fit measure for linreg
score2 = mean_squared_error(y_test,y_predicted)

print(f"The coefficient of determination (i.e. R^2) of the prediction is {round(score,3)}\n\
The mean squared error is: {round(score2,3)}.\n\
The root of the mean squared error is {round(np.sqrt(score2),3)}")
The coefficient of determination (i.e. R^2) of the prediction is 0.503
The mean squared error is: 0.521.
The root of the mean squared error is 0.722
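As a sanity check, \(R^{2}\) can also be computed directly from its definition \(R^{2} = 1 - \frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}}\). A minimal sketch, reusing the y_test, y_predicted and score variables defined above:

# Manual computation of the coefficient of determination on the test set
ss_res = np.sum((y_test - y_predicted) ** 2)   # residual sum of squares
ss_tot = np.sum((y_test - y_test.mean()) ** 2) # total sum of squares
r2_manual = 1 - ss_res / ss_tot

print(f"Manual R^2: {round(r2_manual, 3)} (should match r2_score: {round(score, 3)})")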
plt.figure(figsize=[7,5])
plt.scatter(X_test['lcavol'],y_test, marker='o', s = 50)
plt.scatter(X_test['lcavol'],y_predicted, marker='^', s = 50)
plt.title('Predicted and Real values of Y by using `lcavol`',fontsize=16)

plt.legend(labels = ['Test','Predicted'],loc = 'upper right')
plt.xlabel('lcavol',fontsize=12)
plt.ylabel('lpsa',fontsize=12)

plt.show()

png

data_all = pd.read_csv('../input/prostate-data/prostate.data',sep='\t')
data_all = data_all.drop(labels = ['Unnamed: 0'],axis=1)
featuresToPlot = data_all.columns.tolist()
dataToPlot = data_all[featuresToPlot]

pd.plotting.scatter_matrix(dataToPlot, alpha = 0.7, diagonal = 'kde', figsize=(10,10))

plt.show()

png

plt.figure(figsize=[7,5])
plt.scatter(predictorsTrain_std['lweight'],lpsaTrain, marker='o', s = 50)
plt.title('Linear relationship btw `lweight` and  `lpsa`',fontsize=16)

plt.xlabel('lweight',fontsize=12)
plt.ylabel('lpsa',fontsize=12)

plt.show()

# in this case linear regression is nice!

png

We can see from the plot that the relationship between lweight and lpsa is approximately linear.

r2test = round(linreg.score(X_test,lpsaTest),2)
r2train = round(linreg.score(X_train,lpsaTrain),2)

print(f"Coefficient of determination for Test set is {r2test}\n\
Coefficient of determination for Test set is {r2train}") # it's higher because the model is created by using train set 
Coefficient of determination for Test set is 0.5
Coefficient of determination for Test set is 0.69

30. Compute the standard errors, the Z scores (Student’s t statistics) and the related p-values

  • Hint: use library `statsmodels` instead of sklearn
  • Hint: compare the results with those in Table 3.2 of the book (page 50)
y_predicted = linreg.predict(X_test)

X_trainC = sm.add_constant(X_train) # add a constant column so that the model includes an intercept
                                    # (otherwise the intercept would be forced to 0)

model = sm.OLS(y_train,X_trainC) # define the OLS (Ordinary Least Squares) linear regression model

results = model.fit() # fit the model

results.params # Coefficients of the model
results.summary(slim=True)

# Adjusted R^2 penalizes R^2 for the number of parameters in the model
# The F-statistic tests whether the regression as a whole is significant

OLS Regression Results

| Dep. Variable: | lpsa | R-squared: | 0.694 |
|---|---|---|---|
| Model: | OLS | Adj. R-squared: | 0.652 |
| No. Observations: | 67 | F-statistic: | 16.47 |
| Covariance Type: | nonrobust | Prob (F-statistic): | 2.04e-12 |

|         | coef | std err | t | P>\|t\| | [0.025 | 0.975] |
|---------|------|---------|---|---------|--------|--------|
| const   | 2.4523 | 0.087 | 28.182 | 0.000 | 2.278 | 2.627 |
| lcavol  | 0.7164 | 0.134 | 5.366 | 0.000 | 0.449 | 0.984 |
| lweight | 0.2926 | 0.106 | 2.751 | 0.008 | 0.080 | 0.506 |
| age     | -0.1425 | 0.102 | -1.396 | 0.168 | -0.347 | 0.062 |
| lbph    | 0.2120 | 0.103 | 2.056 | 0.044 | 0.006 | 0.418 |
| svi     | 0.3096 | 0.125 | 2.469 | 0.017 | 0.059 | 0.561 |
| lcp     | -0.2890 | 0.155 | -1.867 | 0.067 | -0.599 | 0.021 |
| gleason | -0.0209 | 0.143 | -0.147 | 0.884 | -0.306 | 0.264 |
| pgg45   | 0.2773 | 0.160 | 1.738 | 0.088 | -0.042 | 0.597 |



Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
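To see where these numbers come from, the standard errors and Z scores can also be computed by hand as in Section 3.2 of the book: the estimated covariance of the coefficients is \((X^{T}X)^{-1}\hat{\sigma}^{2}\), with \(\hat{\sigma}^{2} = \frac{1}{N-p-1}\sum_{i}(y_{i}-\hat{y}_{i})^{2}\), and \(z_{j} = \hat{\beta}_{j}/se_{j}\). A minimal sketch, reusing X_trainC, y_train and results from the cell above (the values should match results.bse and results.tvalues):

# Manual standard errors and Z scores for the OLS coefficients
X_mat = X_trainC.to_numpy()                   # 67 x 9 design matrix (intercept + 8 predictors)
n, p = X_mat.shape
residuals = y_train - results.fittedvalues
sigma2_hat = np.sum(residuals ** 2) / (n - p) # unbiased estimate of the error variance
cov_beta = sigma2_hat * np.linalg.inv(X_mat.T @ X_mat)
std_errors = np.sqrt(np.diag(cov_beta))
z_scores = results.params / std_errors

print(pd.DataFrame({'std err': std_errors.round(3), 'Z score': z_scores.round(3)}, index=X_trainC.columns))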

# We can compare the above summary with Table 3.2 of the book:
Image("../input/prostate-data/tab2.png")

png