RATS 11.1

PLSGRID Procedure

@PLSGrid generates a log-linear grid of trial values for the tuning parameter in penalized least squares, based upon information about the scales of the dependent and explanatory variables.

 

@PLSGrid( options ) gridVECTOR

 

Based upon the information supplied, this fills gridVECTOR with a log-linear grid that one hopes will cover the optimal value of the tuning parameter for a particular application of penalized least squares. It does this by using the information in the options to compute the tuning value at which the sum of squared residuals and the penalty are roughly equal, then builds a grid that includes both higher and lower values. For instance, if that computed tuning value is 100, then by default the grid will run from 1 to 10000 with 8 grid points per multiple of 10, that is, 1.00, 1.33, 1.78, 2.37, ....
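
The grid arithmetic itself is simple enough to sketch. The following Python snippet is only an illustration of the description above, not the procedure's actual code: it assumes the balancing value of the tuning parameter (here center) has already been computed, and it also assumes the grid includes both endpoints.

import numpy as np

def pls_grid_sketch(center, orders=2, perorder=8):
    """Illustrative log-linear grid around a computed center value.

    'orders' powers of 10 on each side of 'center', with 'perorder'
    logarithmically spaced points per power of 10. Endpoint handling
    is an assumption; this is a sketch, not the @PLSGrid source."""
    steps = np.arange(-orders * perorder, orders * perorder + 1)
    return center * 10.0 ** (steps / perorder)

# With center=100 and the defaults, the grid runs from 1 to 10000 and
# neighboring values differ by a factor of 10**(1/8), about 1.33.
grid = pls_grid_sketch(100.0)
print(grid[:4])   # roughly 1.00, 1.33, 1.78, 2.37
print(len(grid))  # 33 values: 4 decades times 8 per decade, plus 1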

Parameters

gridVECTOR

(output, required) VECTOR of computed grid values

Options

PENALIZE=[L1]/L2

This chooses which form of penalty you will be using. Note that the set of reasonable tuning values can be quite different between the two; the back-of-the-envelope sketch at the end of this section gives one way to see why.

 

R2GUESS=guess for value of \(R^2\) for regression [.7]

 

ORDERSOFMAGNITUDE=number of powers of 10 in each direction in the grid [2]

PERORDER=number of grid points per power of 10 [8]

These combine to determine how broad the coverage of the grid is, and how coarse or fine it is. The defaults create a grid that runs from .01 times the computed guess value to 100 times it (2 orders of magnitude in each direction), with 8 logarithmically spaced values within each multiple of 10, as in the formula below.
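
In formula terms, if \(c\) is the computed balance value, \(M\) is ORDERSOFMAGNITUDE and \(P\) is PERORDER, the grid described above consists of the values (the endpoint handling here is an assumption, consistent with the numbers quoted earlier):

\[ \lambda_k = c \times 10^{\,k/P - M}, \qquad k = 0, 1, \ldots, 2MP \]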

 

YY=(approximate) value of the sum of squares of the (de-meaned and scaled) dependent variable [required]

XX=Typical value of the sum of squares of a (de-meaned and scaled) explanatory variable [1]
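
To give some intuition for how the balance value depends on these inputs, and why the reasonable ranges differ so much between the L1 and L2 penalties, here is one back-of-the-envelope version of the "sum of squared residuals roughly equal to the penalty" calculation. It treats the fit as if a single typical coefficient \(\beta\) produces the guessed \(R^2\), so \(\beta^2 \cdot XX \approx R^2 \cdot YY\), while the residual sum of squares is roughly \((1-R^2)\,YY\). This is an illustrative assumption, not necessarily the exact formula the procedure uses:

\[ \lambda_{L2} \approx \frac{(1-R^2)\,YY}{\beta^2} = \frac{1-R^2}{R^2}\,XX, \qquad \lambda_{L1} \approx \frac{(1-R^2)\,YY}{|\beta|} = (1-R^2)\sqrt{\frac{YY \cdot XX}{R^2}} \]

Under these assumptions the two balance values scale very differently with YY and XX, which is one way to see why the reasonable tuning values for the two penalties can differ so much.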

 

Examples

The example LASSO.RPF does two searches, one for an L1-penalized (LASSO) estimate and one for an L2-penalized (ridge regression) estimate. The first uses a cross-product matrix computed from data which are de-meaned only. This uses a guess for \(R^2\) of .7 with a somewhat finer grid (16 points per multiple of 10 rather than 8). The YY and XX values are pulled directly from the cross-product matrix.

 

cmom(center) * EndTraining
# shortrate{0 to 12} longrate
*
* Compute a grid with 16 grid points per order of magnitude
*
@PLSGrid(penalty=l1,R2Guess=.7,PerOrder=16,$
   yy=%cmom(%ncmom,%ncmom),xx=%cmom(1,1)) testlambdas

 

 

The second one uses a full correlation matrix, so all variables are scaled to produce 1's on the diagonal, and that is what is input for YY and XX.

 

cmom(corr) * EndTraining
# shortrate{0 to 12} longrate
@PLSGrid(penalty=l2,R2Guess=.7,PerOrder=16,yy=1.0,xx=1.0) testlambdas

 

 


Copyright © 2026 Thomas A. Doan