Continuing on with our rate series, let’s explore the relationship between risk and rates.
Risk is one of the largest factors affecting the rates you pay. In fact, nearly all lending rates start from the Federal Funds Rate (or a benchmark tied to it) and are then adjusted upward for risk and lender overhead.
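To make that concrete, here's a minimal sketch of the decomposition; the base rate, risk premium, and overhead figures are hypothetical, not actual pricing inputs:

```python
# A minimal sketch of the rate decomposition (all numbers are
# invented for illustration, not actual pricing inputs).

fed_funds_rate = 0.0450   # hypothetical base rate (4.50%)
risk_premium   = 0.0300   # lender's adjustment for borrower risk
overhead       = 0.0150   # lender's operating costs, spread per loan

quoted_rate = fed_funds_rate + risk_premium + overhead
print(f"Quoted rate: {quoted_rate:.2%}")  # Quoted rate: 9.00%
```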
But risk is the big one, and it's also highly variable and multifaceted; almost every borrower's risk profile is unique. On the surface, you might think the borrower's credit score essentially constitutes the lender's risk. But it goes a lot further than that.
Indeed, I’m looking at my list of topics for this series:
1. Common misconceptions about rates. (already posted)
2. The relationship between risk and the rate. (we are here)
3. How loan type and intended use affect rates.
4. Collateral and rates.
5. Internal cost of funds and rates (aka, “how much does money cost?”).
6. Defaults and rates.
7. Credit scores and other risk profiles affecting rates.
8. Benchmark rates and market conditions.
And I can see that numbers 3, 4, 6, and 7 are all directly related to risk. #8 also has a little risk in there. #5 doesn't (and I'll likely be including lender overhead in that one too).
So yes, risk is a big factor in rates. The more risk (taking into account all of these factors), the higher the rate. That’s the quick answer.
Now someone may ask “well, why is that? Why does the rate go up with more risk? Why does more risk to the lender cost borrowers more?”
Well, riskier loans have a higher chance of defaulting, right? And in the aggregate, that has to be priced in, because a lender cannot be profitable otherwise.
Without getting too deep into the math, lenders use an actuarial approach similar to insurance companies. Just as insurers predict fire claims within certain demographics to set premiums, lenders analyze historical data to predict loan defaults based on risk profiles. The catch is that the models cannot predict individual outcomes, so borrowers are categorized by credit score, collateral, loan type, and so on.
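To make the pooling idea concrete, here's a toy sketch; the tier names, default probabilities, and recovery assumptions are all invented for illustration, not real underwriting numbers:

```python
# A toy illustration of the actuarial idea: price each risk tier so
# that, in aggregate, expected defaults are covered. All figures are
# invented assumptions for the example.

risk_tiers = {
    # tier: (probability of default, loss given default as a fraction
    #        of the loan balance, i.e., the part that isn't recovered)
    "prime":      (0.01, 0.50),
    "near_prime": (0.04, 0.60),
    "subprime":   (0.12, 0.70),
}

base_rate = 0.0450  # benchmark / cost-of-funds starting point

for tier, (pd, lgd) in risk_tiers.items():
    expected_loss = pd * lgd      # expected loss per dollar lent
    tier_rate = base_rate + expected_loss
    print(f"{tier:>10}: expected loss {expected_loss:.2%} -> rate {tier_rate:.2%}")
```

Run it and the spread falls straight out of the arithmetic: the subprime tier's expected loss (8.40% per dollar lent) pushes its rate far above the prime tier's (0.50%), even before overhead enters the picture.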
Now I must be emphatic: loans going bad is a serious topic, because the consequences are devastatingly costly to a lender. Not only does the lender not get back the money it lent, but recouping any funds is a difficult, drawn-out process.
For example, first think about the lender repossessing the item and selling it. All well and good, but this costs money: someone must physically retrieve the item (not always a pleasant transaction), inspect it, repair it, clean it, list it, and sell it. Nobody does these things for free. Plus, the item will almost certainly have depreciated significantly. So repossession falls into the “better than nothing but certainly not optimal” bucket.
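Some rough, invented numbers make the point; none of these figures come from real repossession data:

```python
# Rough arithmetic on why repossession is "better than nothing":
# every figure below is a made-up illustration.

outstanding_balance = 20_000   # what the borrower still owes
resale_price        = 12_000   # depreciated value at resale
recovery_costs      =  2_500   # retrieval, inspection, repair, listing fees

net_recovery = resale_price - recovery_costs
loss = outstanding_balance - net_recovery
print(f"Net recovery: ${net_recovery:,}  Loss to lender: ${loss:,}")
# Net recovery: $9,500  Loss to lender: $10,500
```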
In addition, defaults do not just disappear from a lender's books; the money owed must be made back from somewhere. Can a lender “write off” some losses? Yes, sort of. But it's a) complicated and time-consuming, and b) only recovers a fraction of what was truly lost.
Listen, defaults are always deeply unprofitable. And the shortfall has to be made up from other borrowers in the form of higher rates, which is exactly what we are talking about here. When I talk about risk, I am really talking about expected defaults, and expected defaults push rates up.
In the end, the rate on any given loan needs to be profitable enough to offset the expected loss for whatever the total risk profile is. Plus, lenders must also account for the fact that riskier loans carry heavier regulatory requirements (e.g., capital held in reserve, which we'll discuss in internal costs). And that costs money as well.
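Pulling those pieces together, here's a minimal sketch of the break-even logic under the simplifying assumption that the rate is just a sum of those components; every parameter value is hypothetical:

```python
# A simplified sketch of break-even pricing: the quoted rate must
# cover the cost of funds, expected losses, overhead, and a capital
# charge on riskier loans. All inputs are illustrative assumptions.

def break_even_rate(cost_of_funds, prob_default, loss_given_default,
                    overhead, capital_charge, target_margin=0.01):
    expected_loss = prob_default * loss_given_default
    return (cost_of_funds + expected_loss + overhead
            + capital_charge + target_margin)

# The same hypothetical loan at two risk levels: the riskier one pays
# more via both expected loss and the heavier capital charge.
low_risk  = break_even_rate(0.045, 0.01, 0.50, 0.015, 0.002)
high_risk = break_even_rate(0.045, 0.12, 0.70, 0.015, 0.008)
print(f"Low risk: {low_risk:.2%}   High risk: {high_risk:.2%}")
# Low risk: 7.70%   High risk: 16.20%
```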
Like I said, the actual nuts and bolts of the “why” boil down to a complex set of math formulas that take all kinds of factors into account. But for purposes of discussion in this series, just understand that increased risk = increased rate.