Answers to the recent question "Why don't bond makers just get loans?" seem to take it as given that lending someone money at 12% interest makes sense when there is already doubt about their ability to pay back the same loan at 5% interest.
That strikes me as counter-intuitive, so I would like to know how that works.
It starts to make more sense when you consider a pool of borrowers rather than an individual.
For example, consider a pool of 100 borrowers, each with a 1 in 100 chance of default. If they all borrow $1,000 for 1 year at 10% APR, you can expect to receive $100 interest x 99 borrowers = $9,900, minus the $1,000 lost to the defaulter, leaving you with about $8,900 in "profit". (Obviously this is greatly simplified: we're ignoring principal repayments, compounding, the chance of a borrower making a few repayments before defaulting, and so on, but hopefully you get the point.)
Now consider a scenario where the risk of default is 1 in 10. If you charged the same 10% interest to those 100 borrowers, you could expect to receive $9,000 in interest payments but lose $10,000 in defaults, so you'd want to charge every member of that higher-risk pool a higher rate in order to still turn a profit.
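The pooled arithmetic above can be sketched as a few lines of Python. This is a toy model under the same simplifying assumptions as the answer (one-year interest-only loans, defaulters pay nothing, no recoveries); the function names are my own.

```python
# Toy model of expected profit on a pool of 1-year, interest-only loans.
# Assumptions (same as the worked example): defaulters pay nothing,
# everyone else pays back principal plus interest, no partial recoveries.
def expected_profit(n_borrowers, principal, rate, p_default):
    """Expected interest income minus expected principal lost to defaults."""
    interest_income = n_borrowers * (1 - p_default) * principal * rate
    principal_lost = n_borrowers * p_default * principal
    return interest_income - principal_lost

def breakeven_rate(p_default):
    """Rate at which expected interest exactly covers expected losses."""
    return p_default / (1 - p_default)

print(expected_profit(100, 1000, 0.10, 0.01))  # low-risk pool: ~$8,900 profit
print(expected_profit(100, 1000, 0.10, 0.10))  # high-risk pool: ~-$1,000 (a loss)
print(breakeven_rate(0.10))                    # ~0.111: minimum rate just to break even
```

This makes the answer's point concrete: charging the 1-in-10 pool anything under roughly 11.1% loses money in expectation, before the lender earns a cent of profit.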
It's not that their ability to pay increases with higher interest rates, it's that the lender is compensated for taking on additional risk.
If I lend you money, and have some concern (not a lot, but some) that you won't be able to pay me back, I'm going to charge a higher interest rate, so that I get more return on my money sooner.
Note that a difference between a 5% interest rate and a 12% interest rate represents a very small difference in the probability of default. If I think there's a 50% chance that you're not going to pay me back, I'm not going to lend you money at only a 12% interest rate. It's going to be high enough that I get at least my initial money back before you go bankrupt.
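One way to see how small that difference is: under a standard risk-neutral, zero-recovery approximation (my assumption, not from the answer), the one-year default probability implied by a rate spread comes from the indifference condition (1 - p)(1 + r_risky) = 1 + r_safe.

```python
# Rough sketch: one-year default probability implied by a rate spread,
# assuming risk-neutral pricing and zero recovery on default (both are
# simplifying assumptions for illustration).
def implied_default_prob(r_safe, r_risky):
    # From (1 - p) * (1 + r_risky) = 1 + r_safe, solved for p.
    return (r_risky - r_safe) / (1 + r_risky)

print(implied_default_prob(0.05, 0.12))  # 0.0625
```

A 5% vs. 12% spread corresponds to roughly a 6% chance of default under this sketch, nowhere near the 50% case, which is exactly why a 50/50 borrower would see a far higher rate than 12%.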
... there is already doubt about their ability to pay back the same loan at a 5% interest.
Another way to think about this is, "If a lender would offer someone with a good credit rating a loan at 5%, why would that same lender offer someone with a poor credit rating the same loan at 12%?"
It's because it's not the "same loan".
The important distinction is that your credit rating determines the interest rate you will receive, whereas your income determines how much you can afford per month (or year).
For example: based on your income, suppose you can afford to pay $500/month towards a new loan, and you wish to purchase a car on a 60-month term. At 0% interest, the maximum loan you could afford is $30,000; at 5% it drops to roughly $26,500, and at 12% to roughly $22,500.
In both scenarios the person is paying $500/month. The end result is that for the person with the lower credit rating the total amount loaned is reduced, so the bank's risk is mitigated, but the person's ability to repay the loan is not affected.
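The affordability math behind this can be sketched with the present value of an annuity: the largest loan a fixed monthly payment can service at a given rate. The function name is mine; the figures match the $500/month, 60-month example.

```python
# Largest loan a fixed monthly payment can service over a given term,
# via the present value of an annuity (level payments, monthly compounding).
def max_loan(payment, annual_rate, months):
    if annual_rate == 0:
        return payment * months
    r = annual_rate / 12  # monthly rate
    return payment * (1 - (1 + r) ** -months) / r

print(round(max_loan(500, 0.00, 60)))  # 30000 at 0% interest
print(round(max_loan(500, 0.05, 60)))  # about $26,500 at the good-credit rate
print(round(max_loan(500, 0.12, 60)))  # about $22,500 at the poor-credit rate
```

Same $500/month in every case; only the amount the bank will hand over changes.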
The difference in this case is that you, the bond buyer, are now the lender, and you must assess whether lending money at 12% is worth the additional risk given the borrower's credit rating. As long as you feel the borrower is not spreading themselves too thin relative to their income, it might be an investment worth considering.
Stop thinking like a consumer and start thinking like a lender. You would lend less money to a lower quality borrower at a higher rate than you would a higher quality borrower for a variety of reasons.
Interest is calculated by applying the rate to the principal outstanding at some periodic interval, usually monthly. For back-of-the-napkin calculations you take the annual percentage rate divided by 12, so 0.05 divided by 12 is 0.00416667. If I borrow $10,000, my first payment will include ($10,000 * 0.00416667) $41.67 of interest. Many loans are amortized over a period of time to make the payment level and predictable for the borrower. If the loan term is 3 years, the payment will be $299.71, but $41.67 of the first payment is still interest; the difference in payment only changes the amount of principal that's paid.
At 12% the first payment has (10,000 * 0.01) $100 of interest in it. If this is amortized over 3 years the payment is $332.14. For more than double the interest the three year amortization table costs the borrower about 11% more, $332.14 versus $299.71.
Without getting too technical, for the lender this rate change materially changes their income. Again considering the three-year repayment terms: when you make your first payment at 5%, the lender receives $41.67 of income and $258.04 of returned capital. At 12%, the lender receives $100 of income and $232.14 of returned capital. The lender's rate of income ramps up significantly, and if you default, the principal outstanding at the time of default is a tax-deductible loss. If you default 10 payments in on the 3-year repayment plans, at 5% the lender has earned $367.74 in interest and will write off $7,370 as a loss; at 12% the lender has earned $892.70 in interest and will write off $7,571. The loss risk to the lender has not changed much, but the income is dramatically different.
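These figures can be reproduced with a short amortization sketch: compute the level payment, then walk the schedule splitting each payment into interest and principal. The helper names are mine.

```python
# Level payment on an amortized loan, and the interest/balance position
# after k payments, by walking the amortization schedule.
def level_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def after_k_payments(principal, annual_rate, months, k):
    """Return (total interest collected, balance still outstanding) after k payments."""
    r = annual_rate / 12
    pmt = level_payment(principal, annual_rate, months)
    balance, interest = principal, 0.0
    for _ in range(k):
        i = balance * r          # this month's interest on the remaining balance
        interest += i
        balance -= pmt - i       # the rest of the payment reduces principal
    return interest, balance

for rate in (0.05, 0.12):
    pmt = level_payment(10_000, rate, 36)
    earned, owed = after_k_payments(10_000, rate, 36, 10)
    # 5%: payment ~$299.71, interest ~$367.74, write-off ~$7,370
    # 12%: payment ~$332.14, interest ~$892.70, write-off ~$7,571
    print(rate, round(pmt, 2), round(earned, 2), round(owed, 2))
```

Running this reproduces the answer's numbers to within rounding.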
Interest rates are a function of your perceived risk in the lending market. If you look riskier, you pay more in interest and can't borrow as much money. It should come as no surprise that to be successful in lending you must have a large pool of borrowers. If you only had $10,000 to lend, you'd be wise to lend $100 each to 90 high-quality borrowers and perhaps split the remaining $1,000 between 20 lower-quality borrowers.
Additionally, your risk as a lender changes depending on the type of loan as well as who the borrower is. A secured car loan is lower risk than a credit card because the lender can come get the car. A personal credit card may be lower risk than a corporate credit card because a fictitious entity can just disappear, but Apple corporate cards probably have more favorable terms than a college student with no credit history could get.
Excellent answers above (by CactusCake and TTT, respectively), which explain two key points:
1) The interest rate for a group is based on probability of default. It's a group characteristic, calculated based on historical data.
2) The interest rate doesn't increase the probability of default, because if they give you a higher interest rate they also approve a lower loan amount. Your burden is the same whichever interest rate you get and doesn't affect your ability to repay.
I would just like to tie them in.
The key thing to understand is the difference between the individual and the group. It's a bit as if the single customer is invisible to the bank. This is counterintuitive, because he/she provides so much data upon application. Yet despite all that, the bank cannot figure out whether he's going to default or not. The only thing the bank can do is try to match the individual to a segment of existing customers with known historical default rates. So the bank will say (simplified): "OK, this guy is 25 years old, so he's in our 20-30 year-olds cohort, which has a default rate of 10%. For that group to be profitable to us, we have to charge them 8% each." (See CactusCake's reply for a simple calculation.)
People think, mistakenly, that when a bank examines their application it is focusing on them and trying to figure out whether they will default. It's not really true. Imagine a bank with zero customers. No matter how many data points about a person it had, and how hard it looked, it simply could not arrive at something like "this guy is 35% likely to default". Only after having historical data can a bank try to match the new customer to an existing group and derive the probability of default from that.
So the sequence is:
Look at customer --> figure out which group he belongs to --> check historical probability of default of that group --> set the interest rate --> adjust and approve the loan amount
It is not:
Look at customer --> reject or approve the loan amount --> set the interest rate (and thus increase/decrease probability of default)
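The first sequence can be sketched as a toy pricing function. Everything here is invented for illustration (the cohorts, their default rates, and the pricing rule); the point is only the order of the steps: cohort first, rate second, approved amount last.

```python
# Hypothetical sketch of: customer -> cohort -> historical default rate
# -> interest rate -> approved amount. All cohorts, rates, and the
# pricing rule are made up for illustration.
COHORT_DEFAULT_RATES = {"20-30": 0.10, "30-50": 0.04, "50+": 0.02}

def price_loan(age, monthly_budget, months=60, margin=0.03):
    # Step 1-2: match the customer to a cohort with known historical data.
    cohort = "20-30" if age < 30 else "30-50" if age < 50 else "50+"
    p = COHORT_DEFAULT_RATES[cohort]
    # Step 3-4: set the rate from the cohort's default rate (break-even
    # for the pool, plus an assumed profit margin).
    annual_rate = p / (1 - p) + margin
    # Step 5: the approved amount is whatever the budget can service at
    # that rate (present value of an annuity).
    r = annual_rate / 12
    approved = monthly_budget * (1 - (1 + r) ** -months) / r
    return cohort, round(annual_rate, 4), round(approved)

print(price_loan(25, 500))  # riskier cohort: higher rate, smaller approval
print(price_loan(55, 500))  # safer cohort: lower rate, larger approval
```

Note that the customer's $500/month budget never changes; the rate and the approved amount adjust around it, which is why the higher rate doesn't itself make default more likely.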
As other answers point out, one of the main reasons for increasing rates to "unsafe" borrowers is compensation for the extra risk (and where the increased chance of default due to the higher rate is accommodated by pooling loans across a number of borrowers).
Another factor is that a higher interest rate raises the "cost of money" and may make a company decide that its plans are ill-advised or not worth the risk.
If the cost of borrowing is "too cheap" a company that is already "at risk" may nevertheless go ahead with what might be a risky, over-ambitious plan to try and turn its fortunes around. If the cost of borrowing is higher (as it will be for "riskier" companies), then this increased cost may make them pause and consider whether there are better (and cheaper) ways of solving their problems.
In some cases, there may be a less risky, probably slower, course they can take to reduce what makes them "risky" in the first place. If successful, they might then be able to adopt the original plan, but should now be able to do so at lower interest rates.
In some cases, they may decide that the risk of failure is low enough (and the potential reward high enough) that they are prepared to pay the higher price, and will proceed with the plan. (Or, it may be that they are in a "do or die" situation, and that not proceeding will almost guarantee failure, so they will take the chance anyway.)
Of course, some companies would go ahead with their original plans anyway (at the higher rate of interest) and ignore the possibility of a better option.
From the lender's point of view, the higher rate will weed out some of the riskiest borrowers (they'll do something else). Of those that still proceed, some will succeed (and pay back the loan/bond), compensating the lender for the (hopefully small) number that take the loan and then go on to default.
If there is doubt whether the borrower can pay, it’s a riskier investment than if the borrower has a good credit score, or better history of repayment. Because of this increased risk, the lender can charge a higher interest rate to help cover their risk and potential losses.