Credit Decision Engine - The World of Caching

May 4, 2024

Caching data can save banks and lead aggregators significant time and money. Beyond reducing operating cost and compliance risk, the caching layer of your decision engine will greatly speed up your decision making process and deflect fraud attempts.

What is caching?

Caching is the ability to store frequently used information in memory for fast retrieval. We encounter this technique in our computer operating systems. When you close an application and relaunch it, the speed at which the application reopens is measurably faster than on a cold start.

This is because the operating system keeps the recently opened application in memory and can relaunch it without loading it from a slower part of the computer's storage system.

Some operating systems keep the most frequently used applications in memory for faster retrieval, speeding up the overall computing experience. We leverage the same concept in a critical component of the credit decision engine.

In this article, we will describe a few different ways a decision engine can cache first-party and third-party data to speed up the decision making process and reduce operating expenses for banks and financial services companies.

Why caching?

Why do we cache data in the context of operating a credit decision engine? There are three major reasons. 

One, fraud detection. Two, cost savings. Three, speed of execution. We will go into each of these reasons separately and provide use cases from the bank's perspective showing how these caching activities reduce operating expense and credit loss.

Caching in fraud detection

When banks and lead aggregators acquire new leads or customers, the first line of defense against potentially fraudulent applicants is your decision engine. After the applicant completes the first few pages, it's time to run identity verification. But before going deeper into the underwriting process, the decision engine should look for something else in its cached data.

One of the frequent use cases is to define a look-back period in the decision engine's caching algorithm. The look-back period could be defined in seconds, minutes, hours, or days. The intent is to look back through the cached data to see whether this applicant has appeared to the bank before.

If the bank has seen the applicant within the last few seconds, minutes, or hours, the applicant's information is likely being shopped by multiple lead providers in a short period of time, most of the time without the applicant's knowledge.
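The look-back check can be sketched as a small function. This is a minimal illustration: the applicant identifier, the in-memory dict, and the window sizes are all assumptions, and a production engine would use a shared store such as Redis rather than a process-local dict.

```python
from datetime import datetime, timedelta

# Hypothetical in-memory cache keyed by a stable applicant identifier
# (e.g. a hashed SSN); illustrative only.
_seen: dict[str, datetime] = {}

def seen_within_lookback(applicant_id: str, now: datetime,
                         lookback: timedelta) -> bool:
    """Return True if this applicant already appeared within the
    look-back window, and record the current sighting either way."""
    last = _seen.get(applicant_id)
    _seen[applicant_id] = now
    return last is not None and (now - last) <= lookback
```

A repeat appearance inside the window (say, the same applicant resurfacing thirty minutes later with a one-hour look-back) returns True and can route the application to a review or decline path before any paid data is pulled.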

The bank should recognize this event using cached data and prevent multiple credit reports from being pulled on the applicant in a short amount of time. Repeated pulls can become cost prohibitive and could be a serious compliance issue.

In other more insidious situations, parts of the applicant's information reappear within a short amount of time. For example, the bank might see the same SSN (Social Security number) appear multiple times in the past few hours under different name and address combinations. To detect these fraudulent attempts, the decision engine needs to go the extra mile and scan through this look-back period in milliseconds to confirm that these applications are suspiciously similar, so that a swift decline action can be taken.

In some instances, the bank account number supplied by multiple applicants is the same while all other PII (personally identifiable information) differs. Even if each applicant's information is accurate and passes the credit checks, a shared bank routing and account number is a telltale sign of a massive fraud ring. Only a speedy caching algorithm will catch it and prevent additional harm to the bank.
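Both patterns above reduce to one scan over the cached look-back window: group applications by SSN and by bank account number, and flag any identifier that appears under more than one name. A sketch, with illustrative field names ("ssn", "name", "account_number", "app_id") rather than a real schema:

```python
from collections import defaultdict

def flag_fraud_signals(applications: list[dict]) -> set[str]:
    """Flag application IDs where the same SSN or the same bank account
    number reappears under different names in the look-back window."""
    names_by_ssn = defaultdict(set)
    names_by_account = defaultdict(set)
    for app in applications:
        names_by_ssn[app["ssn"]].add(app["name"])
        names_by_account[app["account_number"]].add(app["name"])
    # An identifier shared across different names is the fraud-ring signal.
    return {
        app["app_id"]
        for app in applications
        if len(names_by_ssn[app["ssn"]]) > 1
        or len(names_by_account[app["account_number"]]) > 1
    }
```

Because the grouping is a single pass over cached data already in memory, the scan completes in milliseconds even for a large window.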

Caching to reduce operating cost

Legitimate credit seekers often shop for credit within a short window. They may apply at the same institution or at a lead aggregator such as LendingTree. A particular bank might see the same applicant coming through its application portal a few times during a month. A lead aggregator might see the applicant shopping within days across multiple affiliates on its platform.

To reduce operating cost, the bank can use the cached credit report from, say, last week to rerun or reproduce the underwriting decision made a few days ago. This spares the bank from pulling the applicant's credit report again. Repeated pulls get expensive when hundreds of thousands of "duplicate" applications come through the applicant portal on a daily or weekly basis.
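The reuse rule is a time-to-live (TTL) cache in front of the bureau call. A minimal sketch, assuming a seven-day freshness window and a `pull_report` callable standing in for a real bureau integration:

```python
from datetime import datetime, timedelta

class CreditReportCache:
    """Reuse a cached bureau pull while it is still fresh; only pull
    again (and pay the bureau fee) once the report goes stale."""

    def __init__(self, ttl: timedelta, pull_report):
        self._ttl = ttl
        self._pull = pull_report  # stand-in for the real bureau call
        self._store: dict[str, tuple[datetime, dict]] = {}

    def get(self, applicant_id: str, now: datetime) -> dict:
        cached = self._store.get(applicant_id)
        if cached and now - cached[0] <= self._ttl:
            return cached[1]  # fresh: no new bureau fee incurred
        report = self._pull(applicant_id)
        self._store[applicant_id] = (now, report)
        return report
```

A duplicate application three days after the first reuses the stored report; one ten days later triggers a fresh pull.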

A lead aggregator could use the same cached credit report to run multiple decisions for a group of affiliated partners. For example, the aggregator could pull credit once and run multiple sets of underwriting criteria based on each partner's algorithm to maximize match rates and increase conversion rates.

Horizontal caching to reduce operating cost

On lead aggregation platforms, a few thousand affiliates might offer a variety of products to an applicant. A lead aggregator could cache each applicant and define a look-back period so the same applicant is not sent to the same affiliate within a given window, reducing redundant submissions. The applicant's financial situation has likely not changed and the affiliate's underwriting strategy remains the same, so sending another request for a product would waste time and resources.

Horizontal caching lets aggregation platforms reduce how often the same applicant is sent to the same affiliate only to yield the same result within a given time period.

The lead aggregator could wait 30 days before the same applicant's credit report is refreshed and re-analyzed, maximizing the chances of a match or an approval from the same affiliate.

This type of caching ultimately saves operating cost for both the lead aggregator and its affiliates.

Vertical caching to maximize profit

From a bank’s perspective, reusing the same credit report to run through multiple strategies will maximize the bank's profit by matching the applicant to all of the bank's offerings.

For example, the bank might offer checking accounts, credit cards, auto loans, and mortgages. Vertical caching allows the bank to run multiple strategies and product offerings across all of those verticals at the same time, maximizing its cross-sell ability.

Vertical caching comes into play in a major way for lead aggregation platforms. Lead aggregators can pull credit once for an applicant and run through all of their affiliates and each affiliate's product offerings in one shot.

Multiple affiliates might return qualified signals back to the lead aggregator to present to the applicant. The lead aggregator in this case avoided the need to pull credit multiple times across multiple affiliates. The savings are enormous over time.
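Mechanically, vertical caching is one report fanned out across many decision rules. A sketch, where the per-product rules and score cutoffs are illustrative, not real underwriting criteria:

```python
def decisions_for_all_products(report: dict, strategies: dict) -> dict:
    """Run every product vertical's rule against one cached credit
    report -- one bureau pull, many decisions."""
    return {product: rule(report) for product, rule in strategies.items()}

# Illustrative per-vertical rules keyed on a bureau score.
strategies = {
    "credit_card": lambda r: "approve" if r["score"] >= 680 else "decline",
    "auto_loan": lambda r: "approve" if r["score"] >= 620 else "decline",
}
```

A single report with a 650 score would yield a decline for the card and an approval for the auto loan, so the applicant still receives an offer without a second pull.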

Speed of caching

Ultimately, companies adopt caching strategies for speed of execution. As in the previous examples, banks and lead aggregators can simply retrieve the last decision made and return the same credit decision rendered last time, within a given time period.

For instance, if the same applicant returns within a very short amount of time and the previous decision was an approval, the company can simply return the same decision for the sake of speed. More often the previous decision was a decline, and that decision will stand, especially over a short time period.

This way, the caching layer can return a decision instead of sending the application into the core of the credit underwriting path.
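This short-circuit can be sketched as a decision cache in front of the underwriting path. The 24-hour window and the `underwrite` callable are assumptions for illustration:

```python
from datetime import datetime, timedelta

def cached_or_underwrite(applicant_id: str, decisions: dict,
                         underwrite, now: datetime,
                         window: timedelta = timedelta(hours=24)) -> str:
    """Return the previously rendered decision if it is recent enough;
    otherwise run the full underwriting path and cache the result."""
    prev = decisions.get(applicant_id)
    if prev and now - prev[0] <= window:
        return prev[1]  # short-circuit: skip the core underwriting path
    decision = underwrite(applicant_id)  # stand-in for the full path
    decisions[applicant_id] = (now, decision)
    return decision
```

A repeat application an hour after a decline gets the same answer back in the time of a cache lookup, with no second trip through the engine.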

To summarize, we discussed the three most popular reasons why your decision engine should have a caching layer and the benefits of caching your first-party and third-party data. Next time, we will discuss the benefits of a waterfall strategy in your decision engine design.

About LendAPI

LendAPI is a DIY digital onboarding platform with a fully customizable product builder and an integrated graphical Decision Engine. LendAPI's application workflow, sub-tenant management, and third-party integrations, plus customer portal and communication methods, form a complete end-to-end solution. It's free to sign up.
