An increasing number of organizations are making data and analytics an integral part of their business operations. In their book Competing on Analytics: The New Science of Winning, Thomas Davenport and Jeanne Harris define the most analytically advanced businesses as “Analytical Competitors”. These organizations embed data and analytics into their operations, making them a fundamental aspect of their business model. Data and analytics, then, is promoted not as a technology initiative but as a business initiative. This is a paradigm shift for most organizations with less advanced analytic capabilities. However, data and analytics projects must focus on the business first to be implemented and adopted successfully.

What Happens if Data and Analytics Projects Focus on Technology First?

A common mistake for organizations just starting to build out a data and analytics program is to focus on the technology first. These organizations tend to first ask questions like:

  • What data do we have available in our database?
  • What data visualization tool should we buy?
  • How can we get as much data as possible into our data warehouse/repository?

None of these questions offers any insight into how business users will actually leverage data and analytics.

Focusing on what data is currently available potentially ignores a wealth of information not yet accessible to your business. External data, like US Census data or social media information, is ignored from the outset with this mindset. Community banks and credit unions, especially, tend to rely on a large number of third-party applications. Many of these applications may not be hosted on-site, so access to the data stored in these systems may currently be limited. By focusing only on what data is currently accessible, potentially valuable data could be ignored.

Selecting a data visualization tool or data warehouse product should not be one of the first tasks your data and analytics project team takes on. Last year, we posted an article titled “Having Business Intelligence Software is Not Equal to Having Business Intelligence”, in which we discussed why the tools alone are not enough. A business intelligence strategy needs to address how those tools will be leveraged. Training and development, cultural changes, and proper analysis of the business processes and needs are required for any actionable insights to be gained.

A data warehouse, which, for this post, we assume will be the foundation of your data and analytics program, is not meant to be a repository of ALL data. It is designed to integrate data from various applications for the purposes of reporting and analysis. Simply copying operational databases with all available data doesn’t create a simpler, analysis-driven design. Organizations that don’t take an iterative approach to building their data warehouse often find themselves buried in an overly complex project six months down the road.

So, Why Focus on the Business First?

Focusing on the business first creates some significant advantages:

  • A clear understanding of the data needs of business users
  • A better understanding of the processes that drive the business
  • A complete picture of the application environment, with data integration priorities applied to each source

Since data and analytics projects are designed to provide data-driven insights, the needs of the business are the best starting point for these projects. With a better understanding of what your business needs, a technological solution can be developed. Without a clear picture of the data needs and goals of the business, there is no guarantee the solution will meet them. Understanding the business and, specifically, its KPI and metric requirements should be a foundational part of the data and analytics project. Project teams that are mainly supported and staffed by IT often lack the organizational clarity and understanding needed to define high-quality requirements for the data and analytics project.

An understanding of critical business processes is essential to successful data and analytics projects. Processes drive how customers interact with the business and how staff interact with the applications. Each process creates data and each data point gives us a potential opportunity to create insights into operations. By understanding these processes, “quick wins” can often be uncovered to simplify processes or to reduce the amount of manual effort and time involved with them. These quick wins help justify a BI investment and improve the data and analytics project’s ROI.

If you focus on the business first, you will understand what applications are used throughout the organization. We find that our bank and credit union clients typically have 30, 40, or even 50 applications. Each application has a purpose; figuring out where each one fits into the data and analytics project is critically important to your strategic roadmap. With an understanding of all applications used in the organization, you will be able to accurately prioritize which applications should be integrated into your data and analytics platform and in what order. Coupling this with an understanding of key business processes, the data and analytics project team will have a great amount of business knowledge with which to design and support the initiative.

Summary

The takeaway from this post is simple: focus on identifying the business needs for data and analytics FIRST. Only after addressing those needs should the data and analytics project team start to explore which technologies to employ and how best to design the solution. Data and analytics projects that focus too heavily on the technology inevitably fall short of their long-term goals.

In one of our last posts, we talked about how marketers can use data. As part of the growing usage of data and analytics, we want to highlight the different ways key executives can leverage data to drive their business. In this post, we discuss five ways that lending analytics can be used in any bank or credit union.

A CLO (Chief Lending Officer) often has a wealth of information at their fingertips. Modern loan origination system (LOS) platforms capture mountains of data. Unfortunately, much of the information contained in these systems goes unused by lending teams. With the growth of non-traditional lenders, like LendingClub, credit unions and community banks must become more analytically driven to remain competitive lenders.

Five Ways CLOs Can Use Lending Analytics

Below we identify five ways that CLOs and their lending teams can use data and analytics:

1. Concentration Risk

Would a wise investor put all of his funds in one sector? One company? Of course not. Just as investors must ensure their portfolios have adequate diversity, lenders must ensure their risk is not overly concentrated. Managing concentration risk might involve stratifying your loan portfolio by geography, product type, or credit tier to ensure risk is appropriately spread across a variety of variables.

For example, if your lending portfolio is heavily concentrated in a particular county where business is booming, what happens if business collapses in that county? What might have looked like high credit quality loans could change dramatically and quickly if significant layoffs or business bankruptcy occurs. Similarly, CLOs want to strike a proper balance amongst their credit tiers. Buying A and A+ paper might reduce charge-offs, but this will negatively impact net interest margin by reducing the average yield for your portfolio.

Developing KPIs to constantly monitor concentration risk is something all lenders can and should be doing today. The reporting process should be automated to ensure accuracy and timeliness of information.
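As a rough sketch of what one such KPI query might look like, consider the T-SQL below. The Loans table and its columns (County, CreditTier, CurrentBalance, Status) are hypothetical placeholders; your loan system’s schema will differ.

    -- Share of outstanding balances by county and credit tier
    -- (hypothetical Loans table; adjust names to your schema)
    SELECT
        County,
        CreditTier,
        SUM(CurrentBalance) AS TierBalance,
        SUM(CurrentBalance) * 100.0
            / SUM(SUM(CurrentBalance)) OVER () AS PctOfPortfolio
    FROM Loans
    WHERE Status = 'Open'
    GROUP BY County, CreditTier
    ORDER BY PctOfPortfolio DESC;

Scheduling a query like this to run nightly, with alerts when any segment exceeds a policy threshold, is one simple way to automate the monitoring described above.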

2. Segmentation Analysis

Segmentation analysis is another great way for CLOs to get started with lending analytics. When integrated with profitability data, CLOs can identify which customer segments are driving the most revenue and profits.

Segmentation analysis can manifest in a variety of forms: lenders can segment customers by profitability, by geography, origination channel, or combinations of these and other factors. Proper communication with your bank or credit union’s marketing team then enables your organization to develop strategies to effectively target and lend to key segments.

For organizations with a mature data and analytics program, integrating lending data with other data sources can prove invaluable and open up additional segmentation opportunities. Overlaying loan application data with third-party data (like US Census Data) can identify new opportunities to expand and grow your loan portfolio.

3. Loan Performance Analysis

Your lending team probably spends a significant portion of their time generating new loans. But how are you monitoring loans that have already been funded? How are these loans performing over time?

Proper monitoring of a loan’s performance throughout its life is critical to successful data-driven lending. By monitoring loan performance over time, you will start to uncover key insights like how many loans are being charged off, are in default, have been paid off and other outcomes. The results may highlight common characteristics of high-performing loans and of low-performing loans. This information can then be used to manage your underwriting criteria along with your loan growth strategy.
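As an illustration, a vintage-style summary like the sketch below can surface these outcomes. It assumes a hypothetical Loans table with OriginationDate and Status columns; adjust the names and status values to your own LOS data.

    -- Loan outcomes by origination year ("vintage")
    SELECT
        YEAR(OriginationDate) AS Vintage,
        COUNT(*) AS LoansFunded,
        SUM(CASE WHEN Status = 'ChargedOff' THEN 1 ELSE 0 END) AS ChargedOff,
        SUM(CASE WHEN Status = 'PaidOff' THEN 1 ELSE 0 END) AS PaidOff,
        AVG(CASE WHEN Status = 'ChargedOff' THEN 1.0 ELSE 0.0 END) AS ChargeOffRate
    FROM Loans
    GROUP BY YEAR(OriginationDate)
    ORDER BY Vintage;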

4. Data-Driven Pricing Models

Companies like Deep Future Analytics are starting to bring this idea to many banks and credit unions. By leveraging your historical lending data, statistical models can be applied to optimize pricing while maintaining (or even reducing) your organization’s credit risk. Based on past loan performance, your bank or credit union can identify whether or not you can offer better rates on C or D quality loans while ensuring adequate ALLL. By understanding your past charge-off history and integrating with loan performance, this form of lending analytics can keep your CFO and your CRO (Chief Risk Officer) happy.

We anticipate a sharp increase in the number of community banks and credit unions performing this type of price modeling as data and analytics become more commonly leveraged throughout the industry. Large financial institutions have been doing this type of work for years; as data and analytics technology becomes more practical for smaller organizations, these advanced models should become more frequently adopted.
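A full pricing model is statistical in nature, but its inputs often start with simple aggregates like the sketch below, which again assumes a hypothetical Loans table:

    -- Historical charge-off rate and average yield by credit tier:
    -- a starting input for risk-based pricing analysis
    SELECT
        CreditTier,
        COUNT(*) AS Loans,
        AVG(CASE WHEN Status = 'ChargedOff' THEN 1.0 ELSE 0.0 END) AS ChargeOffRate,
        AVG(InterestRate) AS AvgRate
    FROM Loans
    WHERE OriginationDate < DATEADD(YEAR, -2, GETDATE())   -- seasoned loans only
    GROUP BY CreditTier
    ORDER BY CreditTier;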

5. Non-Traditional Risk Metrics

FICO scores have been the dominant number used in credit decisions at the majority of financial institutions. But, as 2008 showed, credit scores are not as flawless as they are often believed to be.

Modern lenders, like SoFi, are using non-traditional metrics to fund and refinance the massive amount of student loan debt outstanding. Their lending analytics models rely on factors like where you went to college and your current employment and income status. As more and more millennials “say no to credit cards”, traditional banks and credit unions should consider how to integrate these non-traditional metrics into their underwriting criteria.

While we don’t expect smaller organizations to completely overhaul underwriting criteria using non-traditional risk factors, coupling these risk factors with traditional credit score-based underwriting models could generate additional lending opportunities.


Summary

Lending analytics are becoming more common at community banks and credit unions. For banks and credit unions to remain competitive, greater adoption of these data-driven techniques must become part of any strategic plan. While we understand that jumping right into, for example, social media mining for credit decisions is a big leap for most organizations, laying a foundation to continuously improve lending analytics is critical to remaining competitive in the next decade.

Data and analytics is exploding.

Whether you call it big data, business intelligence, or analytics, it is clear that the use of data is going to be critical to any company’s success. Especially in the bank and credit union industry, data and analytics is a relatively new concept. Sure, data has been used in the past but never has it been so critical to sustainable operations.

With the growing adoption of data and analytics, the bank and credit union industry is reaching a critical juncture. In a previous post, we discussed how data and analytics will become a competitive necessity – not a competitive advantage – in the very near future. If you believe this to be true, two questions immediately arise:

1. In what ways can we use data and analytics?

2. Once we know what we want to do, how do we actually do it?

Over the next few weeks, we will be releasing several posts that primarily address the first question. Specifically, we will be addressing question 1 from several different executive perspectives. The ways that a CMO (Chief Marketing Officer) would use data and analytics might differ from how the CLO (Chief Lending Officer) or CFO (Chief Financial Officer) would use data and analytics.

Five Ways CMOs Can Use Data

In this post, we identify five ways that CMOs can use data.

1. A/B Testing

With the analytics now available from sites like Google Analytics, HubSpot, and MailChimp, data is readily available for website traffic, conversion rates, click-through rates, and a variety of other data points that are a gold mine for marketers.

A/B testing allows marketers to compare two versions of something (a site or page design, email content, offer, etc.) and measure the results of the user experience and interaction. For example, you could use A/B testing to determine which email subject line generated more opens or which content led to more conversions.

The ability to tweak content for users while using data to support your decisions allows marketers to gain valuable insights and deliver more successful messaging to consumers.
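For example, once the raw events land in a database, comparing variants becomes a one-query exercise. The sketch below assumes a hypothetical EmailEvents table with one row per recipient and 0/1 flags for opens and conversions.

    -- Open and conversion rates by A/B test variant
    SELECT
        Variant,
        COUNT(*) AS Recipients,
        AVG(CAST(Opened AS FLOAT)) AS OpenRate,
        AVG(CAST(Converted AS FLOAT)) AS ConversionRate
    FROM EmailEvents
    GROUP BY Variant;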

2. Customer/Member Segmentation

We have already discussed member segmentation in a previous post, but its value to marketers cannot be overstated. Through data and analytics, marketers can understand the channel usage, product propensities, and a variety of other details about specific segments of their customer base/membership.

Does the millennial want the same user experience as the retired baby boomer? Certainly not; it is important to understand these differences and create an experience that is most beneficial to that particular segment.

With the growth of non-traditional financial institutions like Ally Bank, a 100% digital bank, or Lending Club, a peer-to-peer lending platform, CMOs in the banking industry will need to use data and analytics to deliver the proper experience and messaging to the appropriate segments.

3. Channel Analysis

There is some overlap between channel analysis and customer/member segmentation, but, with the digital growth of the financial industry, channel analysis is deserving of its own section.

Who is contacting your call center? Who hasn’t completed an in-branch transaction throughout their entire customer/member lifetime with your FI? These are the types of questions CMOs must ask and data and analytics can answer. Asking these questions enables the CMO to define the proper growth strategies based on the underlying data.
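As a small illustration, the second question above could be answered with a query like this sketch, assuming hypothetical Members and Transactions tables:

    -- Members with no in-branch transactions in their entire history
    SELECT m.MemberID
    FROM Members m
    WHERE NOT EXISTS (
        SELECT 1
        FROM Transactions t
        WHERE t.MemberID = m.MemberID
          AND t.Channel = 'Branch'
    );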

4. Measuring Consumer Sentiment with Social Media

Facebook, Twitter, Instagram, LinkedIn and the multitude of other social media networks can offer a wealth of information about consumer sentiment. With each new post about your bank or credit union, you can gain a more complete picture of the consumer’s feelings towards your FI. CMOs can use data from social media to gain insights that would ordinarily require surveys or focus groups to gather. These insights can be gained more quickly and efficiently with new technology.

Several companies now specialize in analyzing social media posts to determine consumer sentiment towards a branch or product. These insights, when combined with data from previous marketing efforts and internal operations, allow a CMO to identify and address flaws in products, services, or messaging quickly and efficiently.

Social media posts and their associated text are considered “unstructured data”. The key difference between standard business intelligence and big data lies in big data’s integration of unstructured and structured data sources. Organizations that leverage social media information can accurately claim to be using big data in their marketing efforts.

5. Tracking and Measuring ROI

Have you ever watched Shark Tank? How often do you hear Mark Cuban ask an entrepreneur “what is your cost of customer acquisition”? Can your bank/credit union get this metric? Can you get it quickly?

CMOs can become their CFO’s best friend by combining financial data with their marketing information. By tracking average conversion on marketing campaigns and combining it with product and service profitability data, you can create realistic ROI models for finance to evaluate marketing activities. Overlay this data with the strategic growth objectives of the bank/credit union, and your models and forecasts will improve significantly.
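As a minimal sketch of that cost-of-acquisition metric, the query below joins campaign spend to the accounts each campaign generated. The Campaigns and NewAccounts tables are hypothetical; the idea is simply spend divided by accounts acquired.

    -- Cost per acquisition and a simple net return per campaign
    SELECT
        c.CampaignName,
        c.TotalSpend,
        COUNT(a.AccountID) AS AccountsAcquired,
        c.TotalSpend / NULLIF(COUNT(a.AccountID), 0) AS CostPerAcquisition,
        ISNULL(SUM(a.FirstYearProfit), 0) - c.TotalSpend AS EstimatedNetReturn
    FROM Campaigns c
    LEFT JOIN NewAccounts a ON a.CampaignID = c.CampaignID
    GROUP BY c.CampaignName, c.TotalSpend;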

The ability to measure and track the success of your campaigns enables you to learn from each marketing effort. Even some of the other techniques mentioned in this post, like A/B testing and monitoring social media sentiment, can be combined with campaign tracking to improve the likelihood of success for future efforts.

What does this all mean?

The US Navy SEALs have a saying, “slow is smooth; smooth is fast”. For data and analytics in organizations, it can be said like this: “small steps lead to small wins; small wins lead to big wins”.

CMOs can use data to drastically improve marketing efforts. In their positions, they have the ability to drive data and analytics initiatives by applying some of the ways to use data we have suggested in this post. Marketers should start by taking small steps with data and analytics. These small steps will lead to small wins. But, after a few small wins, some big wins will emerge. Like a flywheel, it’s challenging, at first, for data and analytics initiatives to gain momentum. But with some small wins and a good data strategy, your initiatives will build momentum, those small wins will lead to big wins, and your organization will be on its way to becoming data-driven.

Most people don’t know it by name, but most recognize the unique design of the Sierpinski Triangle. The fractal is constructed by recursively breaking equilateral triangles down into increasingly smaller triangles. Mathematicians have a field day with this type of object, but most people don’t equate it to any kind of practical business visualization. As we talk about organizational KPIs, you will see that the Sierpinski Triangle is the perfect visualization of how we would like to create a clear line of sight from departmental operational metrics to strategic KPIs.

What are KPIs?

KPIs (Key Performance Indicators) are well defined by Investopedia as “a set of quantifiable measures that a company or industry uses to gauge or compare performance in terms of meeting their strategic and operational goals.” For example, achieving a return on assets (ROA) of 70 basis points, 5% loan growth, or 3.5% share growth are all examples of KPIs that a credit union might define as its strategic metrics for the year.

KPIs are not designed to be tactical or operational in nature. Achieving an application to funding time of under thirty days for a new mortgage is more operational in nature and therefore wouldn’t be considered strategic enough to be an organizational KPI. That is not to say that the metric isn’t important – it simply is better suited as an operational, departmental metric.

The Relationship Between KPIs and Departmental Metrics

Departmental metrics are more operational in nature. In the last paragraph, we gave an example of reducing the average time to funding for mortgage applications. This would be a great example of an important metric for the mortgage lending department of a bank or credit union. A consumer lending department might be interested in the month-to-date origination volume as a departmental metric. These metrics, unless of the utmost importance to the strategic direction of the financial institution, are best suited to be measured and managed at the departmental/business unit level.

KPIs and departmental metrics should have a direct relationship. Let’s assume, for example, that return on assets (ROA) is a KPI for your financial institution. ROA is defined as the ratio between net operating income (net operating earnings) and average total assets. For those organizations with well-defined KPIs, this ratio is usually very closely monitored. The CFO, however, might care about average total assets and net operating earnings to date as two separate metrics for the finance business unit. The organization measures and manages to the ROA; the finance department might measure and manage the two individual metrics that comprise ROA.
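To make the rollup concrete, here is a minimal T-SQL sketch. The MonthlyFinancials table and its columns are hypothetical, and the average of month-end balances is used here as a simple proxy for average total assets.

    -- Organizational KPI (ROA) rolled up from the two departmental metrics
    SELECT
        SUM(NetOperatingIncome) AS NetOperatingIncomeYTD,    -- finance metric 1
        AVG(TotalAssets) AS AvgTotalAssets,                  -- finance metric 2
        SUM(NetOperatingIncome) / AVG(TotalAssets) AS ROA    -- organizational KPI
    FROM MonthlyFinancials
    WHERE FiscalYear = 2016;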

KPIs and Sierpinski

What does the Sierpinski Triangle have to do with KPIs? Consider the outer triangle as the organizational KPIs. These KPIs encompass the entire bank/credit union and are integral to the organization’s strategic direction. Then consider the next level (the four next biggest triangles within the outer triangle) as the major business unit metrics. For example, in an organization with a COO, a CFO, a CLO, and a CTO, the four inner triangles would represent the metrics managed by each of those four C-level individuals. The next level of smaller triangles within the second level of triangles (or metrics in our analogy) might represent the departments managed by the C-level staff. For example, the CLO might have three departments under his or her guidance: consumer lending, mortgage lending, and commercial lending. These increasingly smaller triangles might represent the metrics for each of those departments.

So, the Sierpinski Triangle is the perfect representation of developing a clear line-of-sight between the departmental, operational metrics and the strategic KPIs. The loan originator should know how his or her daily work supports the strategic KPIs and goals set forth by the CEO. This line-of-sight and transparency that proliferates top-down throughout the organization is one of the greatest strengths of a successful data-driven organization.

Ask yourself this: Is there a clear line-of-sight from the daily activities of those in front-line operations and back-office administrative support roles to the KPIs and goals of the organization? Do branch staff understand how their day-to-day actions affect the overall growth of the bank or credit union? Defining and nurturing this line-of-sight is a major factor in the success of any business intelligence and analytics program. It also helps support the growth of the data-driven culture that is necessary for sustained success in the areas of analytics and BI.


For assistance in defining your organization’s data strategy or if you simply have questions about business intelligence and/or analytics, please contact Brewster Knowlton at brewster@knowlton-group.com or call 860-593-7842.

In our last post, we discussed why analytics will become a necessity to compete in an evolving banking industry. In that article, we promised to follow up with a series of posts that contain specific examples of how analytics can be applied. This first post will discuss how analytics and business intelligence can be used for member segmentation analysis and developing member/customer insights.

Using Analytics for Member Segmentation and Insights

If a member calls into a contact center asking about a fee they were assessed, what would your staff see about this person? If your organization is like most, the contact center staff would see this member’s basic accounts, current balances and maybe recent interactions with the credit union that were tracked in a CRM system. But would they see that this member also had $2 million with your wealth management unit? Or that this individual has a $500,000 mortgage with the credit union on a home worth over a million dollars?

Most banks and credit unions do not possess a complete 360-degree profile of a member’s products and services. This poses several issues that can negatively affect the organization.

Member Segmentation and Insights by Overlaying Profitability

Without a complete picture of the costs and profitability associated with all products and services, member segmentation is at worst impossible and at best incomplete. Overlaying product and service profitability with all products and services owned and used by a member allows you to segment your membership based on net profitability.

What is more profitable – the member with a share draft, a primary share, and a new auto loan who transacts in branch, or the member with a primary share, a home equity loan, a debit card, and a credit card who transacts using only digital channels?

By segmenting based on profitability, we gain greater insights into our membership while also furthering our understanding of the product and service offering.

How can we leverage this information?

Take one of Progressive’s best features – the competitor price comparison. If you go to Progressive’s website, you can get a quote from them and their competitors all at once.

Why would Progressive want to show you the price of competitors when Progressive might be higher?

They are so confident in their ability to calculate risk-based pricing that they believe if a competitor can offer you a lower rate, then you don’t fit into their desired risk profile. By not fitting into their desired risk profile, Progressive is creating a natural filtering of customers that might end up costing them more in claims payout than in premiums received. They are, in essence, saying “if someone can offer you a lower rate, then we feel you are too risky to be our preferred customer”.

What does this have to do with member profitability segmentation?

Not every member/customer is going to be profitable. But wouldn’t we want to maximize the number of profitable members in our membership? By identifying unprofitable members, we can target market to entice them into more profitable products and services. If your primary marketing effort to acquire new members involves a free checking account with other free benefits, is that really the best strategy? You are most likely acquiring members who tend to be predominantly unprofitable. Consider offering free checking and other benefits only if they sign up for a more profitable product or service. Instead of gaining unprofitable members, your marketing now acquires profitable members – which your CFO will most certainly be excited about.

These relatively simple business strategies can only be accurately designed with profitability data that allows for member segmentation.
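As an illustration of the underlying mechanics, a profitability segmentation might start with a query like this sketch, which assumes hypothetical MemberProducts and ProductProfitability tables (real profitability models are considerably more involved):

    -- Net annual profitability per member, bucketed into simple segments
    SELECT
        mp.MemberID,
        SUM(pp.AnnualProfit) AS NetAnnualProfit,
        CASE
            WHEN SUM(pp.AnnualProfit) < 0 THEN 'Unprofitable'
            WHEN SUM(pp.AnnualProfit) < 100 THEN 'Marginal'
            ELSE 'Profitable'
        END AS Segment
    FROM MemberProducts mp
    JOIN ProductProfitability pp ON pp.ProductCode = mp.ProductCode
    GROUP BY mp.MemberID;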

Member Segmentation by Channel Usage

What about channel usage?

How does a primarily digital member, like myself, differ from a member who does all of their banking in a branch? What about one who frequently contacts the call center? By understanding which of our members use digital channels – and how frequently – we can design marketing and member experience strategies that leverage this information.

If you want to convince me, a digital member, to get a new credit card, the best strategy is not to send me something in the mail. But, what if you could add a credit card offer to the header or a call-to-action in online/mobile banking next time I log on? Your conversion rate will be much higher using this approach than simply blanketing everyone with the same message. The ability to leverage this targeted marketing approach is only possible after segmenting your membership by channel usage.
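A channel segmentation like this can start small. The sketch below flags members whose activity over the last 90 days is at least 80% digital; the Transactions table and its Channel values are hypothetical stand-ins for your own transaction history.

    -- Members who are primarily digital over the last 90 days
    SELECT
        MemberID,
        SUM(CASE WHEN Channel IN ('Online', 'Mobile') THEN 1 ELSE 0 END) * 1.0
            / COUNT(*) AS DigitalShare
    FROM Transactions
    WHERE TranDate >= DATEADD(DAY, -90, GETDATE())
    GROUP BY MemberID
    HAVING SUM(CASE WHEN Channel IN ('Online', 'Mobile') THEN 1 ELSE 0 END) * 1.0
            / COUNT(*) >= 0.8;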

Customer Lifecycles

What would the value be to your organization to identify a member who used to have $10,000 in savings and an average balance of $3,000 in checking but now has only $1,000 in savings and has closed their checking account? This would seem to indicate that this individual has shifted their primary banking operations to another financial institution.

With share of wallet being such a critical metric (especially in highly competitive and saturated markets), being able to identify these individuals is critical. Unfortunately, credit unions and banks tend to struggle when it comes to understanding their member/customer lifecycle.

Ideally, we would want to define triggers that initiate customer retention measures to re-engage a member/customer before they have completely switched their primary banking activity to another FI. Using analytics and the various techniques it makes available, an organization can identify these individuals with relative ease and potentially regain a greater share of wallet.
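One such trigger could be as simple as the sketch below, which compares each member’s current deposits to a snapshot from six months earlier. It assumes a hypothetical BalanceSnapshots table populated by a nightly data warehouse load.

    -- Flag members whose total deposits fell by more than half in six months
    SELECT
        cur.MemberID,
        prior.TotalDeposits AS DepositsSixMonthsAgo,
        cur.TotalDeposits AS DepositsToday
    FROM BalanceSnapshots cur
    JOIN BalanceSnapshots prior
        ON prior.MemberID = cur.MemberID
       AND prior.SnapshotDate = DATEADD(MONTH, -6, cur.SnapshotDate)
    WHERE cur.SnapshotDate = CAST(GETDATE() AS DATE)
      AND cur.TotalDeposits < prior.TotalDeposits * 0.5;

In practice, results like these would feed a retention campaign or a task queue in the CRM rather than a one-off report.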

Last Words

The shift from reactivity to proactivity is a major paradigm shift that analytic organizations embrace. By investing in business intelligence and analytics, banks and credit unions are able to make more effective strategic and marketing decisions through improved member segmentation analysis and insights. These are but a few of the many examples and applications of how you can employ analytics within your FI.


To learn how The Knowlton Group can help you define the proper business intelligence and analytics strategy or if you need help implementing an analytics solution, contact Brewster Knowlton at brewster@knowlton-group.com or call 860-593-7842.

Analytics in Credit Unions and Banks

The word analytics now tends to bring companies like Google, Amazon and Facebook to mind. What most people don’t realize is that analytics – and the benefits it brings – can be achieved by organizations of nearly every shape, size, and industry. In the banking industry, most would tend to equate analytics with huge financial institutions like Bank of America, JPMorgan Chase, or Citigroup. But the $500 million banks and credit unions as well as the $5 billion banks and credit unions can achieve wildly successful results through the adoption of analytics and business intelligence programs.

For a variety of reasons, analytics in the banking industry tends to be treated as a “nice-to-have” innovation – rife with risk and without significant returns – as opposed to a competitive necessity. Increasing regulations that place a greater strain on fee income and operating costs, along with a low interest rate environment, mean banks and credit unions must become more innovative to stay relevant and competitive. Factoring in the advent of non-traditional financial institutions and competitors like LendingClub (a peer-to-peer lending platform) and Ally Bank (a 100% digital bank with over $100 billion in assets) places even more competitive pressure on traditional banks and credit unions.

Want to know how to combat this changing environment? Invest in analytics and business intelligence.

Over the next few weeks, we will be releasing several posts that identify specific ways financial institutions can benefit from analytics. We will discuss several areas in which analytics can be applied including member segmentation insights, profitability and lifecycle analysis, rate risk modeling, internal operation efficiency gains, channel analysis and alignment, and strategic planning and decision making.

Our hope is that through this education on analytics and business intelligence, your organization can adopt a more analytical mindset and take the first steps towards becoming data-driven.

Banking in the year 2030

It is now common knowledge that purchasing airline tickets at different times relative to your departure impacts the price you pay for the seat. Hotels use a similar technique to adjust room rates. This technique, known as “yield optimization” or “revenue management”, maximizes revenue by anticipating supply and demand (either airline seats or vacant rooms). It is a highly analytical technique and represents a fantastic use of data to increase profitability. Yield optimization was first introduced by American Airlines in the 1980s; it brought in $1.2 billion over three years and even eliminated some competitors. Prior to this, the airline industry had no such analytical technique.

For several years, American Airlines enjoyed this competitive advantage – but not forever. Soon, other airlines adopted this analytical pricing methodology, making yield optimization a necessity for survival instead of a competitive advantage. A similar trend happened in the hotel industry after Marriott introduced its own revenue management techniques. What first existed as a competitive advantage soon turned into a requirement for remaining relevant in the industry. This is discussed in quite a bit of detail in Thomas Davenport and Jeanne Harris’ book Competing on Analytics: The New Science of Winning – a book I highly recommend.

We believe that the banking industry will follow the same path as the airline industry and hotel industry. As banks and credit unions become more analytical, the early adopters will yield significant benefits in the form of operational efficiency, profitability increases, cost reductions, reduction in risk profile and several other benefits of analytics that we will discuss in upcoming posts. Soon after, all banks and credit unions will be required to develop their own analytical capabilities if they wish to remain relevant and stay in business. Those who refuse to adopt these techniques will be eliminated by competitors or absorbed through acquisitions by more analytically-driven financial institutions.

What does it all mean?

It comes down to three choices for banks and credit unions over the next several years:

1. Embrace analytics and adopt early. Early adoption will lead to significant competitive advantages in their operating regions. Market share, profitability, and member experience will all rise as a result of this early competitive advantage.

2. Wait until analytics seems less risky and adopt later. This late adoption will mean that the organization will not yield the significant gains seen by early adopters, but it will at least ensure their survival in a new data-driven industry.

3. Hold on tight to the old way of doing business, fail to embrace analytics, and – in all likelihood – lose market share, lose profitability, lose members, and eventually be acquired by a more analytical organization who can revitalize and re-engage your remaining membership.

Every bank and credit union can build an analytically driven organization with the right help. Over the coming weeks, we will release posts highlighting some very specific examples of how analytics can help your bank or credit union.


To learn how The Knowlton Group can help you define the proper business intelligence and analytics strategy or if you need help implementing an analytics solution, contact Brewster Knowlton at brewster@knowlton-group.com or call 860-593-7842 to learn more!

I get this question all the time: what makes a data warehouse different from a regular database?

To those in organizations with a less robust business intelligence and analytics maturity, it may be difficult to differentiate between the two. However, there are very distinct differences between a data warehouse and a regular database. In this post, I will quickly address the major differences.

Read vs. Write

A “regular” database is generally the destination for data created or modified by a front-end application. In banking, the core system will typically record all changes and data to a standard database. A CRM system (like Dynamics CRM) will write to a standard SQL Server database. These databases are specifically designed to write data (update or insert it) as quickly as possible. They are not optimized for read (data retrieval) operations. In other words, the database is not designed to efficiently get data out. A “regular” database is often normalized; this structure optimizes those write operations.

A data warehouse, on the other hand, is designed for read operations. The structure of the data warehouse is designed to accommodate the fact that the majority of a user’s interaction with the warehouse will be retrieving data from it. The only time a “write” operation – any operation that modifies existing data or adds new data – occurs in the warehouse is at night during the ETL process (in most data warehouses anyway).

Unlike a “regular” database, a data warehouse is denormalized. You will often hear this structure referred to as a star schema when discussed in a business intelligence context.
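To make the contrast concrete, here is a minimal star schema sketch: a central fact table holding nightly share balances, surrounded by descriptive dimension tables. The table and column names are illustrative only.

    -- Dimension tables describe the "who" and "when"
    CREATE TABLE DimMember (
        MemberKey INT PRIMARY KEY,
        MemberName VARCHAR(100),
        BranchName VARCHAR(50)
    );

    CREATE TABLE DimDate (
        DateKey INT PRIMARY KEY,    -- e.g., 20160630
        CalendarDate DATE,
        MonthName VARCHAR(20),
        CalendarYear INT
    );

    -- The fact table holds the measures, keyed to the dimensions
    CREATE TABLE FactShareBalance (
        DateKey INT REFERENCES DimDate (DateKey),
        MemberKey INT REFERENCES DimMember (MemberKey),
        ShareBalance DECIMAL(18, 2)
    );

Notice how the design keeps descriptive attributes in wide, denormalized dimension tables instead of normalizing them away; that redundancy is exactly what makes read queries simple and fast.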

Point in Time vs. Historical

A credit union’s core banking database will very easily tell you what a member’s current share balance is. But will it just as easily tell you what that member’s balance was yesterday? Last week? Last month? A year ago?

Another major difference between a “regular” database and a data warehouse is that a data warehouse is designed for historical reporting. Data warehouses accrue information nightly from various source systems; a “regular” database typically only stores current data (or an extremely limited amount of historical data). Some historical data might be buried in log files or audit trails in a “regular” database, but the database is by no means designed to efficiently or easily access this data. A data warehouse, whose primary purpose is for analytics and intelligence, makes accessing this historical data incredibly easy.

Think of what you ask your front-end applications versus what you might ask a data warehouse. A front-end application (whose data is stored in a “regular” database) might be asked “what sales prospects are in our system today?”. A data warehouse might be asked “what is the average number of sales prospects we have had in our pipeline for the last twelve months?”.
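Using the illustrative star schema sketched earlier, that kind of historical question becomes a short read query:

    -- Average share balance by month, answered straight from nightly snapshots
    SELECT
        d.CalendarYear,
        d.MonthName,
        AVG(f.ShareBalance) AS AvgShareBalance
    FROM FactShareBalance f
    JOIN DimDate d ON d.DateKey = f.DateKey
    GROUP BY d.CalendarYear, d.MonthName
    ORDER BY d.CalendarYear;

Answering the same question from a “regular” database would mean digging through log tables or audit trails that were never designed for this kind of retrieval.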

Single Source vs. Multiple Sources

A “regular” database is usually the backend to a front-end application often used by business users or customers. Each backend database is designed to only store data for that particular application.

A data warehouse, on the other hand, is designed to integrate data from various application sources. In healthcare, for example, a data warehouse might integrate the main EMR system database, a third-party surgical scheduling application database, a billing database, and a logistics application’s database. A credit union’s data warehouse might integrate the core platform, consumer loan origination system, mortgage origination system, and CRM application. Each of these data warehouses has integrated data from various “regular” databases. This ability to integrate data from various sources is probably the most important difference between a “regular” database and a data warehouse.


There are several other differences between a data warehouse and a “regular” database, but these few points represent the major differences in structure and design. As big data, analytics, and business intelligence become more and more critical to an organization’s sustained success, becoming familiar with some of these key business intelligence concepts will become a necessity. Take a look at some of our other business intelligence posts to learn more!


Data warehouses, multi-dimensional analysis cubes, and predictive analytics can appear to be imposing goals.

Where do we start?

How quickly can we accomplish our data goals?

Do I have the right talent?

These questions, and many like them, create doubt and concern amongst credit union executives wondering how they can start utilizing data better and begin to develop BI practices. Rich Jones and I have developed a business intelligence strategic engagement for credit unions seeking to answer these questions about business intelligence and data utilization. From the $300 million credit union or community bank without an MCIF to the $2.8 billion organization working to build its first data warehouse, we help each organization design a very specific roadmap that will enable them to accomplish their business intelligence goals and become a data-driven credit union.

So how does it work?

We recently completed a very successful business intelligence strategy engagement with a large east coast credit union. Their story was like most credit unions – there were many third-party applications and data sources spread throughout the organization. The information contained within these disparate applications made it incredibly challenging to get a 360-degree view of their membership and their internal business operations. Realizing that they could do more with their data, they sought our assistance to develop a long-term business intelligence and data strategy that could lead them to success.

Once we arrived onsite, we met with nearly 30 individuals over a four-day period to identify the various applications used throughout the credit union, the current reports and data uses, the needs of each department, and a variety of other relevant details. From the CEO to the Vice President of Consumer Lending to the lead Lending Analyst, each individual interviewed gave us the insight necessary to understand the organizational data needs and current data challenges. These interviews were absolutely critical to the success of the engagement and comprise the overwhelming majority of the time that we spent onsite.

With the interviews completed (and many pages of notes!), we consolidated our discovery notes and developed an 18 to 36 month data strategy that will allow the credit union to achieve their data goals. Our very detailed implementation plan included hiring milestones, third-party data integration milestones, business intelligence unit establishment deadlines, and several other key deadlines and milestones. These very specific deadlines allow the credit union to monitor their progress as they undertake each step of the process. Sequential, logical, and iterative, the roadmap breaks down the business intelligence strategy into very achievable and digestible tasks that reduce the stress that often accompanies significant organizational projects.

Prior to this, the credit union was struggling with how to take the right first steps in achieving their data goals. The project loomed ominously as they continued to struggle with business intelligence and data utilization.

Now, the credit union understands exactly what they need to do, whom they need to hire or train, when they need to achieve certain milestones, and all of the details in between. Stress and concern over business intelligence has turned into excitement and opportunity. They now have the plan in place to become a data-driven credit union!

Why us?

Our implementation plans and business intelligence roadmaps are based on significant past experience and expertise. Rich’s experience leading the establishment of business intelligence units at multiple large credit unions and my expertise in the tactical development of data warehouses and business intelligence operations create a powerful combination of two distinct yet complementary skill sets. Together, we can help change the credit union world from a backwards-looking, report-reading industry into one that is forward-looking and data-driven.

You may contact Brewster Knowlton at brewster@knowlton-group.com or Rich Jones at rich@leading2leadership.com. Click to get additional information about our business intelligence strategic engagement!


In my post discussing Top-Down BI Strategy, I introduced a chart which I call the “Data Mass Value Curve”.

[Figure: Data Mass Value Curve – the value of data plotted against the amount of data available]

This graph depicts an exponential increase in the value of data as a function of the amount of data available. That is to say, the more data you have available to you, the more the value of that data increases – exponentially. But why is this true? Why does the value of the available data increase exponentially?

Why is the Increase of Data Value Exponential?

Take the example of a credit union that only has access to its core data. This credit union will only be able to provide a one-dimensional analysis of its membership. It can answer simple requests like “how many of our members have share drafts?” or “how many of our members have vehicle loans?”. Those questions are simple and do not provide particularly insightful analysis.

Now look at the credit union that has access to both its core data and its CRM system. Questions that were once simple, like “how many of our members are over the age of 65?”, become “how many of our members over the age of 65 call into our call center more than once a week?”. The questions are much more insightful, and the integration of data systems allows more complex questions to be asked of the data. Instead of asking questions only about the core or only about the CRM system, you can ask questions that draw data from both systems at once.
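As a sketch of what such a cross-system question might look like once both sources are integrated, consider the query below. The CoreMembers and CrmCalls tables are hypothetical stand-ins for the two systems’ data.

    -- Members over 65 who averaged more than one call-center contact per week
    SELECT
        m.MemberID,
        COUNT(c.CallID) / 52.0 AS CallsPerWeek
    FROM CoreMembers m
    JOIN CrmCalls c ON c.MemberID = m.MemberID
    WHERE DATEDIFF(YEAR, m.BirthDate, GETDATE()) > 65   -- rough age calculation
      AND c.CallDate >= DATEADD(YEAR, -1, GETDATE())
    GROUP BY m.MemberID
    HAVING COUNT(c.CallID) / 52.0 > 1;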

As the number of data sources available to us increases, the number and complexity of questions we can ask of our data continues to increase. With three data sources available, we can draw analysis from one, two, or all three sources at the same time. This is the crux of the exponentially increasing value of data as the number of data sources (or amount of data) increases. This level of multi-dimensional analysis is critical to a business intelligence program that delivers significant tangible benefits to the organization.

Collect as much data as possible, right?

Your first thought might be, “let’s collect as much data as we possibly can”. But there is a slight catch – the “Data Mass Value Curve” assumes a proportional level of data expertise that can properly and efficiently utilize the data available. Often, the limiting factor of an organization is the expertise with which they can utilize the data available. When the data expertise barrier is reached, the increasing amount of available data provides no additional value.

[Figure: Data Mass Value Curve with the expertise barrier reached early on the curve]

In this curve, the expertise barrier is reached quite early and the value of the data is stifled. The actual value of the data is a fraction of its potential given the limited expertise to utilize the data properly.

Now let’s look at an organization with a bit more data expertise available:

[Figure: Data Mass Value Curve with the expertise barrier reached farther out on the curve]

Notice that the expertise barrier is reached farther out on the curve. This organization is not able to completely leverage the amount of data available to them, but they are able to make more of an impact with their greater expertise.

The organization below has expertise that can provide nearly all the value possible:

[Figure: Data Mass Value Curve with the expertise barrier near the top of the curve]

This organization has the necessary resources, talent, and efforts to be able to properly utilize the data sources available to them.

How does this help me today?

The “Data Mass Value Curve” is naturally a bit abstract. However, the concepts can be applied to your BI environment. Ask yourself some simple questions:

  1. How many data sources can I access in a SQL database format?
  2. Of these data sources, how much of the data is integrated? Do you have data marts or a data warehouse in production?
  3. Do I have the necessary staff, resources, or talent to be able to effectively utilize and analyze this data?

After answering those basic questions, you can start to gauge where your organization sits on the “Data Mass Value Curve”. Once you have established your location on the curve, you can begin to roadmap a strategy to achieve proper data utilization and returned value. Depending on the answers to the questions above, you may need to hire or train SQL and BI talent. Other organizations may have a necessary level of expertise but not enough integrated or accessible data sources available to provide value. Abstractly understanding your organization’s location on the curve can help you drive your business intelligence investment in the right direction.

In our last lesson, we showed you how to use SSIS to export data to Excel. In fact, all of our previous SSIS lessons have focused on exporting data from SQL into some other format. In this lesson, however, we will show you how to import data from Excel into SQL.

This is very helpful as you start to develop your business intelligence program in your bank or credit union. Think of how much data exists at your financial institution sitting in the form of Excel spreadsheets. By importing this data into SQL, it can be cross-referenced and/or integrated with existing SQL data. This is a key first step in any business intelligence program, and in this lesson we will help you get started!

Setting up our Data Source

The Excel spreadsheet that I will be importing into SQL is based on data compiled by the NCUA. The spreadsheet contains a list of all credit unions and their associated member totals and total assets (as of 6/30/2014). The image below gives you a quick snapshot of what the data looks like:

[Screenshot: header rows of the NCUA spreadsheet]

With our source Excel spreadsheet identified and described, open your SSIS solution named “Credit Union SSIS Solution” if you have been following along with previous examples; otherwise, create a new SSIS solution. Name the new package “Import to SQL.dtsx”. Just like we have in past examples, drag a “Data Flow Task” from the “SSIS Toolbox” onto the “Control Flow” designer. Right-click on the task, click “Rename”, and then rename the task “Import NCUA File into SQL”.

[Screenshot: the renamed Data Flow Task on the Control Flow designer]

Now, double-click on the newly renamed task to be brought to the “Data Flow” editor. From the “SSIS Toolbox”, drag an “Excel Source” item onto the data flow designer. Rename this item “NCUA Excel File”.

[Screenshot: the renamed “NCUA Excel File” source item on the data flow designer]

Double-click on the renamed item; the “Excel Source Editor” window will appear on your screen. Click the “New” button to the right of the “Excel connection manager” dropdown. Click “Browse”, then locate the file you will be importing (in my case, the NCUA file), press “Open” and then press “OK” when you are brought back to the “Excel Connection Manager” window.

In the “Name of the Excel sheet” dropdown, choose the specific spreadsheet within the Excel workbook that you wish to choose as your data source.

[Screenshot: the Excel Source Editor window]

Press “OK” once this has been completed.

Defining the Destination to Import Data into SQL

With the Excel source defined, drag an “OLE DB Destination” item from the “SSIS Toolbox” onto the data flow designer. Rename this item “SQL Destination”. Click the “NCUA Excel File” item so that it is selected then drag the blue arrow from the source item to the “SQL Destination” item.

[Screenshot: the source and destination items connected on the data flow designer]

Next, double-click on the “SQL Destination” item. Here, we define where the NCUA data will reside in SQL. Click the “New” button to the right of the “OLE DB connection manager” dropdown. Select the connection from the left panel of the “Configure OLE DB Connection Manager” window that appears, or define a new one by clicking the “New” button at the bottom of the window. Press “OK”.

For this example, we will need to create a new table in the database in which this Excel data will reside. To do this, click the “New” button to the right of the “Name of the table or the view” dropdown. The “Create Table” window will appear on screen containing the SQL code that will be used to create a new table. I want to make one subtle change and alter the name of this table. After “CREATE TABLE”, replace “[SQL Destination]” with “NCUA_Data”.

[Screenshot: the Create Table window with the modified table name]

Press “OK” once you have done this. If you open SQL Server Management Studio, you will see that this new table has been created in the database. This is important; the table is created once you press “OK” from the “Create Table” window – NOT when you execute the package for the first time.

To finish setting up the destination, press the “Mappings” option from the left side of the “OLE DB Destination Editor” window. Ensure that the column mappings are correct, and then press “OK”.

Execute the package (by pressing F5 or the green play button on the toolbar) to see if everything works. Assuming you successfully execute the package in SSDT, open up SQL Server Management Studio. We want to view the contents of the table we just sent the Excel data to in order to validate the process properly imported all data.
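A couple of quick queries are enough for that validation; the column names will match whatever headers came over from the spreadsheet:

    -- Row count and a sample of the imported NCUA data
    SELECT COUNT(*) AS RowsImported FROM dbo.NCUA_Data;

    SELECT TOP 10 * FROM dbo.NCUA_Data;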

[Screenshot: querying the imported data in SQL Server Management Studio]


It’s that easy! The process required to import data into SQL from Excel is nearly identical to the export examples we completed earlier in this series. We simply define the source, define the destination, and then the mappings between the two. Learning this relatively simple task in SSIS can open up tons of new opportunities to use your data more effectively. Eliminate silos of Excel data by bringing that data into SQL. Doing so will allow you to integrate and cross-reference it with your other SQL data sources, ultimately bringing you closer to business intelligence success!

Download the Excel file from this lesson by clicking this link: NCUA_Data.xlsx