In our efforts to help organizations develop successful business intelligence and analytics strategies, we have identified some of the most common reasons BI projects fail. Regardless of the size and scale of your organization, these common BI project issues could cause your analytics solution to fall short of its full potential. Avoid these common pitfalls to keep your BI project from falling off the BI cliff!

1. The team is too technically focused or too business-focused.

A successful business intelligence project requires the right balance of technical and business expertise. An analytics solution developed solely by technical staff tends to miss the business processes and expertise required to design a sustainable solution. Similarly, a BI project managed and developed solely by business staff usually overlooks the technical requirements of a successful deployment.

A key to success is a project team composed of both business staff and technical team members. Cooperation between the business users and IT is absolutely crucial.

2. Too much focus is given to the tools and not the data.

There are many great business intelligence and data visualization tools out there – QlikView, Tableau, and Cognos are just some of the more popular ones. But organizations just starting to develop their business intelligence solutions tend to focus too much on the BI tool, forgetting that a BI tool is only as good as the data that lies beneath it. You can build a car with the body of a Ferrari, but it is useless without an engine. The BI tool is the body, while the data warehouse is the engine.

Successful BI projects focus on developing a quality data model first and the BI tool after.

3. The “I want it all” Problem

Another common issue we find in organizations with a less mature business intelligence culture is that they want every possible piece of data right away. Sure, a successful analytics solution has a lot of data available, but that data is handpicked for a specific reason. If we try to load all data into a data warehouse or other BI solution at once, the solution tends to fall apart. Data quality issues arise and unnecessary data model complexity is introduced.

Remember this important point: a data warehouse business intelligence solution is designed for reporting and analysis. Not every data point is relevant (or at least of high priority) when deploying your first BI solution. Your BI project team should take an iterative approach to developing the BI solution.

4. Data Definition Consistency

An often-overlooked yet critical aspect of a successful business intelligence implementation is clearly and consistently defined data definitions. Different areas of a credit union might have separate definitions for the term “member”. Similarly, banks might have separate definitions for the term “customer”. Depending on a hospital department’s function, it might use a slightly different definition for the term “patient” or “visit”. Without consistent terms, pulling a report for the number of members in a credit union could yield five different answers depending on who pulls the report.

Definitions for key business terms must be defined up front and agreed upon by all project stakeholders. (P.S. this one might end up being harder than you think!)
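
One practical way to make an agreed-upon definition stick is to encode it in a single shared database object that every report queries. As a minimal sketch (the dbo.Member table, its columns, and the “active member” rule here are hypothetical, not taken from any specific core system):

CREATE VIEW dbo.ActiveMember AS
SELECT MemberID, MemberName
FROM dbo.Member
WHERE ClosedDate IS NULL   -- the agreed-upon definition of "member" lives in one place

If every report counts members by selecting from dbo.ActiveMember rather than filtering dbo.Member its own way, the member count question yields one answer no matter who pulls the report.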

5. Inflexible Data Models

This is where data warehouse vendor, consultant, or internal development team selection is critical for long-term success or failure. A business intelligence solution needs to be flexible. The data needs of the business today might be very different from the needs of tomorrow. Your development team (either internal or external) should be able to anticipate some of these changes and develop a data model that allows for this flexibility.

The last thing you want is for your BI project to be successful for 18 months and then be deemed useless. Make sure that you build your data model to stay flexible as the data needs of your business change.

6. Data Quality and Data Integrity Issues

Another commonly overlooked component of a BI project relates to data quality. Anything from bad phone numbers, zip codes, and city name misspellings to a variety of other data issues can throw a wrench in your reporting. These issues seem trivial at first, but if we tried to map our customer base using geographic information riddled with misspellings, our results would be wrong. As the ETL is developed, ensure that critical details are accurate. Does the number of patients reflected in the data warehouse match the source system? Do the share account balances in the warehouse match the core?

Quick data quality and data integrity checks can save you weeks of headaches down the road. Deal with these issues early before small issues become big problems for your BI solution.
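
As a minimal sketch of such a check (the database and table names below are hypothetical: a source system database alongside the warehouse), a quick reconciliation query can compare row counts side by side:

SELECT
    (SELECT COUNT(*) FROM SourceSystem.dbo.Patient) AS [Source Patient Count],
    (SELECT COUNT(*) FROM Warehouse.dbo.DimPatient) AS [Warehouse Patient Count]

If the two counts disagree, investigate the ETL now, before the discrepancy compounds into reporting errors downstream.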

7. Talent Gaps

Do you get the most out of Excel if your staff doesn’t know how to perform even basic functions in the software? Probably not. The same goes for your BI solution. If no one in-house knows how to write SQL queries, then a data warehouse is limited in the value it can provide. Some of you will argue that a good BI tool can make up for this deficiency – a point I agree with to some extent – but BI tools can only make up for a fraction of the skill gap.

Be cognizant of the skills available internally. Do you have staff capable of developing SSRS reports or reports from another visualization tool? What about SQL – the language used to interact with databases? Answer these questions early on so that a proper plan can be designed to train current staff or hire new staff with adequate skills.

8. “Only I Can Own the Data”

BI solutions are designed to bring data transparency to the organization. The ability to analyze data across business functions or reporting lines is critical to the success of a BI implementation. Do not fall into the trap whereby a business intelligence or data department holds data hostage.

Allow business users to access and receive information quickly and efficiently if you want the BI solution to be adopted. Hold it too close to your chest, and I can almost guarantee that user adoption will be minimal and any short-term success will be unsustainable.

9. Training, training, and more training

Would you deploy new critical business software without training your staff? Absolutely not. The same goes for deploying your business intelligence solution. Educating and training business users on how and where to access data, the capabilities and limitations of the system, key business definitions, and all other components of the solution is key for adoption.

If there is pushback, then use that pushback as a constructive criticism tool. Create focus groups for users who are not adopting the BI solution. Use these focus groups to figure out how to improve the capabilities and deployment of the system. Remember, your business intelligence solution is going to constantly be changing. Getting feedback from those using (or not using) the tools you give them is the best way to improve.

Just as a retail company would never ignore customer feedback, your business intelligence department should never ignore the feedback – both positive and negative – from the consumers of data within the organization.


A BI project is not a short-term effort. It is broken into a few phases: pre-deployment, development, and post-deployment. The issues above are some of the common challenges organizations face in each of these BI project phases.

Be aware of some of these issues that we brought up, and address them early on to ensure the most success for your BI project.

What is a column alias?

A column alias is a name given to a column that changes how it appears in the results. For example, to return the Name column from the Production.Product table but have the column’s name appear as “Product Name” in the results:

SELECT Name AS [Product Name]
FROM Production.Product

If you look in the results section, you will see “Product Name” listed as the column name instead of just “Name”. There are a few reasons to use a column alias (we will talk about some of them in more advanced lessons), but one is simply to make the columns easier to read. Many times, we will write a query and then bring the results into an Excel file to give to someone. Since most column names do not contain spaces, or use underscores in their place, column aliases give us a chance to clean them up.

To return the top 100 SalesOrderNumber values from the Sales.SalesOrderHeader table and give the column the alias “Order Number”, we type:

SELECT TOP 100 SalesOrderNumber AS [Order Number]
FROM Sales.SalesOrderHeader

You can create a column alias without spaces if you choose. However, if spaces exist in the alias, you must enclose the alias in either double quotes (") or square brackets.
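
For example, all three of the following are valid, using the same Production.Product table from above:

SELECT Name AS ProductName
FROM Production.Product

SELECT Name AS "Product Name"
FROM Production.Product

SELECT Name AS [Product Name]
FROM Production.Product

The first needs no delimiter because the alias contains no spaces; the second and third produce identical results.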

Practice Problems

Complete the practice problems below to get more experience using column aliases.

  1. Return the top 100 rows from the Production.Product table. Select only the Name column and assign it the column alias “Standard Product Name”.
  2. Using the Person.Person table, return the FirstName and LastName columns. Assign the aliases “First Name” and “Last Name” to the columns, respectively.
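
If you want to check your work, one possible set of solutions is below (there is more than one valid way to write each query):

SELECT TOP 100 Name AS [Standard Product Name]
FROM Production.Product

SELECT FirstName AS [First Name], LastName AS [Last Name]
FROM Person.Person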

Follow the link to review the basic SQL SELECT statement if you’re having trouble with some of the practice problems. As always, leave a comment if you have any questions!

A basic SQL SELECT statement allows users to return rows and columns of data from a database table. In the previous section, we focused solely on returning literal values and results of basic mathematical expressions. Now, we will be retrieving data that is stored in the tables. SELECT statements that only retrieve data with no filtering, grouping, or sorting have the standard form:

SELECT [Column 1], [Column 2], …, [Column N]
FROM [Schema Name].[Table Name]

Every SELECT statement specifies one or more columns and a table from which to retrieve them. Each table is contained within a database schema. Unless the schema is the default “dbo” schema, you must specify the schema name along with the table name.

Remember that all of our examples will use the AdventureWorks sample database provided by Microsoft. Take a look at our introduction to SQL post to learn how to get started.

To return the Name and ProductNumber columns from the Production.Product table:

SELECT Name, ProductNumber
FROM Production.Product

Notice that the column names are separated by a comma. Your results contain two columns for each row of data – one for the Name column and another for the ProductNumber column.

It is possible to return all columns and all rows from a table using the asterisk (*) symbol. To return all rows and all columns from the Production.Product table, execute the query:

SELECT *
FROM Production.Product

With larger tables, it is not a good practice to return every row of data. You can limit the number of rows returned by your SQL SELECT statement with the “TOP” expression. To return only the top one hundred rows from the Production.Product table:

SELECT TOP 100 *
FROM Production.Product

Instead of specifying a fixed number of rows, you can limit by a percent of the total number of rows in a table. For example, to limit our query to return only the top ten percent of rows from the Production.Product table:

SELECT TOP 10 PERCENT *
FROM Production.Product

Given that some of you will work on large databases, it is good practice to use the TOP expression. You run the risk of sapping much of the server’s resources by not limiting the number of rows retrieved.


This gives you the basic information you need to complete a SQL SELECT statement. You should now be able to retrieve data from a table. You can either return all columns or specify a select few. You also have the ability to limit the number of rows returned to the results.

What is a literal SELECT Statement?

A literal SELECT statement does not query a table. Instead, it returns the result of a string or expression. The classic programming example of this concept would be to try and return the string “Hello World” to the results section:

SELECT 'Hello World'

By executing this query, you will notice that “Hello World” returns to your results. It is important to note that T-SQL uses single quotes to indicate a string of text. Be careful of this, as some other SQL variants and many other programming languages use double quotes to indicate strings.
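
One related stumbling block worth knowing about (not covered in the lesson above): if the text itself contains a single quote, T-SQL requires you to escape it by doubling it:

SELECT 'O''Brien'

This query returns O'Brien in the results section.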

We can return numbers and the results of mathematical expressions using literal SELECT statements as well. If we wanted to return the number 5 to our results section, we can execute the query:

SELECT 5

If we wanted to return the result of 5 multiplied by 3:

SELECT 5*3

Or, to return the result of adding 2 to 7:

SELECT 7 + 2

While literal SELECT statements are not the most complicated type of SQL query by any standard, they can be used in conjunction with more complicated queries later on. Learning these simple queries is a great starting point for SQL beginners and a way to become comfortable with the SQL Server Management Studio (SSMS) interface.
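
As a small preview of combining a literal with a real query (reusing the AdventureWorks table and the column alias syntax shown earlier), you can attach a constant value as an extra column on every row:

SELECT TOP 10 Name, 'AdventureWorks' AS [Source System]
FROM Production.Product

Each of the ten rows returned carries the literal 'AdventureWorks' in its [Source System] column.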

As organizations look to develop or expand their business intelligence departments, it is important to understand the various business intelligence roles that can exist. Organizations with a more mature data and analytics program may intuitively understand these roles more than a company just starting to develop a BI practice. In this article, we will talk about the various roles and responsibilities that can exist along with the skills needed for each role and the primary tools used by individuals fulfilling their role.

Business Intelligence Roles

Below we outline some – not all – of the most common business intelligence roles that you can look to add to your program.

Data Warehouse/Business Intelligence Manager

  • Description
    • The Data Warehouse/Business Intelligence Manager is responsible for managing and directing the project. This person is the tactical leader and not the executive sponsor. He or she needs to be able to communicate well with both IT and business staff for project success.
  • Skills Needed
    • Data Architecting Background
    • Project Management
    • Communication and Presentation Skills
    • At minimum, working knowledge of all aspects of the BI stack
  • Common Tools Used
    • SQL Server Management Studio
    • Microsoft Excel, Microsoft Project, or some other Project Management Tool
    • Data Modeling Software

Data Modeler/Data Architect

  • Description
    • The Data Modeler/Architect is responsible for developing the mart and warehouse structures based on the business and technology requirements that have been gathered. They tend to perform quite a bit of data profiling and define the final data warehouse/data mart structure.
  • Skills Needed
    • Data Architecting Background
    • Data Profiling Skills
    • Advanced Knowledge of SQL and Data Structures
  • Common Tools Used
    • SQL Server Management Studio
    • Microsoft Excel
    • Data Modeling Software
    • SQL Server Integration Services (SSIS)

Business Analyst

  • Description
    • The Business Analyst is responsible for defining the business requirements for the business intelligence solution. A business intelligence solution helps solve business problems; therefore, this role is critical to project success.
  • Skills Needed
    • Industry-specific Business Background is preferred
    • General Data Warehouse/Business Intelligence Background
    • Communication and Presentation Skills
  • Common Tools Used
    • Microsoft Excel
    • SQL Server Management Studio (depending on the technical skills of the analyst)

Extract, Transform, and Load (ETL) Developer

  • Description
    • The ETL Developer is the programmer who develops the packages that move data from its source location to the ODS (operational data store) and the marts or warehouse. In Microsoft SQL Server implementations, SSIS (SQL Server Integration Services) is the tool most often used.
  • Skills Needed
    • SQL Programming
    • Some C#/VB.net Background Preferred
    • Knowledge of Data Structures
    • ETL/SSIS Development Experience
  • Common Tools Used
    • SQL Server Integration Services (SSIS)
    • SQL Server Management Studio
    • Microsoft Excel

Report Developer

  • Description
    • The Report Developer is the developer responsible for creating reports, dashboards, and other visualizations that provide clarity to end users. Proper data visualization is a key component of a successful business intelligence implementation, so a capable Report Developer is a must.
  • Skills Needed
    • SQL Programming
    • Some C#/VB.net Background Preferred
    • SQL Server Reporting Services (SSRS), Crystal Reports, Tableau or other data visualization tool
    • Strong Industry-Specific Business Background Preferred
    • Statistics Background is a bonus
  • Common Tools Used
    • SQL Server Reporting Services (SSRS), Tableau, Crystal Reports or Other Data Visualization Software
    • SQL Server Management Studio
    • Microsoft Excel
    • Microsoft SharePoint (if SharePoint is the BI Portal in your deployment)
    • R, SAS, SPSS or some other statistical software package if the individual has a strong statistics background

Data Analyst

  • Description
    • The Data Analyst is responsible for helping business users analyze and interpret the data contained in your data warehouse/data mart environment. This person often works directly with business users to help them solve specific business problems using the data contained in the business intelligence solution.
  • Skills Needed
    • SQL Programming
    • SQL Server Reporting Services (SSRS), Crystal Reports, Tableau or other data visualization tool
    • Industry-Specific Business Background
    • Statistics or Math Background Preferred
  • Common Tools Used
    • SQL Server Reporting Services (SSRS), Tableau, Crystal Reports or Other Data Visualization Software
    • SQL Server Management Studio
    • Microsoft Excel
    • Microsoft SharePoint (if SharePoint is the BI Portal in your deployment)
    • Statistical Software Package (R, SAS, SPSS, etc.)

Data Scientist

  • Description
    • The Data Scientist is by far the most technically advanced individual in the department. They must be well-versed in statistics and mathematics while also having a strong background in programming or scripting. This individual does not perform basic reporting requests. Instead, they perform advanced data-driven statistical modeling to develop predictive models, clustering analysis, and a variety of other advanced data mining and statistical models.
  • Skills Needed
    • Advanced Mathematics/Statistics Background
    • SQL Programming
    • R, SAS, SPSS, or Other Statistical Software Package Skills
    • Data Visualization Development
  • Common Tools Used
    • SQL Server Reporting Services (SSRS), Tableau, Crystal Reports or Other Data Visualization Software
    • SQL Server Management Studio
    • Microsoft Excel
    • R, SAS, SPSS, Python, or other statistical software package.

These were just some of the more common business intelligence roles that exist in data, analytics, and BI departments. In most smaller organizations, these roles may be combined and handled by a single FTE. Be aware of your department’s staffing requirements as your business intelligence department matures, and be sure to refer back to this post if you are looking to split the responsibilities of combined roles.


Step by Step

Many people out there believe that a data and analytics program can be developed with a single purchase, hire, or action. Like installing Microsoft Excel or some other “plug and play” software, these individuals believe their data and analytics program will be installed and voilà – a data and analytics program exists.

Let me be very clear: data, analytics, and business intelligence is not a “plug and play” model. To develop a truly functioning and sustainable data and analytics practice, your organization must take an iterative approach towards building out your BI program.

The Dangers of Going “All In” at Once

Some organizations believe that they should “go all in” on business intelligence and build out everything at once. This is an expensive, time-consuming, and resource-intensive process that is more than likely doomed to fall short of its long-term potential.

Why?

Most financial institutions have at least a half dozen high-priority data sources they would need to draw from to ensure high analytic value. For example, a credit union may need data from its core, CRM system, personal LOS, mortgage LOS, digital banking platform, and card servicing vendor. These six applications hold massive amounts of valuable information. Integrating these data sources into a data warehouse/data mart model all at once would require a significant up-front capital investment and multiple FTEs devoted almost entirely to ensuring those projects are developed accurately. From development to testing and validation to delivering the proper analytics, reports, and dashboards to business users, the amount of work to be completed at once would be overwhelming.

Let’s say you did take this route and built out everything at once – what happens if your business users don’t buy into the business intelligence program? With so much change at once, your business users might be overwhelmed, and user adoption would be minimal. This also assumes your organization has the necessary data skills in-house. How quickly can you train your staff to use data properly or build the necessary reports and models to make use of the data you now have? With such a significant up-front investment required by this model, the value yielded would be but a fraction of its potential, and your ROI would be minimal if it existed at all.

Let’s talk about the right way to go about it.

The Iterative Data, Analytics, and Business Intelligence Process

The iterative data and analytics development model is without a doubt the most effective strategy. Start by focusing on your most valuable source of information; in the credit union example, this is most likely the core banking platform. Develop the necessary analytic structures for that most valuable application first. This includes developing a data warehouse/data mart to structure the data properly, creating a reporting infrastructure, developing some type of BI portal, training internal staff on how to use the data and analytics applications effectively, and beginning to deliver incremental value to business users from that single application. Then, once a certain level of development has been reached, your organization can start to bring in another high-value data source. Integrating this second data source builds on, and benefits from, the work completed for the first application. Users will already see the value that data and analytics provide to them and the organization, which increases user support throughout the development process.

The iterative strategy allows you to be more agile and flexible as you go through the development lifecycle. Learning the challenges faced in creating data and analytic infrastructure for the first data source allows you to be more efficient when integrating a second data source into the business intelligence program. As your organization continues to develop the program, the time and effort required to integrate each subsequent data source is reduced and the value yielded from each additional data source grows exponentially.

How long will it take for a data and analytics program to be built?

As part of our business intelligence strategy engagements, we typically recommend an 18 to 36 month roadmap to get your data and analytics program to an effective and sustainable level. This allows your organization to develop the necessary infrastructure, adopt a data-driven culture, and train and nurture data skills in an adequate time frame. Budgetary and resource constraints are always a potential limiting factor, but we have found that the 18 to 36 month timeframe is sufficient and effective.

So What is the Point?

The one line summary of this whole article is this: data, analytics, and business intelligence programs need to be developed step-by-step – iteratively – over the course of 18 to 36 months to reduce adoption and development risk, minimize unnecessary up-front capital expenditures, and maximize long-term, sustainable value.

Whether you develop your program internally or use external consultants to build out your data and analytics vision, taking an iterative approach is critical for your organization’s business intelligence future.

Our last post discussed the value of a data inventory – a list of all data sources throughout your organization. A data inventory is one of the first and most critical steps towards developing a long-term business intelligence strategy and roadmap. The next key discovery document we will discuss is the “Report Inventory” and why this document is critical to both long-term and short-term BI success.

What is a Report Inventory?

Similar to a data inventory in some respects, a report inventory is a listing of all the recurring reports compiled throughout the organization. For example, lending typically has a host of monthly reports ranging from basic pipeline reports to delinquency and charge-off reports. Operations will pull a host of reports on ACH transactions, debit and credit processing, ATM terminal volumes, and several others. If you take a deep dive through your organization, I am sure you will be amazed by how many reports are compiled on a recurring basis (typically monthly).

A report inventory collects some high-level information about each of these recurring reports. We are interested in a description of the report, how often it recurs, who creates it, who receives it, and the estimated number of hours per month required to compile it. I have included a Sample Report Inventory template that you can use in your organization if you wish.

[Screenshot: sample Report Inventory template]

Why create a report inventory?

The biggest question you probably still have is: why do I need a report inventory? What is the value?

A report inventory tends to identify areas of significant manual effort in accessing and compiling data. A key component of business intelligence that people tend to overlook is the automation of key reporting and business processes. A report inventory helps us identify areas where a significant amount of manual effort and time is spent compiling recurring reports. By estimating the amount of time spent, we are able to prioritize our automation efforts, which can drastically reduce your organization’s total time spent creating reports.

Automating one report here and there might make a small impact, but that isn’t the type of impact we are shooting for. By using this report inventory, we can systematically identify ways to save your organization hundreds or thousands of hours. By understanding which data sources these reports typically access (which we would have already identified in the data inventory), we can effectively determine where our business intelligence efforts need to focus. Having this information allows you to more accurately quantify returns on your business intelligence efforts and ultimately leads to a more successful BI strategy implementation.



Interested in learning more about how The Knowlton Group can help you with your business intelligence strategy and implementation? Send an email to brewster@knowlton-group.com to learn more!

A critical first step in developing any successful business intelligence strategy is to understand what data sources are at your disposal. Identifying these data sources, their owners, and their priority is the crux of the “Data Inventory”.

Why do I Need a Data Inventory?

Did you ever get a box of Legos as a kid? You rip open the box, pull out the instructions, tear open the bags of Legos, and start building as the instructions describe. You get halfway through the instructions and realize that you are missing a critical piece of the set. Now what? You are stuck trying to make the set work without the missing Lego. It would have been wise, at the outset, to make sure that all the Legos were accounted for before starting to build, right? This is exactly why we want to complete a data inventory before starting to build out our business intelligence strategy.

A data inventory allows you to assess what data sources are available and in what format this data exists. With a complete 360° view of your data sources, an accurate prioritization analysis can be completed. You may realize that some of your data sources aren’t quite as accessible as you would have liked. Specific departments may have been keeping their own silos of data in the form of Excel files or Access databases that pose some additional challenges when it comes time to integrate that data into a data mart or data warehouse. Only once you have a completed data inventory will you be able to accurately assess your BI roadmap and implementation strategies.

Where do I start?

Completing a data inventory simply requires time to gather the information and perform internal interviews. Click for a sample data inventory tracking spreadsheet.

[Screenshot: Data Inventory Tracker]

Your goal should be to meet with department directors or managers to understand which business users are responsible for which key business applications. Ask as many questions about data usage and the data sources as possible; significant insights gathered throughout these interviews can be applied to your business intelligence roadmap and strategy development. With a data inventory completed, you have gathered a necessary piece of the information required to start defining your business intelligence roadmap.

In my post discussing Top-Down BI Strategy, I introduced a chart which I call the “Data Mass Value Curve”.

[Chart: Data Mass Value Curve (Value of Data vs. Amount of Data)]

This graph depicts an exponential increase in the value of data as a function of the amount of data available. That is to say, the more data you have available to you, the more the value of that data grows – and it grows exponentially. But why is this true? Why does the value of the available data increase exponentially?

Why is the Increase of Data Value Exponential?

Take the example of a credit union that only has access to its core data. This credit union will only be able to provide a one-dimensional analysis of its membership. It can answer simple requests like “how many of our members have share drafts?” or “how many of our members have vehicle loans?”. Those questions are simple and do not provide particularly insightful analysis.

Now look at a credit union that has access to both its core data and its CRM system. Questions that were once simple, like “how many of our members are over the age of 65?”, become “how many of our members over the age of 65 call into our call center more than once a week?”. The questions are much more insightful, and the integration of data systems allows more complex questions to be asked of the data. Instead of asking questions only about the core or only about the CRM system, you can ask questions that draw data from both systems at once.
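
To make that concrete, here is a minimal sketch of such a cross-system question in SQL. The table and column names (a Member table from the core and a CallLog table from the CRM) are hypothetical and assume both systems have already been integrated into one warehouse:

SELECT COUNT(*) AS [Members Over 65 Calling Weekly]
FROM (
    SELECT m.MemberID
    FROM Member AS m
    JOIN CallLog AS c ON c.MemberID = m.MemberID
    WHERE m.Age > 65
      AND c.CallDate >= DATEADD(DAY, -91, GETDATE())  -- roughly the last 13 weeks
    GROUP BY m.MemberID
    HAVING COUNT(*) > 13                              -- more than one call per week on average
) AS FrequentCallers

A question like this is only answerable because both systems land in the same place; neither the core nor the CRM could answer it alone.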

As the number of data sources available to us increases, the number and complexity of questions we can ask of our data continues to increase. With three data sources available, we can draw analysis from one, two, or all three sources at the same time. This is the crux of the exponentially increasing value of data as the number of data sources (or amount of data) increases. This level of multi-dimensional analysis is critical to a business intelligence program that delivers significant tangible benefits to the organization.

Collect as much data as possible, right?

Your first thought might be, “let’s collect as much data as we possibly can”. But there is a slight catch – the “Data Mass Value Curve” assumes a proportional level of data expertise that can properly and efficiently utilize the data available. Often, the limiting factor for an organization is the expertise with which it can utilize the available data. Once the data expertise barrier is reached, additional data provides no additional value.

[Chart: Data Mass Value Curve with an early expertise barrier]

In this curve, the expertise barrier is reached quite early and the value of the data is stifled. The actual value of the data is a fraction of its potential given the limited expertise to utilize the data properly.

Now let’s look at an organization with a bit more data expertise available:

[Chart: Data Mass Value Curve with a later expertise barrier]

Notice that the expertise barrier is reached farther out on the curve. This organization is not able to completely leverage the amount of data available to them, but they are able to make more of an impact with their greater expertise.

The organization below has expertise that can provide nearly all the value possible:

[Chart: Data Mass Value Curve with the expertise barrier near the end of the curve]

This organization has the necessary resources, talent, and efforts to be able to properly utilize the data sources available to them.

How does this help me today?

This “Data Mass Value Curve” is naturally a bit abstract. However, the concepts can be applied to your BI environment. Ask yourself some simple questions:

  1. How many data sources can I access in a SQL database format?
  2. Of these data sources, how much of the data is integrated? Do you have data marts or a data warehouse in production?
  3. Do I have the necessary staff, resources, or talent to be able to effectively utilize and analyze this data?

After answering those basic questions, you can start to gauge where your organization sits on the “Data Mass Value Curve”. Once you have established your location on the curve, you can begin to roadmap a strategy to achieve proper data utilization and returned value. Depending on the answers to the questions above, you may need to hire or train SQL and BI talent. Other organizations may have the necessary level of expertise but not enough integrated or accessible data sources available to provide value. Abstractly understanding your organization’s location on the curve can help you drive your business intelligence investment in the right direction.

Business intelligence and proper data utilization are becoming integral components of an organization’s success. As the amount of data amassed by each company continues to grow, so too will the value of properly utilizing that data. The chart below will have an entire post dedicated to it in the coming weeks.

[Chart: Value of Data vs. Amount of Data]

Your first thought after seeing the chart might be, “Let’s just blindly gather as much data as possible and see what happens!”. That’s an ambitious idea, but it will probably lead to more issues and administrative overhead than value returned. Instead, we must think strategically about what questions we want to ask of our data. Only once the questions are asked can we determine what data is necessary. This is the basis of a “Top-Down BI Strategy”.

The Top-Down BI Strategy

The “Top-Down BI Strategy” focuses first on what questions we would like to ask of our data. From those questions, we determine the necessary data sources and structures needed to answer our key business questions. This is sometimes referred to as a “Business-Driven BI Strategy”. This strategy focuses more on what the business needs or wants from its data and less on what data is currently available. In a perfect world, what data would help you be more profitable, gain more members, improve your loan-to-share ratio, or achieve whatever goals your business has?

Another school of thought is the “Bottom-Up BI Strategy” or “Data-Driven BI Strategy”, which focuses primarily on what data sources are available. From the available data sources, you determine what questions you can ask of the data and build the data structures off the existing sources. The “Bottom-Up BI Strategy” typically tends to be easier to develop. It only uses data sources that are currently available and can often be the shortest path to some limited success. However, these strategies typically fail to see the greater potential of business intelligence. By focusing only on what is currently available in a SQL format, you tend to ignore other data opportunities. For example, what value could integrating your core data with your credit card system bring to your business? Or integrating your core with your LOS system and your CRM system to drastically improve marketing operations and ROI? (Rich Jones has an excellent article on this topic named CMOs Need to Measure and Analyze the Data!)

So what strategy is the best?

This is the best comparison I can make. The “Bottom-Up BI Strategy” is a lot like trying to design a house after the builder tells you, “I have X square feet of flooring, Y pallets of wood, Z tons of cement, etc.”. Instead of designing the house first and then determining the materials needed, the “Bottom-Up BI Strategy” says, “This is what you have; figure out how best to use it”. Could you ever imagine building a house this way? Why should your business intelligence program have to?

The “Top-Down Strategy” focuses on the key questions you would like to ask of your business regardless of whether the necessary data is currently accessible. Understanding the key performance indicators (KPIs) or important operational metrics that lead to your business’s success is critical before undertaking any business intelligence implementation. A business intelligence program is meant to support the business; it is therefore a prerequisite for success to understand the types of questions that need to be asked of your data. Using this method, we often find that organizations are able to do more with their data than they ever intended or realized was possible. Once the necessary questions have been asked, you can begin to determine whether those data sources are available. If they are not currently available, is there a way to access them or integrate them in some other capacity? We define the technical solution only after the business problem has been accurately and completely identified. Using the “Bottom-Up BI Strategy” of focusing only on the information currently available, you are setting yourself up for, at most, limited success.

So what does it all mean?

Sure, a “Bottom-Up BI Strategy” can often be the quickest path to implementing a BI solution. But does the quickest solution always mean the best or most sustainable one? If you take nothing else from this article, take this: the “Top-Down BI Strategy” is sure to be the best development strategy if you want to achieve incredible, sustainable results from your business intelligence program. The Knowlton Group and our partners only design business intelligence strategies using the “Top-Down BI Strategy” approach because our goal is to deliver long-term success for our clients.