
Episode 1:5 - Information Management Maturity Model
08/29/19 • 15 min
Developing a Data Strategy can be difficult, especially if you don’t know where your current organization is and where it wants to go. The Information Management Maturity Model helps CDOs and CIOs find out where they currently are in their Information Management journey and their trajectory. This map helps guide organizations as they continuously improve and progress to the ultimate data organization that allows them to derive maximum business value from their data.
The model can be seen as a series of phases, from least mature to most mature: Standardized, Managed, Governed, Optimized, and Innovation. An organization can often exist in multiple phases at the same time. Look for where the majority of your organization operates, then identify the trail-blazers that are further along in maturity. Use your trail-blazers to pilot or prototype new processes, technologies, or organizational structures.
Standardized Phase
The standardized phase has three sub-phases: Basic, Centralized, and Simplified. Most organizations find themselves somewhere in this phase of maturity. Look at the behaviors, technology, and processes you see in your organization to find where you fit.
Basic
Almost every organization fits into this phase, at least partially. Here data is used only reactively and in an ad hoc manner. Additionally, almost all the data collected is kept for predetermined time frames (often “forever”). Companies in the Basic phase do not erase data for fear of missing out on some critical information in the future. Attributes that best describe this phase are:
- Management by Reaction
- Uncatalogued Data
- Store Everything Everywhere
Centralized (Data Collection Centralized)
As organizations begin to evaluate data strategy, they first look at centralizing their storage into large Big Data Storage solutions, in the form of cloud storage or on-prem big data appliances. Once the data is collected in a centralized location, Data Warehouse technology can be used to enable basic business analytics and derive actionable information. Most of the time this data is used to fix problems with customers, supply chain, product development, or any other area of your organization where data is generated and collected. The attributes that best describe this phase are:
- Management by Reaction
- Basic Data Collection
- Data Warehouses
- Big Data Storage
Simplified
As the number of data sources increases, companies begin to form organizations that focus on data strategy, organization, and governance. This shift begins with a Chief Data Officer (CDO) office. There is ongoing debate about where the CDO fits in the company, under the CEO or the CIO; don’t get hung up on where they sit. The important thing is to establish a data organization focus and implement a plan for data normalization. Normalization gives you the ability to correlate different data sources and gather new insight into what is going on across your entire company; without it, the data remains siloed and only partially accessible (a minimal sketch of what normalization buys you follows the attribute list below). Another key attribute of this phase is the need for a plan to handle the sheer volume of data being collected. Because of the increasing volume and cost of storing this data, tiered storage becomes important. Note that in the early stages it is almost impossible to know the optimal way to manage data storage. We recommend using the best information available to develop rational data storage plans, with the caveat that they will need to be reviewed and improved once the data is being used. The attributes that best describe this phase are:
- Predictive Data Management (Beginning of a Data-Centric Organization)
- Data Normalization
- Centralized Tiered Storage
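To make normalization concrete, here is a minimal Python sketch (the field names and formats are hypothetical, not from any particular system) showing how two sources that record the same customer differently can be correlated once their values are normalized:

import re

def normalize_phone(raw: str) -> str:
    """Reduce any common US phone format to its bare 10 digits."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:]  # drop a leading country code if present

def normalize_zip(raw: str) -> str:
    """Keep only the 5-digit base zip code."""
    return raw.strip()[:5]

# Two hypothetical systems record the same customer differently.
crm_record = {"phone": "(503) 555-0142", "zip": "97124-1234"}
sales_record = {"phone": "1-503-555-0142", "zip": "97124"}

# After normalization the two records correlate on both keys.
assert normalize_phone(crm_record["phone"]) == normalize_phone(sales_record["phone"])
assert normalize_zip(crm_record["zip"]) == normalize_zip(sales_record["zip"])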
Managed (Standard Data Profiles)
At this phase organizations have formalized their data organization and segmented its roles: Data Scientists, Data Stewards, and Data Engineers are now on the team with defined roles and responsibilities. Meta-Data Management becomes a key success factor at this phase, and multiple applications can now take advantage of the company’s data (a minimal sketch of a catalog entry follows the attribute list below). Movement from a Data Warehouse to a Data Lake has taken place, allowing more agility in the development of data-centric applications. Data storage has been virtualized for a more efficient and dynamic storage solution. Data analytics can now run on data sets from multiple sources and departments across the company. These attributes best describe this phase:
- Organized Data Management (Data Org in place with key roles identified)
- Meta-Data Management
- Data Lineage <...
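To illustrate what Meta-Data Management and lineage look like in practice, here is a minimal Python sketch of a catalog, assuming a simple dict-based registry (the dataset names, fields, and helper are hypothetical, not a real product API):

catalog = {}

def register_dataset(name, schema, source, derived_from=None):
    """Record schema, origin, and lineage so any application can discover the data."""
    catalog[name] = {
        "schema": schema,
        "source": source,
        "lineage": derived_from or [],  # upstream datasets this one was built from
    }

register_dataset("raw_sales", {"order_id": "str", "amount": "float"}, source="pos-system")
register_dataset("daily_revenue", {"day": "date", "revenue": "float"},
                 source="warehouse-job", derived_from=["raw_sales"])

print(catalog["daily_revenue"]["lineage"])  # lineage query: ['raw_sales']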
Previous Episode

Episode 1:4 - History of Data Architecture
Organizations are looking to their vast data stores for nuggets of information that give them a leg up on their competition. “Big Data Analytics” and Artificial Intelligence are the technologies promising to find those gold nuggets. Mining data is accomplished through a “Distributed Data Lake Architecture” that enables cleansing, linking, and analytics of varied distributed data sources.
Ad Hoc Data Management
- Data is generated in several locations in your organization.
- IoT (Edge Devices) has increased both the number of locations where data is generated and the variety of data types.
- Organizations typically look at data sources individually, in an application-centric way.
- Data Scientists look at a data source and create value from it, one application at a time.
- Understanding the data and normalizing it is key to making this successful. (A zip code is a zip code, but phone numbers, longitude, and latitude come in multiple formats.)
- Overhead of creating a Data-Centric Application is high if you start from scratch.
- People begin using repeatable applications to get the benefits of reuse.
Data Warehouse Architecture
- Data Warehouse architecture takes repeatable processes to the next level. Data is cleansed, linked, and normalized against a standard schema.
- Data is cleansed once and used several times by different applications (a sketch of this idea follows the benefits list below).
- Benefits include:
- Decreased time to answer
- Increased reusability
- Decreased capital cost
- Increased reliability
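As a minimal sketch of the “cleanse once, use many times” idea (the schema and feed here are hypothetical), the Python below transforms raw rows onto a standard schema at load time, and two different applications then reuse the same cleansed table:

raw_feed = [
    {"cust": " Ada Lovelace ", "amt": "1,200.50"},
    {"cust": "Alan Turing", "amt": "300"},
]

def cleanse(row):
    """Map a raw row onto the warehouse's standard schema."""
    return {
        "customer": row["cust"].strip(),
        "amount": float(row["amt"].replace(",", "")),
    }

warehouse_table = [cleanse(r) for r in raw_feed]  # done once, at load time

# Two different "applications" reuse the same cleansed data.
total_revenue = sum(r["amount"] for r in warehouse_table)          # finance report
big_spenders = [r for r in warehouse_table if r["amount"] > 1000]  # marketing list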
Data Lake Architecture
- A Data Lake ingests all of the data and stores it in its raw format.
- Data Lake Architecture uses Meta-Data to better understand the data.
- Allows for late binding of applications to the data (a sketch follows the benefits list below).
- Gives the ability to use/see the data in different ways for different applications.
- Benefits include:
- Ability to reuse data for more than one purpose
- Decreased time to create new applications
- Increased reusability of data
- Increased Data Governance (security and compliance)
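Here is a minimal Python sketch of late binding (schema-on-read); the event shape is hypothetical. The raw record is stored untouched, and each application binds its own view at read time, with no re-ingestion:

import json

# The lake stores the event exactly as it arrived.
raw_event = json.loads(
    '{"ts": "2019-08-29T10:00:00", "user": "u42", '
    '"lat": 45.52, "lon": -122.68, "item": "sku-9"}'
)

# Application 1 binds a "purchase" view to the raw data...
purchase_view = {"user": raw_event["user"], "item": raw_event["item"]}

# ...while application 2 binds a "location" view to the same record.
location_view = {"user": raw_event["user"],
                 "position": (raw_event["lat"], raw_event["lon"])}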
Distributed Data Lake (Data Mesh)
- One of the biggest problems with Data Lake and Data Warehouse is the movement of data.
- As data volume goes up, so does its gravity. It becomes harder to move.
- Regulations can limit where the data can actually reside.
- Edge devices need to encrypt and manage their data locally before pushing it to a data lake.
- This architecture allows data services to be pushed out to compute elements at the edge, including storage, encryption, cleansing, and linking.
- Data is only moved to a centralized location based on policy and data governance (a sketch follows the benefits list below).
- Applications do not need to know where the data is located; the architecture handles that for them.
- Benefits include:
- Decreased time to answer
- Late binding of data at runtime
- Multiple applications running on the same data on the same devices
- Decreased cost due to decreased data movement
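A minimal Python sketch of policy-driven placement follows (the policy fields, data kinds, and node names are hypothetical): data stays on the edge node unless governance policy allows centralizing it, and applications never make that decision themselves.

POLICY = {
    "telemetry": {"may_centralize": True, "must_encrypt": True},
    "patient_pii": {"may_centralize": False, "must_encrypt": True},  # regulation keeps it local
}

def place(dataset, kind):
    """Decide where a dataset lives based on governance policy alone."""
    rule = POLICY[kind]
    if rule["must_encrypt"]:
        dataset = f"encrypted({dataset})"  # stand-in for a real encryption step
    return ("central-lake" if rule["may_centralize"] else "edge-node", dataset)

print(place("sensor-batch-17", "telemetry"))     # ('central-lake', 'encrypted(sensor-batch-17)')
print(place("clinic-records-3", "patient_pii"))  # ('edge-node', 'encrypted(clinic-records-3)')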
Rise of the Stack Developer (ROSD) - DWP
Next Episode

Podcast 1:6 - Elastic Search + Optane DCPMM = 2x Performance
Intel Optane DC Persistent Memory to the rescue
Intel's new Persistent Memory technology has three modes of operation. The first, Memory Mode, uses the persistent memory as an extension of your current memory; imagine extending your server to 9 TBytes of memory. The second, AppDirect Mode, lets you use the persistent memory as a persistent segment of memory or as a high-speed SSD. The third, Mixed Mode, uses a percentage of the persistent memory for AppDirect and the rest to extend your standard DDR4 memory. When exploring this new technology, I realized that I could take the persistent memory and use it as a high-speed SSD. If I did, could I increase the throughput of my ElasticSearch server? So I set up a test suite to try this out.
Hardware Setup and configuration
1. Physically configure the memory
• The 2-2-2 configuration is the fastest configuration for the Apache Pass Modules.
2. Upgrade the BIOS with the latest updates
3. Install supported OS
• ESXi6.7+
• Fedora29+
• SUSE15+
• WinRS5+
4. Configure the Persistent Memory for 100% AppDirect Mode to get maximum storage
# ipmctl create -goal MemoryMode=0 PersistentMemoryType=AppDirect
# reboot
# ipmctl show -region
# ndctl list -Ru
[
{
"dev":"region1",
"size":"576.00 GiB (618.48 GB)",
"available_size":"576.00 GiB (618.48 GB)",
"type":"ElasticSearch",
"numa_node":1,
"iset_id":"0xe7f04ac404a6f6f0",
"persistence_domain":"unknown"
},
{
"dev":"region0",
"size":"576.00 GiB (618.48 GB)",
"available_size":"576.00 GiB (618.48 GB)",
"type":"ElasticSearch",
"numa_node":0,
"iset_id":"0xf5484ac4dfaaf6f0",
"persistence_domain":"unknown"
}
]
# ndctl create-namespace -r region0 --mode=sector
# ndctl create-namespace -r region1 --mode=sector
5. Now create the filesystems
# mkfs.xfs /dev/ElasticSearch0s
# mkfs.xfs /dev/ElasticSearch1s
# mkdir /mnt/ElasticSearch0s
# mkdir /mnt/ElasticSearch1s
# mount /dev/ElasticSearch0s /mnt/ElasticSearch0s
# mount /dev/ElasticSearch1s /mnt/ElasticSearch1s
Now you should be able to access the filesystems via /mnt/ElasticSearch0s and /mnt/ElasticSearch1s.
Setup and configuration
We chose to evaluate the performance and resiliency of ElasticSearch using off-the-shelf tools. One of the most widely used performance test suites for ElasticSearch is ESRally. ESRally was developed by the makers of ElasticSearch and is easy to set up and run. It comes with its own nomenclature, which is easy to pick up.
• Tracks – test cases are stored in this directory.
• Cars – configuration files to be used against the distributions.
• Race – contains the data, index, and log files for each ElasticSearch run; each run has a separate directory.
• Distributions – ElasticSearch installations.
• Data – data used for the tests.
• Logs – logs for ESRally itself, not for ElasticSearch.
ESRally can attach to a specific ElasticSearch cluster, or it can be configured to install and run a standard release of ElasticSearch on the fly, stored in the distributions directory. When I was looking at which directories to move between a PMEM drive and a SATA drive, I first looked at the race directory, but found I would be limited by the data and log directories as well. So I decided to move the complete .rally directory between the two drives.
ESRally Pre-requisites
• Python 3.5+, which includes pip3.
• Java JDK 1.8+
Software Installation
ESRally can be installed from its GitHub repository. See http://ESRally.readthedocs.io for more information.
To install ESRally, use the pip3 utility, which is installed along with Python 3.5+:
# pip3 install ESRally
Now you want to configure ESRally:
# export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
# ESRally configure
ESRally configure creates a “.rally” directory in your home directory, which is used to store all of the tests, data, and indices.
Next, you need to install the eventdata track. https://github.com/elastic/rally-eventdata-track
This track generates about 1.8 TBytes of traffic through ElasticSearch. It is a good test because much of the data is generated rather than read from a drive, which means you are not limited by another drive’s performance and can test the raw performance of the different drives with ElasticSearch. Total storage used is about 200 GBytes.
Running Tests (Races)
Next, you run the tests for the different configurations. First, I run the following test a...