According to a 2012 study conducted by Arizona State University, the volume of business data worldwide is estimated to double every 1.2 years.  From an ITAM perspective this has multiple impacts: as business data volume increases, so must the infrastructure to store and manage that data.  As a result, the datasets that ITAM leverages to fulfill its functions become larger and more complex.  Gleaning useful, actionable information from this deluge becomes more difficult, and yet more critical, with each new terabyte of data gathered.

Because the data you need comes from disparate sources, the challenges that arise from rationalizing those sources can be daunting.  Different datasets will have different formats for the attributes you need, which complicates cross-referencing efforts.  Any data that has been entered or managed manually carries the risk of error.  M&A activity creates large influxes of data with significant duplication that requires reconciliation.  These factors complicate IT’s ability to provide good analytics for decision makers.

To take a specific ITAM example of this complexity, consider the data involved in software discovery and reconciliation.  Companies frequently have separate systems for software deployment, CMDB, virtualization management and auto-discovery, all of which can carry discoverable software attributes.  Normalizing title information between these sources is a necessity.  Version levels and other key attributes may not be modeled consistently across tools: some use a single field to track version, others use multiple fields, and some do not track to the same level of granularity (major/minor/release/build) as others.  The tools may not even scan the same portion of the machine to gather their information, or may not be configured to look for a specific identifier (e.g. ISO 19770 software tags).
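As a minimal illustration of what that normalization involves, the Python sketch below coerces version strings reported in different shapes into a single major/minor/release/build structure.  The input formats and names are assumptions for illustration only, not the behavior of any specific discovery product.

```python
import re
from typing import NamedTuple, Optional

class NormalizedVersion(NamedTuple):
    major: int
    minor: int = 0
    release: int = 0
    build: int = 0

def normalize_version(raw: str) -> Optional[NormalizedVersion]:
    """Coerce version strings reported in different shapes into one structure.

    Handles dotted values like "7.5.1.2600" as well as looser formats such
    as "7.5 SP1"; anything unparseable returns None so it can be flagged
    for manual review.  (Formats shown are hypothetical examples.)
    """
    raw = raw.strip()
    # "7.5 SP1" style: treat the service pack number as the release component
    sp = re.match(r"^(\d+)\.(\d+)\s*SP(\d+)$", raw, re.IGNORECASE)
    if sp:
        return NormalizedVersion(int(sp.group(1)), int(sp.group(2)), int(sp.group(3)))
    # Plain dotted versions with one to four components
    dotted = re.match(r"^(\d+)(?:\.(\d+))?(?:\.(\d+))?(?:\.(\d+))?$", raw)
    if dotted:
        parts = [int(p) if p else 0 for p in dotted.groups()]
        return NormalizedVersion(*parts)
    return None

# The same product reported three different ways by three different tools
for reported in ["7.5.1.2600", "7.5 SP1", "7.5"]:
    print(reported, "->", normalize_version(reported))
```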

On the entitlements side of the equation we find a similar conundrum – a wide range of vendors from which software is purchased, each with its own set of compliance measurements, terms and conditions to track.  Product descriptions are often cryptic, manufacturers and resellers may use different product codes for the same item, and determining when licenses are bundled with maintenance or support is tricky.
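A simplified sketch of that reconciliation might look like the following: a small, invented catalog maps manufacturer and reseller part numbers to one canonical product record, notes whether maintenance is bundled, and flags any purchase line with an unrecognized code for review.  The part numbers and field names here are hypothetical.

```python
# Hypothetical catalog mapping vendor and reseller part numbers to one
# canonical entitlement record; the part numbers below are invented.
CANONICAL_CATALOG = {
    "ACME-DB-ENT-01": {"product": "Acme Database Enterprise", "includes_maintenance": False},
    "RSLR-44873":     {"product": "Acme Database Enterprise", "includes_maintenance": True},
}

def reconcile_entitlement(part_number: str, quantity: int) -> dict:
    """Resolve a purchase line to a canonical product, or flag it for review."""
    entry = CANONICAL_CATALOG.get(part_number.strip().upper())
    if entry is None:
        return {"status": "unmapped", "part_number": part_number, "quantity": quantity}
    return {
        "status": "mapped",
        "product": entry["product"],
        "quantity": quantity,
        "includes_maintenance": entry["includes_maintenance"],
    }

# A reseller code and an unknown code resolve very differently
print(reconcile_entitlement("rslr-44873", 50))
print(reconcile_entitlement("UNKNOWN-123", 10))
```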

Going beyond software, the referential demographic attributes that we link to that software in HP Asset Manager – the make/model of the machine it’s installed on, the user, and the physical location of the machine and/or user – can each come from a different source, and sometimes from several sources at once.  Describing those dimensions consistently is vital to creating usable reports and dashboards for your customers.
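One common way to keep those dimensions consistent is a source-precedence merge, where each attribute is taken from the most trusted feed that supplies it.  The sketch below illustrates the idea with invented source names and fields; it is not the actual HP Asset Manager data model.

```python
# Hypothetical merge of machine demographics from multiple feeds using a
# simple source-precedence rule; source and field names are illustrative.
SOURCE_PRECEDENCE = ["cmdb", "hr_system", "auto_discovery"]  # most trusted first

def merge_machine_record(records_by_source: dict) -> dict:
    """Pick each attribute from the most trusted source that supplies it."""
    merged = {}
    for source in reversed(SOURCE_PRECEDENCE):  # apply lowest precedence first
        merged.update({k: v for k, v in records_by_source.get(source, {}).items() if v})
    return merged

feeds = {
    "auto_discovery": {"model": "EliteBook 840", "location": ""},
    "hr_system":      {"user": "jdoe", "location": "Phoenix, AZ"},
    "cmdb":           {"model": "HP EliteBook 840 G1", "location": "Bldg 2 / Floor 3"},
}
print(merge_machine_record(feeds))
```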

Overcoming this complexity to provide consistent, useful ITAM data is why RevealIT partnered with BDNA to develop a certified integration that enables HP customers to leverage the BDNA solution for their ITAM program.  The next few posts in this series will go into further detail about the BDNA-HP interface and the value it brings to users struggling to overcome their data normalization challenges and produce clean, consistent data from which good business decisions can be made.