The data cleansing routines we employ are rules-based, exhaustive, and time-tested. For more than 25 years, these stringent data cleansing processes have been instrumental in building our reputation as a data provider that brings the highest standards to data quality and accuracy. EIT has developed proprietary ETAL (Extract, Transform, Aggregate, and Load) processes to identify and correct data anomalies across many diverse data sources.
Our Cleansing Routines Identify and Correct:
- Invalid or extra characters and/or formatting
- Values that fall outside expected ranges relative to historical data
- Duplicate records
- Inaccurate or non-standard record descriptions across data sources
Rules-based approaches define how each type of data anomaly is managed
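The cleansing rules above can be illustrated with a brief sketch. This is a hypothetical example, not EIT's actual ETAL code; the function names, character whitelist, and tolerance threshold are all illustrative assumptions.

```python
import re

# Hypothetical rules-based cleansing sketch (illustrative only, not EIT's ETAL code).
# Each rule targets one anomaly type from the list above.

def strip_invalid_chars(value: str) -> str:
    """Remove characters outside an assumed expected set (letters, digits, space, '.', '-')."""
    return re.sub(r"[^A-Za-z0-9 .\-]", "", value).strip()

def in_expected_range(value: float, history: list[float], tolerance: float = 3.0) -> bool:
    """Flag values more than `tolerance` standard deviations from the historical mean."""
    mean = sum(history) / len(history)
    std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    return abs(value - mean) <= tolerance * std if std else value == mean

def deduplicate(records: list[dict], key: str) -> list[dict]:
    """Drop duplicate records, keeping the first occurrence of each key value."""
    seen, unique = set(), []
    for rec in records:
        if rec[key] not in seen:
            seen.add(rec[key])
            unique.append(rec)
    return unique
```

In practice each rule would be one entry in a larger, configurable rule catalog applied per source.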
What We Do Differently:
In close collaboration with our business partners, we identify the information they want from their data before mapping the different source data types, ensuring the business intelligence they want is easily accessible.
We use a variety of unique identifiers to join data from different sources for analytical purposes. Examples include:
- Products - Case and/or product UPC, PLU, retailer- or distributor-specific codes, SKUs, EANs, LSNs, Retek codes, etc.
- Geography - Outlet ID numbers, county FIPS codes, zip codes, TDLinx identifiers, DIBS (commodity) codes, and distributor codes
- Time - A normalizing process that smooths data received at daily, weekly, or calendar-month cadences
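The product and time joins above can be sketched as follows. This is a minimal, hypothetical example: the field names, the sample UPC, and the choice of ISO weeks as the common time bucket are assumptions, not EIT's actual schema.

```python
from collections import defaultdict
from datetime import date

# Hypothetical sketch: join two sources on a shared UPC, then normalize
# daily sales into ISO-week buckets so mixed cadences are comparable.
# All record layouts and values here are illustrative.

pos_sales = [
    {"upc": "012345678905", "date": date(2024, 1, 1), "units": 10},
    {"upc": "012345678905", "date": date(2024, 1, 2), "units": 12},
]
product_master = {"012345678905": {"brand": "Acme", "description": "Cola 12oz"}}

def join_on_upc(sales: list[dict], master: dict) -> list[dict]:
    """Attach master product attributes to each sales record via the UPC key."""
    return [{**rec, **master.get(rec["upc"], {})} for rec in sales]

def weekly_totals(sales: list[dict]) -> dict:
    """Roll daily records up to (upc, ISO year, ISO week) buckets."""
    buckets = defaultdict(int)
    for rec in sales:
        iso = rec["date"].isocalendar()
        buckets[(rec["upc"], iso.year, iso.week)] += rec["units"]
    return dict(buckets)
```

A real pipeline would apply the same pattern per identifier family (geography via FIPS or TDLinx, products via UPC or SKU) before aggregation.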