Nielsen Retail Reader is a special-purpose library whose main purpose is to ease the processing of Nielsen Retail Scanner data from the Kilts Center's NielsenIQ archive, used for academic research only. Its striking feature is Dask, the underlying framework that uniquely empowers the user to read Nielsen data with limited on-device resources by processing larger-than-memory data in chunks and in a distributed fashion. It understands the Kilts/Nielsen directory structure.
Information about the Retail Scanner data can be found here: Kilts Center for Marketing
Please note that Nielsen Retail data is proprietary and access is restricted to individuals whose institutions have an existing subscription or agreement with Nielsen. If you intend to use this library to access and analyze Nielsen data, you must first ensure that you are authorized to do so by your institution. Unauthorized access or use of this data may violate terms of use and could have legal implications. The Nielsen dataset must strictly follow the standard naming convention laid out by Nielsen and the Kilts Center for Marketing; under no circumstances should the naming convention be changed.
NielsenRetail processes Retail Scanner Data.
Here are just a few of the things that NielsenRetail does well:
- Efficiently manages the Nielsen directory structure and file hierarchy, simplifying the process for researchers and significantly reducing the time needed to navigate the Nielsen documentation.
- Larger-than-memory processing: handles dataframes that exceed available RAM on a single machine through batch processing.
- Distributed computing for terabyte-sized datasets, improving overall read speed by exploiting Dask's low-latency task scheduling.
- Provides simple yet distinct commands for separating sales, stores, and products data for analysis (see the sketch after this list).
- Excellent compatibility with NumPy, Pandas, and the rest of the PyData ecosystem.
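As a quick illustration of that workflow, here is a minimal sketch. The class and method names (NielsenRetail, read_sales, read_stores, read_products) and the directory path are illustrative assumptions, not the confirmed NielsenDSRS API; consult the package documentation for the actual names.
from NielsenDSRS import NielsenRetail  # hypothetical import path for illustration

# Hypothetical constructor: point the reader at the root of the
# Kilts/Nielsen directory tree (standard naming convention required)
retail = NielsenRetail('/path/to/nielsen_extracts/RMS')

# Hypothetical accessors returning Dask dataframes for each file family
sales = retail.read_sales()
stores = retail.read_stores()
products = retail.read_products()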
The source code is currently hosted on GitHub at: https://github.com/pratikrelekar/NielsenDSRS
Binary installers will be available on the Python Package Index (PyPI).
To install directly from GitHub with pip:
pip install git+https://github.com/pratikrelekar/NielsenDSRS
To install the requirements with pip:
python -m pip install -r requirements.txt
Before using NielsenRetail, ensure that all dependencies are correctly installed. Additionally, verify that the Client hosting the Python environment, the Scheduler, and the Worker nodes all have the same versions of NielsenDSRS and its dependencies installed (an example requirements file is sketched after the list below).
- NumPy - Adds support for large, multi-dimensional arrays, matrices, and high-level mathematical functions to operate on these arrays.
- Pandas - Provides high-performance, easy-to-use data structures and data analysis tools.
- Dask - Flexible parallel computing library for analytic computing, enabling performance at scale for the tools of your choice.
- Dask Distributed - Enables parallel computing and scaling to clusters for large computations, extending Dask to work across multiple machines by distributing tasks and managing workloads efficiently.
- Toolz - Provides functional utilities for working with iterable data, enabling more efficient and readable data processing through a set of pure functions inspired by functional programming.
- Msgpack - Binary serialization format that allows for efficient, compact storage and is used for exchanging data between multiple languages; similar to JSON but faster and smaller.
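For reference, a minimal requirements.txt covering the dependencies above might look like the following; the suggestion to pin versions is an assumption about good practice, and the exact pins should match the versions running on your Scheduler and Worker nodes:
# requirements.txt -- pin each package (e.g. dask==2024.1.0) so that
# client, scheduler, and workers all run identical versions
numpy
pandas
dask
distributed
toolz
msgpack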
For a local client on system memory:
from dask.distributed import Client

# Calculate memory per worker based on total system memory
total_memory_gb = 16  # your system's total RAM in GB (edit to match your machine)
n_workers = 4         # number of workers to use (edit as needed)
memory_per_worker_gb = int(total_memory_gb / n_workers)  # memory per worker in GB

# Start the client with the given specifications
client = Client(n_workers=n_workers, threads_per_worker=1,
                memory_limit=f'{memory_per_worker_gb}GB')
print(client)
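If you prefer to detect system memory programmatically instead of hard-coding it, the following sketch uses psutil (an assumption: psutil is not among the dependencies listed above and must be installed separately):
import psutil
from dask.distributed import Client

n_workers = 4  # number of workers to use (edit as needed)
total_memory_gb = psutil.virtual_memory().total // 2**30  # detected total RAM in GiB
memory_per_worker_gb = max(1, total_memory_gb // n_workers)  # at least 1 GB per worker

client = Client(n_workers=n_workers, threads_per_worker=1,
                memory_limit=f'{memory_per_worker_gb}GB')
print(client)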
To utilize the full power of Dask, connect to an auxiliary memory cluster for large data processing:
# You can only connect to the cluster from inside the Python client environment
from dask.distributed import Client
client = Client('dask-scheduler.default.svc.cluster.local:address')  # replace with the actual address of your memory cluster
client
Make sure the NielsenDSRS module and all of its dependencies are installed on the Dask Client, Scheduler, and Worker nodes, and that the versions match everywhere. The following code helps debug errors related to version mismatches:
For worker nodes:
def check_module():
    try:
        import NielsenDSRS
        return "Installed"
    except ImportError:
        return "Not Installed"

# Run the check across all workers
results = client.run(check_module)
for worker, result in results.items():
    print(f"{worker}: {result}")
For Scheduler:
scheduler_result = client.run_on_scheduler(check_module)
print(f"Scheduler: {scheduler_result}")
For Client:
try:
    import NielsenDSRS
    print("NielsenDSRS is installed on the client.")
except ImportError:
    print("NielsenDSRS is not installed on the client.")
This library was developed at Data Science Research Services (University of Illinois Urbana-Champaign) in 2024 and has been under active development since then. It currently supports Nielsen Retail Scanner data from 2006 to 2020.
For general questions and discussions, visit the DSRS mailing list.