Common Data Provider Modernization Guide
Common Data Provider is a data discovery, mining, and processing product by IBM. Explore technical details, modernization strategies, and migration paths below.
Product Overview
Common Data Provider (CDP) facilitated the forwarding of z/OS operational data to various analytics engines.
Configuration was primarily file-based: configuration files specified the data sources to collect from and the transformation rules to apply.
Modernization Strategies
Rehost
- Timeline: 6-12 months
Lift-and-shift to cloud infrastructure with minimal code changes. Fast migration with lower risk.
Refactor (Recommended)
- Timeline: 18-24 months
Optimize application architecture for cloud while preserving business logic. Best ROI long-term.
Rebuild
- Timeline: 3-5 years
Complete rewrite to a cloud-native architecture with microservices and a modern technology stack.
Frequently Asked Questions
General
What type of data did Common Data Provider collect and what did it do with it?
Common Data Provider (CDP) collected z/OS system logs, SMF data, and other operational data. It then transformed and forwarded this data to analytics platforms. It supported filtering and data mapping to customize the data stream.
How was Common Data Provider configured?
CDP used configuration files to define data sources, filtering rules, and target analytics engines. These files specified which logs to collect, how to transform them, and where to send the processed data.
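As a minimal sketch of the kind of settings such a configuration carries (a data source, a filtering rule, and a forwarding target), the following parses an INI-style file. The section and key names are invented for illustration; they are not the product's actual file format.

```python
import configparser

# Hypothetical illustration only: the sections and keys below are invented
# to show the kind of settings a CDP-style configuration would hold, not
# the product's actual format.
SAMPLE = """\
[source.syslogd]
type = zos_syslog
min_severity = WARNING

[target.splunk]
protocol = tcp
host = analytics.example.com
port = 9997
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# Split sections into data sources and forwarding targets by prefix.
sources = [s for s in config.sections() if s.startswith("source.")]
targets = [s for s in config.sections() if s.startswith("target.")]
print(sources, targets)                 # ['source.syslogd'] ['target.splunk']
print(config["target.splunk"]["port"])  # 9997
```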
With what platforms did Common Data Provider integrate?
CDP supported integration with analytics platforms such as Splunk, Hadoop, and IBM Operations Analytics. It forwarded data in formats compatible with these platforms, such as syslog or delimited text. It could also load data directly into IBM Db2 Analytics Accelerator.
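To make the syslog output format concrete, here is a small sketch of classic BSD syslog (RFC 3164) framing, the kind of plain-text line a forwarder can emit when a target platform expects syslog input. The host and message values are made up for illustration.

```python
from datetime import datetime

def to_syslog_line(facility: int, severity: int, host: str, msg: str) -> str:
    """Format a message in classic BSD syslog (RFC 3164) style.

    PRI is computed as facility * 8 + severity, then prepended in angle
    brackets ahead of a timestamp, hostname, and free-form message.
    """
    pri = facility * 8 + severity
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {host} {msg}"

# facility 1 (user-level), severity 4 (warning) -> PRI 12
line = to_syslog_line(1, 4, "ZOS01", "CDP: SMF record batch forwarded")
print(line)
```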
Technical
What were the main components of the Common Data Provider architecture?
CDP's architecture included components for data collection, transformation, and forwarding. Specific components included the Data Streamer, which collected data from z/OS sources; the Data Router, which transformed and filtered the data; and the Data Forwarder, which sent the data to target systems.
How did the components of Common Data Provider communicate with each other?
CDP's components communicated using internal APIs and message queues. The Data Streamer sent data to the Data Router, which in turn forwarded it to the Data Forwarder. Communication with external systems used protocols like TCP/IP and syslog.
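The queue-based hand-off described above can be sketched as an in-process pipeline. This is a hypothetical illustration using Python threads and queues; the stage names mirror the components named in this FAQ, not actual product internals.

```python
import queue
import threading

# Hypothetical sketch of the hand-off described above: a streamer stage
# puts raw records on a queue, a router stage filters and transforms them,
# and a forwarder stage delivers them to a target.
raw_q: queue.Queue = queue.Queue()
routed_q: queue.Queue = queue.Queue()
delivered: list = []

def streamer(records):
    for rec in records:
        raw_q.put(rec)
    raw_q.put(None)  # sentinel: no more data

def router():
    while (rec := raw_q.get()) is not None:
        if "ERROR" in rec:             # filtering rule
            routed_q.put(rec.lower())  # transformation
    routed_q.put(None)

def forwarder():
    while (rec := routed_q.get()) is not None:
        delivered.append(rec)          # stand-in for a network send

records = ["ERROR disk full", "INFO heartbeat", "ERROR cpu spike"]
threads = [threading.Thread(target=streamer, args=(records,)),
           threading.Thread(target=router),
           threading.Thread(target=forwarder)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(delivered)  # ['error disk full', 'error cpu spike']
```

Sentinel values (`None`) let each stage shut down cleanly once the previous stage has drained, a common pattern for queue-connected pipelines.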
How did Common Data Provider transform data?
CDP supported data transformation using mapping rules defined in configuration files. These rules allowed users to extract specific fields from log records, convert data types, and enrich data with additional information. This ensured that the data was in the correct format for the target analytics platform.
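The mapping described above (field extraction, type conversion, enrichment) can be sketched as a rule-driven transform. The rule format, field names, and enrichment values here are invented for illustration.

```python
# Hypothetical sketch of rule-driven transformation: each rule picks a
# field out of a comma-delimited raw record and converts its type; a
# static enrichment dict adds context fields. Names are illustrative.
RULES = {
    "jobname": {"source": 0, "type": str},
    "cpu_ms":  {"source": 1, "type": int},
    "rc":      {"source": 2, "type": int},
}
ENRICH = {"sysplex": "PLEX1"}  # static enrichment, e.g. system identity

def transform(raw: str) -> dict:
    fields = raw.split(",")
    record = {name: rule["type"](fields[rule["source"]])
              for name, rule in RULES.items()}
    record.update(ENRICH)
    return record

print(transform("PAYROLL,1250,0"))
# {'jobname': 'PAYROLL', 'cpu_ms': 1250, 'rc': 0, 'sysplex': 'PLEX1'}
```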
Business Value
What business value did Common Data Provider provide?
By forwarding z/OS data to analytics platforms, CDP enabled organizations to gain insight into their mainframe operations: identifying performance bottlenecks, detecting security threats, and optimizing resource utilization. These insights improved decision-making and operational efficiency.
How did Common Data Provider help reduce costs?
CDP helped organizations reduce the cost of mainframe operations by optimizing resource utilization and improving problem resolution. By providing insights into system performance and security, CDP enabled organizations to proactively address issues and prevent costly outages.
Security
What authentication methods did Common Data Provider support?
CDP supported authentication using z/OS security mechanisms such as RACF, ACF2, and Top Secret. It integrated with these security systems to verify the identity of users and control access to data. This ensured that only authorized users could access sensitive information.
What access control model did Common Data Provider use?
CDP used an access control model based on z/OS security profiles. Access to data and functions was controlled by defining profiles in RACF, ACF2, or Top Secret. Users were granted access based on their roles and responsibilities.
What audit/logging capabilities did Common Data Provider have?
CDP logged all security-related events, such as authentication attempts, access violations, and configuration changes. These logs provided an audit trail of user activity and helped organizations detect and investigate security incidents. The logs were typically stored in SMF records or syslog files.
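An audit trail of this kind can be sketched with standard structured logging. This is a hypothetical illustration: a real deployment would route these records to SMF or syslog, so a capturing handler stands in for that sink here, and the event fields are invented.

```python
import logging

# Hypothetical sketch of an audit trail for security-related events.
# A capturing handler stands in for an SMF or syslog sink.
records: list = []

class CaptureHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        records.append(self.format(record))

audit = logging.getLogger("cdp.audit")
audit.setLevel(logging.INFO)
handler = CaptureHandler()
handler.setFormatter(logging.Formatter(
    "AUDIT user=%(user)s event=%(event)s outcome=%(outcome)s"))
audit.addHandler(handler)

def log_auth_attempt(user: str, success: bool) -> None:
    # The extra dict supplies the custom fields the formatter references.
    audit.info("auth", extra={"user": user, "event": "AUTH",
                              "outcome": "SUCCESS" if success else "FAILURE"})

log_auth_attempt("OPER1", True)
log_auth_attempt("GUEST", False)
print(records[1])  # AUDIT user=GUEST event=AUTH outcome=FAILURE
```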
Operations
What administrative interfaces were available for Common Data Provider?
CDP provided a command-line interface (CLI) for administrative tasks. The CLI allowed administrators to configure data sources, define filtering rules, and monitor system performance. It also provided commands for starting and stopping the CDP components.
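A component-control CLI of the kind described above can be sketched with subcommands. The command and component names (`start`/`stop`/`status`, `streamer` etc.) are illustrative assumptions, not the product's actual command set.

```python
import argparse

# Hypothetical sketch of an administrative CLI with start/stop/status
# subcommands. Command and component names are invented for illustration.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="cdpadmin")
    sub = parser.add_subparsers(dest="command", required=True)
    for cmd in ("start", "stop", "status"):
        p = sub.add_parser(cmd, help=f"{cmd} a CDP component")
        p.add_argument("component",
                       choices=["streamer", "router", "forwarder"])
    return parser

args = build_parser().parse_args(["status", "streamer"])
print(args.command, args.component)  # status streamer
```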
What monitoring and logging capabilities did Common Data Provider have?
CDP monitored system performance and logged events to SMF records and syslog files. Administrators could use these logs to identify performance bottlenecks, detect errors, and troubleshoot problems. CDP also provided commands for displaying system status and statistics.
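Summarizing such logs into status counters can be sketched as follows; the log line format is invented for illustration, but the idea (count severities, total forwarded records) matches the troubleshooting workflow described above.

```python
from collections import Counter

# Hypothetical sketch: summarize forwarder log lines into the kind of
# statistics an administrator would review. The log format is invented.
LOG = [
    "2024-05-01 12:00:01 INFO forwarded 1200 records to splunk",
    "2024-05-01 12:00:05 ERROR connection refused by splunk:9997",
    "2024-05-01 12:00:09 INFO forwarded 1180 records to splunk",
]

# Field 2 is the severity; field 4 is the record count on forward lines.
severity = Counter(line.split()[2] for line in LOG)
forwarded = sum(int(line.split()[4])
                for line in LOG if " forwarded " in line)
print(dict(severity))  # {'INFO': 2, 'ERROR': 1}
print(forwarded)       # 2380
```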
What were the main configuration parameters for Common Data Provider?
CDP's configuration parameters included data source definitions, filtering rules, transformation mappings, and target system settings. These parameters were defined in configuration files and could be modified using the CLI. Proper configuration was essential for ensuring that CDP collected and forwarded the correct data.
Ready to Start Your Migration?
Download our comprehensive migration guide for Common Data Provider or calculate your ROI.