IBM DB2 Data Archive Expert for z/OS: Put Your Data in Its Place
Front cover

IBM DB2 Data Archive Expert for z/OS: Put Your Data in Its Place

Reduce disk occupancy by removing unused data
Streamline operations and improve performance
Filter and associate data with DB2 Grouper

Paolo Bruni
Walter Huth
Ernie Mancill
Iain Warnock

ibm.com/redbooks
International Technical Support Organization

IBM DB2 Data Archive Expert for z/OS: Put Your Data in Its Place

February 2004

SG
Note: Before using this information and the product it supports, read the information in Notices on page xix.

First Edition (February 2004)

This edition applies to Version 1 of IBM DB2 Data Archive Expert for z/OS (program number ) and Version 1 of IBM DB2 Grouper for z/OS (program number 5799-GXQ).

© Copyright International Business Machines Corporation 2004. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Figures
Tables
Examples
Notices
  Trademarks
Preface
  The team that wrote this redbook
  Become a published author
  Comments welcome

Part 1. Introduction

Chapter 1. Archiving data with IBM DB2 Data Archive Expert
  Need for archiving data
  IBM DB2 Data Archive Expert
    The archive process
    The tool's architecture
    Data Archive Expert as a DB2 application

Part 2. Getting ready to use the product

Chapter 2. The evaluation test data
  Our test data
  Test referential integrity

Chapter 3. Installing and customizing DB2 Data Archive Expert
  Pre-installation activities
    Program directory
    Preventive service planning
    Review PTF cover letters
    Review and verify product prerequisites
  Installation steps: Details
    Step 1. Create the metadata database and table space
    Step 2. Create the metadata tables
    Step 3. Define the DB2 Data Archive Expert stored procedures
    Step 4. Java environment variable settings
    Step 5. Insert default properties
    Step 6. Create temporary database
    Step 7. Grant the appropriate authorizations
    Step 8. Verify that DSNUTILS is installed
    Step 9. Make DB2 Data Archive Expert available to users
  Optionally add DAE to the Administration Tool Launchpad
  Optionally install DB2 Grouper
  Execute installation verification

Chapter 4. Installing and customizing DB2 Grouper
  Pre-installation activities
    Program Directory
    Preventive Service Planning
    Review PTF cover letters
    Review and verify product prerequisites
  Installation steps for host: Details
    Step 1. Installing Grouper metadata for z/OS
    Step 2. Defining Grouper Java stored procedures to DB2
    Step 3. Java environment variable settings
  Bind packages and plans
  Installation verification

Chapter 5. Stored procedures and batch execution
  Batch processing considerations
  Preliminary topics
  Data Archive Expert stored procedures
  Example batch jobs
    Table archive specification
    Building file archive specifications and using templates
    Retrieval specification in batch
    File retrieval from batch
    Table to file archive specification in batch
    Continuation of parameters and row filters

Chapter 6. Optionally defining DB2 Grouper Client
  Setting up the Grouper Client
    Download the Grouper Client
    Run the install shield
  z/OS Server connectivity configuration
  Client configuration
  Installation verification

Part 3. Data archival

Chapter 7. Scenario 1: Archiving from a table to a file
  Starting point
  Define the archive specification
  Run the archive specification
  Second run of the archive specification
  Result
  Considerations
    Intermediate table used when archiving to file
    File archives require a complete run
    Names of the archive data sets
    How to find describing information for a given archive data set
    Recommendation for data set names

Chapter 8. Scenario 2: Archiving from a table to a table
  Starting point
  Define the archive specification
  Run the archive specification
  Result of the archive specification
  A second run of the archive specification
  Using different archive tables per archive run

Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one
  The objectives of this scenario
  Digging into the scenario
  Data before starting the archive specification
  Start by defining the archive unit
  Define archive target
  Run your specification - Step 1
  Run your specification - Step 2
  Second run using the same target tables
  Checking whether too many rows have been archived

Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them
  Archiving parts in jumbo containers
  Step 1 - Defining the table archive specification
    Define archive unit for jumbo containers
    Define row filter for the jumbo containers
    Archive unit table delete rules
    Define archive table targets
    Errors when saving the table archive specification
    Limitations due to on delete cascade rule
    Solution to the on delete cascade rule
    Running the table archive
    Table archive job validation
  Step 2 - Completing the table archive
  Step 3 - Archiving a table archive to file
    Defining the file archive specification
    Running the file archive specification

Chapter 11. Scenario 5: Archiving Grouper discovered related tables
  Using the Grouper Client with Data Archive Expert
  Introduction and some definitions
  Using Grouper to manage non-enforced relationships
  Using Grouper to discover application relationships

Chapter 12. Additional archive considerations
  Knowing your data
  DB2 enforcement
  Orphaned rows
  REXX exec to locate archived tables

Part 4. Data retrieval

Chapter 13. Scenario 1: Retrieving a single table from a table archive
  Retrieve overview
  Retrieve preparation
    Default retrieve creator name
    Retrieve authorizations
  Building and running a single table retrieve specification
    Building a new retrieve specification
    Defining a single table retrieve specification
  Running a retrieve specification
    Preparing to run the retrieve specification
    Defining the single table row filter
    Confirming the retrieve
  Additional tasks

Chapter 14. Scenario 2: Retrieving multiple tables from a file archive on tape
  The tables involved
  Define retrieve specification for multiple tables
  Job to run retrieve stored procedure
  Additional tasks

Chapter 15. Scenario 3: Retrieving into the original tables
  Defining retrieve specification to replace rows
    Preparing to retrieve from the archive
    Defining the retrieve specification
    Add row filter to span archive versions
    Updating the target table names
  Running the retrieve specification
  Additional tasks

Part 5. Operational considerations

Chapter 16. Planning for and managing change in an archive environment
  Schema changes and impact on archival

Chapter 17. Security and authorizations
  DB2 authorizations
    DB2 installation authorities
    DB2 archive and retrieve authorities
  Data set authorizations
    Access to archive data sets
    Granting access to others
  Access to table archives
    Access to retrieve tables
    Access to file archives

Chapter 18. Performance
  Performance considerations

Part 6. Appendixes

Appendix A. REXX sample programs
  A.1 ARCHIVEEXECSP table archive REXX
  A.2 OFFLINEARCHSP file archive REXX
  A.3 RETRIEVEEXECSP table retrieve REXX
  A.4 OFFLINERETEXECSP file retrieve REXX
  A.5 OFFLINEONLARCSP table to file archive REXX

Appendix B. Additional material
  Locating the Web material
  Using the Web material
    System requirements for downloading the Web material
    How to use the Web material

Related publications
  IBM Redbooks
  Other publications
  Online resources
  How to get IBM Redbooks
  Help from IBM

Abbreviations and acronyms

Index
Figures

An archiving model showing typical archiving components
Disk-space savings achieved by archiving inactive data
Archiving data
The overall Data Archive Expert architecture
Hierarchy of tables and their relationships
OMVS Home Directory for user PAOLOR
OMVS result of java -version
WLM Display to verify GOAL mode
DSNJDBC Package List Display
DSNREXX Package List Display
WLM dialog
WLM Definition Menu
WLM Application Selection List
WLM dialog
WLM Activate service policy pop-up
Data Archive Expert DB2_Home error
USS ISPF Shell dialog
USS ISPF Directory List
Data Archive Expert related tables panel
Grouper WLM environment definition
DB2 Administration Tool Menu
DB2 Administration Tool Utility Listdef Menu
DB2 Administration Tool template list panel
DB2 Administration Tool Utility template panel
DB2 Administration Tool Utility template panel
DB2 Administration Tool DSN prompting
DB2 Administration Tool data set name specification
DB2 Administration Tool data set name specification results
Archive specification definition panel
File archive targets panel
Map source tables to utility template panel
Utility template specification
Utility template selection list panel
Using a row filter spanning many lines
Sample Grouper Client Directory
Client DB2 Version window
Add database wizard - Source
Add database wizard - Protocol
Add database wizard - TCP/IP
Add database wizard - Database
Add database wizard - ODBC
Add database wizard - Node options
Add database wizard - Security options
Connection configuration confirmation
Connect to the database
Successful connection
Windows environment variable EGFCLIENTHOME
Grouper Client login
Grouper Launchpad
Grouper Main window
Create new set window
Grouper tree with new group
Configure group options window
Add tables filter specification
Grouper select objects window
Run group discovery window
Group discovery confirmation window
Confirmation window
Send group discovery job window
Job Status window
Group_1 related discovered tables
Relationships window
Relationship properties window
How to invoke Data Archive Expert
Data Archive Expert - Main panel
Data Archive Expert - Settings
Data Archive Expert - Main panel again
Data Archive Expert's Archive Specification List panel
Archive Specification Definition
Data Archive Expert's pull-down menu for specifying starting table
Data Archive Expert's pull-down menu for finding related tables
Archive Unit Definition
Data Archive Expert's pull-down menu for specifying rules
Request to specify a row filter
Data Archive Expert's panel for specifying a row filter - Part 1
Data Archive Expert's panel for specifying a row filter - Part 2
Data Archive Expert's overview about archive unit definitions
Request to define data set targets
Selecting Data Archive Expert's default data set generation
Data Archive Expert's confirmation of data set specifications
Request to save the archive specification
Data Archive Expert's confirmation message "Specification saved"
Request an archive specification run
Confirm an archive specification run
Result of an archive specification run
Data Archive Expert confirms a completed archive specification run
Request for a second run
Data Archive Expert's presentation of a confirmation panel
Changing the row filter before starting the second run
Result of the second archive specification run
Request for an historical overview about the specification
Data Archive Expert's overview about an archive specification
DAE's details about a specific run of an archive specification
Test if inactive rows have been removed
Browse data set containing archived rows
Data Archive Expert main panel
Archive Specification List
Archive Specification Definition
Archive Specification Definition - 1. pull-down menu
Archive Specification Definition - 2. pull-down menu
Archive Unit Definition panel for Specification Activity
Archive Table Rules
Archive Unit Definition panel, still Activity
Row Filter - Part 1
Row Filter - Part 2
Archive Unit Definition - Completed panel
Archive Specification Definition, activity 1 completed
Table Archive Targets
Table Archive Targets
Archive Specification Definition panel
Archive Specification List panel
Archive Specification List panel, initiating a RUN - Part 1
Archive Specification List panel, initiating a RUN - Part 2
Archive Run Statistics panel
Back to the Archive Specification List panel
Initiate a second run of your archive specification
Modifying the row filter for your second archive run
Result of second run of your archive specification
Requesting an overview about your archive specification runs
Overview of your archive specification runs
Details of a performed archive specification run
The data schema
Data Archive Expert's main menu panel
Data Archive Expert's settings panel
Archive Specification List panel
Data Archive Expert's pull-down menu for filtering specification list
Creating a new archive specification
Data Archive Expert's panel for general archive specification definitions
Data Archive Expert's pull-down menu for selecting the starting point table
Selecting the starting point table from a table list
Data Archive Expert's pull-down menu for finding related tables
Selecting archive unit tables from the list of related tables
Data Archive Expert's confirmation of selected tables
Initiate the addition of a table to the archive unit tables
Data Archive Expert's pull-down menu for specifying a table or a list of tables
Request to define rules for the archive unit tables
Data Archive Expert's pull-down menu for defining rules, here delete=yes
Request to define rules for another table
Data Archive Expert's pull-down menu for defining rules, here: junction table
Request to define a row filter for a non-starting-point table
Data Archive Expert's error message for row-filter definition request
Request to define a row filter on the starting-point table
Specifying a row filter with qualified column names
Request for checking the connections between the tables
Data Archive Expert connections of LINEITEM_TEST table
Request to specify connections for a table
Defining connections in DAE - Selecting the starting column
Defining connections in DAE - Starting column accepted
Defining connections in DAE - Request to select a child table
Defining connections in DAE - Select child table from a list
Defining connections in DAE - Selecting a target column
Defining connections in DAE - Confirmation panel
Request to list existing connections
Data Archive Expert's list of connections per table
Request for checking the existence of another connection
Data Archive Expert's presentation of existing connections for this table
Request to connect a manually added table
Data Archive Expert's presentation of a defined table connection
Request to specify table archives
Request to specify archive table names (in one table space)
Specifying archive table names per table of the archive unit
Data Archive Expert's request to provide a table space name for the archive
DAE's pull-down menu for specifying the table space name
Accepted table space name
Request to save the specification
Data Archive Expert returns DB2's SQLCODE
Specification saved in state PDEF
Specifying a row filter with unqualified columns
Data Archive Expert returns DB2's SQLCODE -206 upon saving
Specifying a third version of row filter - now with subselect
Request to save the specification now successful
Request to run the archive specification
Confirmation to run the archive specification
Result of the first step of the first specification run - Part 1
Result of the first step of the first specification run - Part 2
Request to complete the first run - Part 1
Request to complete the first run - Part 2
Result of the completion step of the first run - Part 1
Result of the completion step of the first run - Part 2
Data Archive Expert's run completion panel
List of archive specifications
Request to run the specification again
Confirmation to run the specification again
Result of first step of second run - Part 1
Result of first step of second run - Part 2
Invoking Data Archive Expert's history of a specification
Request to complete the second run
Result of the second complete run - Part 1
Result of the second complete run - Part 2
History of a specification after two runs
Tables and RI being used
Defining the jumbo archiving specification
Selecting the required related tables
Displayed list of tables making up the archive unit
Defining row filter for PART table
Result of pressing enter key to save the row filter
Archive unit after updating the rules flags
Specifying the target database, table space and table names
Error due to on delete cascade
Final delete rules for archiving Jumbo parts
JCL used to run the Jumbo archive
Results of archiving Jumbo sized parts
Archive history for the Jumbo archive
Completing the Jumbo archive - Deleting source rows
AHXJ034 error message when using an archive table as input
Defining the table to file archive specification
Definition option 1 - Browsing the archive unit
Definition option 2 - Selecting the archive version
Definition option 3 - Age options, shown for information only
Definition option 4 - Defining the file archive target data sets
Selecting the template for the target data sets on tape
JCL to run the table archive to file archive stored procedure
Grouper Client signon
Non-enforced referential constraints window
Add table relationships window
Select table drop down
Link parent column to child column window
Link key columns window
Dependent columns drop down
Non-enforced referential constraints confirmation
Archive specification to use Grouper
Data Archive Expert table selection list
Data Archive Expert select starting point table
Data Archive Expert related tables window
Data Archive Expert select related tables panel
Data Archive Expert archive unit specification
Starting point table row filter
Table archive targets
Archive specification list
Archive run statistics
Grouper RI set created from DAE archive specification
Configure group discovery options
Run group discovery
Group discovery run confirmation
Send group discovery job confirmation
Discovery results displayed in group tree
Relationships results window
Non-enforced referential constraints - Add relationship
Add table relationships
Link key columns
Data Archive Expert select starting point tables window
Data Archive Expert select related tables panel
Archive unit definition from Grouper discovery
Orphan detection warning at definition time
Orphan detection error at run time
Data Archive Expert primary option panel
The improved Data Archive Expert settings panel
Defining a new single table retrieve specification
Naming the single retrieve specification
Filter for an archive list
Selecting a single archive specification
Selecting the single table archive version
Specifying a row filter for a single table
Specifying a single target table
Saving the single table retrieve specification
Running the single table retrieve specification
Confirm row filter before running retrieve
Retrieve run statistics
Return to retrieve specification list panel
Tables and relationships used in this scenario
Defining a new retrieve specification modelled upon a file archive
Selecting the specify target tables for retrieve
Specifying the retrieve table names for the jumbo containers
Job to run the retrieve from file archive stored procedure
Relationship between tables
Order archive history details
Selecting the required order archive versions for retrieving
Row filter spanning two archive versions
Specifying target table names that are the same as the source tables
Running retrieve of orders from the table archives with a row filter
Schema change to source table
Retrieve attempt into a target table with changed schema
Archive table creator and name
Retrieve table creator and name
Tables

Test tables details
DB2 enforced relationships
Application enforced relationship
Stored procedure name and usage
ARCHIVEEXECSP parameter list
OFFLINEARCHSP parameter list
RETRIEVEEXECSP parameter list
OFFLINERETEXECSP parameter list
OFFLINEONLARCSP parameter list
Host and directory names
Result of queries to count the rows due for archiving
Examples

Sample Data Archive JCL procedure
Sample Java environment variables shipped with Data Archive Expert
Modified Java environment variables
JCL to invoke the IVP archive specification through batch
Preventive Service Planning sample
Sample SQL insert statement for Grouper metadata
Sample Grouper JCL procedure
DB2 Grouper Java environment variables
Bind JCL for SQLJ plan
JCL to exec ARCHIVEEXECSP
REXX modification
REXX modification
REXX modification
Batch archive specification output
OFFLINEARCHSP REXX modification
OFFLINEARCHSP REXX modification
Sample JCL to run AHXARFIL REXX
JCL to run file archive specification to tape
JCL to execute AHXTARET table retrieve in batch
Sample output from batch table retrieve execution
JCL to execute OFFLINERETEXCSP stored procedure in batch
Sample output from batch file retrieve execution
JCL to execute AHXTB2FI archiving from table to file archives in batch
Sample output from table archive to file archive batch execution
Modified TEMPLATE.JCL sample
Specifications in Data Archive Expert's metadata table
The small sample table subject to table archiving
Content of the archive table
Content of the source table after archive execution
Retrieving all rows with a first view
Retrieving all rows with a second view
Original content of NATION
Original content of PART
Original content of PARTSUPP
Original content of CUSTOMER
Original content of ORDER
Original content of LINEITEM
Create table space for archive tables
Data Archive Expert's log shows row filter within SELECT statement
SQL query to control the result of the specification run
The result set of the control query
Content of the archive table for CUSTOMER
Content of the archive table for LINEITEM
Content of the archive table for NATION
Content of the archive table for ORDER
Content of the archive table for PART
Not-deleted order line items
Creation of a view to see current and archived rows
Combined list of current and archived rows
Checking for rows we want to retrieve from the archive
RI using on delete cascade
Job statistics for one of the archived tables
Metadata query example
Metadata query results
Granting select authority to a user RACF group
View joining the retrieved and live parts tables
SQL code
View of a single archive table
View of two single table archives
DB2 Performance Expert accounting report - SQL attribution
DB2 Performance Expert accounting report - SQL attribution
DB2 Performance Expert accounting report SQL section - Commit frequency
DB2 PE accounting report highlight section - Commit frequency
Examples of unload and load utility control statements
A-1 ARCHIVEEXECSP REXX
A-2 OFFLINEARCHSP REXX
A-3 RETRIEVEEXECSP REXX
A-4 OFFLINERETEXECSP REXX
A-5 OFFLINEONLARCSP REXX
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

ibm.com, z/OS, AIX, AS/400, DB2 Connect, DB2 Universal Database, DB2, DRDA, Home Director, IBM, MVS, Notes, OS/390, QMF, Redbooks, RACF, RETAIN, S/390, Redbooks (logo)

The following terms are trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.
Preface

Databases are growing tremendously. Because of legal requirements, trend analysis, or the need for historical data, terabytes of data are kept online, which causes performance and operational problems. Not all data is frequently accessed, though, nor does it need to be kept on fast media. This is where archiving can help. Archiving is the process of moving selected inactive data to another location, where it is accessed only when necessary.

IBM DB2 Data Archive Expert for z/OS, Version 1 is a comprehensive data archiving tool that enables you to move seldom used data to a less costly storage medium without any programming. With this solution, you can save storage space and associated costs, and also improve the performance of your IBM DB2 UDB for z/OS environment. By helping you to quickly define and access inactive data, DB2 Data Archive Expert enhances your backup and recovery processes. Because it selects data for archive at the row level, you can precisely control which aged data are archived, with an optimal level of granularity.

This IBM Redbook will help you understand, install, tailor, and configure DB2 Data Archive Expert for z/OS in a DB2 for z/OS and OS/390 Version 7 system. By showing several archive and retrieve scenarios, it will also help you design an archive solution for your infrequently used data.
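The row-level selection just described follows the classic two-step archive pattern: copy the rows that match a filter into an archive table, then delete them from the source. As a minimal sketch only (the table and column names below are hypothetical, and Data Archive Expert generates the equivalent processing for you from an archive specification), the pattern looks like this in SQL:

```sql
-- Hypothetical tables: PROD.ORDERS is the active table,
-- ARCHDB.ORDERS_ARC is its archive counterpart with the same columns.
-- Step 1: copy inactive rows (here, orders older than two years)
-- into the archive table.
INSERT INTO ARCHDB.ORDERS_ARC
   SELECT *
   FROM PROD.ORDERS
   WHERE O_ORDERDATE < CURRENT DATE - 2 YEARS;

-- Step 2: remove the archived rows from the active table,
-- using the same row filter.
DELETE FROM PROD.ORDERS
   WHERE O_ORDERDATE < CURRENT DATE - 2 YEARS;
```

In practice the two steps must run under the same row filter and with an appropriate commit scope; keeping that bookkeeping consistent across many related tables is exactly what the tool automates.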
This book has the following parts: Introduction: The need for and the concepts of data archiving and the DB2 Data Archive Expert solution Getting ready to use the product: The test data environment, the installation of the tool, and its components Data archival: Step by step definition and execution of several scenarios related to archiving data Data retrieval: Examples of locating archived data and making it available for processing Operational considerations: Managing changes to your data, performance, and security Appendixes: Downloadable REXX source code for the batch execution of archive and retrieve specifications The team that wrote this redbook This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center. Paolo Bruni is a certified Consultant IT Architect working as a Data Management Project Leader at the International Technical Support Organization, San Jose Center since In this capacity he has authored several redbooks on DB2 for OS/390 and related tools, and has conducted workshops and seminars worldwide. During Paolo s many years with IBM, in development, and in the field, his work has been mostly related to database systems. Walter Huth is a DB2 for OS/390 and DRDA Instructor and Course Developer with IBM Learning Services located in Germany. Previously, he was a Database Administrator for IBM Internal Applications. Before joining IBM Germany, Walter was a Systems Engineer with Copyright IBM Corp All rights reserved. xxi
24 Taylorix-Tymshare, Germany, where he provided support on designing and using a multi-dimensional database. Ernie Mancill is a certified IBM Data Management IT Specialist with IBM Software Group. Ernie has 27 years of experience in IT with 12 years of experience with DB2 as a Systems Programmer. He joined IBM five years ago and is currently a member of the IBM SWG DB2 Database Tools technical sales team where he is responsible for pre and post sales technical support of the IBM DB2 Database Tools portfolio. His areas of expertise include the DB2 system and database administration, as well as utilities and tools. Iain Warnock is a DBA working for IBM in the United Kingdom. He has 25 years of IT experience as a freelance Operations Analyst. Iain holds a post graduate diploma in Information Management. He has been with IBM for over 3 years as a DB2 Specialist, supporting sysplexed applications running in a data sharing environment. He has recently evaluated the use of DB2 Archive Expert for IBM internal use, and has written a guide for converting the existing housekeeping procedures. A photo of the team is in Figure 1. Figure 1 Left to right: Ernie, Paolo, Iain, and Walter (photo courtesy of Barry Kadlec) Thanks to the following people for their contributions to this project: Rich Conway Emma Jacobs Bob Haimowitz Bart Steegmans Maritza M. Dubec International Technical Support Organization Vinna Chang Nate Church xxii DB2 Data Archive Expert for z/os
Brian Dreher
Alan Gillespie
Gene Haberman
Luanne Hong
Jacqueline Kushner
Jayashree Ramachandran
Rajesh Ramachandran
Dave Schwartz
Dave Shough
Joe Sinnott
Bryan Smith
Roy Smith
Cherri Vidmar
Tom Vogel
IBM Silicon Valley Lab

Gareth Jones
IBM UK, EMEA ATS-PIC Hursley

Become a published author

Join us for a two- to seven-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an Internet note to:
[email protected]

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California
Part 1 Introduction

In this part we introduce the need for data archiving and the concepts of the DB2 Data Archive Expert solution. The only chapter in this part is:

Archiving data with IBM DB2 Data Archive Expert
Chapter 1. Archiving data with IBM DB2 Data Archive Expert

In this chapter we provide an introduction to archiving needs, archiving functions, and the type of solution that the IBM DB2 Data Archive Expert tool provides. The chapter contains the following:

Need for archiving data
IBM DB2 Data Archive Expert

The "Need for archiving data" section is entirely based on the original work by Brian F. Smith and Tom Vogel, of IBM Silicon Valley Lab. This was also published in DB2 Magazine, Quarter 4, 2003, Vol. 8, Issue 4, Take a Load Off: Archive Inactive Data, available from:

The "IBM DB2 Data Archive Expert" section is a brief introduction to the tool itself; details can be found in the DB2 Data Archive Expert User's Guide and Reference, SC
1.1 Need for archiving data

A data explosion threatens to bog down, fill up, slow, and generally complicate DBMSs. Archiving data that is rarely accessed lets businesses hold on to the data they need while keeping DB2 performing at top speed.

With databases growing at an exponential rate, ever-increasing volumes of data are a pressing concern for data centers. An accumulation of historical transactions, the advent of data warehousing, and other factors contribute to the accumulation of inactive data, which is less likely to be accessed. However, businesses cannot necessarily (or may not want to) get rid of inactive data altogether, for several reasons: Some might need the data to comply with government requirements. Other businesses might want the data because they anticipate building trend analyses. And many keep inactive data to maintain a complete history of their customers.

As the size of a database grows, the percentage of inactive data often grows, too. Problems that can occur when maintaining large volumes of inactive data include:

Slower performance from additional I/O and extensive processing
Management woes, as large objects become difficult to reorganize and size limits are reached
Additional hardware and storage costs

If your company cannot live without all this inactive data and your database cannot live with it, data archiving might be the answer. Archiving is the process of moving inactive data to another storage location where it can be accessed when needed. Keep in mind that archiving is not the same as creating backups. Archive retrievals are very selective; backups are not. Archives are application-oriented; backups are data store-oriented. However, archives can be a component of a backup and recovery scheme.

Archiving basics

We have explained inactive data already, but there are a few more terms to define. Active data is current data that is accessed or that changes frequently and does not require archiving.
Archived data is data that has been moved from the active data store to the archive. The archive unit is the set of tables that contain the rows to be archived.

Types of archives include table and file archives. Table archives, commonly referred to as history tables, are stored in DB2 tables and accessed using SQL. File archives are stored in flat files in a different data store (for example, a different DBMS) from the active data. Choices when storing file archives include disk (high cost), tape (lower cost), or optical (lower cost). Table archives are used when fast or SQL access to the archived data is necessary. File archives, which can be burned on CDs or stored on tape, are used when SQL access is not needed, and access is less time sensitive.

Referential constraints are important in archiving. Some referential constraints are enforced by DB2, while others are enforced by the application. When data is archived, the entire referential set of data is involved. Keeping track of application-enforced referential constraints becomes crucial. Archive metadata (information about the archived data) needs to be captured at the time of archiving to aid in retrieving the archived data later.

Figure 1-1 shows a sample archiving model made up of typical archiving components.
Figure 1-1 An archiving model showing typical archiving components

The benefits

Archiving can result in two major benefits: First, archiving lets you reclaim disk space. Saving disk space, along with other factors, can lower storage costs. Second, archiving can improve performance. By separating inactive from active data, database scans and other data access operations become faster.

Reclaiming disk space is a major benefit. Suppose you have 100 GB in your DB2 tables, with 60 GB of data and 40 GB of indexes (see Figure 1-2).

Figure 1-2 Disk-space savings achieved by archiving inactive data
You determine that 50% of this data is inactive. Assuming that the key distribution is uniform, 50% of the index is also inactive. Let us say you archive the inactive data. Creating indexes on the archived data probably is not necessary, because fast access to this archive is not required. In your active database, you now have 30 GB of data and 20 GB of index information. Add the new 30 GB of archive data to this active data and you get a total of 80 GB of required disk space, for a savings of 20% in total disk space.

If you are running DB2 on z/OS, you can save additional disk space by using DB2 hardware data compression for table archives. DB2 hardware data compression commonly yields about a 50% reduction in data size. For example, in our previous scenario, if you compress the 30 GB of archived data, you reduce your disk requirements by 15 GB. You have now reduced your disk space requirements from 100 GB to only 65 GB, a 35% savings. (When using DB2 hardware data compression, be aware of the trade-offs that exist. For example, indexes are not compressed, table space scans can incur significant increases in CPU usage, and compression dictionaries consume DB2 address space memory.)

Both of the benefits of archiving inactive data add up to cost savings. You can store DB2 table archives on storage media that is less expensive than what is holding your active data. For example, let us assume that your active data is stored on a top-of-the-line storage device that provides single-digit millisecond response times. When you establish an archiving strategy, you might want to incorporate a less expensive, and probably lower performance, storage medium for storing your archive tables and files. The improved application performance and the minimized use of CPU, I/O, and memory resources may result in additional cost savings.

Designing an archiving strategy

The most successful archiving implementations are well planned.
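The disk-space arithmetic in the 100 GB example above can be checked with a short calculation. This is an illustrative sketch only; the function name is invented, and the 50% inactive ratio, the uniform key distribution, and the roughly 50% compression figure are the assumptions stated in the scenario:

```python
# Worked check of the disk-space savings example (assumptions from the text:
# 60 GB data + 40 GB index, 50% inactive, uniform key distribution, and
# about 50% DB2 hardware compression on the archived portion).
def archive_savings(data_gb, index_gb, inactive_ratio, compression_ratio=0.0):
    """Return (active_gb, archive_gb, total_gb) after archiving."""
    active_data = data_gb * (1 - inactive_ratio)
    active_index = index_gb * (1 - inactive_ratio)  # uniform key distribution
    # Archived data is not indexed, so the inactive index space is simply freed.
    archive = data_gb * inactive_ratio * (1 - compression_ratio)
    return active_data + active_index, archive, active_data + active_index + archive

# Without compression: 50 GB active + 30 GB archive = 80 GB (20% saved)
active, archive, total = archive_savings(60, 40, 0.5)
# With ~50% compression on the archive: 50 GB + 15 GB = 65 GB (35% saved)
active_c, archive_c, total_c = archive_savings(60, 40, 0.5, 0.5)
```

The same function can be reused to estimate savings for other inactive ratios when you size your own archiving strategy.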
Begin the design process by assessing archiving needs with the following questions:

What data do you need to archive?
When do you need to archive this data?
How long do you need to maintain the archived data?
Should the archived data be stored in DB2 table archives or in flat-file archives?
Under which circumstances, and how often, will you need to retrieve the archived data from either table archives or file archives?

When designing the archiving strategy, make sure you understand the schemas, the data, and how the data is accessed. You need to understand the retrieval requirements for the archives, and determine how long you need to keep archived data. These factors strongly influence the design of your strategy. For example, if you require some archived data to be accessed periodically, and other archived data to be accessed very infrequently, you might use a multitiered archiving strategy. A multitiered strategy involves archiving to tables first (perhaps more frequently), and then periodically archiving those table archives to file archives. Very old archives can be deleted after a certain period of time.

Establish a consistent scheduling process for your archiving. The best strategies archive data on a regular basis to maintain storage goals and help in retrieval consistency. You can still run single archive operations on an as-needed basis. Most archiving strategies are based on intervals of time. For example, you might archive customer invoices at the end of every month. Consider all users and their need for access to the archived data. Remember that most archive units involve multiple tables, in many cases tens or hundreds of tables.
Archiving pitfalls

When designing your archiving strategy, avoid these common pitfalls:

Do not design an archiving strategy in a vacuum. Be sure to include key teams (especially the applications team) for input.
Avoid creating high-level application designs that are separate from the application's archiving strategy. If the archiving implementation is an afterthought, the most efficient methods of archiving data may be lost. In most cases, changing applications is prohibitively expensive.
Do not approach archiving at random, or on an ad hoc basis. Successful archiving strategies are designed to be proactive, rather than reactive, and are well integrated with applications processing.
Determine which data is truly active and which is inactive before the archiving strategy is designed and implemented. If archived data later requires updating, it becomes active data again. The assumption is that archived data will not be updated, because once modified, it no longer represents an accurate snapshot of that data.
Never assume that the schema for the active data is static.

Retrieving archived data

Being able to retrieve the data that you archive is an important part of any archiving scheme. Most businesses want the capability to access archives on demand; however, you can control access programmatically as well. Archived data is not usually retrieved to the original active data tables. You can use SQL to access data in table archives. When you create file archives, the data must be easily accessible as needed. You will not usually have to retrieve an entire archive unit. To implement retrieving capabilities for file archives, you can use a load utility or a 4GL program to browse the file archives. Consider the requirements for selectively retrieving from a single archive and from multiple archives. When retrieving from multiple archives, expect the schemas to vary.
Finally, the destinations for the retrieved data can be flexible, depending on how you implement your strategy. Destinations can include new or existing retrieve tables, or even (with careful consideration) active data tables.

Lean and mean

Data archiving is an important and often overlooked element in application design. If designed and implemented carefully, the benefits can include cost savings from decreased disk space requirements and faster applications. You can learn more about archiving by exploring the items listed in the resources (including the presentation on which this article was based) at the end of this piece. Once you get the hang of archiving, you will be able to keep all the information your company needs and keep DB2 in tip-top shape.

1.2 IBM DB2 Data Archive Expert

IBM's new data archiving tool, DB2 Data Archive Expert for z/OS, lets you move DB2 data to DB2 tables or file archives. An ISPF interface helps you to configure and use the tool, and a callable API uses stored procedures for running the archiving and retrieving operations.
Version 1, Release 1 of IBM DB2 Data Archive Expert for z/OS, product number 5655-I95, has been available since September. This tool supports DB2 for z/OS and OS/390 Version 7 data, with the purpose of facilitating the movement of DB2 data to archive tables or flat files. With the Data Archive Expert, you can define work units known as specifications, which are used to archive data to and retrieve it from your source DB2 tables. You can then run these specifications at your convenience, either using the ISPF interface or the callable SQL API. The tool creates and maintains archive metadata on source data tables, tracking information such as the number of times that an archive specification has been run. A component called DB2 Grouper makes it easy to include tables related to the tables you plan to archive.

DB2 Data Archive Expert offers capabilities that allow you to:

Archive related sets of information across multiple tables
Selectively identify data to archive and retrieve
Exploit less expensive media for your archived data
Access archived data with SQL
Access archived data with minimal or no application changes
Defer the delete portion of the archive process
Compress archived data with hardware data compression

This product delivers a component, DB2 Grouper, which is also used by other DB2 tools. This component has the ability to:

Define non-DB2-enforced referential constraints
Discover relationships between tables from definitional sources as well as access sources

An ISPF interface helps you to configure and use DB2 Data Archive Expert.

1.2.1 The archive process

Archiving is the process of moving inactive data to another storage location, where it can be located and accessed when necessary.

[Figure 1-3 shows rows moving from the active data tables to a table archive, with each archived set stamped with the timestamp of when the data was archived.]

Figure 1-3 Archiving data

You can move inactive data to a slower media from which data can be retrieved when needed.
The timestamp and other information about the archive helps locate and retrieve the archived data. Archiving is not the same as creating backups, because archive retrievals are selective, while backups are not. Furthermore, archives are application-oriented, whereas backups are
oriented to the data store. However, archives can be an important component in a backup and recovery scheme.

1.2.2 The tool's architecture

The Data Archive Expert architecture contains several components and interfaces to several other product components. They are depicted in Figure 1-4. The recognizable prefix for Data Archive Expert data sets and modules is AHX.

[Figure 1-4 shows ISPF, the REXX execs, and the DSNREXX environment connected through the Java wrapper stored procedures to the Data Archive Expert engine API, which in turn accesses the DB2 catalog, the Data Archive Expert metadata, DB2 Grouper and the Grouper metadata, DSNUTILS (for LOAD and UNLOAD), global temporary tables, result sets, and the trace log.]

Figure 1-4 The overall Data Archive Expert architecture

Data Archive Expert metadata

Data Archive Expert stores all the archive definitions, retrieve definitions, user profile settings, and run results in metadata. The metadata tables are implemented as DB2 tables.

Java stored procedures

Data Archive Expert uses Java stored procedures to save, retrieve, and run archive specifications and retrieve specifications. They require the Java stored procedure environment and the WLM environment, and they require the JDK for compiling, building, and execution. The DSNREXX environment is used to make the calls to the stored procedures. This was done because ISPF services are not available to Java programs.

Data Archive Expert has three categories of stored procedures:

Wrapper stored procedures
The wrapper stored procedures represent the interface between the REXX execs and the Data Archive Expert engine API. Whenever definition information or DB2 catalog information is needed, the retrieval is done through a wrapper stored procedure. Whenever a definition is saved or updated to the metadata, a wrapper stored procedure is used. When retrieving information, the call is made to a wrapper with parameter information. Upon return, the result sets are processed and ISPF tables are populated. When an archive or retrieve definition is saved, a set of global temporary tables is created for each metadata table in the definition, and populated with the save information. The stored procedure then moves the definition information from the temporary tables to the metadata tables.

Execution stored procedures

The execution stored procedures are the set of stored procedures that run the archive or retrieve specification. These stored procedures are externalized and available to the customer for usage from within a user-written application or script.

DB2 Grouper stored procedures

Data Archive Expert uses DB2 Grouper stored procedures. They are used to crawl the catalog looking for tables related to the starting point table, and to retrieve the table relationships from the Grouper metadata.

ISPF panels and messages

Data Archive Expert uses ISPF panels for the user interface, and ISPF messages for informational and error messages. Data Archive Expert uses ISPF tables to hold information retrieved from the catalog (DB2 tables) and information retrieved from the metadata (specification definitions).

REXX execs

Data Archive Expert uses REXX execs and REXX ISPF services to drive the ISPF panel interface. The panels perform as much user input validation as possible, some of which is driven by panel code and some by REXX execs. The DB2 DSNREXX subcommand environment is used to interface with DB2.
DSNREXX, REXX, and the Java stored procedures serve as the bridge between ISPF and Java.

Data Archive Expert engine API

The Data Archive Expert engine is made up of numerous Java business objects and database interface objects. These business objects interface through the database objects to retrieve information from the metadata and catalog, as well as save or update definition information in the metadata. The low-level interface used by the database objects to connect to DB2 is JDBC. The engine is where the SQL is formulated for queries against the data to be archived and retrieved. It is where the DDL is generated for table targets.

Data Archive Expert debugging

Data Archive Expert provides three debugging functions:

Circular trace

This is an in-memory circular trace, which is continually written to during the execution of Data Archive Expert. At Data Archive Expert termination, or in case of severe errors, the circular trace is dumped out to a sequential data set. The data set is deleted and reallocated on each invocation of Data Archive Expert. The circular trace is dumped to the data set <userid>.ahxcirc.log.
Logging

Data Archive Expert also employs a log file, which can be controlled by the user through the user profile settings. Setting the trace level controls how much information is written to the log. The log data set is <userid>.ahxlog. Each record written for the trace is physically written to the log data set. The higher the trace level (the settings are 1 to 3), the more information is written to the log.

Event table

The event table is always active and provides an audit history of archive-related actions in data set AHXEVENTLOG. Specification creations and executions, as well as errors and exceptions, are timestamped and recorded.

DB2 Grouper

DB2 Grouper is a common component that is called to get the tables related to the starting point table of an archive specification. Data Archive Expert calls two Grouper stored procedures: one finds the tables related to the starting point table and populates the metadata; the second retrieves the information from the metadata.

DSNUTILS stored procedure

For archiving data to file, Data Archive Expert utilizes the DB2 utilities LOAD and UNLOAD. These utilities are called from the file archive specification execution stored procedure through the DB2 stored procedure DSNUTILS. DSNUTILS is called to invoke the appropriate utility for the specification being run (LOAD for retrieve, UNLOAD for archive) for each table in the archive unit. For UNLOAD, the LOAD control statement is captured from SYSPUNCH and stored in the metadata.

Data compression

In order to save space, you can use compression. DB2 compression can be activated at the table space or partition level for the archive tables. DB2 compression generally provides savings on disk space of about 50%.
For information on DB2 data compression, refer to the DB2 manuals or the redbook DB2 for OS/390 and Data Compression, SG. You can also (transparently to DB2) use hardware compression on your tape drives for archive files, and the software compression function provided by data migration products.

1.2.3 Data Archive Expert as DB2 application

DB2 Data Archive Expert, and its major component DB2 Grouper, are both DB2 applications that define data and metadata as DB2 objects during their installation phase, as well as dynamically, during execution. Details about these objects are in the standard documentation:

DB2 Data Archive Expert User's Guide and Reference, SC
DB2 Grouper User's Guide, SC
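The archive-then-delete flow with a recorded timestamp, described in "The archive process" above, can be illustrated with a self-contained sketch. SQLite stands in for DB2 here, and the table and column names are invented for illustration; Data Archive Expert itself formulates the equivalent SQL through its engine API and stores richer metadata than the single timestamp shown:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (o_orderkey INTEGER PRIMARY KEY, o_orderdate TEXT)")
conn.execute("CREATE TABLE orders_archive (o_orderkey INTEGER, o_orderdate TEXT, archived_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2002-01-15"), (2, "2003-09-30"), (3, "2003-10-01")])

def archive_orders(conn, cutoff):
    """Move rows older than cutoff to the archive table, stamping each
    row with the timestamp of when the data was archived."""
    ts = datetime.now(timezone.utc).isoformat()
    with conn:  # one transaction: copy to the archive, then delete the source rows
        conn.execute(
            "INSERT INTO orders_archive "
            "SELECT o_orderkey, o_orderdate, ? FROM orders WHERE o_orderdate < ?",
            (ts, cutoff))
        conn.execute("DELETE FROM orders WHERE o_orderdate < ?", (cutoff,))

archive_orders(conn, "2003-01-01")  # one inactive row moves to the archive
```

Running the archive inside a single transaction mirrors the requirement that the insert and the (possibly deferred) delete leave the referential set consistent.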
Part 2 Getting ready to use the product

In this part we introduce our test data environment, and then describe the installation of the tool and its components. The chapters are:

The evaluation test data
Installing and customizing DB2 Data Archive Expert
Installing and customizing DB2 Grouper
Stored procedures and batch execution
Optionally defining DB2 Grouper Client
Chapter 2. The evaluation test data

In this chapter we provide a description of the data used in our test cases. This data consists of randomly generated, de-personalized information and is organized into nine tables. The test application makes use of DB2-enforced referential integrity, as well as application-enforced relationships with the TIME table. This chapter contains the following:

Our test data
Hierarchical relationships of test tables
DB2 enforced relationships
Application enforced relationships
2.1 Our test data

The test data we use is the data of an order entry type of application. Only the basic details, such as the table names, how rows are identified uniquely, and the keys required for referential integrity, are documented here. The general assumption is that of a retail industry application. There is no requirement to have more details of how it works, as these details will be tweaked for the purpose of justifying our scenarios in archiving rows from these tables or retrieving their archived versions.

Table 2-1 shows the names of the test tables with the keys used to create the unique and primary keys, and the row counts taken prior to any archive activity.

Table 2-1 Test tables details
Table name | Unique key                | Primary key             | Number of rows
CUSTOMER   | C_CUSTKEY                 | C_CUSTKEY               | 150,000
LINEITEM   | L_ORDERKEY & L_LINENUMBER | L_PARTKEY & L_SUPPKEY   | 1,114,560
NATION     | N_NATIONKEY               | N_NATIONKEY             | 25
ORDER      | O_ORDERKEY                | O_ORDERKEY              | 336,519
PART       | P_PARTKEY                 | P_PARTKEY               | 165,829
PARTSUPP   | PS_PARTKEY & PS_SUPPKEY   | PS_PARTKEY & PS_SUPPKEY | 662,421
REGION     | R_REGIONKEY               | R_REGIONKEY             | 5
SUPPLIER   | S_SUPPKEY                 | S_SUPPKEY               | 10,000
TIME       | T_TIMEKEY                 | T_ALPHA                 |

2.2 Test referential integrity

The test application requires the logical relationships to be enforced. Much of this is done using DB2-enforced referential integrity, which is detected by the Grouper component of Data Archive Expert. Additional application-enforced referential constraints must then be added manually to the archive specifications.

Hierarchical relationships of test tables

With the referential integrity added, the overall relational structure of the data becomes hierarchical in appearance; see Figure 2-1 for details. Here the DB2-enforced RI is drawn using solid lines with the arrow heads pointing to the child table. The application-enforced RI is indicated with dotted lines, where TIME is the parent table. The keys for these relationships have not been indicated, as they will be self-evident when discussing the scenarios.
[Figure 2-1 shows the table hierarchy: REGION, NATION, and TIME at the top, with PART, SUPPLIER, and CUSTOMER below them, then PARTSUPP and ORDER, and LINEITEM at the bottom. Solid lines mark the DB2-enforced RI, with ON DELETE NO ACTION or ON DELETE CASCADE rules; dotted lines mark the application RI from TIME.]

Figure 2-1 Hierarchy of tables and their relationships

DB2 enforced relationships

Table 2-2 shows the parent tables with their primary keys, with the matching child tables and foreign keys. Where a primary key or foreign key shows two column names, both are needed, in the sequence they are listed.
Table 2-2 DB2 enforced relationships
Parent table | Primary key             | Child table | Foreign key
REGION       | R_REGIONKEY             | NATION      | N_REGIONKEY
NATION       | N_NATIONKEY             | SUPPLIER    | S_NATIONKEY
NATION       | N_NATIONKEY             | CUSTOMER    | C_NATIONKEY
PART         | P_PARTKEY               | PARTSUPP    | PS_PARTKEY
SUPPLIER     | S_SUPPKEY               | PARTSUPP    | PS_SUPPKEY
PARTSUPP     | PS_PARTKEY + PS_SUPPKEY | LINEITEM    | L_PARTKEY + L_SUPPKEY
ORDER        | O_ORDERKEY              | LINEITEM    | L_ORDERKEY

Application enforced relationships

The TIME table contains rows for each of the dates being used in the LINEITEM and ORDER tables. When new orders are generated and the date does not already exist, the application inserts rows into TIME, and derives the values for the other columns of the table. See Table 2-3.

Table 2-3 Application enforced relationship
Parent table | Primary key | Child table | Foreign key
TIME         | T_ALPHA     | LINEITEM    | L_COMMITDATE
TIME         | T_ALPHA     | LINEITEM    | L_RECEIPTDATE
TIME         | T_ALPHA     | LINEITEM    | L_SHIPDATE
TIME         | T_ALPHA     | ORDER       | O_ORDERDATE
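The application-enforced relationship works as described above: before a new order date is used, the application guarantees a matching TIME parent row. A minimal sketch of that insert-if-missing logic follows, with SQLite standing in for DB2 and the derived TIME columns reduced to a single invented numeric key; the real application derives several columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE time (t_alpha TEXT PRIMARY KEY, t_timekey INTEGER)")
conn.execute("CREATE TABLE orders (o_orderkey INTEGER PRIMARY KEY, o_orderdate TEXT)")

def insert_order(conn, orderkey, orderdate):
    """Insert an order, first ensuring its date exists in the TIME parent
    table (DB2 does not enforce this relationship; the application must)."""
    exists = conn.execute("SELECT 1 FROM time WHERE t_alpha = ?", (orderdate,)).fetchone()
    if exists is None:
        # Derive the other TIME columns; here just a numeric key from the date.
        timekey = int(orderdate.replace("-", ""))
        conn.execute("INSERT INTO time VALUES (?, ?)", (orderdate, timekey))
    conn.execute("INSERT INTO orders VALUES (?, ?)", (orderkey, orderdate))

insert_order(conn, 1, "2003-10-23")
insert_order(conn, 2, "2003-10-23")  # TIME parent row already present, not duplicated
```

Because this constraint lives only in application code, an archive specification must be told about it manually, which is exactly why these relationships are added to the Grouper definitions by hand.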
Chapter 3. Installing and customizing DB2 Data Archive Expert

In this chapter we discuss the installation and customization steps necessary in order to enable DB2 Data Archive Expert. In our environment, the SMP/E installation had been completed prior to beginning the work. This chapter contains the following:

Pre-installation activities
Installation steps: Details
Execute installation verification
3.1 Pre-installation activities

In this section we review the installation materials to familiarize ourselves with any issues or areas of special concern. You might not need to perform these steps as part of your installation; your system programmer might do it for you.

Program directory

Normally, the program directory, DB2 Data Archive Expert Program Directory, GI, is shipped as part of the installation package, along with the installation tapes. But you can always download the most current version of the program directory from the IBM Data Management Tools Web site located at the following URL:

Included in the program directory is information that helps you identify the second document that needs to be reviewed for installation planning: the Preventive Service Planning document. This document is also sometimes referred to as the PSP bucket. In the program directory there is a section called Preventive Service Planning, which tells you how to identify the PSP.

Preventive service planning

Using the UPGRADE and the SUBSET IDs documented in the program directory, you can download a current copy of the preventive service planning (PSP) document using RETAIN, or by contacting the IBM Support Center and requesting a copy. For access to RETAIN, visit the URL:

Information normally found in the PSP includes items such as installation notes, changes to documentation, general information, and service recommendations, including any high-impact or pervasive APARs. As the information contained in the PSP is kept very current, it should be reviewed at the beginning of the installation process, as well as prior to migrating DB2 Data Archive Expert into your production environment.
Our PSP bucket was empty at the time we installed DB2 Data Archive Expert, so we had no additional action items documented through PSP, but we expect that some maintenance will soon become available with the increasing usage of the product.

Review PTF cover letters

As part of IBM's standard software delivery mechanism, product fixes are packaged into a series of elements known as Program Temporary Fixes, or PTFs. Each PTF includes a cover letter, printed during the SMP/E receive process, which outlines any action that is required in order to complete the application of the particular PTF. In our case, our system programmer had already reviewed all of the PTF cover letters and sent us a list of necessary action items. You should review your PTF cover letters and perform any documented action items.

Review and verify product prerequisites

We found product prerequisites documented in both the program directory and the DB2 Data Archive Expert User's Guide and Reference, SC. We used the list in the guide as our starting point. We also found that we needed to familiarize ourselves with a
couple of these prerequisite products in order to determine if we had the necessary product levels installed.

Note: Refer to the DB2 Data Archive Expert User's Guide and Reference, SC, for additional details regarding specific installation steps.

We now outline our major findings in setting up the product prerequisites.

IBM Developer Kit for OS/390, Java 2 Technology Edition, SDK

In order to verify the version of the Java JVM, we need to use the UNIX System Services shell from TSO to find the version of Java installed in our z/OS environment. We first start a shell session by executing the TSO OMVS command from TSO Option 6. See Figure 3-1.

IBM Licensed Material - Property of IBM
5694-A01 (C) Copyright IBM Corp. 1993, 2001
(C) Copyright Mortice Kern Systems, Inc., 1985,
(C) Copyright Software Development Group, University of Waterloo, All Rights Reserved.
U.S. Government users - RESTRICTED RIGHTS - Use, Duplication, or Disclosure restricted by GSA-ADP schedule contract with IBM Corp.
IBM is a registered trademark of the IBM Corp.
/usr/lpp/skrb/bin:/usr/lpp/dce/bin:/usr/lpp/java/ibm/j1.3/bin:/bin:.
SC63:/>
===>
INPUT
ESC= 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr 8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve

Figure 3-1 OMVS Home Directory for user PAOLOR2

From the shell screen, enter the command java -version and press Enter. See Figure 3-2.
IBM Licensed Material - Property of IBM
5694-A01 (C) Copyright IBM Corp. 1993, 2001
(C) Copyright Mortice Kern Systems, Inc., 1985,
(C) Copyright Software Development Group, University of Waterloo, All Rights Reserved.
U.S. Government users - RESTRICTED RIGHTS - Use, Duplication, or Disclosure restricted by GSA-ADP schedule contract with IBM Corp.
IBM is a registered trademark of the IBM Corp.
/usr/lpp/skrb/bin:/usr/lpp/dce/bin:/usr/lpp/java/ibm/j1.3/bin:/bin:.
SC63:/>java -version
java version "1.3.1"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1)
Classic VM (build 1.3.1, J2RE IBM OS/390 Persistent Reusable VM build cm13 1s (JIT enabled: jitc))
SC63:/>
===>
INPUT
ESC= 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr 8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve

Figure 3-2 OMVS result of java -version

In our environment, we were initially running Java 2 version build. We discovered during some initial product testing that we needed to regress our Java environment in order to fix a couple of issues that were caused by running Data Archive Expert Version 1 at PTF level UQ. During our project, for the purpose of this redbook, we then used the Java 2 version build level shown in Figure 3-2. These issues were related to Java 2 and the way event checking is performed in certain classes. The problems are documented by APARs PQ79809 and PQ79946, and are resolved with a service release of the JVM. So, if you are running a recent level of Java, make sure that the build level is at that level or later.

Workload Manager application environment

We next needed to verify that Workload Manager (WLM) is active and in goal mode. We assume that WLM has already been installed. Since Data Archive Expert uses Java stored procedures, it is good practice to allocate a dedicated WLM environment for their execution, with a NUMTCB not exceeding 8.
See DB2 for z/OS Stored Procedures: Through the CALL and Beyond, SG , for general information on the Java environment and stored procedures. From SDSF, we executed the command /D WLM,SYSTEMS. See Figure 3-3.
49 Display Filter View Print Options Help SDSF OPERLOG DATE 10/23/ WTOR COMMAND ISSUED COMMAND INPUT ===> /D WLM,SYSTEMS SCROLL ===> CSR RESPONSE=SC63 IWM025I WLM DISPLAY 304 ACTIVE WORKLOAD MANAGEMENT SERVICE POLICY NAME: WLMPOL ACTIVATED: 2003/10/22 AT: 00:41:49 BY: BART FROM: SC63 DESCRIPTION: Sample WLM Service Policy RELATED SERVICE DEFINITION NAME: Sampdef INSTALLED: 2003/10/22 AT: 00:40:38 BY: BART FROM: SC63 WLM VERSION LEVEL: LEVEL013 WLM FUNCTIONALITY LEVEL: LEVEL011 WLM CDS FORMAT LEVEL: FORMAT 3 STRUCTURE SYSZWLM_WORKUNIT STATUS: CONNECTED STRUCTURE SYSZWLM_6A3A2084 STATUS: CONNECTED *SYSNAME* *MODE* *POLICY* *WORKLOAD MANAGEMENT STATUS* SC63 GOAL WLMPOL ACTIVE SC64 GOAL WLMPOL ACTIVE SC65 GOAL WLMPOL ACTIVE Figure 3-3 WLM Display to verify GOAL mode In reviewing the response from the D WLM command, we can see that our active WLM policy is named WLMPOL, and that it is in goal mode. For more information on WLM policy definition and implementation, refer to z/os V1R4.0 MVS Planning Workload Management, SA DB2 for OS/390 Java and JDBC type 2 driver support To verify that PQ69681 was installed on our driving system, we used the cross zone display function of the SMP/E dialog. In addition, we verified that the DB2 DSNJDBC plan was installed by using the DB2 Administration Tool Version 4.2, 5655-I23. This tool provided us with a mechanism to verify that current plans and packages were bound for the JDBC support. The example in Figure 3-4 shows our version of DSNJDBC. DB2 Admin DB2G Package List Row 1 of 4 Command ===> Scroll ===> PAGE Valid line commands are: K - Local packages I - Interpretation S PL Name Seq No Location Collection Name Timestamp * * * * * * DSNJDBC 1 DSNJDBC DSNJDBC DSNJDBC 2 DSNJDBC DSNJDBC DSNJDBC 3 DSNJDBC DSNJDBC DSNJDBC 4 DSNJDBC DSNJDBC ******************************* END OF DB2 DATA ******************************* Figure 3-4 DSNJDBC Package List Display Chapter 3. Installing and customizing DB2 Data Archive Expert 23
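As an alternative to the Admin Tool display, the same check can be made with a direct catalog query from SPUFI or any dynamic SQL interface. The following is a sketch; the collection ID DSNJDBC is the default shown in Figure 3-4, and everything else is standard DB2 V7 catalog access:

```
-- List the packages bound for the JDBC driver support.
-- One row per DSNJDBC package (four in our environment) indicates
-- that the JDBC bind step has been performed.
SELECT NAME, COLLID, TIMESTAMP, VALID, OPERATIVE
  FROM SYSIBM.SYSPACKAGE
 WHERE COLLID = 'DSNJDBC'
 ORDER BY NAME;
```

If the query returns no rows, the DSNJDBC bind job has not yet been run against this subsystem.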
REXX language support for DB2 V7
REXX language support for DB2 V7 is ordered separately from the DB2 V7 base product. The program number for DB2 REXX language support is 5675-DB2, ordered under its own feature code. For more information on how to order non-priced features of DB2, please refer to the DB2 UDB for OS/390 and z/OS V7 Software Announcement Letter in the USA. While it is a non-priced feature of DB2 V7, when ordered you are sent a separate product tape, and you need to install this feature in order to enable REXX language support. Installation instructions for the DB2 REXX language support feature can be found in the DB2 REXX Language Support Program Directory, GI . Additional installation information can be found in the DB2 UDB for OS/390 and z/OS V7 Installation Guide, SC . In our environment, the DB2 REXX language support was already installed; we again used the DB2 Administration Tool to verify this. Refer to Figure 3-5.

Note: With DB2 for z/OS Version 8, the REXX feature becomes part of the base product.
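As with the JDBC packages, the bind status of the REXX language support can be checked with a direct catalog query rather than through the Admin Tool panels. This sketch assumes the default collection ID DSNREXX shown in Figure 3-5:

```
-- List the DSNREXX packages and the isolation level each was bound with.
SELECT NAME, ISOLATION, VALID, OPERATIVE
  FROM SYSIBM.SYSPACKAGE
 WHERE COLLID = 'DSNREXX'
 ORDER BY NAME;
```

In our environment, five packages (DSNREXX plus one per isolation level) indicated a complete installation.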
DB2 Admin DB2G Packages Row 4 of 25 Command ===> Scroll ===> CSR Valid commands are: BINDALL REBINDALL FREEALL VERSIONS GRANT Valid line commands are: DP - Depend A - Auth T - Tables V - Views X - Indexes S - Table spaces Y - Synonyms RB - Rebind F - Free B - Bind BC - Bind Copy GR - Grant EN -Enab/disab con PL - Package lists P - Local plans LP - List PLAN_TABLE I - Interpretation SQ - SQL in package VE - Versions V I V O Quali- R E D S Collection Name Owner Bind Timestamp D S A P fier L X R * * * * * * * * * * * * *E DSNREXX DSNREXX BART B S Y Y BART N DSNREXUR DSNREXX BART B U Y Y BART N DSNREXCS DSNREXX BART B S Y Y BART N DSNREXRS DSNREXX BART B T Y Y BART N DSNREXRR DSNREXX BART B R Y Y BART N ******************************* END OF DB2 DATA *******************************

Figure 3-5 DSNREXX Package List Display

DB2 Utilities and DSNUTILS stored procedure support
File archives are unloads of source or retrieve target tables performed using the DB2 V7 Unload utility. File retrieves are loads into source or retrieve tables using the DB2 V7 Load utility. Therefore, you need to install the DB2 V7 Utilities Suite, program number 5697-E98. In addition, the DSNUTILS stored procedure is used to execute DB2 V7 utilities through a stored procedure call. Information that describes how to install both DSNUTILS and the DB2 V7 Utilities Suite can be found in the DB2 UDB for OS/390 and z/OS V7 Installation Guide, SC . In our environment, both of these products were already installed. For more information on the DB2 V7 utilities and DB2-supplied stored procedures, refer respectively to DB2 for OS/390 and z/OS Version 7 Using the Utilities Suite, SG , and DB2 for z/OS Stored Procedures: Through the CALL and Beyond, SG .

DB2 administration clients package
During some of our later testing we discovered that the only way to allow file archive specifications to write to tape devices was to use the Template support feature of Data Archive Expert.
In order to take advantage of this, the Template control tables used by DB2
Control Center need to be available. The creation of these tables is described in the DB2 UDB for z/OS and OS/390 Version 7 Installation Guide, GC . By default, these tables are created with the standard names DSNACC.UTLIST and DSNACC.UTTEMPLATE. You can look in your DB2 catalog to see whether these objects are defined. You also need a mechanism to create and modify the contents of the UTTEMPLATE table. For this, you can use either the DB2 Control Center or the DB2 Administration Tool. The DB2 Control Center is part of the DB2 Management Clients package feature of DB2. Similar to the DB2 REXX language support described previously, the DB2 Management Clients package is a non-priced, separately ordered feature of DB2; the program number is 5675-DB2, ordered under its own feature code. For more information on how to order non-priced features of DB2, please refer to the DB2 UDB for OS/390 and z/OS V7 Software Announcement Letter in the USA. When ordered, this non-priced feature of DB2 V7 is sent as a separate product tape, and it needs to be installed in order to enable the DB2 Control Center.

DB2 Administration Tool 4.2 is an ISPF application that uses dynamic SQL to access DB2 catalog tables. Using DB2 Admin can greatly increase the productivity of the entire DB2 staff (database administrators, system administrators, and application developers). DB2 Admin is interactive, intuitive, easy to use, and fast, and its function is comprehensive. See the redbook DB2 for z/OS Tools for Database Administration and Change Management, SG , for details.

Installation steps: Details
We conducted the product installation using a RACF ID named PAOLOR2. This ID was given DB2 SYSADM authority on our DB2 subsystem.

Step 1. Create the metadata database and table space
The default DDL is located in the hlq.SAHXSAMP dataset.
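The flavor of the DDL involved in this step can be sketched as follows. The STOGROUP, database, table space, buffer pool, volume, and catalog names here are our own illustrations, not product defaults; the shipped member is the real starting point:

```
-- Minimal sketch of the metadata database setup (names are illustrative).
CREATE STOGROUP DAESG
       VOLUMES (SBOX01)          -- volume serial is site-specific
       VCAT    DB2V710G;         -- integrated catalog alias is site-specific

CREATE DATABASE DAEDB
       STOGROUP   DAESG
       BUFFERPOOL BP1;           -- a pool segregated for the metadata objects

CREATE TABLESPACE DAETS IN DAEDB
       USING STOGROUP DAESG
       SEGSIZE 32                -- the metadata table space should be segmented
       BUFFERPOOL BP1;
```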
For future reference, our installation used the value AHX110 as the high-level qualifier for the product target datasets. We created a DB2 user-managed STOGROUP specifically for the installation of the Data Archive Expert metadata database. We also decided to segregate the metadata objects into their own buffer pool for performance reasons. The installation section of the DB2 Data Archive Expert User's Guide and Reference, SC , recommends that the metadata table space be segmented, as defined by the sample DDL delivered with the product.

Step 2. Create the metadata tables
We elected to use the defaults for database name, table space name, and schema name. The default DDL member AHXCRMET contained in AHX110.SAHXSAMP was executed unchanged. If you have different naming standards, you can change the database name, the table space names, and the schema name, as long as you keep the same schema name for all tables.

Step 3. Define the DB2 Data Archive Expert stored procedures
If you have not had much experience with installing and using Java stored procedures, we highly recommend that you review the instructions on how to set up and run interpreted Java stored procedures, which are described in the DB2 UDB for OS/390 and z/OS V7
52 Application Programming Guide and Reference for Java, SC In our environment, we elected to create a separate WLM environment for DB2 Data Archive Expert. Defining the WLM environment Using the WLM dialog, we first extracted a list of existing WLM application environments. We decided to clone an existing environment to create our new environment. See Figure 3-6. File Help Command ===> EsssssssssssssssssssssssssssssssssssssssssssssN e Choose Service Definition e e e e Select one of the following options. e e 2 1. Read saved definition e e 2. Extract definition from WLM e e couple data set e e 3. Create new definition e e e e e e e DsssssssssssssssssssssssssssssssssssssssssssssM ENTER to continue Figure 3-6 WLM dialog Next, from the WLM service definition dialog, chose option 9. See Figure 3-7. File Utilities Notes Options Help Functionality LEVEL011 Definition Menu WLM Appl LEVEL013 Command ===> Definition data set.. : none Definition name..... Sampdef (Required) Description Sample WLM Service Definition Select one of the following options Policies 2. Workloads 3. Resource Groups 4. Service Classes 5. Classification Groups 6. Classification Rules 7. Report Classes 8. Service Coefficients/Options 9. Application Environments 10. Scheduling Environments Figure 3-7 WLM Definition Menu 26 DB2 Data Archive Expert for z/os
Application environments, option 9, gives us a list of existing application environments to choose from. See Figure 3-8.

Application-Environment Notes Options Help Application Environment Selection List Row 1 to 15 of 2 Command ===> Action Codes: 1=Create, 2=Copy, 3=Modify, 4=Browse, 5=Print, 6=Delete, /=Menu Bar Action Application Environment Name Description BARTSRV J2EE Application Server for Bart BBOASR1 WAS IVP Server BBOASR2 J2EE Application Server CBINTFRP WAS Interface Repository Server CBNAMING WAS Naming Server CBSYSMGT WAS System Management Server 2_ DB2GJAVA DB2G java SPs DB2GMETA DB2G all metadat SPs DB2GUTIL DB2G stored proc Utility DB2GWLM1 DB2G SP environment numtcb=1 DB2GW100 DB2G DSNACCMO - needs 100 TCBs DB7YJCC DB7Y all JCC SPs DB7YJSPP db7y DSNTJSPP (SPB) DB7YREXX For Rexx SPs DSNTPSMP/DSNTBIND DB7YSPB2 DSNTPSMP proc for V1.15 update

Figure 3-8 WLM Application Selection List

We used DB2GJAVA as our example, and we named our new WLM environment DB2GDAEE. We also associated the WLM environment with a JCL proclib procedure that we called DB2GDAEP. The DB2 Data Archive Expert User's Guide and Reference, SC , recommends a NUMTCB setting of between five and seven; we opted for eight. The APPLENV parameter in the start parameters should match the WLM environment name. Also, make sure that you have specified the correct DB2 subsystem name on the start parameters. Finally, we defined no address space limit. Your environment probably has different requirements, and you need to coordinate the setup of the WLM environment with your z/OS system programmer. Once you have entered your parameters, press PF3 to save the WLM definitions. For an example see Figure 3-9.
54 Application-Environment Notes Options Help Copy an Application Environment Command ===> Application Environment... DB2GDAEE Required Description DB2G Data Extract Java STC Subsystem Type DB2 Procedure Name DB2GDAEP Start Parameters DB2SSN=DB2G,NUMTCB=8,APPLENV=DB2GDAEE Limit on starting server address spaces for a subsystem instance: 1 1. No limit 2. Single address space per system 3. Single address space per sysplex EsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssN e Press EXIT to save your changes or CANCEL to discard them. (IWMAM970) e DsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssM Figure 3-9 WLM dialog Once we have defined our WLM environment, we needed to activate it to WLM. From the WLM Definition Menu (see Figure 3-7), position your cursor under the field UTILITIES and press Enter. The pop-up panel shown in Figure 3-10 appears. Select option 3. Activate service policy, and press PF3. This activates your new WLM service policy. File Utilities Notes Options Help EsssssssssssssssssssssssssssssssssssssssssssssssssN Funct e 3 1. Install definition e Appl LEVEL013 Comma e 2. Extract definition e e 3. Activate service policy e Defin e 4. Allocate couple data set e e 5. Allocate couple data set using CDS values e Defin e 6. Validate definition e Descr DsssssssssssssssssssssssssssssssssssssssssssssssssM Select one of the following options Policies 2. Workloads 3. Resource Groups 4. Service Classes 5. Classification Groups 6. Classification Rules 7. Report Classes 8. Service Coefficients/Options 9. Application Environments 10. Scheduling Environments Figure 3-10 WLM Activate service policy pop-up 28 DB2 Data Archive Expert for z/os
55 Create the DAE stored procedures Member AHXCRSTP in the hlq.sahxsamp dataset contains the DDL necessary to create the DB2 stored procedures used by DAE. Edit this member and change all references from DSN7WLMR to the name of the WLM environment created in the previous step. In our case, we changed this value to DB2GDAEE. Important: Exercise care when editing this member. It contains case sensitive parameters, and should be edited with the CAPS OFF ISPF option. Once we have completed our changes, we can then use SPUFI to execute the SQL. The maintenance level of the product that we installed includes 21 stored procedures. We used DB2 Administration Tool 4.2 to verify that all of the necessary stored procedures were successfully created. Create the JCL started task procedure We modeled our JCL procedure from an existing WLM procedure. This JCL procedure needs to be placed in a PROCLIB PDS known to JES. In our environment, we placed our JCL procedure in SYS1.TEST1.PROCLIB. See Example 3-1. Example 3-1 Sample Data Archive JCL procedure //************************************************************* //* JCL FOR RUNNING THE WLM-ESTABLISHED STORED PROCEDURES //* ADDRESS SPACE //* RGN -- THE MVS REGION SIZE FOR THE ADDRESS SPACE. //* DB2SSN -- THE DB2 SUBSYSTEM NAME. //* NUMTCB -- THE NUMBER OF TCBS USED TO PROCESS //* END USER REQUESTS. //* APPLENV -- THE MVS WLM APPLICATION ENVIRONMENT //* SUPPORTED BY THIS JCL PROCEDURE. 
//*
//*************************************************************
//DB2GDAEP PROC APPLENV=DB2GDAEE,DB2SSN=DB2G,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9WLM,REGION=0M,TIME=NOLIMIT,
// PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB DD DISP=SHR,DSN=DB2G7.SDSNLOAD
// DD DISP=SHR,DSN=DB2G7.SDSNLOD2
//* DD DISP=SHR,DSN=CEE.SCEERUN
//* NEED UNAUTHORIZED DATASET
//JAVAENV DD DSN=AHX110.JAVAENV,DISP=SHR
//JSPDEBUG DD SYSOUT=*
//CEEDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*

Your installation may have elected to include one or more of the libraries referenced in the above example in the MVS link list (LNKLST). Libraries included in the LNKLST do not need to be specified on the STEPLIB. Make sure that you remember to include both the SDSNLOAD and SDSNLOD2 libraries. SDSNLOD2 is the library that contains the SQLJ and JDBC DLLs used by DB2. Also, note that CEE.SCEERUN contains the Language Environment (LE) dynamic runtimes. For better library access performance, the LE libraries should be placed in LLA with the FREEZE option, whether allocated via LNKLST or STEPLIB. Further improvements can be obtained by placing in LPA the eligible portion of SCEERUN as listed in LPALST. For details, see z/OS 1.4 Language Environment Customization, SA . We also included a DD statement for JSPDEBUG, which is used for diagnostic information
from Java stored procedures. Your environment may have been built with other library names; you should consult with your z/OS system programmer if your environment appears different. The JCL procedure should specify the named WLM environment created in the previous step; ours is named DB2GDAEE. We elected to set NUMTCB to match the number specified in our WLM definition; the WLM definition setting overrides the parameter specified in the JCL procedure. Also, we specified a region setting of 0M, which allows the address space to obtain the maximum amount of virtual storage, capped by the IEFUSI exit. Your installation standards may require a different specification.

Step 4. Java environment variable settings
This topic requires that you know a little about the JVM, as well as about Open Edition MVS and UNIX System Services. When our system programmer moved the SMP/E target libraries from the floor system to our test LPAR, he created the HFS directory structure for us that conformed to the existing HFS directory structure standards. Located in the hlq.SAHXBASE target library is a sample member, AHXISMKD, a sample job that invokes the second supplied sample REXX exec, AHXMKDIR, which is used to allocate HFS paths for the DB2 Data Archive Expert target libraries. We did not execute this job, but it appears to build the necessary HFS directories into the HFS root directory. The example shown in the DB2 Data Archive Expert User's Guide and Reference, SC , shows the coding of the variables specified in the JAVAENV member based on this installation into root. In our environment, our system programmer installed into a different high-level directory. The next several sections show how we used the Open Edition MVS shell to determine what directory to specify in our JAVAENV environment variables.

DB2_HOME environment setting
The setting for DB2_HOME is illustrated in Example 3-2.
Example 3-2 Sample Java environment variables shipped with Data Archive Expert
ENVAR("DB2_HOME=/usr/lpp/db2/db2710","JAVA_HOME=/usr/lpp/java/IBM/J1.3",
"CLASSPATH=/u/sysadm/someother.jar:/u/sysadm/ahxv110.jar",
"DB2SQLJPROPERTIES=/usr/lpp/db2/db2710/db2sqljjdbc.properties"),
MSGFILE(JSPDEBUG,,,,ENQ)

When we attempted to start up DB2 Data Archive Expert with the sample environment variable for DB2_HOME, we received the error shown in Figure 3-11.
57 AHXV Unexpected SQL Error Command ==> An unexpected SQL error was encountered while processing your request. DB2 system : DB2G Message: AHXTOOL: CALL AHXTOOLS.GETSETTINGS More: + SQLCODE: -471 SQLSTATE: SQL ERROR TOKENS: AHXTOOLS.GETSETTINGS SQL PROCEDURE DETECTING ERROR: DSNX9WCA SQL DIAGNOSTIC INFORMATION: 0,0,0,-1,0,0.00E Figure 3-11 Data Archive Expert DB2_Home error This was our clue that the USS directory path in our environment was different from the example outlined in the DB2 Data Archive Expert User s Guide and Reference, SC We then used the UNIX System Services ISPF shell dialog to discover the correct directory path. On our system, we invoked the shell dialog using the command OI; your environment may be different. Figure 3-12 shows the shell dialog. File Directory Special_file Tools File_systems Options Setup Help ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss UNIX System Services ISPF Shell Enter a pathname and do one of these: - Press Enter. - Select an action bar choice. - Specify an action code or command on the command line. Return to this panel to work with a different pathname. More: + /usr/lpp/db2 EUID=0 EsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssN e (C) Copyright IBM Corp., 1993, All rights reserved. e DsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssM Command ===> Figure 3-12 USS ISPF Shell dialog Chapter 3. Installing and customizing DB2 Data Archive Expert 31
We started out specifying the /usr/lpp/db2 path to see what additional directories were located in this path. Figure 3-13 shows the results from this search.

File Directory Special_file Commands Help sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss Directory List Select one or more files with / or action codes. If / is used also select an action from the action bar otherwise your default action will be used. Select with S to use your default action. Cursor select can also be used for quick navigation. See help for details. EUID=0 /Z04RB1/usr/lpp/db2/ Type Filename Row 1 of 5 _ Dir. _ Dir.. _ Syml db2g _ Syml db2710 _ Syml db2810 Command ===>

Figure 3-13 USS ISPF Directory List

In our environment, we defined symbolic references to the USS directory for DB2 that relate to the DB2 subsystem name. In our case, the symbolic reference named db2g points to the USS directory that contains the Java and JDBC support for DB2. Based on that discovery, we changed the environment variable that describes DB2_HOME, as shown in Example 3-3.

Example 3-3 Modified Java environment variables
ENVAR("DB2_HOME=/usr/lpp/db2/db2g",
"JAVA_HOME=/usr/lpp/java/IBM/J1.3_V030510A",
"TZ=EST05",
"CLASSPATH=/usr/lpp/ahxv1r1/ahxv110.jar"),
MSGFILE(JSPDEBUG,,,,ENQ)

JAVA_HOME environment variable
In a similar fashion, we used the USS ISPF shell dialog to determine the USS directory path for the Java JDK. This then brought us to a discovery regarding the level of Java JVM that was required for DB2 Data Archive Expert. Our system initially had the September JDK build level. During our initial testing, we discovered a problem that was introduced with that build level. We changed our JAVA_HOME environment variable to specify a directory path that pointed to the May JDK build level (V030510A, as shown in Example 3-3). This resolved the issue.

Important: Currently, the product at PTF level UQ80364 only works with the May JDK build level.
Track JDK APARs PQ79809 and PQ79946, which are currently scheduled to be included in the next JDK service refresh. Once these close, Data Archive Expert will be able to coexist with newer build levels of the JDK.

CLASSPATH environment variable
This variable describes the Java classes or jar files that are used by the Java stored procedures. Data Archive Expert ships with a jar file named ahxv110.jar. Again using the USS ISPF shell dialog, we confirmed the creation of a directory named ahxv1r1, and placed our
jar file in this HFS directory. The DB2 Data Archive Expert User's Guide and Reference, SC , contains an example that shows how you would concatenate the Data Archive Expert jar with an existing jar file. There is also a discussion of the LE limitation on the size of a JAVAENV file and how to use the _CEE_ENVFILE environment variable to get around the 255-byte limit. We found it useful to think of the CLASSPATH in terms of traditional JCL and PDSs: the directory path describes where the classes and jar file are located, and the jar file contains the interpreted Java classes; in traditional z/OS terms, it is where the programs are located.

Tip: By choosing to create our own WLM environment, we are also able to associate it with a JCL procedure that has our own unique JAVAENV parameters dataset. This keeps us from having to worry about the additional complexity of concatenating paths for multiple Java stored procedure-based applications in our CLASSPATH specification.

Additional environment variables
During our initial testing, we observed that the timestamps appended to the end of the archive table row, and the timestamps for the individual executions of archive and retrieve specifications, were showing up as GMT. Timestamps are extracted by the Java application, which does not directly apply the z/OS GMT offset for time requests. In order to have the proper GMT offset reflected, we included the Java environment variable TZ, coded in our case as TZ=EST05; the data center hosting our DB2 subsystem is located in the Eastern time zone.

Also, the JAVAENV example shown in the DB2 Data Archive Expert User's Guide and Reference, SC , contains a DB2SQLJPROPERTIES environment variable. In our case, we are running with uncustomized SQLJ/JDBC properties. Because of this, the DB2_HOME directory path includes the default properties file, and specification of the DB2SQLJPROPERTIES environment variable is unnecessary.
We also included a reference to MSGFILE, which points back to the JCL procedure definition and the DD name JSPDEBUG. This output DD is used by Java stored procedures as a destination for diagnostic information. For more information about the JVM environment variables discussed above, as well as for a more detailed description of the Java stored procedure environment, see the DB2 UDB for OS/390 and z/OS V7 Application Programming Guide and Reference for Java, SC .

Step 5. Insert default properties
Member AHXDEFPR, located in the hlq.SAHXSAMP installation library, contains an SQL statement that inserts the default Data Archive Expert properties values into the user properties table. We took all defaults except LOGLEVEL, where we chose option 1. This provides the most detailed level of logging for problem determination. We would expect that most users in an operational environment would choose the recommended default.

Step 6. Create temporary database
Using the DB2 Administration Tool, we verified that in our DB2 environment there was an existing temporary database. In our case we had one available; otherwise, you need to create one. Make sure that the table spaces are defined as segmented.
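Should you need to create a temporary database, the DDL is along the lines of this sketch. The names are our own illustrations, the buffer pool is a site choice, and in a data sharing group each member needs its own TEMP database:

```
-- Sketch: create a TEMP database with a segmented table space
-- (object names are illustrative, not product defaults).
CREATE DATABASE DAETEMP AS TEMP;

CREATE TABLESPACE DAETMP01 IN DAETEMP
       SEGSIZE 32                -- the table spaces must be segmented
       BUFFERPOOL BP1;
```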
3.2.7 Step 7. Grant the appropriate authorizations
Since we were all given user IDs with SYSADM authority, we did not run any of the GRANT statements provided in the SAHXSAMP library.

Step 8. Verify that DSNUTILS is installed
There are several ways to do this. In our case, we simply used the DB2 Control Center to demonstrate a proper DSNUTILS installation by successfully running a utility. The DB2 side can easily be checked by querying the catalog table SYSIBM.SYSROUTINES for the procedure name DSNUTILS; that row also shows the WLM application environment name in the WLM_ENVIRONMENT column. Or, you can confirm that the following steps have been performed:
1. The DSNUTILS stored procedure has been defined to DB2. We used the DB2 Administration Tool V4.2 to confirm this.
2. The WLM environment has been defined and enabled. We displayed the WLM application environment called DSNUTILS with the command D WLM,APPLENV=DSNUTILS.
3. There is a JCL procedure in a JES proclib that is referenced in the WLM application environment definition for DSNUTILS.
For more information about installing and using the DSNUTILS stored procedure, refer to the IBM DB2 Universal Database for OS/390 and z/OS Utility Guide and Reference Version 7, SC .

Step 9. Make DB2 Data Archive Expert available to users
In our ISPF environment, the product integration was fairly straightforward. The DB2 SDSNLOAD library was already included in our user logon procedure.

Tip: The AHXV11 REXX exec is located in hlq.SAHXCLST. This is not clearly documented in the DB2 Data Archive Expert User's Guide and Reference, SC .

We discovered that the library hlq.SAHXCLST was shipped as a variable blocked library. Some installations, including ours, require fixed blocked CLIST libraries. We created a copy of the SAHXCLST library and named it SAHXCLST.VBS. We then reblocked the SAHXCLST library so that it is fixed blocked.

Attention: SAHXCLST is shipped as a variable blocked CLIST library.
Your installation may require that you create a fixed blocked equivalent.

We modified the AHXV11 REXX member to specify the appropriate high-level qualifiers for the PLIB, MLIB, and CLIST libraries; in our case, AHX110. We then copied the modified REXX exec into a CLIST library that was also included in our logon procedure. At this point we were able to start Data Archive Expert from ISPF option 6 by typing AHXV11.

Optionally add DAE to the Administration Tool Launchpad
We skipped this step; we preferred launching the product directly.
Optionally install DB2 Grouper
In order to execute the IVP exactly as described in the DB2 Data Archive Expert User's Guide and Reference, SC , you need to install DB2 Grouper first. We elected to proceed with the IVP and defer the installation of Grouper. Refer to Chapter 4, Installing and customizing DB2 Grouper on page 37. When we elected to do this, we found that when we created our table archive specification, we needed to specify N for the Search for related Tables option in the pop-up shown in Figure 3-14.

AHXV Select Starting Point Table Row 1 of 19 Command ==> Scroll ===> CSR Archive specification: tst DB2 system..... : DB2G Line commands are: Esssssssssss Search for related Tables? ssssssssssssN S - Select table S* e e D - Deselect table e Command ==> e e e Cmd * Table name e Find related tables? ==> Y (Yes/No) e e e ACT e Starting point table: DEPT e D_IP e Creator : PAOLOR2 e D_LOC e Database name... : PAOLOR2D e D_USE e DB2 system.... : DB2G e S DEPT e e DEPTDEL e e DEPTDEL1 e e EACT DsssssssssssssssssssssssssssssssssssssssssssssssssssM

Figure 3-14 Data Archive Expert related tables panel

If we elected to specify Y in the search for related tables field, we received a warning message that Grouper was not installed. In any event, once we installed Grouper, we re-executed the IVP a second time.

3.3 Execute installation verification
With the exception of the deferred Grouper installation, we were able to execute the installation verification. In order to prevent inadvertent corruption of the DB2 sample tables used by the IVP, we first created a duplicate set of these objects using the DB2 Administration Tool 4.2. We then executed our IVP scenarios against our cloned versions of the sample tables. Refer to the IVP description in the DB2 Data Archive Expert User's Guide and Reference, SC ; it runs as documented. For the table archive specification test, we needed to modify the JCL sample member AHXIVPRN contained in the hlq.SAHXSAMP dataset.
Example 3-4 shows our modified version.

Example 3-4 JCL to invoke the IVP archive specification through batch
//TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXIVPAR DB2G,A,*,IVPNEWFILE,SYSTOOLS,
/*

This job, when submitted, executes the table archive IVP in batch. Additional information on batch processing considerations, including more details about the modifications to the IVP sample, can be found in 5.1, Batch processing considerations.
Chapter 4. Installing and customizing DB2 Grouper

In this chapter we discuss the installation and configuration steps necessary in order to install the server component of DB2 Grouper. We assume that in our environment the SMP/E installation has been completed prior to beginning the installation work. The optional Grouper Client installation is discussed in Chapter 6, Optionally defining DB2 Grouper Client on page 65. This chapter contains the following:
- Pre-installation activities
- Installation steps for host: Details
- Installation verification
4.1 Pre-installation activities
We review the Grouper Server installation materials to familiarize ourselves with any issues or areas of special concern. You might not need to perform these steps as part of your installation; your system programmer will probably do them for you.

Program Directory
Normally, the program directory is shipped as part of the installation package, along with the installation tapes. You can always download the most current version of the program directory from the IBM Data Management Tools Web site. At the time of our residency, the publication number for the DB2 Grouper Program Directory was GI ; yours may be different if a newer publication is made available after this redbook is published. Included in the program directory is information that helps you identify the second document that needs to be reviewed for installation planning: the Preventive Service Planning document. This document is also sometimes referred to as the PSP bucket. The program directory contains a section called Preventive Service Planning that tells you how to identify the PSP.

Preventive Service Planning
Using the UPGRADE and the SUBSET IDs documented in the program directory, you can download a current copy of the preventive service planning (PSP) information using RETAIN, or by contacting the IBM Support Center and requesting a copy. Information normally found in the PSP includes items such as installation notes, changes to documentation, general information, and service recommendations, including any high-impact or pervasive APARs. As the information contained in the PSP is kept very current, it should be reviewed at the beginning of the installation process, as well as prior to migrating DB2 Grouper into your production environment. Our PSP bucket was empty at the time we installed DB2 Grouper, so we had no additional action items documented through the PSP.
Example 4-1 shows the content of a typical PSP document.

Example 4-1   Preventive Service Planning sample

Processed: 11/08/03
--GLOBAL INFORMATION--
--INFORMATIVE TEXT--
***********************
*   SUBSET H2A5110    *
***********************
This subset contains installation information for
DB2 GROUPER COMPONENT Version 1, Release 1, Modification 0.
************************************************************************
*                   C H A N G E   S U M M A R Y                        *
************************************************************************
DATE LAST CHANGED                  SECTION
1. YY/MM/DD  INSTALLATION INFORMATION      NO ENTRIES
2. YY/MM/DD  DOCUMENTATION CHANGES         NO ENTRIES
3. YY/MM/DD  GENERAL INFORMATION           NO ENTRIES
4. YY/MM/DD  SERVICE RECOMMENDATIONS       NO ENTRIES
5. YY/MM/DD  CROSS PRODUCT DEPENDENCIES    NO ENTRIES

SERVICE RECOMMENDATION SUMMARY
DATE         APAR     PTF      VOLID  COMMENTS
1. YY/MM/DD  XXXXXXX  XXXXXXX  XXXX   XXX
************************************************************************
* SECTION 1.  I N S T A L L A T I O N   I N F O R M A T I O N          *
************************************************************************
This section contains changes to the product's Program Directory.
1. YY/MM/DD  NO ENTRIES
************************************************************************
* SECTION 2.  D O C U M E N T A T I O N   C H A N G E S                *
************************************************************************
This section outlines major errors in the product's published
documentation.
1. YY/MM/DD  NO ENTRIES
************************************************************************
* SECTION 3.  G E N E R A L   I N F O R M A T I O N                    *
************************************************************************
This section contains general information, that is SYSGEN hints/tips.
1. YY/MM/DD  NO ENTRIES
************************************************************************
* SECTION 4.  S E R V I C E   R E C O M M E N D A T I O N S            *
************************************************************************
1. YY/MM/DD  PROBLEM:
             USERS AFFECTED:
             RECOMMENDATION: INSTALL XXXXXXX ON VOLID XXXX
************************************************************************
* SECTION 5.  C R O S S   P R O D U C T   D E P E N D E N C I E S      *
************************************************************************
This section contains information that is dependent upon another product
other than this subset ID. It also contains information dealing with
migration and product coexistence.
1. YY/MM/DD  INTERDEPENDENT PRODUCT:
             PROBLEM:
             USERS AFFECTED:
             RECOMMENDATION: INSTALL XXXXXXX ON VOLID XXXX
************************************************************************
*       I N F O R M A T I O N A L / D O C U M E N T A T I O N          *
*                   APARS FOLLOW (IF ANY)                              *
************************************************************************
--PTF INCLUDE LIST--
--PTF EXCLUDE LIST--
--PE APAR LIST--
4.1.3 Review PTF cover letters

As part of IBM's standard software delivery mechanism, product fixes are packaged into a series of elements known as Program Temporary Fixes, or PTFs. Each PTF includes a cover letter printed during the SMP/E receive process, which outlines any action that is required in order to complete the application of the particular PTF. In our case, our system programmer had already reviewed all of the PTF cover letters and sent us a list of necessary action items. You should review your PTF cover letters and perform any documented action items.

4.1.4 Review and verify product prerequisites

The system requirements for running DB2 Grouper on z/OS are the same as those required for DB2 Data Archive Expert. Please refer to 3.1.4, "Review and verify product prerequisites" on page 20 for a description of the specific instructions for this step. In addition, you are required to have PTF UQ73301 applied to DB2 UDB for z/OS and OS/390 Version 7. We used the SMP/E ISPF display dialog to verify that this particular fix was applied to our environment.

4.2 Installation steps for host: Details

We performed the product installation of DB2 Grouper using a RACF ID named PAOLOR2, which had SYSADM authority on the target DB2 subsystem.

4.2.1 Step 1. Installing Grouper metadata for z/OS

We modified the sample DDL in SEGFSAMP member EGFDDLC and created a separate DB2 storage group for the Grouper objects.

Important: The DB2 Grouper product requires that the table qualifier be SYSTOOLS. Do not change this in the samplib member EGFDDLC.

Member EGFFISH contains the DDL to create the necessary views, and was executed without modification.

Tip: In looking at the order of instructions as described in the DB2 Grouper User's Guide, SC , it appears that you need to execute member EGFDDLD. Do not do this unless you really mean to, because this member contains DDL to DROP the Grouper metadata and views.
Create an SQL INSERT statement modified to include the USS directory where Grouper has been installed. Refer to Example 4-2 for a sample SQL statement.

Example 4-2   Sample SQL insert statement for Grouper metadata

insert into SYSTOOLS.EGF_PROPERTIES
  values ('EGFSERVERHOME', '/usr/lpp/egfv1r1')

The value for server-directory in the above referenced sample refers to the USS directory path that points to the Java jar file created by the SMP/E installation. We used the UNIX System Services ISPF shell to locate this. Section 3.2.4, "Step 4. Java environment variable settings" on page 30 describes how we used the USS shell for the DB2 Data Archive Expert installation. We installed the DB2 Grouper Java components into the USS directory path:
/usr/lpp/egfv1r1

4.2.2 Step 2. Defining Grouper Java stored procedures to DB2

Defining the stored procedure environment for Grouper was similar to the procedure we used to define the DB2 Data Archive Expert stored procedure. We refer to the process previously documented, but for additional information refer to the DB2 UDB for OS/390 and z/OS V7 Application Programming Guide and Reference for Java, SC .

Defining the WLM environment

We elected to create a separate WLM environment for Grouper. Using a process similar to the procedure documented in the previous chapter, "Defining the WLM environment" on page 26, we defined our WLM environment as shown in Figure 4-1.

Appl Environment Name . . DB2GGRPE
Description . . . . . . . DB2G Grouper Java STC
Subsystem type  . . . . . DB2
Procedure name  . . . . . DB2GGRPP
Start parameters  . . . . DB2SSN=DB2G,NUMTCB=8,APPLENV=DB2GGRPE
Limit on starting server address spaces for a subsystem instance: No limit

Figure 4-1   Grouper WLM environment definition

Create the DB2 stored procedures

Member EGFCPRC in the hlq.SEGFSAMP data set provides the Grouper stored procedure's SQL. We modified the statement to refer to our WLM application environment DB2GGRPE, and executed the member using SPUFI.

Important: This member contains case sensitive parameters. Exercise care when editing, and ensure that the ISPF CAPS OFF option is used.

At the time of our installation, this member created three SQL Java stored procedures. We verified the status of these procedures using the DB2 Administration Tool.

Create the JCL started task procedure

We created our Grouper JCL started task procedure using the one for DB2 Data Archive Expert as a sample. Remember that we chose to name the JCL procedure DB2GGRPP, as shown in the WLM APPLENV definitions previously. Refer to Example 4-3.
Example 4-3   Sample Grouper JCL procedure

//*************************************************************
//DB2GGRPP PROC APPLENV=DB2GGRPE,DB2SSN=DB2G,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9WLM,REGION=0M,TIME=NOLIMIT,
// PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=DB2G7.SDSNLOAD
// DD DISP=SHR,DSN=DB2G7.SDSNLOD2
// DD DISP=SHR,DSN=CBC.SCBCCMP
//JAVAENV DD DSN=EGF110.JAVAENV,DISP=SHR
//JSPDEBUG DD SYSOUT=*
//CEEDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*

4.2.3 Step 3. Java environment variable settings

We had created the HFS file for hlq.JAVAENV and connected it to the USS directory structure. In addition, once we determined the correct settings for the DB2_HOME and JAVA_HOME environment variables for DB2 Data Archive Expert, we used the same settings in the JAVAENV environment variables for DB2 Grouper. Review the procedure outlined in the preceding 3.2.4, "Step 4. Java environment variable settings" on page 30 for more details.

CLASSPATH environment variable

This variable describes Java classes or jar files that are used by Java stored procedures. DB2 Grouper ships with a jar file named egfsproc.jar. Again, using the USS ISPF shell dialog, we verified that a directory named EGFV1R1 existed, and we placed our jar file in this HFS directory.

Additional environment variables

During our initial testing of DB2 Data Archive Expert, we observed that timestamps were being extracted by the Java application without the z/OS GMT offset being applied directly to time requests. In order to have the proper GMT offset reflected, we included the Java environment variable TZ, coded in our case as TZ=EST05, because the data center hosting our DB2 subsystem is located in the Eastern time zone.

Also, the JAVAENV example shown in the DB2 Grouper User's Guide, SC , contains both a DB2SQLJPROPERTIES environment variable and a WORK_DIR environment variable. In our case, we are running with uncustomized SQLJ/JDBC properties.
Because of this, the DB2_HOME directory path includes the default properties file, and specification of the DB2SQLJPROPERTIES environment variable is unnecessary. We saw no reference to the need for the environment variable WORK_DIR and removed it. Refer to Example 4-4 to see how our JAVAENV variables were coded.

Example 4-4   DB2 Grouper Java environment variables

ENVAR("DB2_HOME=/usr/lpp/db2/db2g",
"JAVA_HOME=/usr/lpp/java/IBM/J1.3_V030510A",
"TZ=EST05",
"CLASSPATH=/usr/lpp/egfv1r1/egfsproc.jar"),
MSGFILE(JSPDEBUG,,,,ENQ)
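The TZ value in Example 4-4 follows the POSIX convention used by Language Environment: a zone name followed by the offset west of GMT, in hours. As a quick illustration of how that string resolves to an offset, run on an ordinary POSIX system rather than z/OS (the values are the ones from our JAVAENV member):

```python
import os
import time

# POSIX-style TZ string as coded in our JAVAENV member (Example 4-4):
# zone name "EST" followed by the offset west of GMT, in hours.
os.environ["TZ"] = "EST05"
time.tzset()  # re-read TZ (available on POSIX systems only)

# time.timezone reports the offset west of UTC in seconds,
# so EST05 resolves to 5 hours * 3600 seconds.
print(time.timezone)  # 18000
```

Note that EST05 specifies a fixed offset with no daylight saving rule, which matches the simple GMT-offset correction we were after.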
4.2.4 Bind packages and plans

When we looked in our system catalog, there were many packages bound to the DSNJDBC collection. We were uncomfortable with the FREE statement that was part of the EGFJBND member used to bind the JDBC DBRMs, which is contained in hlq.SEGFSAMP. What we decided to do was to run the EGFGBND job, used to bind the Grouper DBRMs, without modifying the bind control statements. We then modified the EGFJBND member to remove the FREE statements, and included the collection named DSNJDBC. Example 4-5 shows our modified EGFJBND job.

Example 4-5   Bind JCL for SQLJ plan

//BINDJDBC EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DISP=SHR,DSN=DB2G7.SDSNLOAD
//*BRMLIB DD DISP=SHR,DSN=YYYY.YYYYYYY
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DB2G)
BIND PLAN(SQLJPLN1) -
PKLIST(DSNJDBC.*)
END

We determined earlier, during the DB2 Data Archive Expert installation, that plan DSNREXX had already been bound.

4.3 Installation verification

Refer to the description of the installation verification procedure for DB2 Data Archive Expert discussed in 3.3, "Execute installation verification" on page 35. Again, we execute the IVP table archive specification as described in the DB2 Data Archive Expert User's Guide and Reference, SC . Once we specify a starting point table, we now respond with a Y to the search for related tables pop-up. We are returned a list of associated tables; this validates that DB2 Grouper is functioning.
Chapter 5. Stored procedures and batch execution

In this chapter we discuss the preparation steps necessary to run DB2 Data Archive Expert archive and retrieve specifications in batch. Modifications to the sample REXX stored procedure invocations are discussed, as well as the configuration steps necessary to use DB2 utility templates.

This chapter contains the following:
- Batch processing considerations
- Example batch jobs
5.1 Batch processing considerations

In 3.3, "Execute installation verification" on page 35, where we discussed the IVP, we introduced the batch invocation of an archive specification. We now discuss this topic in more detail, showing examples of batch programs invoking the stored procedures provided by the tool.

5.1.1 Preliminary topics

DB2 Data Archive Expert provides a sample REXX program and corresponding execution JCL in the hlq.SAHXSAMP data set. The sample provided is coded to execute an existing table archive specification in batch. With the modifications we described in 3.3, "Execute installation verification" on page 35, you have a working example that allows you to archive from a source table to a table archive. Refer to Example 3-4 on page 35 for review.

You can write your own program to call the DB2 Data Archive Expert stored procedures in REXX, C, or COBOL. In our examples, we take the existing REXX sample and create a modified version for each type of archive and retrieval specification. We also show an example of how to run a file archive in batch, and specify the use of tape by the use of the DB2 Control Center 390 template tables, described earlier in "DB2 administration clients package" on page .

5.1.2 Data Archive Expert stored procedures

Table 5-1 summarizes Data Archive Expert's stored procedures and their purpose.
Table 5-1   Stored procedure name and usage

Stored procedure name        Usage
AHXTOOLS.ARCHIVEEXECSP       Executes table archive specification
AHXTOOLS.RETRIEVEEXECSP      Executes table retrieve specification
AHXTOOLS.OFFLINEARCHSP       Executes file archive specification
AHXTOOLS.OFFLINERETEXECSP    Executes file retrieve specification
AHXTOOLS.OFFLINEONLARCSP     Executes table archive to file archive specification

The input parameter list for each of the supplied stored procedures is documented in the DB2 Data Archive Expert User's Guide and Reference, SC . Using the sample REXX exec and associated JCL contained in hlq.SAHXSAMP members AHXIVPAR and AHXIVPRN, we have built working batch API examples for each type of stored procedure invocation.

Tip: In this section we modify several members of the target library SAHXSAMP. To keep from losing these modifications whenever maintenance is applied to DB2 Data Archive Expert, we recommend that you create a copy of the hlq.SAHXSAMP library, and make the supplied REXX and JCL modifications in your own version of SAHXSAMP.

5.2 Example batch jobs

We now discuss several examples of archive and retrieve specifications invoked through batch REXX procedures.
5.2.1 Table archive specification

The sample REXX for this type of table archive is in member AHXIVPAR. We were able to use the REXX as coded in the samplib without any modification for the IVP; however, we found that we needed to code some additional modifications as described below. The parameter list for AHXTOOLS.ARCHIVEEXECSP is shown in Table 5-2.

Table 5-2   ARCHIVEEXECSP parameter list

Positional parameter    Parameter description
Parm 1                  SSID of DB2 subsystem to connect to
Parm 2                  Action type: A - run specification, C - complete
Parm 3                  Version to complete if action is C, otherwise *
Parm 4                  Specification name
Parm 5                  Schema name of Data Archive Expert metadata
Parm 6                  Row filter

Having created a table archive specification named batcharchivetest, we then modified the executing JCL provided in the AHXIVPRN member in SAHXSAMP as shown in Example 5-1.

Example 5-1   JCL to exec ARCHIVEEXECSP

//TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXIVPAR DB2G,A,*,batcharchivetest,SYSTOOLS,ADMRDEPT='E01'
/*
//

Notice that we coded the archive specification name in lower case, as DB2 Data Archive Expert allows the creation of mixed case specification names.

Tip: If you elect to create mixed case specification names, remember to edit the executing JCL for the API stored procedure invocation with the ISPF edit property CAPS OFF.

In the DB2 Data Archive Expert User's Guide and Reference, SC , the ROWFILTER parameter is described as overriding the previously defined row filter. It also states that if ROWFILTER is null, then DB2 Data Archive Expert applies the row filter used by the previous execution of the archive specification.
During our experimentation, we discovered that when we coded the REXX parameter with a null, the stored procedure interpreted that as a blank row filter, and when applying the archive specification would materialize a result as if there were no row filter within the archive specification definition. We also decided to change the way the sample REXX handled error codes (it always generated an MVS step return code of 0, even with an unsuccessful invocation of the stored procedure). We modified the sample REXX as follows:

1. We added the declaration shown in Example 5-2 at the front of the sample REXX program.
Example 5-2   REXX modification 1

indret = -1 /* null indicator for output msg */
/***************************REDBOOK MODIFICATION***********************/
nullid = -1 /* null indicator for row filter */
/***************************REDBOOK MODIFICATION ENDS******************/

This same modification will be made available as a sample shipped with the product.

2. We then tested for the length of the ROWFILTER parameter, and made a different format of the call based on the parameter length. A length of zero indicates no ROWFILTER parameter, and we issued the stored procedure call with the nulled parameter. If we encountered a non-zero length, we issued a second format of the call, which passes the ROWFILTER parameter as coded. See Example 5-3.

Example 5-3   REXX modification 2

/***************************REDBOOK MODIFICATION***********************/
if length(where_clause) = 0 then
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":action,",
    ":versions,",
    ":spec_name,",
    ":where_clause :nullid,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
else
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":action,",
    ":versions,",
    ":spec_name,",
    ":where_clause,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
/***************************REDBOOK MODIFICATION ENDS******************/

3. Finally, we also modified the exit to include the variable return_code whenever we detect an error. This modification is placed after the DSNREXX environment is removed. See Example 5-4.

Example 5-4   REXX modification 3

/**********************REDBOOK MODIFICATION****************************/
exit return_code
/**********************REDBOOK MODIFICATION ENDS***********************/

The complete REXX example can be seen in Appendix A.1, "ARCHIVEEXECSP table archive REXX" on page 292. We then run the specification in batch, and the execution generates the output shown in Example 5-5.
Example 5-5   Batch archive specification output

READY
%AHXIVPAR DB2G,A,*,batcharchivetest,SYSTOOLS,
*** Begin AHXIVPAR exec 11/12/03 16:44:49 ***
Input parameters to execution procedure AHXTOOLS.ARCHIVEEXECSP will be:
- User id = PAOLOR2
- SUBSYSTEM = DB2G
- Action = A
- Versions = *
- Spec name = batcharchivetest
- Row filter =
- Metadata schema = SYSTOOLS
length of whereclause = 0
Archive execution ran successfully with return code 0
Archive run statistics row 1 follows:
- Spec id =
- Spec status = FINISHED
- Spec state = PENDING
- Spec version = 17
- Version status = FINISHED
- Version state = PENDING
- Executed by = PAOLOR2
- Executed timestamp =
- Number of rows deleted for this version = 0
- Number of rows inserted for this version = 7
- Source table name = DEPTDEL
- Source table creator = PAOLOR2
- Target table name = AHXA_
- Target table creator = ARCHIVED
- Number of rows deleted for this target = 0
- Number of rows inserted for this target = 7
- Spec last updated by = PAOLOR2
- Spec last update timestamp =
*** End AHXIVPAR 11/12/03 16:44:57 ***
*** Processing time is seconds ***
READY
END

Reviewing the archive specification run history using the Data Archive Expert ISPF dialog also confirms the successful execution of the batch archive specification.

5.2.2 File archive specification

There is no sample REXX or JCL supplied for batch execution of file archive specifications. We used the modified version of the table archive REXX described above, and modified it according to the input parameter list defined in the DB2 Data Archive Expert User's Guide and Reference, SC . The description of the parameters for AHXTOOLS.OFFLINEARCHSP is summarized in Table 5-3.

Table 5-3   OFFLINEARCHSP parameter list

Positional parameter    Parameter description
Parm 1                  Subsystem identifier
Parm 2                  File archive specification name
Parm 3                  Metadata schema
Parm 4                  ROWFILTER

Taking the modified AHXIVPAR member from hlq.SAHXSAMP described above, we created a second member in SAHXSAMP and called it AHXARFIL. In addition to the changes which we made in the AHXIVPAR member to support a null ROWFILTER and the return codes, you need to make these additional changes:

1. The procedure name being invoked needs to be changed from AHXTOOLS.ARCHIVEEXECSP to AHXTOOLS.OFFLINEARCHSP. See Example 5-6.

Example 5-6   OFFLINEARCHSP REXX modification 1

/*********************REDBOOK MODIFICATION*****************************/
proc_name = "AHXTOOLS.OFFLINEARCHSP" /* execution stored proc name */
/*********************REDBOOK MODIFICATION ENDS************************/

2. Since we are calling a different stored procedure, we need to modify the calling parameter list to conform to that documented in Table 5-3. In our modified REXX, our parameter list is described as shown in Example 5-7.

Example 5-7   OFFLINEARCHSP REXX modification 2

if length(where_clause) = 0 then
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":spec_name,",
    ":where_clause :nullid,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
else
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":spec_name,",
    ":where_clause,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"

Refer to Appendix A.2, "OFFLINEARCHSP file archive REXX" on page 300 for a complete copy of this REXX routine. Since we have created this new REXX routine, we also need some JCL to execute it. Again, starting with the sample AHXIVPRN member in SAHXSAMP, we modified it as shown in Example 5-8.

Example 5-8   Sample JCL to run AHXARFIL REXX

//TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXARFIL DB2G,IVPNEWFILE,SYSTOOLS,
/*
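The SYSTSIN line in Example 5-8 uses the same comma-separated positional convention as the table archive example; note the trailing comma, which passes an empty ROWFILTER. A small off-platform illustration of how such a command line is assembled (the helper function is hypothetical, shown only to make the convention explicit; it is not part of the product):

```python
def build_systsin_command(exec_name, *parms):
    # TSO batch invocation: "%" plus the exec name, followed by the
    # positional parameters joined by commas. Passing an empty string
    # as the last parameter produces the trailing comma used to
    # indicate a null/empty ROWFILTER.
    return "%{} {}".format(exec_name, ",".join(parms))

cmd = build_systsin_command("AHXARFIL", "DB2G", "IVPNEWFILE",
                            "SYSTOOLS", "")
print(cmd)  # %AHXARFIL DB2G,IVPNEWFILE,SYSTOOLS,
```

As discussed above, the REXX samples distinguish a zero-length row filter (call with the null indicator) from a coded one, so the trailing comma and a genuine WHERE clause take different paths through the modified exec.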
File archive in batch using a template

As mentioned earlier, we discovered that the best mechanism to allow flexibility in file archive output data set specification is to use DB2 V7 utility TEMPLATE statements. When you elect to use TEMPLATE, the DB2 Data Archive Expert file archive specification needs to reflect the template name. We now show how to create and manage the TEMPLATE specification, how to build the file archive specification to use the TEMPLATE, and a couple of considerations regarding TEMPLATE usage.

Building and managing a template using DB2 Administration Tool

DB2 Administration Tool 4.2 is an ISPF application that uses dynamic SQL to access DB2 catalog tables. DB2 Admin can greatly increase the productivity of the entire DB2 staff. While it is also possible to create and maintain the TEMPLATE definitions using the DB2 Control Center that is part of the no-charge DB2 Administrative Clients feature of DB2 UDB for z/OS and OS/390 V7, in our opinion the DB2 Administration Tool 4.2 facility provides better validation of the TEMPLATE statement symbolic variables. Reference the following URL for more information:

We start with the DB2 Administration Tool main menu, and select option 5, Utility generation using LISTDEFs and TEMPLATEs. See Figure 5-1.
DB2 Admin ------------------ DB2 Administration Menu ------------------
Option ===>

  1  - DB2 system catalog                       DB2 System: DB2G
  2  - Execute SQL statements                   DB2 SQL ID: PAOLOR2
  3  - DB2 performance queries                  Userid    : PAOLOR2
  4  - Change current SQL ID                    DB2 Rel   :
  5  - Utility generation using LISTDEFs and TEMPLATEs
  P  - Change DB2 Admin parameters
  DD - Distributed DB2 systems
  E  - Explain
  Z  - DB2 system administration
  SM - Space management functions
  W  - Manage work statement lists
  CC - DB2 catalog copy version maintenance
                                                          More:     +
Interface to other DB2 products and offerings:
  I  - DB2I DB2 Interactive
  C  - DB2 Object Comparison Tool

Figure 5-1   DB2 Administration Tool Menu

From the Utility LISTDEF and TEMPLATE panel, you need to validate the specification of the TEMPLATE control table owner and name. You can elect to choose your own creator and name, but in our environment the default object names are used. See Figure 5-2.
78 DB2 Admin DB2G Utility LISTDEFs and TEMPLATEs :27 Option ===> t L - Manage LISTDEFs T - Manage TEMPLATEs TU - Specify TEMPLATE usage DB2 System: DB2G DB2 SQL ID: PAOLOR2 CL - Create LISTDEF control table: Table owner ===> PAOLOR3 Table name ===> UTLIST CT - Create TEMPLATE control table: Table owner ===> DSNACC Table name ===> UTTEMPLATE Figure 5-2 DB2 Administration Tool Utility Listdef Menu Tip: For the DB2 Administration Tool user, use the default names for the TEMPLATE and LISTDEF control tables. This allows you to share these definitions between DB2 Administration Tool and the DB2 Control Center. In the TEMPLATEs panel, we see all of the existing templates that have been defined in this DB2. We are going to build a new one, so in the panel shown in Figure 5-3, specify a line command of a. DB2 Admin DB2G TEMPLATEs in DSNACC.UTTEMPLATE Row 1 of 10 Command ===> Scroll ===> CSR Valid line commands are: A - Add E - Edit D - Delete Sel Name Creator Remarks * * * CCCOPY BART Template for primary or backup image copy dat CCDISCRD BART Template for optional data set of discarded r CCFILTER BART Template for optional filter data set used by CCPUNCH BART Template for data set used to receive the LOA CCSORTIN BART Template for temporary work data set for sort CCSRTOUT BART Template for temporary work data set for sort CCUNLOAD BART Template for unload data set used by REORG IN DAETAPE PAOLOR2 a DAETEMPL PAOLOR2 Template for File Archives NEWCCTML PAOLOR2 control center created template ******************************* END OF DB2 DATA ******************************* Figure 5-3 DB2 Administration Tool template list panel This gives us the Utility Template panel where we can enter various options for the template type we are interested in building. In our example, we specified that we wanted to use tape. We called our template DAETAPE2, in our environment the device esoteric for tape is ATL2, and we specified a RETPD of 31 days. See Figure DB2 Data Archive Expert for z/os
79 DB2 Admin DB2G Utility Template :48 Command ===> TEMPLATE ===> DAETAPE2 (Template Name) More: + Remark ===> Common Options: UNIT ===> ATL2 (Device Number, Type or Group Name) DSN ===> MODELDCB ===> BUFNO ===> (Number of BSAM buffers) DATACLAS ===> (SMS Data class) MGMTCLAS ===> (SMS Management class) STORCLAS ===> (SMS Storage class) RETPD ===> 031 or EXPDL ===> Statement ===> TEMPLATE Figure 5-4 DB2 Administration Tool Utility template panel 1 On the next panel of the Template Specification menu, we also specified that we want to enable 3490 tape compression. See Figure 5-5. DB2 Admin DB2G Utility Template :48 Command ===> Disk Options: More: - SPACE( ===>, ) (Primary, Secondary) ===> (CYL TRK or MB) PCTPRIME ===> (Percentage of space obtained as primary) MAXPRIME ===> (Maximum allowable primary space allocation) NBRSECND ===> (Number of secondary allocation divisions) Tape Options: UNCNT ===> (Number of devices to allocate) STACK ===> (Stack on same tape volumes YES or NO) JES3DD ===> (JES3 DDname for tape allocation) TRTCH ===> comp (Track recording technique - NONE COMP or NOCOMP) Statement ===> TEMPLATE Figure 5-5 DB2 Administration Tool Utility template panel 2 After pressing Enter, you are prompted to enter the DSN specification, and the cursor positioned to this field automatically. See Figure 5-6. Chapter 5. Stored procedures and batch execution 53
80 DB2 Admin DB2G Utility Template :02 Command ===> Enter required field More: - + DSN ===>? MODELDCB ===> BUFNO ===> (Number of BSAM buffers) DATACLAS ===> (SMS Data class) MGMTCLAS ===> (SMS Management class) STORCLAS ===> (SMS Storage class) RETPD ===> 031 or EXPDL ===> VOLUMES( ===> ) VOLCNT ===> (Volume Count) GDGLIMIT ===> (GDG Limit) DISP( ===>,, ) Disk Options: Statement ===> TEMPLATE Figure 5-6 DB2 Administration Tool DSN prompting If we put a question mark in the DSN field, we then are taken to the data set name panel of the DB2 Administration Tool template panel. Here, you can select various symbolics from this panel, which are used to generate a dynamically derived data set name. See Figure 5-7. DB2 Admin DB2G Utility Template - Dataset Name :10 Command===> Select symbolic variables or enter non-symbolic characters. Processing for this panel occurs in left to right, and top to bottom sequence. Hit ENTER to process any current choices. DSN Model: DAEPOOL.&DB..&TS..Y&YEAR..M&MONTH..D&DAY Non-Symbolic characters ===> Symbolic Variables: More: + JOBNAME ===> MVS jobname STEPNAME ===> MVS step name UTILID ===> Utility ID SSID ===> Subsystem ID ICTYPE ===> Image Copy Type UTILNAME ===> Utility Name SEQ ===> Sequence Number LOCREM ===> IC DDN usage PRIBAC ===> IC DDN Usage LIST ===> List Name DB ===> Database name TS ===> Table space IS ===> Index Space SN ===> Space name PART ===> Part number (5-digit) Figure 5-7 DB2 Administration Tool data set name specification In this example, we are going to allocate this data set to a high-level qualifier called DAEPOOL. We also want to specify that the second node be the database name, the third 54 DB2 Data Archive Expert for z/os
name use the tablespace name, and we also construct nodes that constitute the year, month, and day. When we pressed Enter, the DB2 Administration Tool validated the data set name symbolics and showed us what the generated data set name looks like. See Figure 5-8.

DSN Model: DAEPOOL.&DB..&TS..Y&YEAR..M&MONTH..D&DAY

Longest possible dataset name:

DAEPOOL.DBNAMEDB.TSNAMETS.Y2002.M06.D21

Figure 5-8   DB2 Administration Tool data set name specification results

Building file archive specifications and using templates

Once we have built and saved the utility template in the DSNACC.UTTEMPLATE table, we next need to construct our file template definitions, which reference the template to perform allocation of the file archive data set. First, we created a new archive specification. See Figure 5-9. As this is discussed in detail in earlier sections, we only focus on the define data set targets specification. Once we have named the archive specification, specified one or more tables, and built the necessary archive rules and row filter, we selected option 3 from the archive specification panel.
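Before moving on, the substitution that produced the name in Figure 5-8 can be sketched in a few lines. This is only an off-platform illustration of how the &VAR. symbolics in a DSN model expand — where two consecutive periods leave one literal period as the node separator — and the small resolver below is hypothetical, not how DB2 actually implements TEMPLATE processing:

```python
import re

def resolve_dsn(model, symbols):
    # Expand DB2 TEMPLATE-style symbolics: "&VAR." or "&VAR".
    # The terminating period is consumed with the variable, so a
    # model written with ".." yields a single "." node separator.
    return re.sub(r"&([A-Z]+)\.?", lambda m: symbols[m.group(1)], model)

# Values corresponding to the sample result in Figure 5-8.
symbols = {"DB": "DBNAMEDB", "TS": "TSNAMETS",
           "YEAR": "2002", "MONTH": "06", "DAY": "21"}
dsn = resolve_dsn("DAEPOOL.&DB..&TS..Y&YEAR..M&MONTH..D&DAY", symbols)
print(dsn)  # DAEPOOL.DBNAMEDB.TSNAMETS.Y2002.M06.D21
```

This also shows why the model embeds literal prefixes such as Y, M, and D before the date symbolics: each generated node must still begin with an alphabetic character to remain a valid data set qualifier.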
82 AHXSDEF Archive Specification Definition Command ==> Archive specification: Name......==> BATCHFILETOTAPE DB2 system. : DB2G Creator....==> PAOLOR2 Description..==> Batch file archive to tape Complete archive run (delete source data)? ==> N (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 3 1. Define archive unit (completed) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification (pending) Figure 5-9 Archive specification definition panel On the file archive targets panel (see Figure 5-10) we selected option 3. This tells the archive specification that we want to use templates to perform file allocation. AHXSOFLN File Archive Targets Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: BATCHFILETOTAPE Creator : PAOLOR2 DB2 system: DB2G Select an option for creating targets: 3 1. Default target data set generation with high-level qualifier. High-level qualifier ==> PAOLOR2 2. Specify data set name for each source table. 3. Specify utility templates for each source table. Source table Target data set =============================================================================== Name : DEPTDEL Creator: PAOLOR2 ******************************* Bottom of data ******************************** Figure 5-10 File archive targets panel We first are given the opportunity to map each table referenced in the archive specification with the desired template. See Figure DB2 Data Archive Expert for z/os
83 AHXOFTMP Map Source Tables to Utility Templates ---- Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: BATCHFILETOTAPE Primary commands are: C - Clear current template mappings Map source tables to utility template for target data set. Line commands are: M - Map source to template Cmd Source table Template === ============================= ===================================== m Name. : DEPTDEL Name..... : Creator : PAOLOR2 Creator.... : Templates table : Creator.... : Figure 5-11 Map source tables to utility template panel If we do not know the template name or the owner of the template table, we can leave these fields empty, we will then be prompted for this. See Figure AHXTMPLT Utility Template Command ==> Archive specification: BATCHFILETOTAPE Select saved DB2 Admin template? ==> Y (Yes/No) Templates table ==> UTTEMPLATE Creator ==> DSNUCC DB2 system ==> DB2G Template name ==> Template creator ==> Figure 5-12 Utility template specification If you know or remember the saved template name, you can specify them directly, or you can type Y in the Select saved DB2 Admin template field, and you will be prompted with a list of saved templates and allowed to specify the one you are interested in using in Figure Chapter 5. Stored procedures and batch execution 57
AHXTMPLL Select Utility Template Row 10 of 11
Command ==> Scroll ===> CSR
Archive specification : BATCHFILETOTAPE
DB2 system..... : DB2G
Line commands are: S - Select template D - Deselect template
Cmd * Template Creator Dataset template
DAETAPE PAOLOR2 &USERID..&DB..&TS..D&DATE..T&TIME.
s DAETAPE2 PAOLOR2 DAEPOOL.&DB..&TS..Y&YEAR..M&MONTH..
Figure 5-13 Utility template selection list panel

Once we have created the file archive specification, we can run it in batch using the JCL shown in Example 5-9.

Example 5-9 JCL to run file archive specification to tape
//TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXARFIL DB2G,BATCHFILETOTAPE,SYSTOOLS,
/*

Retrieval specification in batch

We can also call the stored procedures through the API using REXX to execute archive retrieval specifications in batch. The stored procedure used for archive table retrieval is named AHXTOOLS.RETRIEVEEXECSP. It uses the parameters shown in Table 5-4.

Table 5-4 RETRIEVEEXECSP parameter list
Positional parameter   Parameter description
Parm 1                 Subsystem identifier
Parm 2                 Specification name
Parm 3                 Schema name
Parm 4                 Row filter

Taking the modified AHXIVPAR member from hlq.sahxsamp described above, we created a third member in SAHXSAMP and called it AHXTARET. We modified this example in a similar fashion to the changes described earlier for the AHXARFIL REXX sample. Refer to Appendix A.3, RETRIEVEEXECSP table retrieve REXX on page 308 for the complete modified source REXX. Using the parameter list definition for RETRIEVEEXECSP described above, we then created a JCL member to execute the AHXTARET REXX member. See Example 5-10 for our sample JCL.
85 Example 5-10 JCL to execute AHXTARET table retrieve in batch //TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR //SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP //SYSTSPRT DD SYSOUT=* //SYSTSIN DD * %AHXTARET DB2G,BATCHTABLERETRIEVE,SYSTOOLS, /* // Next, we created a retrieve table specification based on one of the earlier archive table specifications. We named this retrieve specification batchtableretrieve. We then executed this specification using our JCL from Example Once the job completes, you can review the resulting output as shown in Example Example 5-11 Sample output from batch table retrieve execution READY %AHXTARET DB2G,BATCHTABLERETRIEVE,SYSTOOLS, *** Begin AHXTARET exec 11/13/03 00:24:40 *** Input parameters to execution procedure AHXTOOLS.RETRIEVEEXECSP will be: - User id = PAOLOR2 - SUBSYSTEM = DB2G - Spec name = BATCHTABLERETRIEVE - Row filter = - Metadata schema = SYSTOOLS length of whereclause = 0 Archive execution ran successfully with return code 0 Archive run statistics row 1 follows: - Spec id = Spec status = FINISHED - Spec state = PENDING - Spec version = 1 - Version status = FINISHED - Version state = PENDING - Executed by = PAOLOR2 - Executed timestamp = Number of rows deleted for this version = 0 - Number of rows inserted for this version = 14 - Source table name = DEPTDEL - Source table creator = PAOLOR2 - Target table name = DEPTDEL - Target table creator = RETRIEVE - Number of rows deleted for this target = 0 - Number of rows inserted for this target = 14 - Spec last updated by = PAOLOR2 - Spec last update timestamp = *** End AHXTARET 11/13/03 00:24:48 *** *** Processing time is seconds *** READY END Chapter 5. Stored procedures and batch execution 59
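If you capture SYSTSPRT from these batch runs, the run statistics block is easy to post-process. Here is a small Python sketch (our own, not part of the tool) that turns the "- name = value" report lines into a dictionary, for example to check row counts in an automation script:

```python
def parse_run_stats(report):
    """Collect '- name = value' lines from the archive run statistics report."""
    stats = {}
    for line in report.splitlines():
        line = line.strip()
        if line.startswith("- ") and "=" in line:
            name, _, value = line[2:].partition("=")
            stats[name.strip()] = value.strip()
    return stats

report = """\
- Spec status = FINISHED
- Executed by = PAOLOR2
- Number of rows inserted for this version = 14
"""
stats = parse_run_stats(report)
assert stats["Spec status"] == "FINISHED"
assert int(stats["Number of rows inserted for this version"]) == 14
```

A scheduler step could fail the job if, say, the inserted and deleted counts do not balance.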
5.2.4 File retrieval from batch

For the final example, we discuss calling the stored procedures through the API using REXX to execute archive file retrieval. The supplied stored procedure is named AHXTOOLS.OFFLINERETEXECSP. It uses the parameters shown in Table 5-5.

Table 5-5 OFFLINERETEXECSP parameter list
Positional parameter   Parameter description
Parm 1                 Subsystem identifier
Parm 2                 File retrieve specification name
Parm 3                 Schema name

Taking the modified AHXIVPAR member from hlq.sahxsamp described above, we created a fourth member in SAHXSAMP and called it AHXFIRET. We modified this example in a similar fashion to the changes described earlier for the AHXARFIL REXX sample. Refer to Appendix A.4, OFFLINERETEXECSP file retrieve REXX on page 316. Using the parameter list definition for OFFLINERETEXECSP described above, we created a JCL member to execute the AHXFIRET REXX member. See Example 5-12.

Example 5-12 JCL to execute OFFLINERETEXECSP stored procedure in batch
//TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXFIRET DB2G,RETRIEVEFILEBATCH,SYSTOOLS
/*

Next, we created a file retrieve specification based on one of the earlier archive specifications. We named this retrieve specification retrievefilebatch. 
We then executed this specification using our JCL from Example 5-12. Once the job completes, you can review the resulting output as shown in Example 5-13.

Example 5-13 Sample output from batch file retrieve execution
READY %AHXFIRET DB2G,RETRIEVEFILEBATCH,SYSTOOLS *** Begin AHXFIRET exec 11/13/03 01:16:45 *** Input parameters to execution procedure AHXTOOLS.OFFLINERETEXECSP will be: - User id = PAOLOR2 - SUBSYSTEM = DB2G - Spec name = RETRIEVEFILEBATCH - Metadata schema = SYSTOOLS Archive execution ran successfully with return code 0 Archive run statistics row 1 follows: - Spec id = Spec status = FINISHED - Spec state = PENDING - Spec version = 2 - Version status = FINISHED - Version state = PENDING
- Executed by = PAOLOR2 - Executed timestamp = Number of rows deleted for this version = 0 - Number of rows inserted for this version = 14 - Source table name = DEPTDEL - Source table creator = PAOLOR2 - Target table name = DEPTDEL_FILERET - Target table creator = RETRIEVE - Number of rows deleted for this target = 0 - Number of rows inserted for this target = 14 - Spec last updated by = PAOLOR2 - Spec last update timestamp = *** End AHXFIRET 11/13/03 01:16:51 *** *** Processing time is seconds *** READY END

Table to file archive specification in batch

We can also call the stored procedures through the API using REXX to execute archive specifications in batch to archive from table archives to file archives. The stored procedure used for file archive is AHXTOOLS.OFFLINEONLARCSP. It uses the parameters shown in Table 5-6.

Table 5-6 OFFLINEONLARCSP parameter list
Positional parameter   Parameter description
Parm 1                 Subsystem identifier
Parm 2                 Specification name
Parm 3                 Schema name

Taking the modified AHXIVPAR member from hlq.sahxsamp described above, we created a fifth member in SAHXSAMP and called it AHXTB2FI. We modified this example in a similar fashion to the changes described earlier for the AHXARFIL REXX sample. Refer to Appendix A.5, OFFLINEONLARCSP table to file archive REXX on page 324 for the complete modified source REXX. Using the parameter list definition for OFFLINEONLARCSP described above, we then created a JCL member to execute the AHXTB2FI REXX member. See Example 5-14 for our sample JCL.

Example 5-14 JCL to execute AHXTB2FI archiving from table to file archives in batch
//TSOCMD EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXTB2FI DB2G,ARC#TABARC#FILEARC,SYSTOOLS
/*
//

Next, we created a table to file archive specification based on one of the earlier table archive specifications. We named this specification arc#tabarc#filearc. 
We then executed this 
specification using our JCL from Example 5-14. Once the job completes, you can review the resulting output as shown in Example 5-15.

Example 5-15 Sample output from table archive to file archive batch execution
READY
%AHXTB2FI DB2G,ARC#TABARC#FILEARC,SYSTOOLS
*** Begin AHXIVPAR exec 11/20/03 13:11:30 ***
Input parameters to execution procedure AHXTOOLS.OFFLINEONLARCSP will be:
- User id = PAOLOR3
- SUBSYSTEM = DB2G
- Spec name = ARC#TABARC#FILEARC
- Metadata schema = SYSTOOLS
Archive execution ran successfully with return code 0
Archive run statistics row 1 follows:
- Spec id =
- Spec status = FINISHED
- Spec state = PENDING
- Spec version = 1
- Version status = FINISHED
- Version state = PENDING
- Executed by = PAOLOR3
- Executed timestamp =
- Number of rows deleted for this version = 0
- Number of rows inserted for this version =
- Source table name = IAIN2
- Source table creator = PAOLOR3
- Target table name = PAOLOR3.S00248.V0001.D T
- Target table creator = PAOLOR3
- Number of rows deleted for this target = 0
- Number of rows inserted for this target =
- Spec last updated by = PAOLOR3
- Spec last update timestamp =
*** End AHXIVPAR 11/20/03 13:11:58 ***
*** Processing time is seconds ***
READY
END

Continuation of parameters and row filters

The JCL shown in Figure 5-14 contains an example of how to continue the parameters on to another line. In this example, the third parameter is followed by a comma and a hyphen (,-) without a space, and the fourth parameter starts in column one of the next line. If you have a row filter longer than 71 characters, it may be split over multiple lines as shown in the same figure. Indeed, you may use a row filter spanning many lines and exceed the length of the row filter fields in the ISPF dialogs. This is because Data Archive Expert saves the row filter in a VARCHAR(2000) column in the metadata.
89 //* */ //*MODULE: AHXCRDB */ //* */ //* LICENSED MATERIALS - PROPERTY OF IBM */ //* 5655-I95 */ //* (C) COPYRIGHT IBM CORPORATION 2003 ALL RIGHTS RESERVED. */ //* US GOVERNMENT USERS RESTRICTED RIGHTS - USE, DUPLICATION OR */ //* DISCLOSURE RESTRICTED BY GSA ADP SCHEDULE CONTRACT WITH IBM CORP. */ //* */ //STEP010 EXEC PGM=IKJEFT01,DYNAMNBR=20,TIME=500 //STEPLIB DD DSN=DB2G7.SDSNEXIT,DISP=SHR // DD DSN=DB2G7.SDSNLOAD,DISP=SHR //SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP //SYSTSPRT DD SYSOUT=* //SYSTSIN DD * %AHXIVPAR DB2G,A,*,ARC#4TABLES,SYSTOOLS,- PS_PARTKEY BETWEEN 1001 AND AND PS_SUPPKEY BETWEEN 1001 AND AND PS_SUPPKEY IN (SELECT DISTINCT S_SUPPKEY - FROM SUPPLIER - WHERE S_SUPPKEY BETWEEN 1001 AND AND S_NATIONKEY IN (SELECT DISTINCT N_NATIONKEY - FROM NATION - WHERE N_NAME IN ('ITALY', 'BELGIUM', 'LUXEMBURG', 'SPAIN', 'AUSTRIA', - 'FRANCE', 'GERMANY', 'ROMANIA', 'RUSSIA', 'UNITED KINGDOM') - )) /* Figure 5-14 Using a row filter spanning many lines Note: You can only type one or more spaces when continuing the row filter, but do not use spaces if continuing the other parameters. Chapter 5. Stored procedures and batch execution 63
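The continuation rules above can be mimicked in a few lines. This Python sketch (our approximation of TSO/E behavior, ignoring details such as the 72-column limit, with an illustrative row filter) joins SYSTSIN lines that end with a hyphen; note that any blanks typed before the hyphen survive the join, which is why spaces are allowed when continuing the row filter:

```python
def join_continuations(lines):
    """Join SYSTSIN lines continued with a trailing hyphen (simplified)."""
    statements, pending = [], ""
    for line in lines:
        stripped = line.rstrip()
        if stripped.endswith("-"):
            # Keep everything before the hyphen, including blanks.
            pending += stripped[:-1]
        else:
            statements.append(pending + stripped)
            pending = ""
    if pending:
        statements.append(pending)
    return statements

joined = join_continuations([
    "%AHXIVPAR DB2G,A,*,ARC#4TABLES,SYSTOOLS,-",
    "PS_PARTKEY BETWEEN 1001 AND 2000 -",
    "AND PS_SUPPKEY < 9999",
])
assert joined == ["%AHXIVPAR DB2G,A,*,ARC#4TABLES,SYSTOOLS,"
                  "PS_PARTKEY BETWEEN 1001 AND 2000 "
                  "AND PS_SUPPKEY < 9999"]
# The reassembled row filter must still fit the VARCHAR(2000) metadata column.
row_filter = joined[0].split(",", 5)[5]
assert len(row_filter) <= 2000
```

A pre-submission check like this can catch a filter that would be silently truncated by the metadata column.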
6

Chapter 6. Optionally defining DB2 Grouper Client

DB2 Data Archive Expert calls the DB2 Grouper Server component to determine the tables that are related to a single starting point table when defining an archive specification. Because Data Archive Expert calls the Grouper Server component directly, the Data Archive Expert user is not required to prepare groups ahead of time by using the Grouper Client. This operation can be completely independent of the Grouper Client and transparent to the Data Archive Expert user.

However, a Data Archive Expert user might want to define referential constraints that are not enforced by DB2. The user can use the Grouper Client to easily define those referential constraints so that they are used as input to Data Archive Expert. In this case, all subsequent uses of the Grouper Server by Data Archive Expert would find tables that are referentially related by both DB2-enforced RI and those non DB2-enforced RI relationships defined to Grouper.

If you want to know about tables related through triggers, package statements, or dynamic relationships, Grouper can be used to find these relationships and to build groups of tables based on them. You can then use these groups for administrative tasks beyond those provided by Data Archive Expert.

In this chapter, we show how to set up the Grouper Client function, which can later be used to discover related tables (see Chapter 11, Scenario 5: Archiving Grouper discovered related tables on page 211), and we define the referential constraints for usage by Data Archive Expert or other administration tools and functions.

This chapter contains the following:
- Download the Grouper Client
- Run the install shield
- z/OS Server connectivity configuration
- Client configuration
- Installation verification
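Conceptually, the group discovery that Data Archive Expert drives through the Grouper Server is a transitive walk over table relationships. A Python sketch of the idea (illustrative table names loosely modeled on the sample tables in this book; this is not the product's actual algorithm):

```python
from collections import deque

def discover_group(start, relationships):
    """Find every table transitively related to a starting point table.

    relationships is a list of (parent, child) pairs, standing in for
    DB2-enforced RI plus any constraints defined through the Grouper Client.
    """
    related = {start}
    queue = deque([start])
    while queue:
        table = queue.popleft()
        for parent, child in relationships:
            for other in (parent, child):
                if table in (parent, child) and other not in related:
                    related.add(other)
                    queue.append(other)
    return related

# Illustrative relationships: EMP leads to DEPT, EMPPROJACT, and PROJ,
# while the unrelated NATION/SUPPLIER pair stays out of the group.
rels = [("DEPT", "EMP"), ("EMP", "EMPPROJACT"), ("PROJ", "EMPPROJACT"),
        ("NATION", "SUPPLIER")]
group = discover_group("EMP", rels)
assert group == {"DEPT", "EMP", "EMPPROJACT", "PROJ"}
```

This is why archiving a single starting point table can pull in tables several relationship hops away.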
6.1 Setting up the Grouper Client

DB2 Grouper requires DB2 or DB2 Connect Version 7 or higher. We installed DB2 UDB for Microsoft Windows Enterprise Edition Version 7.2. We also elected to install FixPak 10, as the minimum recommendation is FixPak 9. Our client operating system environment is Windows 2000, and we were using a Pentium III class machine with 512 MB of memory. The install ID had administrator rights on the client Windows 2000 machine, as well as SYSADM on both the local DB2 UDB for Windows instance and the DB2 Version 7 UDB for z/OS and OS/390 subsystem.

The Grouper Client code is contained in the DB2 Grouper installation data set hlq.segfismp located on the z/OS host system. There are several ways to download binary files from z/OS host systems to Windows-based Client machines. We demonstrate the use of IBM Personal Communications file transfer; you may elect to use another product.

Download the Grouper Client

Here are the steps:
1. Create an installation directory. From the Windows Explorer file list, select your C: drive, then select New --> Folder. A new folder icon appears on your contents list. Overtype the highlighted default text with the name of your new folder; in our case we call it Grouper_Install.
2. Perform the download. Download the code by using either your workstation emulator or through FTP. We are using IBM Personal Communications for our example. The instructions specify that the files should be transferred to the same directory, so we will use our directory named Grouper_Install for all three file downloads:
a. We started an IBM Personal Communications session with our z/OS host system. As the file transfer mechanism of IBM Personal Communications uses the z/OS command IND$FILE, you will need to be at the TSO ready prompt.
b. From your IBM Personal Communications session, select the Receive files from host button on the task bar.
c. Type hlq.segfismp(egfisexe) in the Host file entry field.
d. 
Type c:\grouper_install\setup.exe in the PC file entry field, c:\grouper_install is the installation directory you created earlier. e. Select transfer type BINARY. f. Click Receive. 3. You need to repeat the above download procedure for the other two files that form the complete install shield: a. hlq.segfismp(egfisjar) should be downloaded as c:\grouper_install\setup.jar b. hlq.segfismp(egfisinf) should be downloaded as c:\grouper_install\media.inf Table 6-1 helps you associate the z/os host PDS entry with the required Windows file name and extension. It also contains the size so you can approximate how long each file transfer might take to execute. 66 DB2 Data Archive Expert for z/os
Table 6-1 Host and directory names
Host system name         Install directory name          Approximate size
hlq.segfismp(egfisexe)   C:\Grouper_Install\setup.exe    18 MB
hlq.segfismp(egfisjar)   C:\Grouper_Install\setup.jar    31 MB
hlq.segfismp(egfisinf)   C:\Grouper_Install\media.inf    1 KB

Run the install shield

Once the file transfer has been completed, you should execute the install shield:
1. Install from Windows Explorer:
a. Change to the installation directory where setup.exe was transferred. For example, in our case, C:\Grouper_Install\setup.exe.
b. Start the setup.exe InstallShield by double-clicking setup.exe.
c. The InstallShield wizard starts up automatically, guiding you through the installation steps necessary.
Follow the installation instructions in each window; at the conclusion of the installation you may elect to delete the C:\Grouper_Install directory.

Installation notes

The install shield asks you to specify the directory where the Grouper Client should be installed. The default directory is C:\Program Files\DB2Tools; Figure 6-1 shows the standard InstallShield window. We changed our directory to specify C:\Grouper.

Figure 6-1 Sample Grouper Client Directory

Chapter 6. Optionally defining DB2 Grouper Client 67
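Given the sizes in Table 6-1, it is worth confirming that all three transfers completed before launching setup.exe; a truncated binary transfer is a common cause of a failing install. Here is a small Python sketch of such a check (our own helper; the minimum-size thresholds are rough assumptions derived from the table):

```python
import os

# Assumed lower bounds derived from the approximate sizes in Table 6-1.
EXPECTED_MINIMUM = {
    "setup.exe": 10 * 1024 * 1024,   # listed as roughly 18 MB
    "setup.jar": 20 * 1024 * 1024,   # listed as roughly 31 MB
    "media.inf": 100,                # listed as roughly 1 KB
}

def check_downloads(install_dir):
    """Return the expected files that are missing or suspiciously small."""
    problems = []
    for name, minimum in EXPECTED_MINIMUM.items():
        path = os.path.join(install_dir, name)
        if not os.path.isfile(path) or os.path.getsize(path) < minimum:
            problems.append(name)
    return sorted(problems)
```

Run it against C:\Grouper_Install (or wherever you transferred the files) and retransfer anything it flags before starting the InstallShield.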
Restriction: At the time we installed our version of the Client, there was a problem when installing the Client into a Windows path that contains spaces in any directory names. For our install, we chose to install into a directory named C:\Grouper.

During the InstallShield execution, you are also asked to choose the DB2 version for the Client install. This refers to the DB2 version installed on the Client machine, not the release level of the DB2 server. Refer to Figure 6-2. In our environment we have installed DB2 7.2 on our Client workstation.

Figure 6-2 Client DB2 Version window

Attention: If you should need to run the uninstall shield, you will not find it as part of the Program Files folder. You will find it in Windows directory path c:\egfclienthome\client\clientuninst.

Tip: The Client InstallShield does not create a shortcut to your Windows desktop. Consider creating one manually using the Send to desktop (create shortcut) option of the Windows Start button.

z/OS Server connectivity configuration

As noted earlier, DB2 UDB for Windows Enterprise Edition Version 7.2 FixPak 10 was installed on our Client workstation. Because we ran JDBC applications as part of the Grouper Client, a remote bind against our z/OS database instance needs to be performed. The 
following section describes how to configure the required database connection using the DB2 V7 Client Configuration Assistant.
1. Choose Manually configure a connection to the database, and click Next (see Figure 6-3).

Figure 6-3 Add database wizard - Source

2. The protocol is TCP/IP; choose The database physically resides on a host or AS/400 server, and specify Connect directly to the server (see Figure 6-4). Click Next.
96 Figure 6-4 Add database wizard - Protocol 3. Specify the host name and the port number. The Host name we use is wtsc63.itso.ibm.com and port number is (see Figure 6-5). This is the DB2 SQL port that our DB2G system listens on. To obtain this information, use the DSNL004I message that DB2 writes to the MVS system log and xxxmstr address space log at DB2 or DDF startup time. Click Next to continue. 70 DB2 Data Archive Expert for z/os
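The host name and port in step 3 both come from the DSNL004I message, so if you script your client configuration the message can be parsed mechanically. A Python sketch follows (the message layout below is paraphrased from a DB2 V7 console log, and the sample values, including the port, are hypothetical; verify against your own system log):

```python
import re

def parse_dsnl004i(message):
    """Extract the DDF location, domain, and TCP port from a DSNL004I message."""
    fields = {}
    for keyword in ("LOCATION", "DOMAIN", "TCPPORT"):
        match = re.search(keyword + r"\s+(\S+)", message)
        if match:
            fields[keyword] = match.group(1)
    return fields

# Hypothetical console message; the real message may wrap across lines.
msg = ("DSNL004I -DB2G DDF START COMPLETE LOCATION DB2G "
       "LU USIBMSC.SCPDB2G GENERICLU -NONE "
       "DOMAIN wtsc63.itso.ibm.com TCPPORT 33378 RESPORT 33379")
info = parse_dsnl004i(msg)
assert info["DOMAIN"] == "wtsc63.itso.ibm.com"
assert info["TCPPORT"] == "33378"
```

The LOCATION value is also what you need for the database name prompt in step 4.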
97 Figure 6-5 Add database wizard - TCP/IP 4. Specify the database name to which you are connecting (see Figure 6-5). This has to be the DB2 for z/os location name. Again, this information can be found in the DSNL004I message. Click Next. Chapter 6. Optionally defining DB2 Grouper Client 71
98 Figure 6-6 Add database wizard - Database 5. Select Register this database for ODBC and As a system data source (see Figure 6-7). Click Next. 72 DB2 Data Archive Expert for z/os
99 Figure 6-7 Add database wizard - ODBC 6. Bypass Specify the node options (see Figure 6-8) and click Next. Chapter 6. Optionally defining DB2 Grouper Client 73
100 Figure 6-8 Add database wizard - Node options 7. Check Configure security options and Host or AS/400 authentication (see Figure 6-9). Click Next. 74 DB2 Data Archive Expert for z/os
101 Figure 6-9 Add database wizard - Security options 8. Click Finish and you should see the message: The connection configuration for DB2G was added successfully. You may now click Test connection to test the connection. See Figure Chapter 6. Optionally defining DB2 Grouper Client 75
102 Figure 6-10 Connection configuration confirmation 9. Specify the User ID, Password, and Share for the Connection mode. See Figure Click OK. 76 DB2 Data Archive Expert for z/os
103 Figure 6-11 Connect to the database 10.You should now see that the connect test was successful. See Figure If not, you need to go back and correct the error. Figure 6-12 Successful connection Client configuration Any references in the installation chapter of the DB2 Grouper User s Guide, SC to the Windows environment variable EGFCLIENTHOME refers to the Windows directory path where the DB2 Grouper Client code was installed. You can verify this from the Microsoft Windows control window as follows: 1. Click Start --> Settings --> Control window. 2. From Control window click System. 3. From the System Properties window click the Advanced tab. 4. From the Advanced setting window, click the Environment variables button. 5. In the Environment variables window, under the system variables section, use the scroll bar to scroll down until you locate the variable named EGFCLIENTHOME. Figure 6-13 shows our setting. Chapter 6. Optionally defining DB2 Grouper Client 77
104 Figure 6-13 Windows environment variable EGFCLIENTHOME DB2 Grouper submits JCL through the z/os internal reader facility to perform group discovery against the DB2 catalog, and to run the unit of work discovery process against the DB2 archive log datasets. Since DB2 Data Archive Expert only exploits the catalog group discovery, we elected not to implement the unit of work discovery component. Before submitting the JCL to run group discovery, the TEMPLATE.JCL file in the %EGFCLIENTHOME%\grouper\Client directory must be edited specifically for the DB2 for UDB z/os and OS/390 server instance you wish run against. If you have more than one server you would like to run group discovery against, you need to make a copy of TEMPLATE.JCL with a different name for each that uniquely identifies it for that server. You also need to edit each copy of the JCL specifically for its respective server. Before running against one of those servers, you need to overlay the TEMPLATE.JCL file with the contents of your edited JCL file. This can be done by saving your edited JCL file back into the %EGFCLIENTHOME%\grouper\Client directory with the name TEMPLATE.JCL, thereby replacing the unedited version of the file with the correctly edited version for your current server. In our example, we only connected to a single DB2 server instance, therefore, we directly edited the TEMPLATE.JCL file, and did not need to save any other versions of the JCL. As indicated in the DB2 Data Archive Expert User s Guide and Reference, SC , if you choose to connect to multiple server instances, you need to create separate versions of the TEMPLATE.JCL file and provide some unique naming convention for each environment. As this is a file that is part of the EGFCLIENTHOME Client directory, you also need to consider how to control and share the various versions of TEMPLATE.JCL consistent across multiple Client instances of DB2 Grouper in your environment. 
See our modified version of TEMPLATE.JCL in Example 6-1. Example 6-1 Modified TEMPLATE.JCL sample //PAOLOR2G JOB (ACCOUNT),'GROUPER',NOTIFY=PAOLOR2,USER=PAOLOR2 //* //EGFRUNG EXEC PGM=IKJEFT01,DYNAMNBR=20, // PARM='%EGFRUNG <P1> <P2> <P3> DB2G EGFTOOLS' 78 DB2 Data Archive Expert for z/os
//STEPLIB DD DISP=SHR,DSN=CEE.SCEERUN
// DD DISP=SHR,DSN=DB2G7.SDSNLOAD
// DD DISP=SHR,DSN=DB2G7.SDSNLOD2
// DD DISP=SHR,DSN=CBC.SCBCCMP
//SYSEXEC DD DISP=SHR,DSN=EGF110.SEGFEXEC
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD DUMMY

In the previous example, the parameter EGFTOOLS refers to the SCHEMA name of the referenced stored procedure. Also, hlq.segfexec is the allocated target library. In our environment, DB2 Grouper was installed using the EGF110 high-level qualifier.

Tip: In the Windows directory %EGFCLIENTHOME%\grouper\Client, there is a backup of the TEMPLATE.JCL file named TEMPLATE_BKP.JCL. If you need to restore to an unedited original version of TEMPLATE.JCL, you can use this file.

JDBC bind

To bind the packages required for DB2 Grouper to run as a JDBC application, we ran the following commands:
1. From a Windows command prompt, enter db2cmd.
2. Change the directory to point to the folder containing ddcsmvs.lst:
a. Type cd C:\Program Files\SQLLIB\bnd and press Enter.
3. Type db2 connect to DB2G user PAOLOR2 and press Enter.
a. You are prompted for a password; type the valid RACF password and press Enter.
4. Type db2 bind @ddcsmvs.lst blocking all sqlerror continue messages mvs.msg grant public and press Enter.
5. As each package is bound, you will see on the CLP window messages indicating that the bind was performed.
6. Type db2 connect reset and press Enter to disconnect from the DB2 server.
7. Close the CLP window.

Tip: If you encounter problems during the execution of the package bind, you can review the bind error messages and output inside the mvs.msg file that is placed in the c:\Program Files\SQLLIB\bnd Windows directory. Use Notepad to view this file.

Installation verification

During the Data Archive Expert IVP, we had decided to defer the installation of Grouper. After completing the Grouper customization, we went back and then re-executed the Data Archive Expert table archive IVP, and used Grouper to find the related tables. 
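If you maintain one edited copy of TEMPLATE.JCL per server, overlaying the file by hand each time is error prone and can be scripted. A Python sketch of a simple generator (a naive global token swap; the token values are the ones from our Example 6-1, and note that replacing the subsystem string will also rename STEPLIB data sets such as DB2G7.SDSNLOAD, which may or may not be what you want, so review the output):

```python
def render_template_jcl(template_text, subsystem, schema):
    """Produce a per-server TEMPLATE.JCL by swapping subsystem/schema tokens."""
    return template_text.replace("DB2G", subsystem).replace("EGFTOOLS", schema)

line = "// PARM='%EGFRUNG <P1> <P2> <P3> DB2G EGFTOOLS'"
rendered = render_template_jcl(line, "DB2P", "GRPTOOLS")
assert rendered == "// PARM='%EGFRUNG <P1> <P2> <P3> DB2P GRPTOOLS'"
```

The <P1> through <P3> placeholders are left alone, since the Grouper Client fills those in at submission time.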
To verify that the DB2 Grouper Client has been configured correctly, and can connect to the z/os DB2 server instance, do the following: 1. Start the Grouper Client from the Programs menu on your Windows desktop. Click Start --> Programs --> Grouper Client --> Grouper Start. 2. Log on to Grouper, using the window shown in Figure Select a server configuration, and provide a valid RACF user ID and password. If this is the first time you connect to a server, a window will pop up explaining that Grouper must perform a remote bind. Click OK to proceed. Chapter 6. Optionally defining DB2 Grouper Client 79
106 Figure 6-14 Grouper Client login 3. The Grouper launchpad then appears, as shown in Figure Close the launchpad. 80 DB2 Data Archive Expert for z/os
107 Figure 6-15 Grouper Launchpad Tip: If you prefer not to have the launchpad whenever you start the Grouper Client, you can select the radio button Do not show the launchpad again when Grouper opens. 4. Closing the launchpad then allows you to see the Grouper Client window. In our system, we have already defined a number of DB2 Data Archive Specifications; whenever Grouper is invoked for related table discovery, a new Grouper set is created. Chapter 6. Optionally defining DB2 Grouper Client 81
108 Figure 6-16 Grouper Main window In our environment, we have already run a number of Data Archive Expert archive specifications with related table enabled. When you start up your first Grouper Client session, you should only see one Grouper set from the Data Archive Expert IVP: 5. Highlight the server instance were Grouper is installed, then from the task bar select Grouper --> Create New Set. Enter a new set name and optional description in the pop-up shown in Figure Figure 6-17 Create new set window 82 DB2 Data Archive Expert for z/os
109 6. We entered into New Set Name field NEWSETemp and some comments to identify the new set. We then clicked OK. The new set is then displayed on the tree underneath the subsystem icon. See Figure Figure 6-18 Grouper tree with new group 7. In the NEWSETemp group, right-click VERSION1 and select Configure Group Discovery from the pop-up window. From the Configure Group Options window, select the Starting Points tab; on the Starting Points page, select the Referential Integrity Relationships Only radio button, and click the Add Table button. See Figure Chapter 6. Optionally defining DB2 Grouper Client 83
110 Figure 6-19 Configure group options window 8. When we click the Add Table button, we the are presented with the add tables search pop-up. In our case, we will specify an owner of PAOLOR2 and then leave the table name unqualified. See Figure Figure 6-20 Add tables filter specification 9. Clicking OK then returns a list of tables which filter with the specified creator name of PAOLOR2. Tables under the available column can be highlighted by clicking and then moving to the selected column by using the right arrow button. In our example, we chose the table EMP and moved it to the selected column. See Figure DB2 Data Archive Expert for z/os
111 Figure 6-21 Grouper select objects window 10.Click OK to return to the tree. Just to review, we have now selected for our group the table PAOLOR2.EMP as our starting point table. Next make sure that we still have the Version1 of set NEWSETemp highlighted, right-click, and select run group discovery. We now see the run group discovery pop-up. This shows us what is taken into account in the discovery run. In our example, we only have the single starting point table and have not provided any additional inputs for discovery. See Figure Chapter 6. Optionally defining DB2 Grouper Client 85
112 r Figure 6-22 Run group discovery window\ 11.Click OK and we then see the group discovery description window. The group discovery description helps us identify uniquely different executions of the group discovery job. For our example, we used a description that was a compilation of the set name and the version number, NEWSETemp_Version1. See Figure Figure 6-23 Group discovery confirmation window 12.Click OK and the confirmation screen appears. Figure 6-24 shows this, and just click through this. 86 DB2 Data Archive Expert for z/os
Figure 6-24 Confirmation window

13. The job is then built using the %EGFCLIENTHOME%\Grouper\Client\TEMPLATE.JCL file that was customized earlier in the installation. A temporary file is built and stored in the same directory using the name EGFDSCVY.JCL, and then submitted through the internal reader. The job is not submitted until we respond to the Send Group Discovery Job window. See Figure 6-25.

Figure 6-25 Send group discovery job window

14. From the Grouper tree window, we then highlighted Version 1 of our Grouper set, right-clicked, and selected the View Group Discovery Job Status option. The Job Status panel will eventually show that our group discovery job execution was successful. See Figure 6-26.
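The mechanism just described is plain skeleton substitution: placeholders in TEMPLATE.JCL are replaced with job-specific values and the result is written out as EGFDSCVY.JCL before submission. A hedged sketch of that idea follows; the skeleton text, the placeholder names, and the program name are invented for illustration and do not reflect the real TEMPLATE.JCL contents.

```python
from string import Template

# Invented two-line JCL skeleton; the real TEMPLATE.JCL is customized at
# installation time and its placeholders may be named differently.
skeleton = Template(
    "//$jobname JOB (ACCT),'GROUP DISCOVERY',CLASS=A\n"
    "//STEP1    EXEC PGM=DISCOVER,PARM='$description'\n"
)

# Fill in the values for this discovery run; the result would be written
# to EGFDSCVY.JCL and routed to the internal reader.
jcl = skeleton.substitute(jobname="EGFDSCVY",
                          description="NEWSETemp_Version1")
print(jcl)
```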
Figure 6-26 Job Status window

Tip: The job submission executes in a CLP window. When the submission is complete, select this window from the Windows task bar and close it. This is also the place to look for errors in the event of an unsuccessful job submission attempt.

15. Back at the Grouper tree window, we highlighted the version, right-clicked, and selected the View Group Discovery Results option. This opens the version and a new level is shown in the tree; in our example, it is called Group_1. When we clicked the Group_1 icon on the tree, we saw all of the tables that were discovered during our group discovery run. Figure 6-27 shows the resulting list of related tables discovered by Grouper for PAOLOR2.EMP.
Figure 6-27 Group_1 related discovered tables

16. We then selected EMP from the displayed table list, right-clicked, and chose Show Table Relationships; you can then see the relationships to PAOLOR2.EMP. Figure 6-28 shows the DB2 enforced relationships defined to the EMP table.
Figure 6-28 Relationships window

17. From the Relationships window, you can select one of the relationships and click the Properties button located on the right side of the window. You can then see the details about the DB2 defined RI for this relationship. Figure 6-29 shows the relationship details stored in the DB2 catalog for this relationship to EMP.
Figure 6-29 Relationship properties window

This concludes the DB2 Grouper installation IVP. Closing the Grouper tree window terminates the Client running on your workstation.
Part 3. Data archival

With DB2 Data Archive Expert, you can easily move DB2 data to table or flat-file archives. An ISPF interface helps you configure and use the tool. You can also write your own programs that call the API provided by Data Archive Expert.

In this part we show, step by step, how to use the panels to define and execute several scenarios related to archiving data. The chapters in this part provide details on the following representative scenarios:

- Scenario 1: Archiving from a table to a file
- Scenario 2: Archiving from a table to a table
- Scenario 3: Archiving from RI related tables and deleting from one
- Scenario 4: Archiving from RI related tables and deleting from them
- Scenario 5: Archiving Grouper discovered related tables
- Additional archive considerations

For tutorial reasons, the archiving scenarios are shown in order of increasing complexity. In Part 4, Data retrieval on page 239, we show the key scenarios related to retrieving the archived data for exceptional SQL processing.
Chapter 7. Scenario 1: Archiving from a table to a file

Assume you have a large table with many rows that will presumably be accessed only rarely from now on. Examples are data pertaining to employees who worked in a division you sold, or data pertaining to orders that are almost ten years old and are only kept for legal reasons. In order to boost the performance of applications accessing such large tables, and to speed up the utility runs on these tables, you want to get rid of such inactive rows in your operational, performance-critical tables. Not to mention the disk space you save if you store such inactive rows on tape.

In this chapter we show you our first scenario. You see how you can use DB2 Data Archive Expert for z/OS (DAE) to archive such inactive rows into a data set, called a file archive. An alternative is to store such inactive rows in DB2 tables so that they can still be accessed through SQL; this is shown in Chapter 8, Scenario 2: Archiving from a table to a table.

Here we are only interested in reducing the size of a single table, a 1.1-million-row table of sample line items of customer orders. In Chapter 2, The evaluation test data on page 15, you find the description of the sample data we used. The table stores data since 1992, and we have decided to remove the line item rows that were shipped in January 1992.

This chapter contains the following:

- Starting point
- Define the archive specification
- Run the archive specification
- Second run of the archive specification
- Results
- Considerations
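Before walking through the panels, the core of a complete file archive run can be summarized as: copy the rows matching a filter to an external target, then delete exactly those rows from the source table. The following sketch demonstrates that idea with SQLite standing in for DB2; the two-column LINEITEM stand-in and the cutoff date are simplifications of the sample data, not the tool's actual implementation.

```python
import csv, io, sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem (l_orderkey INTEGER, l_shipdate TEXT)")
conn.executemany("INSERT INTO lineitem VALUES (?, ?)",
                 [(1, "1992-01-15"), (2, "1992-02-10"), (3, "1993-06-01")])

row_filter = "l_shipdate < '1992-02-01'"   # the archive row filter

# Step 1: copy the inactive rows to a flat-file archive target.
archive = io.StringIO()
writer = csv.writer(archive)
writer.writerows(conn.execute(f"SELECT * FROM lineitem WHERE {row_filter}"))

# Step 2: a "complete" archive run deletes the same rows immediately.
deleted = conn.execute(f"DELETE FROM lineitem WHERE {row_filter}").rowcount
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM lineitem").fetchone()[0]
print(deleted, remaining)
```

Because both steps use the same predicate, every archived row disappears from the source and no other row is touched.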
7.1 Starting point

Let us start on our journey through the Data Archive Expert panels in order to get our main job done: to archive some inactive rows. During this journey you may have additional questions on whether variations or more detailed specifications are possible. In that case you can either be patient for the moment and just follow this first walk-through, or turn to the more advanced topics of this book right away.

First you have to invoke Data Archive Expert through ahxv11, as can be seen in Figure 7-1. See also 3.2.9, Step 9. Make DB2 Data Archive Expert available to users on page 34 for details about ahxv11.

ISPF Command Shell Enter TSO or Workstation commands below: ===> ahxv11 Place cursor on choice and press enter to Retrieve command => ahxv11 => => => => => => => => =>

Figure 7-1 How to invoke Data Archive Expert

If you press Enter, you will see the main panel of the Data Archive Expert tool (Figure 7-2).
AHXV IBM DB2 Data Archive Expert for z/os Select Archive Expert Action ==> 0 DB2 system : DB2G Schema : SYSTOOLS User ID : PAOLOR1 0 View and set Archive Expert settings Time : 22:39 1 Work with archive specifications 2 Work with retrieve specifications X Exit IBM* Licensed Materials - Property of IBM 5655-I95 (c) Copyright IBM Corp All Rights Reserved. *Trademark of International Business Machines

Figure 7-2 Data Archive Expert - Main panel

When first used, you have to supply the DB2 subsystem name, in our case DB2G, and the schema name of Data Archive Expert's metadata, SYSTOOLS, which is the install schema default. See 3.2.5, Step 5. Insert default properties on page 33 for setting these defaults. If you specify 0 and press Enter, you will see Data Archive Expert's global settings (see Figure 7-3), but you should live with these default values for your first tests.

AHXV Data Archive Expert Settings Command ===> Scroll ===> CSR User ID : PAOLOR1 Time : 13:46 Set or change the following settings, press <ENTER> then <END>. Log data sets qualifier (xxxxxxxx.ahxcirc.log) (xxxxxxxx.ahxlog).... PAOLOR1 Level of logging (1 - Info 2 - Warning 3 - Error) COMMIT level (1 to rows) Default owner for archive target tables. ARCHIVED retrieve target tables. RETRIEVE Grouper schema names metadata SYSTOOLS stored procedures... EGFTOOLS These settings cannot be modified: DB2 subsystem ID : DB2G Archive Expert schema names metadata : SYSTOOLS stored procedures : AHXTOOLS

Figure 7-3 Data Archive Expert - Settings
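Among the settings above, the COMMIT level controls how many rows are processed per unit of work during an archive run, so that locks are released periodically and a failure loses at most one batch. The sketch below shows the batching pattern in Python with SQLite; it illustrates the idea only and is not Data Archive Expert's actual code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, shipdate TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "1992-01-15") for i in range(10)])
conn.commit()

def delete_in_batches(conn, predicate, commit_level):
    """Delete qualifying rows, committing every commit_level rows so that
    locks are released and a failure loses at most one batch."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM t WHERE id IN "
            f"(SELECT id FROM t WHERE {predicate} LIMIT ?)",
            (commit_level,))
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

total_deleted = delete_in_batches(conn, "shipdate < '1992-02-01'", 3)
print(total_deleted)
```

With a COMMIT level of 3, the ten qualifying rows above are removed in four units of work (3 + 3 + 3 + 1).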
Therefore, press F3 to return to the main panel (shown in Figure 7-4), which confirms that you did not change the settings.

AHXV IBM DB2 Data Archive Expert for Select Archive Expert Action ==> 1 Settings not modified DB2 system : DB2G Schema : SYSTOOLS User ID : PAOLOR1 0 View and set Archive Expert settings Time : 22:40 1 Work with archive specifications 2 Work with retrieve specifications X Exit IBM* Licensed Materials - Property of IBM 5655-I95 (c) Copyright IBM Corp All Rights Reserved. *Trademark of International Business Machines

Figure 7-4 Data Archive Expert - Main panel again

Here are some explanations of the terms found on this and the following panels: You either want to archive rows, or you want to retrieve rows once they have been archived. Accordingly, you see the two main choices 1 and 2 on this panel. If you want to archive rows, you must first define which rows you want to archive, and then actually archive these rows. For the first step, Data Archive Expert uses the term archive specification; for the second step, the term archive specification run. Option 1, Work with archive specifications, encompasses both of these steps. The same applies to option 2, Work with retrieve specifications.

7.2 Define the archive specification

Let us start simply, so that you do not have to worry about things that distract your attention from the main path when using the DB2 Data Archive Expert for z/OS tool for the first time. Take the easiest walk through the panels to get the job done, that is:

- No space considerations
- No security considerations

By specifying 1 on the previous panel and pressing Enter, you will see the panel in Figure 7-5 to start your first project.
AHXV Archive Specifications List ---- No Specifications exist Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 22:42 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type n <empty> ******************************* Bottom of data ********************************

Figure 7-5 Data Archive Expert's Archive Specification List panel

As there is no previous archive specification yet, Data Archive Expert provides the entry <empty>. Specify N for New as the line command in front of this entry. If there were already specifications in the Data Archive Expert system, you could specify the N in front of any of those specifications. Press Enter to proceed to the next panel shown in Figure 7-6.

AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCENARIO1 DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> Archive old LINEITEM rows to a file (incl. deletion) Complete archive run (delete source data)? ==> N (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 1 1. Define archive unit (required) 2. Define table targets (active) 3. Define data set targets 4.
Save archive specification

Figure 7-6 Archive Specification Definition

On this panel, some values are already provided by default, but you must or should specify:

- The name of the archive specification you are going to define, here SCENARIO1
- A description for your new archive specification
- Y for completing the archive run, that is, no pause between the two steps: copying rows from the source tables to the archive tables or files, and then deleting the same rows from the source tables
- Y for performing orphan row/changed data detection, which is not discussed here. For more information, see Chapter 12, Additional archive considerations on page 233.
Attention_1: Data Archive Expert is case sensitive! It takes your input as it is, so you might prefer to specify the name of the specification in uppercase, as is done in the figure, although you can work with lowercase specification names. Using uppercase is especially important when dealing with DB2 objects like DB2 tables, as many of your DB2 for z/OS data management programs and tools are not able to handle lowercase table names.

Attention_2: If you also want to delete the rows in the source table as part of your archive process, and if you want to archive to a file (rather than to another table), you cannot split the archiving process into two separate steps: first copying some rows into a file (or data set), and afterwards, after a pause, deleting these rows from the source table.

Attention_3: Although you have now specified to delete the rows in the source table after they have been archived, you must specify this deletion request again in a later panel, otherwise the deletion does not take place. Why? The Y on this panel only indicates that you request the deletion to happen right after the copying of the rows; in a later panel you must specify on which specific tables you want the deletion to be performed. As we deal with only one table in this scenario, the reason for requesting the deletion twice is not obvious yet.

Attention_4: In the Specification List panel (see Figure 7-19 on page 108) only the first 21 characters of your description are displayed. For future selections make sure that the distinctive part of your description is already contained in these first 21 characters (in contrast to the description you see in this example).

Now you must perform three steps (called definition activities) in order to complete your archive specification:

1. Activity 1: Specifying the archiving source tables, called the archive unit
2.
Either activity 2 or 3: Specifying the archiving target, either tables or data sets. In this scenario 1, we performed activity 3 rather than 2, as we wanted to archive into a data set rather than into a table.
3. Activity 4: Saving your definition

Let us finally start with activity 1! Press Enter for the first pull-down menu in Figure 7-7.
AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCENARIO1 DB2 system. : DB2G Creator....==> Essssssss Specify Starting Point Table ssssssssN Description..==> e e on) Complete archive ru e Command ==> e Perform orphan row/ e e e Provide table selection list? ==> N (Y/N) e Select an archive def e e e Table name. ==> LINEITEM e 1. Define archive e Creator.. ==> PAOLOR1 e 2. Define table t e Database.. ==> % e 3. Define data se e DB2 system.. : DB2G e 4. Save archive s e e e ( % or blank indicates all ) e e e e e DssssssssssssssssssssssssssssssssssssssssssssssM

Figure 7-7 Data Archive Expert's pull-down menu for specifying starting table

In this first pull-down menu, you provide the name of the table from which you want to archive some rows, in our case LINEITEM. The qualifier already defaults to your user ID, which is OK in this scenario. As you have specified the correct table name already, you do not want a list of all tables with names starting with LINEITEM. Therefore, you specify N for whether to provide a table selection list.

Attention again: Data Archive Expert is case sensitive! As your table probably has an uppercase name in your DB2 for z/OS catalog, your input in this pull-down menu should be in uppercase; otherwise, DB2 issues the message: No tables were found.

Press Enter for the pull-down menu in Figure 7-8.
AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCENARIO1 DB2 system. : DB2G Creator....==> Esssssssssss Search for related Tables? ssssssssssssN Description..==> e e Complete archive ru e Command ==> e Perform orphan row/ e e e Find related tables? ==> N (Yes/No) e Select an archive def e e e Starting point table: LINEITEM e 1. Define archive e Creator : PAOLOR1 e 2. Define table t e Database name... : PAOLODB e 3. Define data se e DB2 system.... : DB2G e 4. Save archive s e e e e e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM

Figure 7-8 Data Archive Expert's pull-down menu for finding related tables

On this second pull-down menu you are asked whether you want Data Archive Expert to look for tables that are referentially connected to your LINEITEM table, as you may want to archive some of their data together with the LINEITEM rows you want to archive. As this is not the case in our first scenario, enter N and press Enter for the panel in Figure 7-9.

AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : SCENARIO1 Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM Creator..... : PAOLOR1 Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter r LINEITEM PAOLOR1 PAOLODB SP N ******************************* Bottom of data ********************************

Figure 7-9 Archive Unit Definition

Attention (repetition): On this panel you see that the Del rule for table LINEITEM is set to N by default. As you want to delete the inactive rows from this table once they have been archived, you must request to change this rule by specifying r as the line command in front of this table.
Now press Enter for the panel in Figure 7-10.

AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : SCENARIO1 Starting point table: LINEITEM Creato Esssssssssss Select Archive Table Rules ssssssssssssN DB2 sy e AHXV Archive Table Rules e e Command ==> e Line com e e A - Add e Archive specification : SCENARIO1 e R - Rul e Archive unit table : LINEITEM e ilter e Creator : PAOLOR1 e e e Cmd Tab e Table archive rule: e er e e R LIN e Make the table a junction table? N (Yes/No) e ******** e e **************** e Table delete rule: e e e e Delete data from table? Y (Yes/No) e e e e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM

Figure 7-10 Data Archive Expert's pull-down menu for specifying rules

In this pull-down menu you specify Y for deleting the rows in the original source table after they have been archived. Then press Enter again; you can now observe that the Del rule has changed to Y for your table, as shown in Figure 7-11.

AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : SCENARIO1 Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM Creator..... : PAOLOR1 Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter w LINEITEM PAOLOR1 PAOLODB SP Y ******************************* Bottom of data ********************************

Figure 7-11 Request to specify a row filter

Up to now you have specified which tables are the subject of your archive specification, and whether archived rows should be deleted.
Now you proceed in your archive specification from the table level to the row level. In other words, you can specify how to determine which rows are considered inactive and should be archived. This is done in Data Archive Expert through a so-called row filter, which Data Archive Expert converts to a WHERE clause when it accesses DB2. To let Data Archive Expert generate such a WHERE clause, specify W as the line command in front of your table, and press Enter for the panel in Figure 7-12.

AHXV Starting Point Table Row Filter Row 1 of 16 Command ==> Scroll ===> CSR Archive specification : SCENARIO1 Starting point table: LINEITEM Creator : PAOLOR1 DB2 system: DB2G Row filter ==> Columns Num Type Length Scale L_ORDERKEY 1 INTEGER 4 0 L_PARTKEY 2 INTEGER 4 0 L_SUPPKEY 3 INTEGER 4 0 L_LINENUMBER 4 INTEGER 4 0 L_QUANTITY 5 INTEGER 4 0 L_EXTENDEDPRICE 6 FLOAT 4 0 L_DISCOUNT 7 FLOAT 4 0 L_TAX 8 FLOAT 4 0 L_RETURNFLAG 9 CHAR 1 0 L_LINESTATUS 10 CHAR 1 0

Figure 7-12 Data Archive Expert's panel for specifying a row filter - Part 1

Press F8 to see the table columns shown in Figure 7-13. We were interested in L_SHIPDATE. Now specify your archiving criteria, here: L_SHIPDATE < ' ' to archive all rows from January 1992. In our test data there were no rows for the years before 1992.
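Because the filter you type is appended after a WHERE keyword that the tool generates itself, entering the keyword yourself would produce an invalid statement. A small illustration of that kind of statement assembly follows; SQLite stands in for DB2, and the helper name is ours, not the tool's actual code path.

```python
import sqlite3

def build_archive_query(table, row_filter):
    """Append a user-entered row filter (entered WITHOUT the WHERE
    keyword) to a generated statement, mimicking the row filter idea."""
    query = f"SELECT COUNT(*) FROM {table}"
    if row_filter:
        query += f" WHERE {row_filter}"
    return query

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LINEITEM (L_SHIPDATE TEXT)")
conn.executemany("INSERT INTO LINEITEM VALUES (?)",
                 [("1992-01-10",), ("1992-03-05",)])

q = build_archive_query("LINEITEM", "L_SHIPDATE < '1992-02-01'")
matching = conn.execute(q).fetchone()[0]
print(q)
print(matching)
```

Had the filter been entered as "WHERE L_SHIPDATE < ...", the generated statement would contain WHERE twice and fail.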
AHXV Starting Point Table Row Filter Row 11 of 16 Command ==> Scroll ===> CSR Archive specification : SCENARIO1 Starting point table: LINEITEM Creator : PAOLOR1 DB2 system: DB2G Row filter ==> L_SHIPDATE < ' ' Columns Num Type Length Scale L_SHIPDATE 11 DATE 4 0 L_COMMITDATE 12 DATE 4 0 L_RECEIPTDATE 13 DATE 4 0 L_SHIPINSTRUCT 14 CHAR 25 0 L_SHIPMODE 15 CHAR 10 0 L_COMMENT 16 VARCHAR 44 0 ******************************* Bottom of data ********************************

Figure 7-13 Data Archive Expert's panel for specifying a row filter - Part 2

Attention: When specifying your row filter, use uppercase, and omit the word WHERE.

Press Enter to return to the panel in Figure 7-14, which displays the first part of your selection criteria for this table.

AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : SCENARIO1 Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM Creator..... : PAOLOR1 Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM PAOLOR1 PAOLODB SP Y L_SHIPDATE < ' ******************************* Bottom of data ********************************

Figure 7-14 Data Archive Expert's overview of archive unit definitions

Press F3 to go back to the menu of specification activities shown in Figure 7-15.
AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCENARIO1 DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> Archive old LINEITEM rows to a file (incl. deletion) Complete archive run (delete source data)? ==> Y (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 3 1. Define archive unit (completed) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification (pending)

Figure 7-15 Request to define data set targets

As definition activity 1, Define archive unit, is now completed, you can proceed to the second activity for your archive specification. As this scenario deals with data set archives (rather than table archives, which are discussed in the next scenario), enter 3 on the panel and press Enter, which leads you to the panel in Figure 7-16.

AHXV File Archive Targets Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: SCENARIO1 Creator : PAOLOR1 DB2 system: DB2G Select an option for creating targets: 1 1. Default target data set generation with high-level qualifier. High-level qualifier ==> PAOLOR1 2. Specify data set name for each source table. 3. Specify utility templates for each source table. Source table Target data set =============================================================================== Name : LINEITEM Creator: PAOLOR1 ******************************* Bottom of data ********************************

Figure 7-16 Selecting Data Archive Expert's default data set generation

In a production environment, the use of templates (option 3) is recommended; how to use templates is explained in File archive in batch using a template on page 51. But here, as this is an introductory scenario, accept the default 1 on the panel and let Data Archive Expert create a data set for you according to its own default naming and space conventions.
You can see the data set name Data Archive Expert generates in Figure 7-22 on page 109.
Pressing Enter just gives you the acknowledging message shown in Figure 7-17.

Attention: After you have saved an archive specification, you cannot change this default mapping any more, which is the decision into which tables you are going to archive.

AHXV File Archive Targets --- Default mappings set Command ==> Scroll ===> CSR Archive specification: SCENARIO1 Creator : PAOLOR1 DB2 system: DB2G Select an option for creating targets: 1 1. Default target data set generation with high-level qualifier. High-level qualifier ==> PAOLOR1 2. Specify data set name for each source table. 3. Specify utility templates for each source table. Source table Target data set =============================================================================== Name : LINEITEM Creator: PAOLOR1 ******************************* Bottom of data ********************************

Figure 7-17 Data Archive Expert's confirmation of data set specifications

F3 brings you back to the definition activity menu shown in Figure 7-18.

AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCENARIO1 DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> Archive old LINEITEM rows to a file (incl. deletion) Complete archive run (delete source data)? ==> Y (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 4 1. Define archive unit (completed) 2. Define table targets 3. Define data set targets (active) 4. Save archive specification (pending)

Figure 7-18 Request to save the archive specification

Now you can proceed to the last activity. Select 4 and press Enter. You will see the confirmation message on the panel in Figure 7-19.
AHXV Archive Specifications List Specification saved Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 22:54 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type SCENARIO1 Archive old LINEITEM PAOLOR DEF FILE ******************************* Bottom of data ********************************

Figure 7-19 Data Archive Expert's confirmation message Specification saved

You have now created a new archive specification SCENARIO1. This new archive specification is only defined (State = DEF), but not yet completed (that is, not yet executed), and it is going to archive to a data set (Type = FILE) rather than to a table.

7.3 Run the archive specification

Your next step is to actually perform the archiving of the inactive rows; in other words, to execute or run the archive specification SCENARIO1, which up to now is only defined. You do this by specifying r as the line command in front of your specification SCENARIO1. See Figure 7-20.

AHXV Archive Specifications List Row 1 of 1 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 23:12 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r SCENARIO1 Archive old LINEITEM PAOLOR DEF FILE ******************************* Bottom of data ********************************

Figure 7-20 Request an archive specification run

Press Enter for the panel in Figure 7-21.
AHXV Run Archive Specification Command ==> r Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run archive specification. Specification name: SCENARIO1 DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: Archive old LINEITEM rows to a file (incl. deletion) Archive will be immediately completed during archive. Row filter ==> L_SHIPDATE < ' '

Figure 7-21 Confirm an archive specification run

On this panel, the major characteristics of your archive specification are shown to you for verification; if you are happy with these definitions, you can then trigger their execution by specifying r for Run on the command line and pressing Enter. The resulting panel is shown in Figure 7-22.

AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : SCENARIO1 DB2 system. : DB2G Creator : PAOLOR1 Description : Archive old LINEITEM rows to a file (incl. deletion) Row filter : L_SHIPDATE < ' ' Run.... : 1 =============================================================================== Source table: LINEITEM Creator: PAOLOR1 Del: 0 Target : PAOLOR1.S00085.V0001.D T Ins: 0 ******************************* Bottom of data ********************************

Figure 7-22 Result of an archive specification run

Now the whole archive process should have been performed as requested, and indeed, you can see the message Archive run successful on this panel. On the other hand, what a disappointment: 0 rows have been archived, and accordingly 0 rows have been deleted from the source table PAOLOR1.LINEITEM; apparently the LINEITEM table does not have any line item with a shipment date in January 1992 or earlier. But the target data set has been created anyway, namely PAOLOR1.S00085.V0001.D T. Now press F3 to return to the panel shown in Figure 7-23.
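The generated target name appears to follow the pattern hlq.Snnnnn.Vnnnn.Ddate.Ttime, where the S qualifier carries a specification identifier and the V qualifier a version; the date and time stamps are truncated in the listing above. The reconstruction below is an assumption about the stamp layout for illustration only, not a documented format.

```python
from datetime import datetime

def default_target_name(hlq, spec_id, version, when):
    """Illustrative reconstruction of a default data set naming scheme;
    the D/T qualifier layout is assumed, not documented. Each qualifier
    stays within the 8-character z/OS limit."""
    return (f"{hlq}.S{spec_id:05d}.V{version:04d}"
            f".D{when:%y%m%d}.T{when:%H%M%S}")

# Arbitrary timestamp chosen for the example.
name = default_target_name("PAOLOR1", 85, 1, datetime(2003, 11, 20, 23, 16, 0))
print(name)
```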
AHXV Archive Specifications List Row 1 of 1 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 23:16 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type SCENARIO1 Archive old LINEITEM PAOLOR COM FILE ******************************* Bottom of data ********************************

Figure 7-23 Data Archive Expert confirms a completed archive specification run

As you can see on this panel, the state of your archive specification has changed from defined to complete (State = COM), because you have executed your specification at least once, although no rows have been archived yet. This first archive specification run did not archive any rows; we are now going to show you that you can run the specification again.

7.4 Second run of the archive specification

Specify r again as the line command in front of your archive specification on the panel in Figure 7-24 to run the already completed specification.

AHXV Archive Specifications List Row 1 of 1 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 23:16 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r SCENARIO1 Archive old LINEITEM PAOLOR COM FILE ******************************* Bottom of data ********************************

Figure 7-24 Request for a second run

Press Enter for the panel in Figure 7-25.
AHXV Run Archive Specification Command ==> Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run archive specification. Specification name: SCENARIO1 DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: Archive old LINEITEM rows to a file (incl. deletion) Archive will be immediately completed during archive. Row filter ==> L_SHIPDATE < ' '

Figure 7-25 Data Archive Expert's presentation of a confirmation panel

Figure 7-26 shows how to change the row filter to archive the line items with a shipment date before March 1992.

AHXV Run Archive Specification Command ==> r Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run archive specification. Specification name: SCENARIO1 DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: Archive old LINEITEM rows to a file (incl. deletion) Archive will be immediately completed during archive. Row filter ==> L_SHIPDATE < ' '

Figure 7-26 Changing the row filter before starting the second run

Now specify r for Run on the command line and press Enter. The result of this second run of the specification is shown in Figure 7-27.
138 AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : SCENARIO1 DB2 system. : DB2G Creator : PAOLOR1 Description : Archive old LINEITEM rows to a file (incl. deletion) Row filter : L_SHIPDATE < ' ' Run.... : 2 =============================================================================== Source table: LINEITEM Creator: PAOLOR1 Del: 5112 Target : PAOLOR1.S00085.V0001.D T Ins: 5112 ******************************* Bottom of data ******************************** Figure 7-27 Result of the second archive specification run So, the archive specification run did work: 5112 rows were removed from our source table. 7.5 Result Let us look at what has been achieved and how this is documented. With F3 you go back to the panel in Figure AHXV Archive Specifications List Row 1 of 1 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 23:20 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type H SCENARIO1 Archive old LINEITEM PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 7-28 Request for a historical overview of the specification Specify H for History on this panel; the Data Archive Expert shows you the actions that have been taken for this archive specification SCENARIO1; see Figure DB2 Data Archive Expert for z/os
139 AHXV Archive Specification History Row 1 of 3 Command ===> Scroll ===> CSR Specification name: SCENARIO1 Creator: PAOLOR1 Description: Archive old LINEITEM rows to a file (incl. deletion) Line commands are: S - Show statistics Cmd Run State Run by Date Row filter Defined L_SHIPDATE < ' ' s 1 Complete PAOLOR L_SHIPDATE < ' ' 2 Complete PAOLOR L_SHIPDATE < ' ' ******************************* Bottom of data ******************************** Figure 7-29 Data Archive Expert's overview of an archive specification Please note that the row filter of the archive definition reflects the row filter of the last run. With s you can see the details of a previous run again (see Figure 7-30). Archive specification : SCENARIO1 DB2 system. : DB2G Creator : PAOLOR1 Description : Archive old LINEITEM rows to a file (incl. deletion) Row filter : L_SHIPDATE < ' ' Run.... : 1 =============================================================================== Source table: LINEITEM Creator: PAOLOR1 Del: 0 Target : PAOLOR1.S00085.V0001.D T Ins: 0 ******************************* Bottom of data ******************************** Figure 7-30 DAE's details of a specific run of an archive specification Now let us check whether the archive specification run has really completed, in other words, whether the archived rows have been deleted from the original LINEITEM table. We issued a SELECT COUNT, and Figure 7-31 shows that everything has worked properly. Chapter 7. Scenario 1: Archiving from a table to a file 113
140 SELECT COUNT(*) FROM PAOLOR1.LINEITEM WHERE L_SHIPDATE < ' ' ; DSNE610I NUMBER OF ROWS DISPLAYED IS 1 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Figure 7-31 Test whether inactive rows have been removed Furthermore, if you view the contents of the data set, you can see that the second archive specification run has stored 5112 rows (Figure 7-32), and we assume for the moment that these are the correct ones. In scenario 2, we also check that the correct rows are archived. File Edit Edit_Settings Menu Utilities Compilers Test Help sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss VIEW PAOLOR1.S00085.V0001.D T Columns Command ===> Scroll ===> CSR ****** ***************************** Top of Data ****************************** Ï$ Q á [ : ööòrf delive Ú à 1 Ôe : RF NONE g àl W rª éôerf take B... etc a Z_ Þ äêàu : ç RF NONE ea 9 à ï éôe 5CRF NONE ea ) à<óc rª : AF NONE ib d N àzû Ôe éôerf take B ib Ïð á I 5B±RF NONE ****** **************************** Bottom of Data **************************** Figure 7-32 Browse data set containing archived rows 7.6 Considerations We list here some considerations related to the experience gained in running this scenario. Intermediate table used when archiving to file Internally, Data Archive Expert does not just unload the data from the source table through the UNLOAD utility. Instead, Data Archive Expert first inserts the rows to be archived into an intermediate table, then unloads the rows from that table, and finally deletes the rows in the source table. Here are the reasons why Data Archive Expert works that way: 114 DB2 Data Archive Expert for z/os
141 Your row filter might be too complicated for the WHEN clause of the UNLOAD utility, whereas the WHERE clause of a SELECT can handle your row filter specification. So, Data Archive Expert first stores all rows that satisfy the row filter in the intermediate table; it can then unload from this table without worrying about the WHEN clause. After the UNLOAD, Data Archive Expert must ensure that only those rows that have been unloaded are deleted from the source table. If Data Archive Expert applied the same row filter for the deletion as it did when inserting into the intermediate table, it could, when accessing the source table again, delete more rows than it had unloaded (rows inserted or updated in the meantime), so you would lose this data. Therefore, Data Archive Expert deletes from the source table exactly those rows it can find in the intermediate table. File archives require a complete run Of course, the intermediate table is dropped after the file archive specification run. As a consequence, Data Archive Expert cannot split the work to be done into two steps: first unloading the rows to a data set, and some time later deleting the rows from the source table. The intermediate table is needed for this deletion. On the panel in Figure 7-6 on page 99 you specified: Complete archive run (delete source data)? ==> Y (Yes/No) If you only want to archive rows into data sets without ever deleting them from the original table, you may specify N; the whole run then consists of the first step only. But if you also want to delete the rows from the original table, it is now clear that you cannot specify N, because the deletion cannot be deferred to a later run. Note that this applies to file archives only; specifying N for table archives is fine. Archive scenario 3 shows, among other things, how to split the archive run into two steps when archiving to tables. Attention: Data Archive Expert lets you specify a file archive with an awkward combination: 1. Complete run = N and 2. 
Delete rule for a table = Y In this case, 1) overrules 2), so no rows are deleted from the original table. Names of the archive data sets The two archive runs in this scenario have created the two data sets: PAOLOR1.S00085.V0001.D T PAOLOR1.S00085.V0001.D T You can derive from these names the default naming convention Data Archive Expert uses for the archive data sets: S00085 is the specification number in the system (as we have defined no other specifications ourselves, other users must have defined some specifications already). For the time being, V0001 is constant. (This may change to reflect the version, that is, the number of the specification run.) D is the Julian date.
142 T is the time. In particular, data set names for different tables cannot be distinguished by a table number; they differ only in their (date and) time qualifiers. How to find descriptive information for a given archive data set So, if you are given a data set name and want to know whether you still need that data set, it is not easy to find the related archive specification, or even the related table in this specification whose data is stored in that data set. You have to dig into Data Archive Expert's metadata tables and look there for your specification number, in this case for 85. In Data Archive Expert's SYSTOOLS.AHXSPECS table, you might find something like Example 7-1.
Example 7-1 Specifications in Data Archive Expert's metadata table
SPECID  SPECTYPE  SPECNAME
        ARCHIVE   IVP No Grouper
     4  RETRIEVE  Retrieve w/o Groupr
     5  RETRIEVE  IVP W/O Grouper #2
     6  ARCHIVE   IVPARCHIVE
        OFFARCH1  COPY2TAPE3
    77  OFFARCH1  LINEITE1
    85  OFFARCH1  SCENARIO1
From that you know the specification name SCENARIO1. In Data Archive Expert you can then find out what has been archived into the data set. More examples for querying the metadata can be found in Chapter 12, Additional archive considerations on page Recommendation for data set names As can be seen from the last section, using Data Archive Expert's default names is not optimal from a maintenance perspective; for instance, you might not have the authorization to access Data Archive Expert's metadata tables. Therefore, you should use templates for your file archives, as explained in Chapter 10, Scenario 4: Archiving from RI related tables and deleting from them on page DB2 Data Archive Expert for z/os
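To illustrate the metadata lookup described above, the following query is only a sketch of how you might map the S00085 qualifier of an archive data set back to its specification. The table name SYSTOOLS.AHXSPECS and the column names SPECID, SPECTYPE, and SPECNAME are taken from Example 7-1; verify them against your own metadata tables before relying on this.

```sql
-- Sketch only: find the archive specification behind the
-- archive data set qualifier S00085 (specification number 85).
-- Table and column names as shown in Example 7-1.
SELECT SPECID, SPECTYPE, SPECNAME
FROM SYSTOOLS.AHXSPECS
WHERE SPECID = 85;
```

In our case this should return the single row for SCENARIO1, from which you can then navigate in Data Archive Expert to the details of what was archived.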
143 8 Chapter 8. Scenario 2: Archiving from a table to a table In our first scenario we looked at the case of large tables containing many rows that are rarely accessed anymore. In order to speed up applications and utility runs on these tables, you want to eliminate such inactive rows from your operational, performance-critical tables. We have seen how to do that by archiving rows to a flat file. And by migrating this data set to tape, you can save disk space. You can also directly archive to tape as described in Chapter 5, Stored procedures and batch execution on page 45. In this second scenario we assume a slightly different situation. This time we still have rows which are seldom accessed (inactive in a sense), but we anticipate the need for their retrieval. An example is a large retail company that archives (excludes) its cashier transactions every month to keep the transaction table small. But at the end of the year, the company wants to have these 12 groups of archived rows stored in DB2 tables in order to perform queries against the data of the entire year. In addition, to complete the scenario, these archive tables can be used to feed your data warehouse tables. But the discussion of such an overall data management solution lies beyond the scope of this redbook. Another reason for archiving to tables is the possibility of testing whether your archive specifications work as expected: The results of your archive runs can easily be checked through SQL. In this chapter we show how you can work with DB2 Data Archive Expert to perform such periodical (in our example, yearly) archiving. This second scenario archives inactive rows into a DB2 table. This chapter contains the following: Starting point Define the archive specification Run the archive specification Result of the archive specification A second run of archive specification Using different archive tables per archive run Copyright IBM Corp All rights reserved. 117
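The year-end requirement mentioned in the chapter introduction, querying 12 monthly groups of archived rows as one, can be met with a UNION ALL view over the monthly archive tables. The following is a minimal sketch only; the table names TRANS_JAN through TRANS_DEC are hypothetical placeholders, since the real names depend on the target tables your monthly archive runs create.

```sql
-- Sketch only: a year view over monthly archive tables.
-- TRANS_JAN ... TRANS_DEC are hypothetical placeholders;
-- substitute the archive tables created by your archive runs.
CREATE VIEW YEAR_TRANSACTIONS AS
  SELECT * FROM ARCHIVED.TRANS_JAN
  UNION ALL
  SELECT * FROM ARCHIVED.TRANS_FEB
  -- ... one UNION ALL branch per remaining month ...
  UNION ALL
  SELECT * FROM ARCHIVED.TRANS_DEC;
```

A view of this kind lets year-end queries run unchanged against all twelve months, while the operational transaction table stays small.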
144 8.1 Starting point Let us start easily, so that we do not have to worry about things that distract our attention from the main path of using the Data Archive Expert tool: No space considerations No security considerations In contrast to the first scenario, where we used a 1.1-million-row table, we now use a small table of sample line items of some customer orders, as shown in Example 8-1. By using this small table it is easier to check which rows Data Archive Expert processes, and in which way. In our example, we want to archive to a target table the line items that were shipped before Example 8-1 The small sample table subject to table archiving SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM PAOLOR1.LINEITE2 ; L_ORDERKEY L_PARTKEY L_SHIPDATE DSNE610I NUMBER OF ROWS DISPLAYED IS 12 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Define the archive specification We invoke the Data Archive Expert tool and go to the main panel. We assume that the installation and customization has specified the correct defaults, so we skip option 0 and proceed directly to option 1 in order to specify how we want to archive some rows of our small table. See Figure DB2 Data Archive Expert for z/os
145 AHXV IBM DB2 Data Archive Expert for z/os Select Archive Expert Action ==> 1 DB2 system : DB2G Schema : SYSTOOLS User ID : PAOLOR1 0 View and set Archive Expert settings Time : 14:44 1 Work with archive specifications 2 Work with retrieve specifications X Exit IBM* Licensed Materials - Property of IBM 5655-I95 (c) Copyright IBM Corp All Rights Reserved. *Trademark of International Business Machines Figure 8-1 Data Archive Expert main panel We press Enter for the screen in Figure 8-2, where we request to define a new archive specification by means of an n as line command in front of any existing specification. If there is no previous specification, <empty> is listed as the single specification entry in front of which you can specify the n. AHXV Archive Specifications List Row 1 of 1 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 12:33 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type n LINEITE0 Archive old LINEITE0 PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 8-2 Archive Specification List Now, we press Enter, and go to the Archive Specification Definition screen in Figure 8-3. Chapter 8. Scenario 2: Archiving from a table to a table 119
146 AHXV Archive Specification Definition Command ==> Archive specification: Name......==> LINEITE2 DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> Archive old LINEITE2 rows to a table (incl. deletion) Complete archive run (delete source data)? ==> Y (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 1 1. Define archive unit (required) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification Figure 8-3 Archive Specification Definition On this panel, we first specify: The name, here LINEITE2, of the archive specification we are going to define A description for this new archive specification Y for completing the archive run (that is, delete the rows in the source tables) Y for performing orphan row/changed data detection, which is not discussed here. Attention_1: Data Archive Expert is case sensitive! It takes your input as it is, so you might prefer to specify the name of the specification in uppercase, as is done in the figure, although you can work with lower case specification names. Using uppercase is especially important when dealing with DB2 objects like DB2 tables, as many of your DB2 for z/os data management programs and tools are not able to handle lower case table names. Attention_2: Although you have now specified to delete the rows in the source table after they have been archived, you must specify this deletion request again on a later panel; otherwise, the deletion does not take place. Why? The Y on this panel only indicates that you request the deletion of the rows to occur directly after the rows are copied to the archive - without any delay. On a later panel you must again specify for which specific tables you want the deletion to happen. As we deal with only one table in this scenario, the reason for requesting the deletion twice is not obvious yet. Now we must perform three steps (called definition activities) in order to complete our archive specification: 1. 
Activity 1: Specifying the source tables of our archiving process, called the archive unit
2. Either activity 2 or 3: Specifying the archiving target, either tables or data sets. In this scenario, we perform activity 2 rather than 3, as we want to archive into a table rather than into a data set.
3. Activity 4: Saving our definition
Let us now start with activity 1. Press Enter to get the pull-down menu in Figure DB2 Data Archive Expert for z/os
147 AHXV Archive Specification Definition Command ==> Archive specification: Name......==> LINEITE2 DB2 system. : DB2G Creator....==> Essssssss Specify Starting Point Table ssssssssn Description..==> e e on) Complete archive ru e Command ==> e Perform orphan row/ e e e Provide table selection list? ==> N (Y/N) e Select an archive def e e e Table name. ==> LINEITE2 e 1. Define archive e Creator.. ==> PAOLOR1 e 2. Define table t e Database.. ==> % e 3. Define data se e DB2 system.. : DB2G e 4. Save archive s e e e ( % or blank indicates all ) e e e e e DssssssssssssssssssssssssssssssssssssssssssssssM Figure 8-4 Archive Specification Definition - 1. pull-down menu In this first pull-down menu we provide the name of the table from which we want to archive some rows, in our case LINEITE2. The qualifier already defaults to our user ID, which is OK in this scenario. As we have specified the correct table name already, we do not want a list of all tables with names starting with LINEITE2. Therefore, you specify N for whether to provide a table selection list. Attention: Once again, Data Archive Expert is case sensitive! As your table probably has an uppercase name in your DB2 for z/os catalog, your input in this pull-down menu should better be in uppercase, otherwise, DB2 issues the message: No tables were found. Press Enter for the pull-down menu in Figure 8-5. Chapter 8. Scenario 2: Archiving from a table to a table 121
148 AHXV Archive Specification Definition Command ==> Archive specification: Name......==> LINEITE2 DB2 system. : DB2G Creator....==> Esssssssssss Search for related Tables? ssssssssssssn Description..==> e e Complete archive ru e Command ==> e Perform orphan row/ e e e Find related tables? ==> N (Yes/No) e Select an archive def e e e Starting point table: LINEITE2 e 1. Define archive e Creator : PAOLOR1 e 2. Define table t e Database name... : PAOLODB e 3. Define data se e DB2 system.... : DB2G e 4. Save archive s e e e e e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM Figure 8-5 Archive Specification Definition - 2. pull-down menu As we only want to process a single table in this scenario, specify N when asked whether to find related tables. Press Enter for the screen in Figure 8-6. AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : LINEITE2 Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITE2 Creator..... : PAOLOR1 Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter r LINEITE2 PAOLOR1 PAOLODB SP N ******************************* Bottom of data ******************************** Figure 8-6 Archive Unit Definition panel for Specification Activity 1 Attention: On the panel in Figure 8-6 we see for our table LINEITE2 that the DEL rule is set to N by default. As you want to delete the inactive rows from this table once they have been archived, you must change this rule by first specifying r as line command in front of this table. Now press Enter for the panel in Figure DB2 Data Archive Expert for z/os
149 AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : LINEITE2 Starting point table: LINEITE2 Creato Esssssssssss Select Archive Table Rules ssssssssssssn DB2 sy e AHXV Archive Table Rules e e Command ==> e Line com e e A - Add e Archive specification : LINEITE2 e R - Rul e Archive unit table : LINEITE2 e ilter e Creator : PAOLOR1 e e e Cmd Tab e Table archive rule: e er e e R LIN e Make the table a junction table? N (Yes/No) e ******** e e **************** e Table delete rule: e e e e Delete data from table? Y (Yes/No) e e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM Figure 8-7 Archive Table Rules In the pull-down menu in Figure 8-7 we can and must specify Y for deleting the rows in the original source table after they have been archived. Then press Enter again, and observe that the DEL rule has changed to Y for our table, as shown in Figure 8-8. AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : LINEITE2 Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITE2 Creator..... : PAOLOR1 Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter w LINEITE2 PAOLOR1 PAOLODB SP Y ******************************* Bottom of data ******************************** Figure 8-8 Archive Unit Definition panel, still Activity 1 Now we are at the point where we can specify which rows are considered inactive and should be archived: To let Data Archive Expert generate a WHERE clause, specify w as the line command in front of our table, and press Enter for the panel in Figure 8-9. Chapter 8. Scenario 2: Archiving from a table to a table 123
150 AHXV Starting Point Table Row Filter Row 1 of 16 Command ==> Scroll ===> CSR Archive specification : LINEITE2 Starting point table: LINEITE2 Creator : PAOLOR1 DB2 system: DB2G Row filter ==> Columns Num Type Length Scale L_ORDERKEY 1 INTEGER 4 0 L_PARTKEY 2 INTEGER 4 0 L_SUPPKEY 3 INTEGER 4 0 L_LINENUMBER 4 INTEGER 4 0 L_QUANTITY 5 INTEGER 4 0 L_EXTENDEDPRICE 6 FLOAT 4 0 L_DISCOUNT 7 FLOAT 4 0 L_TAX 8 FLOAT 4 0 Figure 8-9 Row Filter - Part 1 We press F8 to see the table columns shown in Figure We specify our archiving criteria, here: L_SHIPDATE < ' ' in the row filter (WHERE clause) line. AHXV Starting Point Table Row Filter Row 9 of 16 Command ==> Scroll ===> CSR Archive specification : LINEITE2 Starting point table: LINEITE2 Creator : PAOLOR1 DB2 system: DB2G Row filter ==> L_SHIPDATE < ' ' Columns Num Type Length Scale L_RETURNFLAG 9 CHAR 1 0 L_LINESTATUS 10 CHAR 1 0 L_SHIPDATE 11 DATE 4 0 L_COMMITDATE 12 DATE 4 0 L_RECEIPTDATE 13 DATE 4 0 L_SHIPINSTRUCT 14 CHAR 25 0 L_SHIPMODE 15 CHAR 10 0 L_COMMENT 16 VARCHAR 44 0 Figure 8-10 Row Filter - Part 2 Attention: When specifying your row filter, use uppercase, and omit the word WHERE. 124 DB2 Data Archive Expert for z/os
151 We press Enter to return to the panel in Figure 8-11, which displays the first part of our selection criteria for this table. AHXV Archive Unit Definition Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: Name... : LINEITE2 Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITE2 Creator..... : PAOLOR1 Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITE2 PAOLOR1 PAOLODB SP Y L_SHIPDATE < ' ******************************* Bottom of data ******************************** Figure 8-11 Archive Unit Definition - Completed panel We press F3 to go back to the menu of specification activities shown in Figure AHXV Archive Specification Definition Command ==> Archive specification: Name......==> LINEITE2 DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> Archive old LINEITE2 rows to a table (incl. deletion) Complete archive run (delete source data)? ==> Y (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 2 1. Define archive unit (completed) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification (pending) Figure 8-12 Archive Specification Definition, activity 1 completed As definition activity 1 (Define archive unit) is now completed, we can proceed to the second activity for our archive specification. Therefore, we enter 2 (as scenario 2 deals with table archives) on the panel shown in Figure 8-12, and then press Enter. Chapter 8. Scenario 2: Archiving from a table to a table 125
152 AHXV Table Archive Targets Row 1 of 1 Command ==> Scroll ===> CSR Archive specification: LINEITE2 Creator : PAOLOR1 DB2 system: DB2G Select an option for creating targets: 1 1. Default table/default table spaces (one table per table space) 2. Specify table/default table spaces (one table per table space) 3. Default tables/specify table space (all tables in one table space) 4. Specify tables/specify table space (all tables in one table space) 5. Default tables/specify table spaces (any combination) 6. Specify tables/specify table spaces (any combination) Target table space Source table Target table Database =============================================================================== Name : LINEITE2 Name : Name : Creator: PAOLOR1 Creator: Database: DSNDB04 ******************************* Bottom of data ******************************** Figure 8-13 Table Archive Targets As this is still an introductory scenario, we accept the default option 1 on this panel, and hence let Data Archive Expert create an archive table and a table space according to its default naming convention. Pressing Enter just gives an acknowledgement message as shown in Figure AHXV Table Archive Targets Default mappings set Command ==> Scroll ===> CSR Archive specification: LINEITE2 Creator : PAOLOR1 DB2 system: DB2G Select an option for creating targets: 1 1. Default table/default table spaces (one table per table space) 2. Specify table/default table spaces (one table per table space) 3. Default tables/specify table space (all tables in one table space) 4. Specify tables/specify table space (all tables in one table space) 5. Default tables/specify table spaces (any combination) 6. 
Specify tables/specify table spaces (any combination) Target table space Source table Target table Database =============================================================================== Name : LINEITE2 Name : Name : Creator: PAOLOR1 Creator: Database: DSNDB04 ******************************* Bottom of data ******************************** Figure 8-14 Table Archive Targets F3 brings us back to the definition activity menu in Figure DB2 Data Archive Expert for z/os
153 AHXV Archive Specification Definition Command ==> Archive specification: Name......==> LINEITE2 DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> Archive old LINEITE2 rows to a table (incl. deletion) Complete archive run (delete source data)? ==> Y (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 4 1. Define archive unit (completed) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification (pending) Figure 8-15 Archive Specification Definition panel Now we can proceed to the last activity. Specify 4 and press Enter. We go to the panel in Figure AHXV Archive Specifications List Specification saved Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 13:14 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type LINEITE2 Archive old LINEITE2 PAOLOR DEF TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 8-16 Archive Specification List panel As you can see on this panel, we have created a new archive specification LINEITE2 in addition to the already existing LINEITE0. Our new archive specification is DEFined, not yet COMpleted (that is, not yet executed), and it is going to archive to a table (TABL) rather than to a FILE. 8.3 Run the archive specification Our next step is to actually perform the archiving of the inactive rows. We now execute or run the archive specification LINEITE2, which so far is only defined. To this end we must specify an r as the line command in front of our specification LINEITE2. See Figure Chapter 8. Scenario 2: Archiving from a table to a table 127
154 AHXV Archive Specifications List Specification saved Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 13:14 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r LINEITE2 Archive old LINEITE2 PAOLOR DEF TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 8-17 Archive Specification List panel, initiating a RUN - Part 1 We press Enter for the panel in Figure AHXV Run Archive Specification Command ==> r Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run archive specification. Specification name: LINEITE2 DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: Archive old LINEITE2 rows to a table (incl. deletion) Archive will be immediately completed during archive. Row filter ==> L_SHIPDATE < ' ' Figure 8-18 Archive Specification List panel, initiating a RUN - Part 2 On this panel, major characteristics of the archive specification are shown for verification, and if you are happy with their definitions, you can then actually trigger the execution by specifying r for RUN on the command line and pressing Enter. Data Archive Expert presents the results of the execution in Figure DB2 Data Archive Expert for z/os
155 AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : LINEITE2 DB2 system. : DB2G Creator : PAOLOR1 Description : Archive old LINEITE2 rows to a table (incl. deletion) Row filter : L_SHIPDATE < ' ' =============================================================================== Run: 1 Source table: LINEITE2 Creator: PAOLOR1 Del: 5 Act: R Target table: AHXA_ Creator: ARCHIVED Ins: 5 ******************************* Bottom of data ******************************** Figure 8-19 Archive Run Statistics panel Now, the whole archive process should have been performed as requested. Indeed, you can see that five rows have been archived and (afterwards) five rows have been deleted from the source table PAOLOR1.LINEITE2. Note that you now can see into which table the inactive rows are archived, namely into ARCHIVED.AHXA_ You can deduce from that name the default naming convention Data Archive Expert uses for its archive tables: The is the specification number in the system (as we have seen only two specifications, other users must have defined some specifications already). The first 0001 is the run or execution number of this specification. The second 001 is the number of the table within your specification unit. We now press F3, and see the panel in Figure AHXV Archive Specifications List Row 1 of 2 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... 
: 13:26 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 8-20 Back to the Archive Specification List panel On the panel you see that the state of the archive specification LINEITE2 has changed from DEFined (see Figure 8-16 on page 127) to COMpleted. Chapter 8. Scenario 2: Archiving from a table to a table 129
156 8.4 Result of the archive specification Now let us look at what has been achieved. In Example 8-2 you see the archived rows in the archived table ARCHIVED.AHXA_ Example 8-2 Content of the archive table SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM ARCHIVED.AHXA_ ; L_ORDERKEY L_PARTKEY L_SHIPDATE DSNE610I NUMBER OF ROWS DISPLAYED IS 5 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS And in the source table only seven out of the original 12 active rows remain; see Example 8-3. Example 8-3 Content of the source table after archive execution SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM PAOLOR1.LINEITE2 ; L_ORDERKEY L_PARTKEY L_SHIPDATE DSNE610I NUMBER OF ROWS DISPLAYED IS 7 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS As pointed out at the beginning of this chapter, when archiving to a table rather than to a data set, you are still able to access all rows whether archived or not; in simple cases you do not need Data Archive Expert's retrieve feature. You simply create a view with a UNION, which reads from the operational table and from the archive table. Example 8-4 shows a simple view for that purpose: the definition of a view with UNION ALL that includes all columns must take into account that the archive table has an additional timestamp column called AHXEXECUTEDTS. As an example, let us assume you want to retrieve all rows with a shipment date in April; you can proceed as outlined in Figure DB2 Data Archive Expert for z/os
157 Example 8-4 Retrieving all rows with a first view CREATE VIEW LINEITE2_VIEW AS SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM ARCHIVED.AHXA_ UNION ALL SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM PAOLOR1.LINEITE2 ; DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS SELECT * FROM LINEITE2_VIEW WHERE MONTH(L_SHIPDATE) = L_ORDERKEY L_PARTKEY L_SHIPDATE DSNE610I NUMBER OF ROWS DISPLAYED IS 3 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS As you can see, the query returns 1 archived and 2 non-archived rows. 8.5 A second run of archive specification By typing an r as the line command in front of your specification, you can run this specification yet another time. See Figure AHXV Archive Specifications List Row 1 of 3 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 15:45 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 8-21 Initiate a second run of your archive specification Change the row filter as shown in Figure 8-22, so that you now archive the line items from Chapter 8. Scenario 2: Archiving from a table to a table 131
158 AHXV Run Archive Specification Command ==> r Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run archive specification. Specification name: LINEITE2 DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: Archive old LINEITE2 rows to a table (incl. deletion) Archive will be immediately completed during archive. Row filter ==> L_SHIPDATE < ' ' Figure 8-22 Modifying the row filter for your second archive run Press Enter to actually run your specification; the result is shown in the panel in Figure AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : LINEITE2 DB2 system. : DB2G Creator : PAOLOR1 Description : Archive old LINEITE2 rows to a table (incl. deletion) Row filter : L_SHIPDATE < ' ' =============================================================================== Run: 2 Source table: LINEITE2 Creator: PAOLOR1 Del: 1 Act: R Target table: AHXA_ Creator: ARCHIVED Ins: 1 ******************************* Bottom of data ******************************** Figure 8-23 Result of second run of your archive specification Press F3 to get back to the panel in Figure DB2 Data Archive Expert for z/os
159 AHXV Archive Specifications List Row 1 of 3 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 15:49 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type h LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE ******************************* Bottom of data ******************************** Figure 8-24 Requesting an overview about your archive specification runs Specify H for History, press Enter, and you get the list of runs in Figure AHXV Archive Specification History Row 1 of 3 Command ===> Scroll ===> CSR Specification name: LINEITE2 Creator: PAOLOR1 Description: Archive old LINEITE2 rows to a table (incl. deletion) Line commands are: S - Show statistics Cmd Run State Run by Date Row filter Defined L_SHIPDATE < ' ' s 1 Complete PAOLOR L_SHIPDATE < ' ' 2 Complete PAOLOR L_SHIPDATE < ' ' ******************************* Bottom of data ******************************** Figure 8-25 Overview of your archive specification runs Specify S for statistics. The S shows more detail (see Figure 8-26) for entries with a state of Complete rather than Defined. Chapter 8. Scenario 2: Archiving from a table to a table 133
160 AHXV Archive Run Statistics Row 1 of 1 Command ==> Scroll ===> CSR Archive specification : LINEITE2 DB2 system. : DB2G Creator : PAOLOR1 Description : Archive old LINEITE2 rows to a table (incl. deletion) Row filter : L_SHIPDATE < ' ' =============================================================================== Run: 1 Source table: LINEITE2 Creator: PAOLOR1 Del: 5 Act: Target table: AHXA_ Creator: ARCHIVED Ins: 5 ******************************* Bottom of data ******************************** Figure 8-26 Details of a performed archive specification run Receive all data through yet another view with UNIONs as shown in Example 8-5. Example 8-5 Retrieving all rows with a second view CREATE VIEW LINEITE2_VIEW2 AS SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM ARCHIVED.AHXA_ UNION ALL SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM ARCHIVED.AHXA_ UNION ALL SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE FROM PAOLOR1.LINEITE2 ; DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS SELECT * FROM LINEITE2_VIEW2 -- WHERE MONTH(L_SHIPDATE) = L_ORDERKEY L_PARTKEY L_SHIPDATE DSNE610I NUMBER OF ROWS DISPLAYED IS 12 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS As you can see, the query returns both the archived and the non-archived rows. 134 DB2 Data Archive Expert for z/os
161 8.6 Using different archive tables per archive run As you have seen, you had to create a new view after each specification run. In the process you also run up against the current DB2 V7 limitation of 16 tables in a view. Therefore, you might want to always archive into the same table. In Chapter 9, Scenario 3: Archiving from RI related tables and deleting from one on page 137 we show you how to do that. Chapter 8. Scenario 2: Archiving from a table to a table 135
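As a preview of what this buys you: once every run inserts into one fixed archive table, a single view defined once covers all archived and active rows, no matter how many runs follow. A sketch, where ARCHIVED.LINEITE2_ARCH stands for such a hypothetical cumulative archive table:

```sql
-- One view over the operational table and a single, cumulative
-- archive table; it never needs redefining after later runs.
-- ARCHIVED.LINEITE2_ARCH is a hypothetical combined archive table.
CREATE VIEW LINEITE2_ALL AS
  SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE
    FROM ARCHIVED.LINEITE2_ARCH
  UNION ALL
  SELECT L_ORDERKEY, L_PARTKEY, L_SHIPDATE
    FROM PAOLOR1.LINEITE2 ;
```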
163 9 Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one In Chapter 8, Scenario 2: Archiving from a table to a table on page 117 you have seen how to archive a single table. That table contained a lot of key values that cannot be easily understood outside of the original application without the pertaining information: For some usage scenarios, the archived LINEITEM rows on their own are just a lot of part numbers, which do not have any description. In this chapter we describe for the first time how to archive related information, such as part names, together with the main table you want to archive. We also outline how DB2 Data Archive Expert for z/os can help you find the list of tables that contain the information you need in order to understand the meaning of the LINEITEM rows. We start from the business requirement: Starting last year, our USA-based company has stopped deliveries to the United Kingdom. This is because a business partner in the UK is now doing it for us. Therefore, we want to exclude old order line items from the large and busy LINEITEM table if they were delivered to the UK, since they are probably not going to be accessed anymore. We define old by a combination of the shipment date and the receipt date of the part in that order item. In order to exclude these LINEITEM rows, and still have them stored somewhere, we want to archive these LINEITEM rows together with sufficient descriptive information from the other related tables. But, we only want to delete these LINEITEM rows, not any rows of other tables, as these might be needed for other LINEITEM rows not archived or deleted. This chapter contains the following: The objectives of this scenario Digging into the scenario Data before starting the archive specification Start by defining the archive unit Define archive target Run your specification - Step 1 Run your specification - Step 2 Copyright IBM Corp All rights reserved. 137
164 Second run using the same target tables Checking whether too many rows have been archived 138 DB2 Data Archive Expert for z/os
165 9.1 The objectives of this scenario Before digging into Data Archive Expert and its panels, let us clarify which challenges are solved in this chapter, so that you can decide whether you are interested in reading this chapter: As pointed out, the main purpose in this scenario is to add data (mostly entire rows) from other tables to the archive consisting mainly of LINEITEM rows. In addition to that, we want Data Archive Expert to come up with the list of tables that contain the additional information. Of course, the Referential Integrity (RI) defined in DB2 is optimal for this purpose. Hence, you will learn how to let Data Archive Expert find the RI-related tables, and which rows to choose to be archived together with the LINEITEM rows. After Data Archive Expert has presented a list of RI-related tables, you might want to add additional tables to the list of RI-related tables. This becomes necessary in the very common case where some tables are referentially related, but you have chosen to enforce the integrity of this referential relationship through your application rather than through DB2. Actually, it is quite common that some lookup tables (like NATION in the following example) are not subjected to DB2-enforced RI in order to reduce the size of the RI structures in DB2, for instance, for maintenance purposes. Therefore, this chapter tells you how to manually add tables to a given list. Tip: Data Archive Expert is also able to find tables with application-enforced referential relationships, or even otherwise related tables. If you want to use this feature, make sure you have a look at Chapter 6, Optionally defining DB2 Grouper Client on page 65 for examples on defining the Grouper GUI Client, and Chapter 11, Scenario 5: Archiving Grouper discovered related tables on page 211. 
Similarly, you might want to exclude a table from the Data Archive Expert-provided list of RI-related tables, as you are not interested in storing the rows of this particular table in the archive together with the LINEITEM rows. This chapter tells you how to manually exclude tables from the given list. When you include multiple tables in your archive specification, a question concerning the granularity arises: It appears first that you can choose to either store an entire row of an additional table together with your LINEITEM row, or to not store this additional row at all. But there is a third choice: If two tables are related through an intermediate table, you may only be interested in archiving the data of these first two tables, but not in archiving the data of this intermediate table. Yet, you need this intermediate table to connect the data of the two tables for archiving the correct pairs of rows, and again when retrieving related row pairs from these tables. This chapter tells you how to advise Data Archive Expert to only archive the connecting columns of this intermediate table rather than its entire rows. Preview: You specify such a table as a so-called junction table in the archive specification, meaning it only serves as a connecting table. Furthermore, when dealing with multiple tables in your archive specification, it is very likely that the criterion for which rows to archive is based on multiple columns of different tables - not just based on one table. In other words, the row filter (which Data Archive Expert converts to a WHERE clause) contains columns of different tables. This chapter shows by example how you can specify the row filter correctly so that Data Archive Expert converts it to a valid WHERE clause when Data Archive Expert accesses DB2. Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 139
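Conceptually, such a cross-table row filter ends up as a join across the archive unit, with the junction table contributing only its connection keys. The following sketch illustrates the kind of statement that results for the scenario of this chapter; the date literals are placeholders, not values taken from the scenario:

```sql
-- LINEITEM_TEST rows selected by a row filter spanning several tables;
-- ORDER_TEST serves purely as the junction to CUSTOMER_TEST.
SELECT L.*
  FROM PAOLOR1.LINEITEM_TEST L
  JOIN PAOLOR1.ORDER_TEST    O ON O.O_ORDERKEY  = L.L_ORDERKEY
  JOIN PAOLOR1.CUSTOMER_TEST C ON C.C_CUSTKEY   = O.O_CUSTKEY
  JOIN PAOLOR1.NATION_TEST   N ON N.N_NATIONKEY = C.C_NATIONKEY
 WHERE N.N_NAME = 'UK'                   -- customer located in the UK
   AND (L.L_SHIPDATE    < '2002-01-01'   -- placeholder cut-off dates
    OR  L.L_RECEIPTDATE < '2002-01-01') ;
```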
166 Although not related to the previous points, other issues are tackled in this chapter as well: You can learn how to use the same tables for repeated runs or executions of your archive specification, rather than letting Data Archive Expert create new archive target tables for each run. You may want to use the same tables for multiple archive runs for two reasons: You do not explode the number of tables in your DB2 system, affecting the size of the catalog, and causing increasing maintenance effort. You can access all archived data with one view rather than having to define and use a new view after each archive run (see scenario 2), and potentially hitting the limit of 16 tables in a view. If you are cautious, you might want to first check which table rows are inserted into the archive tables before you actually allow Data Archive Expert to delete exactly these rows from the source tables, which is similar to using CHECK DATA with exception tables but with DELETE NO. In this chapter, you learn how to run the archive specification in two steps: First, copy the rows into the target tables, and secondly, complete (which is the Data Archive Expert term) the run, that is, delete the rows from the source tables. This chapter demonstrates how you can change the row filter in a succeeding run. Also, for test purposes, this chapter tells you how to trace Data Archive Expert activities: You can specify which kind of messages Data Archive Expert should write in its log. Last but not least, if you and your colleagues have already defined many archive specifications, you must learn how to filter these specifications so that you only get a list of the specifications you want to see. As you certainly already noticed, this chapter archives into tables again, rather than into data sets. Why? Simply because this way it is much easier to check whether the following already very complex example works correctly. Are you interested in the stuff you can learn in this chapter? 
Then jump on the bandwagon, join the gang, and proceed. 9.2 Digging into the scenario First, let us have a look at how our tables are related. This is shown in Figure 9-1. Notice that in contrast to the sample data used elsewhere in this redbook, we assume that the NATION table is not part of the DB2-enforced RI. 140 DB2 Data Archive Expert for z/os
167 Figure 9-1 The data schema: NATION (N_NATIONKEY, N_NAME, ...), PART (P_PARTKEY, P_NAME, ...), PARTSUPP (PS_PARTKEY, PS_SUPPKEY, ...), CUSTOMER (C_CUSTKEY, C_NAME, C_ADDRESS, C_NATIONKEY, ...), ORDER (O_ORDERKEY, O_CUSTKEY, ...), and LINEITEM (L_ORDERKEY, L_LINENUMBER, L_PARTKEY, L_SUPPKEY, L_SHIPDATE, L_RECEIPTDATE, ...); the relationship between CUSTOMER and NATION is application enforced, not DB2-enforced. Again, in this scenario our company does not deliver to the United Kingdom anymore. Concerning the customer orders, the LINEITEM rows that pertain to the UK and are old enough should be archived. Together with each archived LINEITEM row, we want to have archived the actual name of the ordered part (not just the part number), and the customer name and the customer's address and nation (not just the customer number and the nation number). On the other hand, we are not interested in any additional supplier information pertaining to a particular LINEITEM row. So, we can conclude that we need the following source tables: LINEITEM, our so-called starting point table in Data Archive Expert. In order to archive not just the numbers in LINEITEM: CUSTOMER, as we want to archive the customer address and filter on C_NATIONKEY; PART, as we want to archive the actual part name (P_NAME); NATION, as we want to archive the nation's name (N_NAME); ORDER - but just as a connecting table, or junction table, in order to get to the related CUSTOMER table rows. We do not want to archive more than just the connection keys of ORDER, that is, the columns O_ORDERKEY and O_CUSTKEY. On the contrary, the PARTSUPP table is not needed at all, as we can skip this table when we want to proceed from the LINEITEM table to the related rows in the PART table. What data are we going to delete? Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 141
168 Those LINEITEM rows, for which both of the following are true: a. L_SHIPDATE or L_RECEIPTDATE are older than some given dates b. The corresponding customer has an address in the UK. What data should not be deleted? Rows from any other table. What are the reasons: 1. ORDER: As LINEITEM is a dependent entity of ORDER, a pertaining order is still needed for possibly existing line items not to be archived, e.g., because they are not old enough. 2. ORDER, PART: Both these tables store entity types. And such entities must not be deleted only because a (1:n) relationship between these two entities does not exist anymore. 9.3 Data before starting the archive specification In order to control our specification and what Data Archive Expert does with it, we should have a look at the current content of our test tables. Please have a look at Example 9-1 through Example 9-6. Please note that our sample table names have a trailing _TEST. Example 9-1 shows the original content of NATION. Example 9-1 Original content of NATION SELECT * FROM NATION_TEST ORDER BY N_NATIONKEY ; N_NATIONKEY N_NAME N_REGIONKEY N_COMMENT UK 2 zqn3okwz1wln7pls3ohcgn56kp5 2 USA 1 glms0nacamnbcj2klki7rcpngpx 3 EIRE 2 4yMO AhnQ5Lh wzqam662aw1by DSNE610I NUMBER OF ROWS DISPLAYED IS 3 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-2 shows the original content of PART. Example 9-2 Original content of PART SELECT * FROM PART_TEST ORDER BY P_PARTKEY ; P_PARTKEY P_NAME P_MFGR SHIP Manufactur 40 CAR Manufactur 50 BOAT Manufactur 60 MOTORBIKE Manufactur 70 PLANE Manufactur 80 TRUCK Manufactur DSNE610I NUMBER OF ROWS DISPLAYED IS 6 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS DB2 Data Archive Expert for z/os
169 Example 9-3 shows the original content of PARTSUPP. Example 9-3 Original content of PARTSUPP SELECT * FROM PARTSUPP_TEST ORDER BY PS_PARTKEY, PS_SUPPKEY ; PS_PARTKEY PS_SUPPKEY PS_AVAILQTY PS_SUPPLYCOST PS_COMMENT E OCA5ghw0P0gS3n2jCS E+03 6S66zNlykhii26wwAxz1PRM E+03 MMNOM3BnMM6NBzjB 2mg i E+03 BPOgj3k MgQR2 x6kn3br6l E+03 nso7mln4n7llxgaym2mznno E+03 00PL56QkQRSkg2z7MANNj4i E k jlciznlobl62np4ll E+03 PyRmlwO76kO3igxhS64h5x E+03 P7 437MmnM0Pik lawbj0gs E+03 mqsg0n6a74lm2cl7is221c E+03 5mzk1mC6 lll25p6crsghqw E+03 y5bny3aw02nxymxgzp5bs14 DSNE610I NUMBER OF ROWS DISPLAYED IS 12 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-4 shows the original content of CUSTOMER. Example 9-4 Original content of CUSTOMER SELECT C_CUSTKEY, C_NAME, SUBSTR(C_ADDRESS, 1, 12) AS C_ADDRESS, C_NATIONKEY FROM PAOLOR1.CUSTOMER_TEST ; C_CUSTKEY C_NAME C_ADDRESS C_NATIONKEY A_CUSTOMER LONDON B_CUSTOMER BELFAST C_CUSTOMER NEW YORK D_CUSTOMER CHIKAGO E_CUSTOMER BELFAST F_CUSTOMER MANCHESTER G_CUSTOMER DUBLIN H_CUSTOMER LIVERPOOL 1 DSNE610I NUMBER OF ROWS DISPLAYED IS 8 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-5 shows the original content of ORDER. Example 9-5 Original content of ORDER SELECT * FROM ORDER_TEST ORDER BY O_ORDERKEY, O_CUSTKEY ; O_ORDERKEY O_CUSTKEY O_ORDERSTATUS O_TOTALPRICE O_ORDERDATE O_ORDERP O E LOW O E URGENT Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 143
170 3 700 F E LOW O E LOW F E LOW F E NOT SP O E HIGH O E HIGH F E MEDIUM DSNE610I NUMBER OF ROWS DISPLAYED IS 9 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-6 shows the original content of LINEITEM. Example 9-6 Original content of LINEITEM SELECT L_ORDERKEY, L_LINENUMBER, L_PARTKEY, L_SHIPDATE, L_RECEIPTDATE, L_SUPPKEY FROM LINEITEM_TEST ORDER BY L_SHIPDATE, 1, 2 ; L_ORDERKEY L_LINENUMBER L_PARTKEY L_SHIPDATE L_RECEIPTDATE L_SUPPKEY DSNE610I NUMBER OF ROWS DISPLAYED IS 18 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Start by defining the archive unit In Figure 9-2 you see Data Archive Expert by the now well-known starting panel. 144 DB2 Data Archive Expert for z/os
171 AHXV IBM DB2 Data Archive Expert for Select Archive Expert Action ==> 0 Settings not modified DB2 system : DB2G Schema : SYSTOOLS User ID : PAOLOR1 0 View and set Archive Expert settings Time : 20:02 1 Work with archive specifications 2 Work with retrieve specifications X Exit IBM* Licensed Materials - Property of IBM 5655-I95 (c) Copyright IBM Corp All Rights Reserved. *Trademark of International Business Machines Figure 9-2 Data Archive Expert's main menu panel Later in this chapter we want to know what Data Archive Expert does by having a look into Data Archive Expert's log. In order to let Data Archive Expert log as much as possible, you might want to adjust the settings: to this end, specify 0 and press Enter to proceed to the panel in Figure 9-3. AHXV Data Archive Expert Settings Command ===> Scroll ===> CSR_ DB2: DSN7 Metadata Schema: SYSTOOLS User ID: PAOLOR1 Time: 20:03 Set or change the following settings, press <ENTER> then <END>. Log data sets qualifier (xxxxxxxx.ahxcirc.log) (xxxxxxxx.ahxlog).... PAOLOR1 Level of logging (1 - Info 2 - Warning 3 - Error) COMMIT level (1 to rows) Default owner for archive target tables. ARCHIVED retrieve target tables. RETRIEVE Grouper schema names metadata SYSTOOLS stored procedures... EGFTOOLS File archive names working database.... AHXFLWDB working storage group. AHXFLWSG Archive Expert schema name stored procedures.... AHXTOOLS DB2 Authorization ID... PRDDBA Figure 9-3 Data Archive Expert's settings panel Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 145
172 It is important that the user ID PRDDBA, or the RACF group the user is assigned to, has DBADM authority on the target databases to be used for archiving and retrieving. Set the logging level to 1 so that Data Archive Expert documents which SQL commands it issues. Of course, you should not specify 1 in a production environment, as this logging has a negative impact on Data Archive Expert's performance. Press Enter to save these Data Archive Expert settings. If you select 1 for archive specification on the primary Data Archive Expert panel, you will see all your archive specifications, so the panel looks like Figure 9-4. AHXV Archive Specifications List Row 1 of 13 Command ===> FI Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 20:05 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL delete6 another trial after - PAOLOR COM FILE testdelete5 file PAOLOR DEF FILE Figure 9-4 Archive Specification List panel If you want to see certain specifications only, you can filter the scenarios that Data Archive Expert should present to you. Simply specify fi on the command line for a pull-down menu in Figure 9-5 that lets you specify filter criteria. 146 DB2 Data Archive Expert for z/os
173 AHXV Archive Specifications List Row 1 of 13 Command ===> FI Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Fil Essssssssssssss Archive Specification Filter Criteria ssssssssssssssn RE - Ref e e e Command ==> e Line comm e e N - New e Specify archive specification filter criteria. e H - Hist e e T - Defi e Name... ==> scen% e e Creator.. ==> PAOLOR1 e Cmd Name e Type... ==> % (TABL,FILE) e e Status.. ==> % (PDEF,DEF,PCOM,COM) e test e e defe e ( % or blank indicates all ) e SCEN e e LINE e e LINE e e LINE e e LINE DsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssM delete7 trial with tab archiv PAOLOR COM TABL delete6 another trial after - PAOLOR COM FILE testdelete5 file PAOLOR DEF FILE Figure 9-5 Data Archive Expert s pull-down menu for filtering specification list By default, your user ID (here PAOLOR1) is already supplied as creator of the specification. Here you specify that you only want to see your specifications whose names start with scen (% is a wildcard as in LIKE) and Figure 9-6 shows the results. AHXV Archive Specifications List ---- No Specifications exist Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 20:07 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type n <empty> ******************************* Bottom of data ******************************** Figure 9-6 Creating a new archive specification Are you surprised? Remember, Data Archive Expert is case-sensitive, so SCENARIO1 does not match! So we just specify n as the line command in front of the <empty> entry for the panel in Figure 9-7. Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 147
174 AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCEN3TEST DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> archive lineitem-related rows,only purge lineitem rows Complete archive run (delete source data)? ==> n (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 1 1. Define archive unit (required) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification Figure 9-7 Data Archive Expert s panel for general archive specification definitions Now you should try out something new: Executing the archive specification in two distinct steps (remember - only possible for table archives), so specify n for completing the archive run. This way Data Archive Expert stops after having stored the rows in the archive, and waits for you to start the second step - the deleting of the rows in the source tables at a later time. You do not want to change any other pre-supplied value, so you just press Enter for defining the archive unit, that is, the list of tables to archive, and look at the panel in Figure 9-8. AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCEN3TEST DB2 system. : DB2G Creator....==> Essssssss Specify Starting Point Table ssssssssn Description..==> e e rows Complete archive ru e Command ==> e Perform orphan row/ e e e Provide table selection list? ==> Y (Y/N) e Select an archive def e e e Table name. ==> LINEITEM% e 1. Define archive e Creator.. ==> PAOLOR1 e 2. Define table t e Database.. ==> % e 3. Define data se e DB2 system.. : DB2G e 4. Save archive s e e e ( % or blank indicates all ) e e e e e DssssssssssssssssssssssssssssssssssssssssssssssM Figure 9-8 Data Archive Expert s pull-down menu for selecting the starting point table For a reduced list of tables, provide the partial table name LINEITEM with a wildcard, and then ask for a table selection list. 
The result is presented in Figure DB2 Data Archive Expert for z/os
175 AHXV Select Starting Point Table Row 1 of 3 Command ==> Scroll ===> CSR Archive specification: SCEN3TEST DB2 system..... : DB2G Line commands are: S - Select table S* - Select all D - Deselect table D* - Deselect all Cmd * Table name Creator Database Table space LINEITEM PAOLOR1 PAOLODB TS$ITEM LINEITEM_BACKUP PAOLOR1 DSNDB04 LINE15FF s LINEITEM_TEST PAOLOR1 DSNDB04 LINE15VZ ******************************* Bottom of data ******************************** Figure 9-9 Selecting the starting point table from a table list For this scenario, we provide the _TEST tables, so select the LINEITEM_TEST table and you will get the pull-down menu in Figure 9-10 with the one question that is crucial in this scenario: Do you want Data Archive Expert to find related tables? AHXV Select Starting Point Table Row 1 of 3 Command ==> Scroll ===> CSR Archive specification: SCEN3TEST DB2 system..... : DB2G Line commands are: Esssssssssss Search for related Tables? ssssssssssssn S - Select table S* e e D - Deselect table e Command ==> e e e Cmd * Table name e Find related tables? ==> Y (Yes/No) e e e LINEITEM e Starting point table: LINEITEM_TEST e LINEITEM_BACKUP e Creator : PAOLOR1 e S LINEITEM_TEST e Database name... : DSNDB04 e ********************* e DB2 system.... : DB2G e *** e e e e e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM Figure 9-10 Data Archive Expert's pull-down menu for finding related tables With Y for Find related tables? you are asking Data Archive Expert to invoke its sub-component Grouper to find the related tables. Up to now, we have not done anything in Grouper. Therefore, Grouper returns only those tables that belong to the same RI structure in DB2. Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 149
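If you want to cross-check the list that Grouper returns, the DB2-enforced relationships are recorded in the catalog table SYSIBM.SYSRELS. A sketch that lists the direct relationships of the starting point table in both directions:

```sql
-- RI relationships in which LINEITEM_TEST is the dependent table ...
SELECT RELNAME, REFTBCREATOR, REFTBNAME
  FROM SYSIBM.SYSRELS
 WHERE CREATOR = 'PAOLOR1' AND TBNAME = 'LINEITEM_TEST'
UNION ALL
-- ... and in which it is the parent table.
SELECT RELNAME, CREATOR, TBNAME
  FROM SYSIBM.SYSRELS
 WHERE REFTBCREATOR = 'PAOLOR1' AND REFTBNAME = 'LINEITEM_TEST' ;
```

Note that this query shows only directly related tables; Grouper walks the relationships transitively, which is how it reaches tables like CUSTOMER_TEST via ORDER_TEST.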
176 Tip: In Chapter 6, Optionally defining DB2 Grouper Client on page 65 you can see how to generate other kinds of groups of tables in Grouper so that your request in Data Archive Expert to find related tables returns more tables than just the tables that are RI-related in DB2. According to the RI structure we use throughout this scenario, you get the list (in Figure 9-11) of all the tables that are RI-related in DB2 to LINEITEM_TEST. Note that LINEITEM_TEST itself is not listed, as Data Archive Expert assumes that you want to archive its rows in any case, as you started your specification with this table. AHXV Select Related Tables Row 1 of 5 Command ==> Scroll ===> CSR Archive Specification : SCEN3TEST Starting point table : LINEITEM_TEST Creator : PAOLOR1 DB2 system.... : DB2G Line commands are: S - Select table for archive unit S* - Select all tables D - Deselect table D* - Deselect all tables Cmd * Table name Creator Database Table space S CUSTOMER_TEST PAOLOR1 DSNDB04 CUST1LAY S ORDER_TEST PAOLOR1 DSNDB04 ORDERRTE S PART_TEST PAOLOR1 DSNDB04 PARTRTES PARTSUPP_TEST PAOLOR1 DSNDB04 PARTSUPP SUPPLIER_TEST PAOLOR1 DSNDB04 SUPPLIER ******************************* Bottom of data ******************************** Figure 9-11 Selecting archive unit tables from the list of related tables Specify S (=select) for the CUSTOMER_TEST and PART_TEST tables, because you want to archive their related rows together with your LINEITEM_TEST rows. Also, specify S for the ORDER_TEST table, as you need its rows to find the way from your starting LINEITEM_TEST rows to the associated CUSTOMER_TEST rows. If you press Enter, Data Archive Expert accepts your input, and hence, moves the S a bit more to the right, as you can see in Figure DB2 Data Archive Expert for z/os
AHXV Select Related Tables Row 1 of 5 Command ==> Scroll ===> CSR Archive Specification : SCEN3TEST Starting point table : LINEITEM_TEST Creator : PAOLOR1 DB2 system.... : DB2G Line commands are: S - Select table for archive unit S* - Select all tables D - Deselect table D* - Deselect all tables Cmd * Table name Creator Database Table space S CUSTOMER_TEST PAOLOR1 DSNDB04 CUST1LAY S ORDER_TEST PAOLOR1 DSNDB04 ORDERRTE S PART_TEST PAOLOR1 DSNDB04 PARTRTES PARTSUPP_TEST PAOLOR1 DSNDB04 PARTSUPP SUPPLIER_TEST PAOLOR1 DSNDB04 SUPPLIER ******************************* Bottom of data ********************************
Figure 9-12 Data Archive Expert's confirmation of selected tables

F3 brings you back to the panel in Figure 9-13, which lists the tables whose rows you want to archive so far. Note that your initial LINEITEM_TEST table is marked as SP for starting point table.

AHXV Archive Unit Definition Row 1 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter a LINEITEM_TEST PAOLOR1 DSNDB04 SP N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-13 Initiate the addition of a table to the archive unit tables

If you had selected too many tables in the previous panel, you could still exclude some of them from this list by using D for Delete against those tables. In this scenario, on the contrary, you want to archive rows from yet another table, NATION_TEST, because the contents of its N_NAME column should be archived, too.
Therefore, you specify a to add another table and press Enter to get the already familiar panel in Figure 9-14.
AHXV Archive Unit Definition Row 1 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Starting point table: LINEITEM_TEST Creator. : PAOLOR1 Essssssss Select Archive Unit Table(s) ssssssssN DB2 system: DB2G e e e Command ==> e Line commands are: e e A - Add C - Columns e Provide table selection list? ==> N (Y/N) e R - Rules P - Start e e e Table name. ==> NATION_TEST e e Creator.. ==> PAOLOR1 e Cmd Table name e Database.. ==> % e e DB2 system.. : DB2G e A LINEITEM_TEST e e _ CUSTOMER_TEST e ( % or blank indicates all ) e _ ORDER_TEST e e _ PART_TEST e e ********************* DssssssssssssssssssssssssssssssssssssssssssssssM ********
Figure 9-14 Data Archive Expert's pull-down menu for specifying a table or a list of tables

As you know exactly what you are looking for, just specify the table NATION_TEST. With Enter you get the enlarged list in Figure 9-15.

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter r LINEITEM_TEST PAOLOR1 DSNDB04 SP N _ NATION_TEST PAOLOR1 DSNDB04 N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-15 Request to define rules for the archive unit tables

Done! You assume that you have completely specified which table rows should be copied to the archive. The next step is to specify the tables from which the archived rows should be deleted afterwards. Up to now, you see the default setting of Del=N for all tables.
To change this for a specific table, specify r for rules; Enter then provides you with the pull-down menu in Figure 9-16.
AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Starting point table: LINEITEM_TEST Creato Esssssssssss Select Archive Table Rules ssssssssssssn DB2 sy e AHXV Archive Table Rules e e Command ==> e Line com e e A - Add e Archive specification : SCEN3TEST e R - Rul e Archive unit table : LINEITEM_TEST e ilter e Creator : PAOLOR1 e e e Cmd Tab e Table archive rule: e er e e R LIN e Make the table a junction table? N (Yes/No) e _ NAT e e _ CUS e Table delete rule: e _ ORD e e _ PAR e Delete data from table? Y (Yes/No) e ******** e e **************** e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM
Figure 9-16 Data Archive Expert's pull-down menu for defining rules, here delete=yes

Just specify Y for Delete data from table? and with Enter you proceed to Figure 9-17.

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM_TEST PAOLOR1 DSNDB04 SP Y _ NATION_TEST PAOLOR1 DSNDB04 N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N r ORDER_TEST PAOLOR1 DSNDB04 N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-17 Request to define rules for another table

Observe that Data Archive Expert's delete rule has changed to Y for the LINEITEM_TEST table.
Tip: In this scenario we delete from only one table. If you want to delete from multiple tables, you must be aware of some special considerations; you can find an example of this situation in Chapter 10, Scenario 4: Archiving from RI related tables and deleting from them on page 191.

Something is missing: currently, Data Archive Expert archives the complete ORDER_TEST rows, but in this scenario only the key columns of this table should be archived. To make this happen, specify r as the line command in front of the ORDER_TEST table to get the desired pull-down menu in Figure 9-18.

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Starting point table: LINEITEM_TEST Creato Esssssssssss Select Archive Table Rules ssssssssssssn DB2 sy e AHXV Archive Table Rules e e Command ==> e Line com e e A - Add e Archive specification : SCEN3TEST e R - Rul e Archive unit table : ORDER_TEST e ilter e Creator : PAOLOR1 e e e Cmd Tab e Table archive rule: e er e e _ LIN e Make the table a junction table? Y (Yes/No) e _ NAT e e _ CUS e Table delete rule: e R ORD e e _ PAR e Delete data from table? N (Yes/No) e ******** e e **************** e e DsssssssssssssssssssssssssssssssssssssssssssssssssssM
Figure 9-18 Data Archive Expert's pull-down menu for defining rules, here: junction table

OK, with Y for Make the table a junction table? and Enter you have fixed that: Data Archive Expert clearly points this out with Jct=J in Figure 9-19.
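The effect of marking ORDER_TEST as a junction table can be pictured as a join path: only its key columns are needed to connect LINEITEM_TEST rows to the CUSTOMER_TEST rows that qualify. The following SELECT is purely our sketch of that selection logic, using the table and column names of this scenario; it is not SQL that Data Archive Expert generates:

```sql
-- Sketch of the archive selection path through the junction table:
-- ORDER_TEST only bridges LINEITEM_TEST to CUSTOMER_TEST.
SELECT L.*
  FROM PAOLOR1.LINEITEM_TEST L
     , PAOLOR1.ORDER_TEST O
     , PAOLOR1.CUSTOMER_TEST C
 WHERE L.L_ORDERKEY = O.O_ORDERKEY
   AND O.O_CUSTKEY  = C.C_CUSTKEY
   AND C.C_NATIONKEY = 1;
```

Because ORDER_TEST is a junction table, only its connecting key columns end up in the archive, which is exactly what this scenario requires.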
AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM_TEST PAOLOR1 DSNDB04 SP Y _ NATION_TEST PAOLOR1 DSNDB04 N w CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-19 Request to define a row filter for a non-starting-point table

Your next task is to specify the row filter. In this scenario you want to archive the LINEITEM_TEST rows that were shipped to customers in the United Kingdom, that is, those for which the related CUSTOMER_TEST row has the value 1 (for UK) in its column C_NATIONKEY. Consequently, you want to filter the rows in CUSTOMER_TEST, and hence specify W in front of this table. But look at the message Data Archive Expert issues in Figure 9-20.

AHXV Archive Unit Definition Row filter not allowed Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM_TEST PAOLOR1 DSNDB04 SP Y _ NATION_TEST PAOLOR1 DSNDB04 N W CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-20 Data Archive Expert's error message for row-filter definition request
Important rule: The row filter must be specified on the starting point table.

AHXV Archive Unit Definition Row filter not allowed Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter w LINEITEM_TEST PAOLOR1 DSNDB04 SP Y _ NATION_TEST PAOLOR1 DSNDB04 N CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-21 Request to define a row filter on the starting-point table

Although a bit skeptical, you try and specify W in front of the starting point table (see Figure 9-21) to get the row filter panel in Figure 9-22.
AHXV Starting Point Table Row Filter Row 1 of 16 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Starting point table: LINEITEM_TEST Creator : PAOLOR1 DB2 system: DB2G Row filter ==> CUSTOMER_TEST.C_NATIONKEY = 1 AND ( LINEITEM_TEST.L_SHIPDATE < ' ' OR LINEITEM_TEST.L_RECEIPTDATE < ' ' ) Columns Num Type Length Scale L_ORDERKEY 1 INTEGER 4 0 L_PARTKEY 2 INTEGER 4 0 L_SUPPKEY 3 INTEGER 4 0 L_LINENUMBER 4 INTEGER 4 0 L_QUANTITY 5 INTEGER 4 0 L_EXTENDEDPRICE 6 FLOAT 4 0 L_DISCOUNT 7 FLOAT 4 0 L_TAX 8 FLOAT 4 0 L_RETURNFLAG 9 CHAR 1 0 L_LINESTATUS 10 CHAR 1 0
Figure 9-22 Specifying a row filter with qualified column names

You specify your complete row filter, which is based on two tables: C_NATIONKEY is a column of the CUSTOMER_TEST table, whereas L_SHIPDATE and L_RECEIPTDATE are columns of the LINEITEM_TEST table. To help Data Archive Expert and DB2 handle this difficulty, you were tempted to qualify all columns by their respective tables. After you press Enter, Data Archive Expert seems to accept your input; see Figure 9-23.

Attention: As this is not a valid row filter, we will get an error message in a later step. But we intentionally want to show which actions are still possible, and how to handle such an intuitive (hence likely) yet wrong approach. Of course, later on you will see the valid row filter.
AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter k LINEITEM_TEST PAOLOR1 DSNDB04 SP Y CUSTOMER_TEST.C_NATIONKE _ NATION_TEST PAOLOR1 DSNDB04 N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-23 Request for checking the connections between the tables

Does Data Archive Expert really archive the correct rows? Let us check whether DB2's RI relationships are completely incorporated into Data Archive Expert. So specify k for the LINEITEM_TEST table to see the connection keys on the Data Archive Expert panel in Figure 9-24.

AHXV Archive Unit Table Connections Row 4 of 4 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Archive unit table. : LINEITEM_TEST Creator : PAOLOR1 DB2 system: DB2G Line commands are: A - add column D - delete P - add parent connection C - add child connection Cmd Connection column Parent table Child table === ================== =========================== ============================ L_ORDERKEY Name : ORDER_TEST Name : INTEGER Creator: PAOLOR1 Creator: Column : O_ORDERKEY Column : ******************************* Bottom of data ********************************
Figure 9-24 Data Archive Expert connections of LINEITEM_TEST table

You see that Data Archive Expert has stored the connection to the ORDER_TEST table, which is a parent table of the LINEITEM_TEST table. Furthermore, the connecting columns are displayed: both the foreign key column L_ORDERKEY in the dependent table and the primary key column O_ORDERKEY in the parent table.
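The parent connection shown in Figure 9-24 simply mirrors the foreign key defined in DB2 on the test tables. A definition of roughly this shape is what Grouper picks up; this is only our sketch, and the constraint name FKORDER is our invention (the actual test-table DDL is described in Chapter 2):

```sql
-- Sketch of the kind of DB2 RI definition that produces the
-- L_ORDERKEY/O_ORDERKEY connection; FKORDER is a made-up name.
ALTER TABLE PAOLOR1.LINEITEM_TEST
  ADD FOREIGN KEY FKORDER (L_ORDERKEY)
      REFERENCES PAOLOR1.ORDER_TEST (O_ORDERKEY);
```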
On the other hand, the relationship to the PARTSUPP_TEST table is not shown. Rightly so, as you did not select this table when you were asked which tables should be archived. This reminds you that it is now your duty to provide the connection from the LINEITEM_TEST table to the PART_TEST table: because the intermediate table PARTSUPP_TEST is not part of the archive unit, all RI information from DB2 about this connection path is lost in Data Archive Expert. With F3 you go back to the previous panel (see Figure 9-25).

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM_TEST PAOLOR1 DSNDB04 SP Y CUSTOMER_TEST.C_NATIONKE _ NATION_TEST PAOLOR1 DSNDB04 N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N k PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-25 Request to specify connections for a table

Specify k for the PART_TEST table in order to define the connection through the pull-down menu in Figure 9-26.

Attention: For the same purpose, you could specify k in front of the LINEITEM_TEST table. But this will not work, because Data Archive Expert, according to the information derived from DB2, always picks L_ORDERKEY as the connecting column (of this table), which is of no use for the connection to the PART_TEST table.
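Why is the LINEITEM_TEST-to-PART_TEST path missing in the first place? In DB2 it exists only indirectly, through PARTSUPP_TEST. Definitions along the following lines are a sketch of that indirect path, with column names assumed from the TPC-H-style schema (the actual test-table DDL is in Chapter 2); once PARTSUPP_TEST is left out of the archive unit, the chain is broken:

```sql
-- Sketch: the RI path from LINEITEM_TEST to PART_TEST runs through
-- PARTSUPP_TEST, so it disappears when PARTSUPP_TEST is not archived.
-- FKPSUPP and FKPART are made-up constraint names.
ALTER TABLE PAOLOR1.LINEITEM_TEST
  ADD FOREIGN KEY FKPSUPP (L_PARTKEY, L_SUPPKEY)
      REFERENCES PAOLOR1.PARTSUPP_TEST (PS_PARTKEY, PS_SUPPKEY);
ALTER TABLE PAOLOR1.PARTSUPP_TEST
  ADD FOREIGN KEY FKPART (PS_PARTKEY)
      REFERENCES PAOLOR1.PART_TEST (P_PARTKEY);
```

This is why you must define a direct L_PARTKEY-to-P_PARTKEY connection manually in the next steps.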
AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR EsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssN Archi e AHXV Select Columns Row 1 of 34 e e Command ==> Scroll ===> CSR e Name e e Cre e Archive specification : SCEN3TEST e DB2 e Archive unit table. : PART_TEST e e Creator : PAOLOR1 e Line e DB2 system..... : DB2G e A - e e R - e Line commands are: e e S - Select column D - Deselect column e e e Cmd e Cmd * Column Num Type Length Scale e --- e e _ e s P_PARTKEY 1 INTEGER 4 0 e _ e P_NAME 2 VARCHAR 55 0 e _ e P_MFGR 3 CHAR 25 0 e _ e P_BRAND 4 CHAR 10 0 e K e P_TYPE 5 VARCHAR 25 0 e ***** e P_SIZE 6 INTEGER 4 0 e e P_CONTAINER 7 CHAR 10 0 e DsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssM
Figure 9-26 Defining connections in DAE - Selecting the starting column

Now you can select the P_PARTKEY column of the PART_TEST table as the connecting column to our LINEITEM_TEST table: just enter s next to this column and press Enter. Then you see in Figure 9-27 that Data Archive Expert accepts your input by moving the s a little bit to the right.
AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR EsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssN Archi e AHXV Select Columns Row 1 of 34 e e Command ==> Scroll ===> CSR e Name e e Cre e Archive specification : SCEN3TEST e DB2 e Archive unit table. : PART_TEST e e Creator : PAOLOR1 e Line e DB2 system..... : DB2G e A - e e R - e Line commands are: e e S - Select column D - Deselect column e e e Cmd e Cmd * Column Num Type Length Scale e --- e e _ e S P_PARTKEY 1 INTEGER 4 0 e _ e P_NAME 2 VARCHAR 55 0 e _ e P_MFGR 3 CHAR 25 0 e _ e P_BRAND 4 CHAR 10 0 e K e P_TYPE 5 VARCHAR 25 0 e ***** e P_SIZE 6 INTEGER 4 0 e e P_CONTAINER 7 CHAR 10 0 e DsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssM
Figure 9-27 Defining connections in DAE - Starting column accepted

Proceed by pressing F3 for the panel in Figure 9-28.

AHXV Archive Unit Table Connections Row 1 of 5 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Archive unit table. : PART_TEST Creator : PAOLOR1 DB2 system: DB2G Line commands are: A - add column D - delete P - add parent connection C - add child connection Cmd Connection column Parent table Child table === ================== =========================== ============================ C P_PARTKEY Name : Name : INTEGER Creator: Creator: Column : Column : ******************************* Bottom of data ********************************
Figure 9-28 Defining connections in DAE - Request to select a child table

Here you can add either a parent connection or a child connection to the PART_TEST table, similar to the terms parent table and dependent table in an RI context. From the PART_TEST perspective, LINEITEM_TEST is obviously a dependent table. Therefore, you specify c to add a child table to the PART_TEST table. Data Archive Expert presents the next panel (see Figure 9-29) for choosing the child table.
AHXV Select Table to Connect Row 1 of 4 Command ==> Scroll ===> CSR Archive Specification : SCEN3TEST Table : PART_TEST Creator : PAOLOR1 Connection column : P_PARTKEY Column type : INTEGER Line commands are: S - Select table to connect to Cmd Table name Creator s LINEITEM_TEST PAOLOR1 NATION_TEST PAOLOR1 CUSTOMER_TEST PAOLOR1 ORDER_TEST PAOLOR1 ******************************* Bottom of data ********************************
Figure 9-29 Defining connections in DAE - Select child table from a list

Of course, you select the LINEITEM_TEST table with s and press Enter. Data Archive Expert then presents you with the panel in Figure 9-30, so that you can complete the definition of the connection.

AHXV Select Table to Connect Row 1 of 4 Command ==> Scroll ===> CSR EsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssN Archi e AHXV Select Columns Row 19 of 34 e Tab e Command ==> Scroll ===> CSR e Cre e e Con e Archive specification : SCEN3TEST e Col e Archive unit table. : LINEITEM_TEST e e Creator : PAOLOR1 e Line e DB2 system..... : DB2G e S - e e e Line commands are: e Cmd T e S - Select column D - Deselect column e e e S L e Cmd * Column Num Type Length Scale e N e e C e L_ORDERKEY 1 INTEGER 4 0 e O e s L_PARTKEY 2 INTEGER 4 0 e ***** e L_SUPPKEY 3 INTEGER 4 0 e e L_LINENUMBER 4 INTEGER 4 0 e e L_QUANTITY 5 INTEGER 4 0 e e L_EXTENDEDPRICE 6 FLOAT 4 0 e e L_DISCOUNT 7 FLOAT 4 0 e DsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssM
Figure 9-30 Defining connections in DAE - Selecting a target column

The connecting column in LINEITEM_TEST is obviously L_PARTKEY. Press Enter to see Data Archive Expert's confirmation of your input in Figure 9-31.
AHXV Archive Unit Table Connections Row 1 of 6 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Archive unit table. : PART_TEST Creator : PAOLOR1 DB2 system: DB2G Line commands are: A - add column D - delete P - add parent connection C - add child connection Cmd Connection column Parent table Child table === ================== =========================== ============================ P_PARTKEY Name : Name : LINEITEM_TEST INTEGER Creator: Creator: PAOLOR1 Column : Column : L_PARTKEY ******************************* Bottom of data ********************************
Figure 9-31 Defining connections in DAE - Confirmation panel

Here you see clearly that LINEITEM_TEST is considered a child table rather than a parent table of the PART_TEST table. The connecting columns are also correct. Press F3 to go back one level (see Figure 9-32).

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter k LINEITEM_TEST PAOLOR1 DSNDB04 SP Y CUSTOMER_TEST.C_NATIONKE _ NATION_TEST PAOLOR1 DSNDB04 N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-32 Request to list existing connections

With k you can now review all defined connections of the LINEITEM_TEST table. Press Enter to see the list in Figure 9-33.
AHXV Archive Unit Table Connections Row 2 of 6 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Archive unit table. : LINEITEM_TEST Creator : PAOLOR1 DB2 system: DB2G Line commands are: A - add column D - delete P - add parent connection C - add child connection Cmd Connection column Parent table Child table === ================== =========================== ============================ L_PARTKEY Name : PART_TEST Name : INTEGER Creator: PAOLOR1 Creator: Column : P_PARTKEY Column : === ================== =========================== ============================ L_ORDERKEY Name : ORDER_TEST Name : INTEGER Creator: PAOLOR1 Creator: Column : O_ORDERKEY Column : ******************************* Bottom of data ********************************
Figure 9-33 Data Archive Expert's list of connections per table

As you hoped, you now see two connections, and both connected tables are seen as parent tables from the perspective of the LINEITEM_TEST table. So, you are done with this table and press F3 for the panel in Figure 9-34 in order to continue with the other connections.

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name..
: DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM_TEST PAOLOR1 DSNDB04 SP Y CUSTOMER_TEST.C_NATIONKE _ NATION_TEST PAOLOR1 DSNDB04 N k CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-34 Request for checking the existence of another connection

Specify k to check the connection between CUSTOMER_TEST and ORDER_TEST, then press Enter to see the current state in Figure 9-35.
AHXV Archive Unit Table Connections Row 3 of 6 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Archive unit table. : CUSTOMER_TEST Creator : PAOLOR1 DB2 system: DB2G Line commands are: A - add column D - delete P - add parent connection C - add child connection Cmd Connection column Parent table Child table === ================== =========================== ============================ C_CUSTKEY Name : Name : ORDER_TEST INTEGER Creator: Creator: PAOLOR1 Column : Column : O_CUSTKEY ******************************* Bottom of data ********************************
Figure 9-35 Data Archive Expert's presentation of existing connections for this table

Everything is fine here; Data Archive Expert (with the help of its sub-component Grouper) did a great job for you! You are satisfied and do not want to go on checking the other connections, but one last connection still needs to be defined. So, press F3 for the well-known panel shown in Figure 9-36.

AHXV Archive Unit Definition Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: Name... : SCEN3TEST Creator. : PAOLOR1 DB2 system: DB2G Starting point table: LINEITEM_TEST Creator..... : PAOLOR1 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM_TEST PAOLOR1 DSNDB04 SP Y CUSTOMER_TEST.C_NATIONKE k NATION_TEST PAOLOR1 DSNDB04 N _ CUSTOMER_TEST PAOLOR1 DSNDB04 N _ ORDER_TEST PAOLOR1 DSNDB04 J N _ PART_TEST PAOLOR1 DSNDB04 N ******************************* Bottom of data ********************************
Figure 9-36 Request to connect a manually added table

Remember, the referential integrity between NATION_TEST and CUSTOMER_TEST was not defined in DB2; hence, it is not DB2-enforced. In this scenario we assume that the application enforces the referential integrity between those two tables.
Therefore, Data Archive Expert did not find NATION_TEST as a related table to LINEITEM_TEST, and you manually added NATION_TEST to this list of tables. As a consequence, you now must also manually connect NATION_TEST to one of the other tables, here to CUSTOMER_TEST.
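Had this relationship been defined in DB2 rather than enforced by the application, Grouper would have found NATION_TEST automatically. The missing definition would look roughly like the following; this is only a sketch with a made-up constraint name, and in the scenario it deliberately does not exist:

```sql
-- Sketch of the DB2-enforced RI that is intentionally absent here;
-- FKNATION is a made-up name. The application enforces this instead.
ALTER TABLE PAOLOR1.CUSTOMER_TEST
  ADD FOREIGN KEY FKNATION (C_NATIONKEY)
      REFERENCES PAOLOR1.NATION_TEST (N_NATIONKEY);
```

The manual connection you define next in Data Archive Expert supplies exactly the information this definition would have provided.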
So, you specify k for one of those two tables; more precisely, for the right one. Which one is the parent table of the two? Which one has no column that Data Archive Expert already recognizes as a connecting key column? If you repeat the same sequence of steps you performed for connecting the PART_TEST and the LINEITEM_TEST tables, you should see the entries in Figure 9-37.

AHXV Archive Unit Table Connections Row 1 of 8 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Archive unit table. : NATION_TEST Creator : PAOLOR1 DB2 system: DB2G Line commands are: A - add column D - delete P - add parent connection C - add child connection Cmd Connection column Parent table Child table === ================== =========================== ============================ N_NATIONKEY Name : Name : CUSTOMER_TEST INTEGER Creator: Creator: PAOLOR1 Column : Column : C_NATIONKEY ******************************* Bottom of data ********************************
Figure 9-37 Data Archive Expert's presentation of a defined table connection

Finally, you have not only defined this connection, but the entire archive unit! So, you happily quit this step by pressing F3 twice, and Data Archive Expert presents the panel in Figure 9-38 for your next steps.

9.5 Define archive target

In this scenario we want to archive into tables rather than files. Furthermore, unlike scenario 2, we want to archive a given source table into the same archive table for each run, as opposed to archiving one source table into multiple tables, one for each run of the archive specification. As we also want to specify the table spaces for these archive tables, these table spaces must exist ahead of the first run of the archive specification. To make life easier in this test scenario, you create just one table space for all five archive tables. See Example 9-7.
Example 9-7 Create table space for archive tables CREATE TABLESPACE SCEN3TST IN PAOLODB ; DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS

Then you can start on the panel in Figure 9-38 with the second step of the archive specification: the specification of the archive target.
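Example 9-7 relies entirely on installation defaults for storage, locking, and buffer pool. If your site standards require explicit attributes, a fuller variant might look like the following sketch; the STOGROUP name and the quantities shown are only placeholders, not values used in this scenario:

```sql
-- Sketch with explicit attributes; SYSDEFLT and the space
-- quantities are placeholders for your own standards.
CREATE TABLESPACE SCEN3TST IN PAOLODB
       USING STOGROUP SYSDEFLT
       PRIQTY 720 SECQTY 720
       LOCKSIZE ANY
       BUFFERPOOL BP0;
```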
AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCEN3TEST DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> archive lineitem-related rows,only purge lineitem rows Complete archive run (delete source data)? ==> N (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 2 1. Define archive unit (completed) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification (pending)
Figure 9-38 Request to specify table archives

So, in order to archive to tables rather than files, you specify 2 for the panel in Figure 9-39.

AHXV Table Archive Targets Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: SCEN3TEST Creator : PAOLOR1 DB2 system: DB2G Select an option for creating targets: 4 1. Default table/default table spaces (one table per table space) 2. Specify table/default table spaces (one table per table space) 3. Default tables/specify table space (all tables in one table space) 4. Specify tables/specify table space (all tables in one table space) 5. Default tables/specify table spaces (any combination) 6. Specify tables/specify table spaces (any combination) Target table space Source table Target table Database =============================================================================== Name : LINEITEM_TEST Name : Name : Creator: PAOLOR1 Creator: Database: DSNDB04 =============================================================================== Name : CUSTOMER_TEST Name : Name : Creator: PAOLOR1 Creator: Database: DSNDB04
Figure 9-39 Request to specify archive table names (in one table space)

In contrast to the second scenario, we want all subsequent archive runs to use the archive tables of the first archive run. In this case, you have to obey some rules.

Important rules: If you want to use the same archive table for all subsequent executions of your archive specification, you must specify a table name.
And, if you specify a table space name, this table space must already exist.
Accordingly, you select 4 so that you can specify both the archive table name per source table and the table space (which you created earlier). Option 4 leads you to the panel in Figure 9-40.

AHXV Map Source Tables to Archive Tables Row 1 of 5 Command ==> Scroll ===> CSR Archive specification: SCEN3TEST Primary commands are: C - Clear current table mappings TS - Specify Table space and database for target tables Map source tables to archive tables in the specified database and table space. ======================================================================= Source : LINEITEM_TEST Target.. ==> LINEITEM_ARCH1 Creator: PAOLOR1 Creator. ==> PAOLOR1 Table space : Database.. : DSNDB04 ======================================================================= Source : CUSTOMER_TEST Target.. ==> CUSTOMER_ARCH1 Creator: PAOLOR1 Creator. ==> PAOLOR1 Table space : Database.. : DSNDB04
Figure 9-40 Specifying archive table names per table of the archive unit

On this panel you enter the name of the archive table for each source table, that is, LINEITEM_ARCH1 for LINEITEM_TEST. Remember that Data Archive Expert is case sensitive, so you might want to key in your input in uppercase. You specify the names of the archive tables for the other source tables, too: with F8 you can scroll through the list of source tables. Finally, you specify NATION_ARCH1 for the last table in the list; see Figure 9-41. When you are done and press F3, Data Archive Expert issues the message: Enter table spaces.

AHXV Map Source Tables to Archive Command ==> ts Enter table spaces Scroll ===> CSR Archive specification: SCEN3TEST Primary commands are: C - Clear current table mappings TS - Specify Table space and database for target tables Map source tables to archive tables in the specified database and table space. ======================================================================= Source : NATION_TEST Target.. ==> NATION_ARCH1 Creator: PAOLOR1 Creator. ==> PAOLOR1 Table space : Database..
: DSNDB04 ******************************* Bottom of data ******************************** Figure 9-41 Data Archive Expert s request to provide a table space name for the archive 168 DB2 Data Archive Expert for z/os
As you cannot do this on the current panel, you must enter ts on the command line, and press Enter for the pop-up window in Figure 9-42, which overlays the Map Source Tables to Archive Tables panel (Row 5 of 5, Command ==> TS).

+---------- Specify Table Space and Database ----------+
| Command ==>                                          |
| Provide table space selection list? ==> N (Y/N)      |
| Table space name ==>: SCEN3TST                       |
| Database name.  ==>: PAOLODB                         |
| DB2 system....    : DB2G                             |
|        ( % or blank indicates all )                  |
+------------------------------------------------------+

Figure 9-42 DAE's pop-up window for specifying the table space name

On this panel you directly enter the table space name SCEN3TST in database PAOLODB, as you know that this table space already exists. Alternatively, you can request a list of table spaces to choose from. After pressing Enter, Data Archive Expert applies your input to all archive tables. Remember, you have chosen option 4 (all tables in the same table space); see Figure 9-39 on page 167. You will see the result in Figure 9-43. AHXV Map Source Tables to Archive Tables Row 5 of 5 Command ==> Scroll ===> CSR Archive specification: SCEN3TEST Primary commands are: C - Clear current table mappings TS - Specify Table space and database for target tables Map source tables to archive tables in the specified database and table space. ======================================================================= Source : NATION_TEST Target.. ==> NATION_ARCH1 Creator: PAOLOR1 Creator. ==> PAOLOR1 Table space : SCEN3TST Database.. : PAOLODB ******************************* Bottom of data ******************************** Figure 9-43 Accepted table space name
As you are almost done with your specification, press F3 twice to return to the panel in Figure 9-44. AHXV Archive Specification Definition Command ==> Archive specification: Name......==> SCEN3TEST DB2 system. : DB2G Creator....==> PAOLOR1 Description..==> archive lineitem-related rows,only purge lineitem rows Complete archive run (delete source data)? ==> N (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 4 1. Define archive unit (completed) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification (pending) Figure 9-44 Request to save the specification Now try to save your specification by selecting option 4 and pressing Enter. Unfortunately, you get the error message AHXJ041: Invalid row filter specified, shown in Figure 9-45. AHXV Archive Specification Validation Errors Row 1 of 1 Command ==> Scroll ===> CSR The archive specification definition being saved has failed definition rules. You can return to the definition to fix the errors. Name: SCEN3TEST ENTER to exit and save the definition CANCEL or END to return to definition panel Validation messages: Message AHXJ041: Invalid row filter specified. Error Tokens: DB2JDBCCursor Received Error in Method prepare:sqlcode==> -107 SQ LSTATE ==> Error Tokens ==> <<DB2 7.1 SQLJ/JDBC>> LINEITEM_TEST 8 Figure 9-45 Data Archive Expert returns DB2's SQLCODE -107 SQLCODE -107 (name too long) is apparently caused by the qualified column names in our row filter: Data Archive Expert adds a qualifying table name once again, which also produces a column reference with two qualifiers. In order not to waste all the work you did, press Enter to save this invalid specification anyway. The result is shown in Figure 9-46.
AHXV Archive Specifications List Archive spec saved Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 23:12 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type e SCEN3TEST archive lineitem-rela PAOLOR PDEF TABL ******************************* Bottom of data ******************************** Figure 9-46 Specification saved in state PDEF Note the State=PDEF, where PDEF stands for pending definition. Specify e for Edit in front of your specification. Then change the row filter so that the columns are no longer qualified by their table names. Here we no longer show all intermediate panels, just the new row filter in Figure 9-47. We remain a bit skeptical whether the new row filter will work, even though all column names are unique in the group of tables to be archived. AHXV Starting Point Table Row Filter Row 9 of 46 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Starting point table: LINEITEM_TEST Creator : PAOLOR1 DB2 system: DB2G Row filter ==> C_NATIONKEY = 1 AND ( L_SHIPDATE < ' ' OR L_RECEIPTDATE < ' ' ) Columns Num Type Length Scale L_ORDERKEY 1 INTEGER 4 0 L_PARTKEY 2 INTEGER 4 0 L_SUPPKEY 3 INTEGER 4 0 L_LINENUMBER 4 INTEGER 4 0 L_QUANTITY 5 INTEGER 4 0 L_EXTENDEDPRICE 6 FLOAT 4 0 L_DISCOUNT 7 FLOAT 4 0 L_TAX 8 FLOAT 4 0 L_RETURNFLAG 9 CHAR 1 0 L_LINESTATUS 10 CHAR 1 0 Figure 9-47 Specifying a row filter with unqualified columns But if you go back again to save your archive specification, Data Archive Expert issues another error message; see Figure 9-48.
198 AHXV Archive Specification Validation Errors Row 1 of 1 Command ==> Scroll ===> CSR The archive specification definition being saved has failed definition rules. You can return to the definition to fix the errors. Name: SCEN3TEST ENTER to exit and save the definition CANCEL or END to return to definition panel Validation messages: Message AHXJ041: Invalid row filter specified. Error Tokens: DB2JDBCCursor Received Error in Method prepare:sqlcode==> -206 SQ LSTATE ==> Error Tokens ==> <<DB2 7.1 SQLJ/JDBC>> C_NATIONKEY Figure 9-48 Data Archive Expert returns DB2 s SQLCODE -206 upon saving Of course, we almost expected SQLCODE -206: (column not found), as C_NATIONKEY is not a column of our starting point table LINEITEM_TEST, for which the row filter is defined. Are we stuck? No, the solution is to use subselects! By using a subselect, we construct a valid WHERE clause for the LINEITEM_TEST table, which would search exactly the rows you want to archive. Do not qualify any columns. Then, omit the WHERE keyword, and use the remaining part as your row filter. In our case, we provide the row filter shown in Figure DB2 Data Archive Expert for z/os
199 AHXV Starting Point Table Row Filter Row 9 of 46 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST Starting point table: LINEITEM_TEST Creator : PAOLOR1 DB2 system: DB2G Row filter ==> (L_SHIPDATE < ' ' OR L_RECEIPTDATE < ' ' ) AND L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST_ WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) Columns Num Type Length Scale L_ORDERKEY 1 INTEGER 4 0 L_PARTKEY 2 INTEGER 4 0 L_SUPPKEY 3 INTEGER 4 0 L_LINENUMBER 4 INTEGER 4 0 L_QUANTITY 5 INTEGER 4 0 L_EXTENDEDPRICE 6 FLOAT 4 0 L_DISCOUNT 7 FLOAT 4 0 L_TAX 8 FLOAT 4 0 L_RETURNFLAG 9 CHAR 1 0 Figure 9-49 Specifying a third version of row filter - now with subselect Now you can finally save your archive specification: Note the message Specification saved and the state DEF rather than PDEF in Figure AHXV Archive Specifications List Specification saved Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 12:17 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type SCEN3TEST archive lineitem-rela PAOLOR DEF TABL testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL Figure 9-50 Request to save the specification now successful To summarize our row filter experiences. Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 173
Tip: If your row filter is based on columns of more than one table, you must specify the row filter on the starting point table, and you must specify a valid search condition for this table. Columns that do not belong to this table can be referenced in a subselect. Do not qualify the columns. Some background information: Remember, you specified 1 to let Data Archive Expert produce a detailed log. Browse your personal Data Archive Expert log, PAOLOR1.AHXLOG, and find the SQL statement you see in Example 9-8. Example 9-8 Data Archive Expert's log shows the row filter within a SELECT statement SELECT COUNT(*) FROM "PAOLOR1"."LINEITEM_TEST" A WHERE ((A."L_SHIPDATE" < ' ' OR A."L_RECEIPTDATE" < ' ' ) AND A."L_ORDERKEY" IN (SELECT A."L_ORDERKEY" FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND A."L_ORDERKEY"= O_ORDERKEY AND C_NATIONKEY = 1 ) ) FETCH FIRST 1 ROW ONLY Here you see that Data Archive Expert qualifies most column names with the starting point table name, except the columns within the WHERE clause of the subselect. So, all references to other tables must be within a subselect.
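To make this pattern concrete, here is a minimal sketch of why the subselect form survives the tool's qualification step. It runs the same shape of statement against SQLite instead of DB2, with hypothetical table contents and key values; it is an illustration, not the tool's actual code. The top-level predicates name only columns of the starting point table (here aliased A, as Data Archive Expert does), while columns of other tables appear solely inside the subselect.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE LINEITEM_TEST (L_ORDERKEY INT, L_SHIPDATE TEXT);
    CREATE TABLE ORDER_TEST    (O_ORDERKEY INT, O_CUSTKEY INT);
    CREATE TABLE CUSTOMER_TEST (C_CUSTKEY INT, C_NATIONKEY INT);
""")
# Hypothetical rows: order 1 belongs to a nation-1 customer, order 2 does not.
con.executemany("INSERT INTO LINEITEM_TEST VALUES (?, ?)",
                [(1, '1997-05-01'), (1, '1999-05-01'), (2, '1997-06-01')])
con.executemany("INSERT INTO ORDER_TEST VALUES (?, ?)", [(1, 100), (2, 200)])
con.executemany("INSERT INTO CUSTOMER_TEST VALUES (?, ?)", [(100, 1), (200, 2)])

# The row filter: top-level columns belong to LINEITEM_TEST only;
# C_NATIONKEY appears solely inside the subselect, unqualified.
row_filter = """L_SHIPDATE < '1998-01-01'
    AND L_ORDERKEY IN (SELECT L_ORDERKEY
                         FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST
                        WHERE O_CUSTKEY = C_CUSTKEY
                          AND L_ORDERKEY = O_ORDERKEY
                          AND C_NATIONKEY = 1)"""

# Mimic the tool: wrap the filter in a SELECT against the starting point
# table under alias A. The statement prepares and runs without error.
count = con.execute(
    f"SELECT COUNT(*) FROM LINEITEM_TEST A WHERE ({row_filter})").fetchone()[0]
print(count)  # 1: only the old line item of the nation-1 customer qualifies
```

Had C_NATIONKEY appeared at the top level, prefixing it with A. would produce exactly the -206 (column not found) failure seen above.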
9.6 Run your specification - Step 1 As usual, start from the panel in Figure 9-51. AHXV Archive Specifications List Row 1 of 14 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 12:37 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r SCEN3TEST archive lineitem-rela PAOLOR DEF TABL testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL Figure 9-51 Request to run the archive specification Specify r to see the current row filter (Figure 9-52). AHXV Run Archive Specification Command ==> r Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run the archive specification. Specification name: SCEN3TEST DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: archive lineitem-related rows,only purge lineitem rows Completion of the archive will be deferred. Row filter ==> (L_SHIPDATE < ' ' OR L_RECEIPTDATE < ' ' ) AND_ L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) Figure 9-52 Confirmation to run the archive specification Specify R, press Enter, and let Data Archive Expert execute your archive specification.
Tip: If your row filter is longer than the four rows provided on this panel, the solution is to specify a longer row filter in a batch job. See 5.1, Batch processing considerations on page 46. But be aware that there may be implications when editing the row filter through the panel afterwards. Meanwhile, let us have a little intermezzo to relax your tension by looking at what Data Archive Expert is supposed to archive, so that we can check whether our elaborate specification is not only syntactically, but also semantically correct. The SELECT statement in Example 9-9 lists exactly the LINEITEM_TEST rows we want to archive. Example 9-9 SQL query to control the result of the specification run SELECT L_ORDERKEY, L_LINENUMBER AS L_LINE#, L_SHIPDATE, L_RECEIPTDATE, L_PARTKEY, SUBSTR(P_NAME, 1, 10) AS P_NAME, C_CUSTKEY, SUBSTR(C_NAME, 1, 10) AS C_NAME, SUBSTR(C_ADDRESS, 1, 10) AS C_ADDRESS, C_NATIONKEY, N_NAME FROM (((( LINEITEM_TEST INNER JOIN ORDER_TEST ON L_ORDERKEY = O_ORDERKEY ) INNER JOIN CUSTOMER_TEST ON O_CUSTKEY = C_CUSTKEY ) INNER JOIN NATION_TEST ON C_NATIONKEY = N_NATIONKEY ) INNER JOIN PARTSUPP_TEST ON L_PARTKEY = PS_PARTKEY AND L_SUPPKEY = PS_SUPPKEY ) INNER JOIN PART_TEST ON PS_PARTKEY = P_PARTKEY WHERE (L_SHIPDATE < ' ' OR L_RECEIPTDATE < ' ') AND L_ORDERKEY IN (SELECT O_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) ORDER BY L_SHIPDATE, 1, 2 We see in Example 9-10 that seven LINEITEM_TEST rows satisfy the row filter; hence, they should be archived. Example 9-10 The result set of the control query L_ORDERKEY L_LINE# L_SHIPDATE L_RECEIPTDATE L_PARTKEY P_NAME PLANE TRUCK SHIP CAR PLANE TRUCK MOTORBIKE DSNE610I NUMBER OF ROWS DISPLAYED IS 7 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS continuation: C_CUSTKEY C_NAME C_ADDRESS C_NATIONKEY N_NAME F_CUSTOMER MANCHESTER 1 UK
600 F_CUSTOMER MANCHESTER 1 UK 100 A_CUSTOMER LONDON 1 UK 100 A_CUSTOMER LONDON 1 UK 200 B_CUSTOMER BELFAST 1 UK 200 B_CUSTOMER BELFAST 1 UK 800 H_CUSTOMER LIVERPOOL 1 UK So we want Data Archive Expert to save: Seven LINEITEM rows Five PART rows (as 5 different L_PARTKEYs are displayed) Four CUSTOMER rows (as 4 different C_CUSTKEYs are displayed) One NATION row (as only one C_NATIONKEY is displayed) Now check whether this is the same as what the Data Archive Expert output tells you in Figure 9-53. AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. : DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : (L_SHIPDATE < ' ' OR L_RECEIPTDATE < ' ' ) AND L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) =============================================================================== Run: 1 Source table: CUSTOMER_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: CUSTOMER_ARCH1 Creator: PAOLOR1 Ins: 4 =============================================================================== Run: 1 Source table: LINEITEM_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: LINEITEM_ARCH1 Creator: PAOLOR1 Ins: 7 =============================================================================== Run: 1 Source table: NATION_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: NATION_ARCH1 Creator: PAOLOR1 Ins: 1 =============================================================================== Run: 1 Source table: ORDER_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: ORDER_ARCH1 Creator: PAOLOR1 Ins: 4 Figure 9-53 Result of the first step of the first specification run - Part 1 First of all, Data Archive Expert issues the message Archive run successful, which is fine.
Then, together with the panel in Figure 9-54 (which you get by pressing F8), you see the correct number of rows inserted into the archive tables. But not a single row has been deleted from the source tables! So, what is the use of such an archive run? Ah yes, of course: you have asked Data Archive Expert to split the run into two steps, so Data Archive Expert has worked correctly by not deleting any rows yet.
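The insert counts in the run statistics can be predicted mechanically from the control query: one archive row per distinct key value per table. The short sketch below reproduces that reasoning with hypothetical key values (the actual keys were not all shown above); only the counts are meant to match the scenario.

```python
# Each tuple: (L_ORDERKEY, L_PARTKEY, C_CUSTKEY, C_NATIONKEY) of one
# qualifying LINEITEM_TEST row -- hypothetical values for illustration,
# chosen so that the distinct-key counts match the run statistics.
rows = [
    (1, 10, 100, 1), (1, 20, 100, 1),
    (2, 30, 200, 1), (2, 40, 200, 1),
    (3, 50, 600, 1), (3, 10, 600, 1),
    (4, 20, 800, 1),
]

expected = {
    "LINEITEM_ARCH1": len(rows),                  # one row per line item: 7
    "PART_ARCH1":     len({r[1] for r in rows}),  # distinct part keys:    5
    "ORDER_ARCH1":    len({r[0] for r in rows}),  # distinct order keys:   4
    "CUSTOMER_ARCH1": len({r[2] for r in rows}),  # distinct customers:    4
    "NATION_ARCH1":   len({r[3] for r in rows}),  # distinct nations:      1
}
print(expected)
```

These are exactly the Ins: values reported in Figure 9-53 and Figure 9-54 (7, 5, 4, 4, 1), which is how you verify that the run archived what the control query predicted.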
204 AHXV Archive Run Statistics Row 5 of 5 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. : DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : (L_SHIPDATE < ' ' OR L_RECEIPTDATE < ' ' ) AND L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) =============================================================================== Run: 1 Source table: PART_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: PART_ARCH1 Creator: PAOLOR1 Ins: 5 ******************************* Bottom of data ******************************** Figure 9-54 Result of the first step of the first specification run - Part 2 You may want to check the content of the archive tables. You do so by selecting from the five archive tables (named _ARCH1), as shown in Example 9-11 on page 178 to Example 9-15 on page 179. Example 9-11 shows the content of the archive table for CUSTOMER. Example 9-11 Content of the archive table for CUSTOMER SELECT C_CUSTKEY, C_NAME, SUBSTR(C_ADDRESS, 1, 12) AS C_ADDRESS, C_NATIONKEY FROM PAOLOR1.CUSTOMER_ARCH1 ; C_CUSTKEY C_NAME C_ADDRESS C_NATIONKEY A_CUSTOMER LONDON B_CUSTOMER BELFAST F_CUSTOMER MANCHESTER H_CUSTOMER LIVERPOOL 1 DSNE610I NUMBER OF ROWS DISPLAYED IS 4 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-12 shows the content of the archive table for LINEITEM. Example 9-12 Content of the archive table for LINEITEM SELECT L_ORDERKEY, L_LINENUMBER, L_SHIPDATE, L_RECEIPTDATE, L_PARTKEY FROM PAOLOR1.LINEITEM_ARCH1 ORDER BY L_SHIPDATE, L_ORDERKEY, L_LINENUMBER ; L_ORDERKEY L_LINENUMBER L_SHIPDATE L_RECEIPTDATE L_PARTKEY DB2 Data Archive Expert for z/os
205 DSNE610I NUMBER OF ROWS DISPLAYED IS 7 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-13 shows the content of the archive table for NATION. Example 9-13 Content of the archive table for NATION SELECT * FROM PAOLOR1.NATION_ARCH1 ; N_NATIONKEY N_NAME N_REGIONKEY N_COMMENT UK 2 zqn3okwz1wln7pls3ohcgn56kp5 DSNE610I NUMBER OF ROWS DISPLAYED IS 1 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-14 shows the content of the archive table for ORDER. Example 9-14 Content of the archive table for ORDER SELECT * FROM PAOLOR1.ORDER_ARCH1 ; O_ORDERKEY O_CUSTKEY AHXEXECUTEDTS DSNE610I NUMBER OF ROWS DISPLAYED IS 4 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Example 9-15 shows the content of the archive table for PART. Example 9-15 Content of the archive table for PART SELECT * FROM PAOLOR1.PART_ARCH1 ; P_PARTKEY P_NAME P_MFGR SHIP Manufactur 40 CAR Manufactur 60 MOTORBIKE Manufactur 70 PLANE Manufactur 80 TRUCK Manufactur DSNE610I NUMBER OF ROWS DISPLAYED IS 5 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Please note that the ORDER_ARCH1 table has only three columns in contrast to the source table ORDER_TEST. The reason for this is that you have specified the ORDER_TEST table as a junction table in order to find the way from the LINEITEM_TEST table to CUSTOMER_TEST table. Therefore, only the two connection key columns have been saved, Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 179
206 together with an additional timestamp column, which Data Archive Expert adds to all target archive tables. 9.7 Run your specification - Step 2 So far, only the first part of the first run of your archive specification has been carried out. The deletion of the archived LINEITEM_TEST rows is still missing. This is reflected by State=PCOM (PCOM for pending completion) on the panel in Figure AHXV Archive Specifications List Row 1 of 14 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 14:09 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r SCEN3TEST archive lineitem-rela PAOLOR PCOM TABL testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL Figure 9-55 Request to complete the first run - Part 1 Therefore, specify r once again, and press Enter to come to the next panel in Figure AHXV Run/Complete Archive Specifications Row 1 of 2 Command ===> Scroll ===> CSR Specification name: SCEN3TEST Creator: PAOLOR1 Description: archive lineitem-related rows,only purge lineitem rows Line commands are: C - Complete R - Run Cmd Run State Run by Date Row filter Defined (L_SHIPDATE < ' ' OR L_REC c 1 Pending PAOLOR (L_SHIPDATE < ' ' OR L_REC ******************************* Bottom of data ******************************** Figure 9-56 Request to complete the first run - Part DB2 Data Archive Expert for z/os
207 You see two entries: the archive specification in the state of Defined and the first archive specification run in the state of Pending. So, specify c in order to complete the run, and press Enter. The result of this second step is shown in Figure 9-57 and Figure AHXV Archive Run Statistics Successful completion Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. : DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : =============================================================================== Run: 1 Source table: CUSTOMER_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: CUSTOMER_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 1 Source table: LINEITEM_TEST Creator: PAOLOR1 Del: 7 Act: C Target table: LINEITEM_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 1 Source table: NATION_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: NATION_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 1 Source table: ORDER_TEST Creator: PAOLOR1 Del: 0 Figure 9-57 Result of the completion step of the first run - Part 1 Proceed through F8. AHXV Archive Run Statistics Row 4 of 5 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. 
: DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : =============================================================================== Run: 1 Source table: ORDER_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: ORDER_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 1 Source table: PART_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: PART_ARCH1 Creator: PAOLOR1 Ins: 0 ******************************* Bottom of data ******************************** Figure 9-58 Result of the completion step of the first run - Part 2 So, only LINEITEM_TEST rows were deleted. This is exactly what we wanted! F3 shows you that the first archive specification run is now Completed (see Figure 9-59). Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 181
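The behavior just observed can be pictured as a two-phase protocol: the run phase only copies qualifying rows into the archive tables, and the deferred completion phase deletes rows, and only from the tables flagged for deletion (here LINEITEM_TEST). The following is a minimal sketch of that idea in Python with SQLite and hypothetical data, not the tool's actual implementation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE LINEITEM_TEST  (L_ORDERKEY INT, L_SHIPDATE TEXT);
    CREATE TABLE LINEITEM_ARCH1 (L_ORDERKEY INT, L_SHIPDATE TEXT);
""")
con.executemany("INSERT INTO LINEITEM_TEST VALUES (?, ?)",
                [(1, '1997-05-01'), (2, '1999-05-01'), (3, '1997-06-01')])
row_filter = "L_SHIPDATE < '1998-01-01'"

# Phase 1 -- run: copy the qualifying rows; the source stays untouched.
con.execute(f"INSERT INTO LINEITEM_ARCH1 "
            f"SELECT * FROM LINEITEM_TEST WHERE {row_filter}")
archived = con.execute("SELECT COUNT(*) FROM LINEITEM_ARCH1").fetchone()[0]
source_after_run = con.execute(
    "SELECT COUNT(*) FROM LINEITEM_TEST").fetchone()[0]

# Phase 2 -- complete: delete the archived rows, from this table only.
con.execute(f"DELETE FROM LINEITEM_TEST WHERE {row_filter}")
source_after_complete = con.execute(
    "SELECT COUNT(*) FROM LINEITEM_TEST").fetchone()[0]
print(archived, source_after_run, source_after_complete)  # 2 3 1
```

Note that a naive delete like this re-evaluates the filter at completion time, which only gives the right answer if the data has not changed in between; this is one reason the specification offers the orphan row/changed data detection option.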
208 AHXV Run/Complete Archive Specifications Row 1 of 2 Command ===> Scroll ===> CSR Specification name: SCEN3TEST Creator: PAOLOR1 Description: archive lineitem-related rows,only purge lineitem rows Line commands are: C - Complete R - Run Cmd Run State Run by Date Row filter Defined (L_SHIPDATE < ' ' OR L_REC 1 Completed PAOLOR (L_SHIPDATE < ' ' OR L_REC ******************************* Bottom of data ******************************** Figure 9-59 Data Archive Expert s run completion panel F3 again shows that the whole archive specification has the state COM (=completed), see Figure AHXV Archive Specifications List Row 1 of 14 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 17:21 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type SCEN3TEST archive lineitem-rela PAOLOR COM TABL testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL Figure 9-60 List of archive specifications You want to check whether the correct rows have been deleted? In this sample environment, you can issue an SQL statement to retrieve the remaining rows in LINEITEM_TEST (see Example 9-16). Example 9-16 Not-deleted order line items SELECT L_ORDERKEY, L_LINENUMBER, L_PARTKEY, L_SHIPDATE, L_RECEIPTDATE, L_SUPPKEY FROM LINEITEM_TEST ORDER BY L_SHIPDATE, 1, 2 ; DB2 Data Archive Expert for z/os
209 L_ORDERKEY L_LINENUMBER L_PARTKEY L_SHIPDATE L_RECEIPTDATE L_SUPPKEY DSNE610I NUMBER OF ROWS DISPLAYED IS 11 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS So, the correct rows have been deleted. 9.8 Second run using the same target tables Time flies, the next archive specification run is due. Again, retrieve your archive specification (see Figure 9-61). AHXV Archive Specifications List Row 1 of 14 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... : 17:21 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r SCEN3TEST archive lineitem-rela PAOLOR COM TABL testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL Figure 9-61 Request to run the specification again Specify r as usual and press Enter to proceed to the panel in Figure Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 183
210 AHXV Run Archive Specification Command ==> r Primary commands are: R - Run the archive specification Confirm the row filter and run the archive specification. Change the row filter if desired, then run archive specification. Specification name: SCEN3TEST DB2 system... DB2G Creator..... PAOLOR1 User ID..... PAOLOR1 Description: archive lineitem-related rows,only purge lineitem rows Completion of the archive will be deferred. Row filter ==> ( L_SHIPDATE < CURRENT_DATE - 2 YEARS OR L_RECEIPTDATE < CURRENT_DATE - 1 YEARS ) AND L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) Figure 9-62 Confirmation to run the specification again Now you may want to change the row filter in a way that you do not have to adjust it each time. Use current_date instead of a fixed date. Then initiate a second run of your archive specification by specifying r and pressing Enter. You will see the results in Figure 9-63 and Figure AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. 
: DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : ( L_SHIPDATE < CURRENT_DATE - 2 YEARS OR L_RECEIPTDATE < CURRENT_DATE - 1 YEARS ) AND L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) =============================================================================== Run: 2 Source table: CUSTOMER_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: CUSTOMER_ARCH1 Creator: PAOLOR1 Ins: 2 =============================================================================== Run: 2 Source table: LINEITEM_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: LINEITEM_ARCH1 Creator: PAOLOR1 Ins: 3 =============================================================================== Run: 2 Source table: NATION_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: NATION_ARCH1 Creator: PAOLOR1 Ins: 1 =============================================================================== Run: 2 Source table: ORDER_TEST Creator: PAOLOR1 Del: 0 Figure 9-63 Result of first step of second run - Part DB2 Data Archive Expert for z/os
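The revised row filter above uses CURRENT_DATE arithmetic, so the cutoffs roll forward automatically with every run. DB2's labeled-duration arithmetic is calendar-based: subtracting YEARS adjusts an invalid result (such as February 29 in a non-leap year) to the last valid day of the month. The sketch below mimics that with a hypothetical helper, years_ago, which is not part of the product; it merely illustrates how rolling cutoffs could be computed when building a filter outside DB2.

```python
from datetime import date

def years_ago(d: date, n: int) -> date:
    """Calendar-style subtraction similar to DB2's `date - n YEARS`:
    subtract n from the year, clamping Feb 29 to Feb 28 when needed."""
    try:
        return d.replace(year=d.year - n)
    except ValueError:          # Feb 29 in a non-leap target year
        return d.replace(year=d.year - n, day=28)

# With rolling cutoffs, the row filter never needs manual adjustment:
cutoff_ship = years_ago(date.today(), 2)
cutoff_receipt = years_ago(date.today(), 1)
row_filter = (f"( L_SHIPDATE < '{cutoff_ship}' "
              f"OR L_RECEIPTDATE < '{cutoff_receipt}' )")
print(row_filter)
```

Inside DB2, of course, you simply keep CURRENT_DATE - 2 YEARS in the filter itself, as the panel shows, and let DB2 evaluate it at run time.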
AHXV Archive Run Statistics Row 4 of 5 Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. : DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : ( L_SHIPDATE < CURRENT_DATE - 2 YEARS OR L_RECEIPTDATE < CURRENT_DATE - 1 YEARS ) AND L_ORDERKEY IN (SELECT L_ORDERKEY FROM LINEITEM_TEST, ORDER_TEST, CUSTOMER_TEST WHERE O_CUSTKEY = C_CUSTKEY AND L_ORDERKEY = O_ORDERKEY AND C_NATIONKEY = 1 ) =============================================================================== Run: 2 Source table: ORDER_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: ORDER_ARCH1 Creator: PAOLOR1 Ins: 2 =============================================================================== Run: 2 Source table: PART_TEST Creator: PAOLOR1 Del: 0 Act: R Target table: PART_ARCH1 Creator: PAOLOR1 Ins: 3 ******************************* Bottom of data ******************************** Figure 9-64 Result of first step of second run - Part 2 You can now see that the second run has copied the rows into the same tables as the first run. This is fine, as now you can retrieve the current and the archived rows with one constant view regardless of the number of runs. But again, this second run has not deleted any rows. Before completing the run, go to the specification list (Figure 9-65). AHXV Archive Specifications List Row 1 of 14 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR1 RE - Refresh archive specification list Time... 
: 18:12 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type H SCEN3TEST archive lineitem-rela PAOLOR PCOM TABL testdelete tttt for archive AND PAOLOR PCOM TABL deferred delete kjgk PAOLOR COM TABL SCENARIO1 Archive old LINEITEM PAOLOR COM FILE LINEITE1 Archive old LINEITEM PAOLOR COM FILE LINEITE2 Archive old LINEITE2 PAOLOR COM TABL LINEITE3 table with delete PAOLOR COM TABL LINEITE0 Archive old LINEITE0 PAOLOR COM FILE delete7 trial with tab archiv PAOLOR COM TABL Figure 9-65 Invoking Data Archive Expert s history of a specification Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 185
212 Specify H for History to see how Data Archive Expert presents the current state of your specification (see Figure 9-66). AHXV Run/Complete Archive Specifications Row 1 of 3 Command ===> Scroll ===> CSR Specification name: SCEN3TEST Creator: PAOLOR1 Description: archive lineitem-related rows,only purge lineitem rows Line commands are: C - Complete R - Run Cmd Run State Run by Date Row filter Defined ( L_SHIPDATE < CURRENT_DATE Completed PAOLOR (L_SHIPDATE < ' ' OR L_REC c 2 Pending PAOLOR ( L_SHIPDATE < CURRENT_DATE - 2 ******************************* Bottom of data ******************************** Figure 9-66 Request to complete the second run You can observe: The definition has the row filter of the last run. Run 1 (or version 1 of your specification) is completed. Run 2 (or version 2 of your specification) is pending. Even though you could start yet another, third, run before completing the second run, we try to work properly this time and specify c for completing the second run. Enter presents you with the result of the completed run; see Figure 9-67 and Figure AHXV Archive Run Statistics Successful completion Command ==> Scroll ===> CSR Archive specification : SCEN3TEST DB2 system. 
: DB2G Creator : PAOLOR1 Description : archive lineitem-related rows,only purge lineitem rows Row filter : ================================================================================ Run: 2 Source table: CUSTOMER_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: CUSTOMER_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 2 Source table: LINEITEM_TEST Creator: PAOLOR1 Del: 3 Act: C Target table: LINEITEM_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 2 Source table: NATION_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: NATION_ARCH1 Creator: PAOLOR1 Ins: 0 =============================================================================== Run: 2 Source table: ORDER_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: ORDER_ARCH1 Creator: PAOLOR1 Ins: 0 Figure 9-67 Result of the second complete run - Part DB2 Data Archive Expert for z/os
213 =============================================================================== Run: 2 Source table: PART_TEST Creator: PAOLOR1 Del: 0 Act: C Target table: PART_ARCH1 Creator: PAOLOR1 Ins: 0 ******************************* Bottom of data ******************************** Figure 9-68 Result of the second complete run - Part 2 Result: Data Archive Expert has deleted three rows of the LINEITEM_TEST table. F3 shows the changed state for this second run (see Figure 9-69). AHXV Run/Complete Archive Specifications Row 1 of 3 Command ===> Scroll ===> CSR Specification name: SCEN3TEST Creator: PAOLOR1 Description: archive lineitem-related rows,only purge lineitem rows Line commands are: C - Complete R - Run Cmd Run State Run by Date Row filter Defined ( L_SHIPDATE < CURRENT_DATE Completed PAOLOR (L_SHIPDATE < ' ' OR L_REC 2 Completed PAOLOR ( L_SHIPDATE < CURRENT_DATE - 2 ******************************* Bottom of data ******************************** Figure 9-69 History of a specification after two runs Your archive task is completed, too! As you have archived to tables, users can still retrieve all data, the current rows in the original tables, and the old rows in the archive tables. To support your users, you may want to create a UNION ALL view like the one in Example Example 9-17 Creation of a view to see current and archived rows CREATE VIEW LINEITEM_VIEW AS SELECT L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT, AHXEXECUTEDTS FROM LINEITEM_ARCH1 UNION ALL SELECT L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT, CURRENT_TIMESTAMP AS AHXEXECUTEDTS FROM LINEITEM_TEST ; DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Chapter 9. 
Scenario 3: Archiving from RI related tables and deleting from one 187
216 Then your users can issue global queries like the one in Example 9-18 and see all of the original 18 rows again. Example 9-18 Combined list of current and archived rows SELECT L_ORDERKEY, L_LINENUMBER, L_SHIPDATE, L_RECEIPTDATE, L_PARTKEY FROM LINEITEM_VIEW ORDER BY 1, 2 ; L_ORDERKEY L_LINENUMBER L_SHIPDATE L_RECEIPTDATE L_PARTKEY DSNE610I NUMBER OF ROWS DISPLAYED IS 18 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Checking whether too many rows have been archived We look at our scenario again, and we suddenly realize that we have made a mistake. Our company must still deliver to Northern Ireland, as our business partner delivers to Great Britain only. So we check whether LINEITEM rows with shipping destination Northern Ireland have been archived, in particular, having been deleted from the source table. In our sample data we have only one address in Northern Ireland, namely Belfast, so we only have to check for Belfast (see Example 9-19). Example 9-19 Checking for rows we want to retrieve from the archive SELECT DISTINCT SUBSTR(C_ADDRESS, 1, 12) AS C_ADDRESS, N_NAME FROM ( ( LINEITEM_ARCH1 INNER JOIN ORDER_ARCH1 ON L_ORDERKEY = O_ORDERKEY ) INNER JOIN CUSTOMER_ARCH1 ON O_CUSTKEY = C_CUSTKEY ) INNER JOIN NATION_ARCH1 ON C_NATIONKEY = N_NATIONKEY C_ADDRESS N_NAME BELFAST UK 188 DB2 Data Archive Expert for z/os
215 LIVERPOOL UK LONDON UK MANCHESTER UK DSNE610I NUMBER OF ROWS DISPLAYED IS 4 DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS Indeed, BELFAST is among the rows being archived, so you would like to get these LINEITEMS back into your original table again! Hence you may want to define a retrieve specification. Retrieve specifications are explained in Chapter 7, Scenario 1: Archiving from a table to a file on page 95. Chapter 9. Scenario 3: Archiving from RI related tables and deleting from one 189
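The supported way to bring these rows back is a Data Archive Expert retrieve specification, as noted above. Purely as an illustration of what such a retrieval amounts to, the following sketch copies the archived Belfast line items back into the source table with an INSERT from the archive table. The column list matches the view in Example 9-17; the predicate on C_ADDRESS is our assumption about how the Belfast address is stored.

```sql
-- Illustration only: the supported route is a retrieve specification.
-- Copy the archived Belfast line items back into the source table.
INSERT INTO LINEITEM_TEST
       (L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
        L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
        L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
        L_SHIPMODE, L_COMMENT)
SELECT L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
       L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
       L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
       L_SHIPMODE, L_COMMENT
  FROM LINEITEM_ARCH1
 WHERE L_ORDERKEY IN
       (SELECT O_ORDERKEY
          FROM ORDER_ARCH1
         INNER JOIN CUSTOMER_ARCH1
            ON O_CUSTKEY = C_CUSTKEY
         WHERE C_ADDRESS LIKE 'BELFAST%');
```

After such a copy-back, the same rows would also have to be deleted from LINEITEM_ARCH1, or the UNION ALL view would return them twice.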
217 10 Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them In this scenario we consider the case in which we want to archive to a table, delete the archived rows from the original table, and then move the archived rows to a file. This is a three-stage archiving process as follows: Run a table archive to copy rows from source tables to archive tables. Complete the table archive, which deletes the rows archived from the source table. Archive the table archives to file archives and delete rows from the table archives. This chapter contains the following: Archiving parts in jumbo containers Step 1 - Defining the table archive specification Step 2 - Completing the table archive Step 3 - Defining archiving a table archive to file Copyright IBM Corp All rights reserved. 191
218 10.1 Archiving parts in jumbo containers Since we no longer produce parts which are sold in jumbo sized containers with a retail price greater than $ , we are going to delete them. The starting point is the PART table, which is linked by DB2 enforced RI to other tables. However, we will only be processing the PART, PARTSUPP, LINEITEM, and ORDER tables, and all processing will take place in batch. The table hierarchy is shown in Figure 10-1. In this figure, the tables and relationships that do not belong to the archive unit have been shaded out. [Diagram: RI hierarchy of the REGION, NATION, TIME, SUPPLIER, CUSTOMER, PART, PARTSUPP, ORDER, and LINEITEM tables, annotated with the on delete cascade and on delete no action rules, and distinguishing application RI from DB2 enforced RI] Figure 10-1 Tables and RI being used After defining the archive specification, run it with a batch job, so that your TSO session is not tied up. In addition, you should expect to encounter some issues in this scenario because of the on delete cascade rule being used in two of the defined constraints. 192 DB2 Data Archive Expert for z/os
219 10.2 Step 1 - Defining the table archive specification Define the archive specification using option 1 and listing the existing archive specifications and typing the N command for a new specification. In our example, the specification name is JUMBO#PART#ARCHIVE. Tip: Use uppercase for specification and DB2 object names. Type the specification name and leave Complete archive run (delete source data)? as N so that an archive is run, but no rows will be deleted yet. See Figure AHXV Archive Specification Definition Command ==> Archive specification: Name......==> JUMBO#PART#ARCHIVE DB2 system. : DB2G Creator....==> PAOLOR3 Description..==> Strip out all JUMBO sized packaged parts +RI Complete archive run (delete source data)? ==> N (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 1 1. Define archive unit (required) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification Figure 10-2 Defining the jumbo archiving specification Now proceed to define the archive unit, define the target (table or data set), and finally, save the specification Define archive unit for jumbo containers On entry, a filter screen is supplied where you may specify the starting point table or allow the dialog to search the DB2 catalog. If searching for tables, a list is displayed satisfying your search criteria. Select the starting point table from this list and press the Enter key again. The ISPF dialog shows a pop-up window and asks Find related tables? Reply Y to allow Grouper to find all of the tables that are linked by RI to the starting point table. By default, this list will contain tables related by DB2 enforced RI only. Figure 10-3 shows the list of related tables. Use the S command to select the PARTSUPP, LINEITEM and ORDER tables, and press the Enter key. Do not select PART, as this is already known to the archive unit (starting point table). 
The selection is then confirmed by an uppercase S displayed between the row command entry line and the table names. Press PF3 to leave the screen. Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 193
220 AHXV Select Related Tables Row 1 to 7 of 7 Command ==> Scroll ===> CSR Archive Specification : JUMBO#PART#ARCHIVE Starting point table : PART Creator : PAOLOR3 DB2 system.... : DB2G Line commands are: S - Select table for archive unit S* - Select all tables D - Deselect table D* - Deselect all tables Cmd * Table name Creator Database Table space CUSTOMER PAOLOR3 PAOLODB TSCUST S_ LINEITEM PAOLOR3 PAOLODB TSLINEI NATION PAOLOR3 PAOLODB TSNATION S_ ORDER PAOLOR3 PAOLODB TSORDER S_ PARTSUPP PAOLOR3 PAOLODB TSPSUPP REGION PAOLOR3 PAOLODB TSREGION SUPPLIER PAOLOR3 PAOLODB TSSUPLY ******************************* Bottom of data ******************************** Figure 10-3 Selecting the required related tables On return to the archive unit definition screen, only the tables making up the archive unit will be displayed, that is PART, LINEITEM, ORDER and PARTSUPP, as shown in Figure AHXV Archive Unit Definition Row 1 to 4 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : JUMBO#PART#ARCHIVE Starting point table: PART Creator. : PAOLOR3 Creator..... : PAOLOR3 DB2 system: DB2G Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter W PART PAOLOR3 PAOLODB SP N _ LINEITEM PAOLOR3 PAOLODB N _ ORDER PAOLOR3 PAOLODB N _ PARTSUPP PAOLOR3 PAOLODB N ******************************* Bottom of data ******************************** Figure 10-4 Displayed list of tables making up the archive unit 194 DB2 Data Archive Expert for z/os
221 Define row filter for the jumbo containers Enter the W command to display the next screen, and type in the row filter as shown in Figure 10-5 on the four input lines provided. In this example we are using P_CONTAINER LIKE 'JUMBO%' AND P_RETAILPRICE > and pressing the Enter key to confirm and save the filter. AHXV Starting Point Table Row Filter Row 1 from 9 Command ==> Scroll ===> CSR Archive specification : JUMBO#PART#ARCHIVE Starting point table: PART Creator : PAOLOR3 DB2 system: DB2G Row filter ==> P_CONTAINER LIKE 'JUMBO%' AND P_RETAILPRICE > Columns Num Type Length Scale P_PARTKEY 1 INTEGER 4 0 P_NAME 2 VARCHAR 55 0 P_MFGR 3 CHAR 25 0 P_BRAND 4 CHAR 10 0 P_TYPE 5 VARCHAR 25 0 P_SIZE 6 INTEGER 4 0 P_CONTAINER 7 CHAR 10 0 P_RETAILPRICE 8 FLOAT 4 0 P_COMMENT 9 VARCHAR 23 0 ******************************* Bottom of data ******************************** Figure 10-5 Defining row filter for PART table Figure 10-6 shows the result of pressing the Enter key. If PF3 had been pressed instead of the Enter key, the row filter would not be shown. Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 195
222 AHXV Archive Unit Definition Row 1 to 4 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : JUMBO#PART#ARCHIVE Starting point table: PART Creator. : PAOLOR3 Creator..... : PAOLOR3 DB2 system: DB2G Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ PART PAOLOR3 PAOLODB SP N P_CONTAINER LIKE 'JUMBO% _ LINEITEM PAOLOR3 PAOLODB N _ ORDER PAOLOR3 PAOLODB N _ PARTSUPP PAOLOR3 PAOLODB N ******************************* Bottom of data ******************************** Figure 10-6 Result of pressing enter key to save the row filter Archive unit table delete rules Our intention is to archive the selected rows and then to be able to delete archived rows. This is done using the R for rules row command, and setting the delete field to Y for each of the four tables. Figure 10-7 shows the results of setting all of the delete flags. AHXV Archive Unit Definition Row 1 to 4 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : JUMBO#PART#ARCHIVE Starting point table: PART Creator. : PAOLOR3 Creator..... : PAOLOR3 DB2 system: DB2G Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ PART PAOLOR3 PAOLODB SP Y P_CONTAINER LIKE 'JUMBO% _ LINEITEM PAOLOR3 PAOLODB Y _ ORDER PAOLOR3 PAOLODB Y _ PARTSUPP PAOLOR3 PAOLODB Y ******************************* Bottom of data ******************************** Figure 10-7 Archive unit after updating the rules flags 196 DB2 Data Archive Expert for z/os
223 Having completed the definition of the archive unit, press PF3 to return to the archive specification screen Define archive table targets On the archive specification definition panel, use option 2 to define the target tables. For the purposes of this scenario, we are specifying the database, table space, and table names to be used as targets for this archive specification. Choosing option 6 allows us to specify all of the DB2 objects to be used as targets. Restriction: When specifying the database and table space names, these objects must already exist. However, when specifying a table name, the table must not exist. Also, be aware that Data Archive Expert will not create indexes on table archives when you name objects. Figure 10-8 shows an example of database, table space, and table used for one of the target tables. We are using the previously created table space PAOLODB.JUMBO01 for all of the target tables. AHXV Map Source Tables to Archive Targets Row 1 to 1 of 4 Command ==> Scroll ===> CSR Archive specification: JUMBO#PART#ARCHIVE Primary commands are: C - Clear table mappings Map source tables to the specified archive table, table space and database. Line commands are: M - Map table space Cmd ======================================================================= Source : LINEITEM Target ==> JUMBO_ARC_LINEITEM Creator: PAOLOR3 Creator ==> PAOLOR3 Table space ==> JUMBO01 Database ==> PAOLODB Figure 10-8 Specifying the target database, table space and table names When completed, four target tables will be used: JUMBO_ARC_PART JUMBO_ARC_PARTSUPP JUMBO_ARC_ORDER JUMBO_ARC_LINEITEM Once all of the table mappings have been completed, press the Enter key to be sure that the names have been accepted, and then use PF3 to return to the definition of the archive specification screen. Important: Data Archive Expert only creates unique indexes when you use default target names. Unique indexes are important if data is going to be retrieved or archived to file. 
Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 197
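As the restriction above notes, the target database and table space must exist before the specification is run, while the target tables must not. A minimal sketch of pre-creating a table space such as PAOLODB.JUMBO01 follows; the STOGROUP, buffer pool, and space quantities are assumptions, not values taken from the residency system.

```sql
-- Pre-create the target table space for the archive tables.
-- SYSDEFLT, BP0, and the PRIQTY/SECQTY values are assumptions;
-- substitute your installation's standards.
CREATE TABLESPACE JUMBO01
    IN PAOLODB
    USING STOGROUP SYSDEFLT
    PRIQTY 720
    SECQTY 720
    BUFFERPOOL BP0;
```

Data Archive Expert then creates the JUMBO_ARC_* tables in this table space when the archive runs.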
224 Errors when saving the table archive specification On the definition screen, use option 4 to save the archive specification, and expect the specification to be saved. However, an AHXJ071 error message is displayed as shown in Figure 10-9. AHXV Archive Specification Validation Errors --- Row 1 to 1 of 2 Command ==> Scroll ===> CSR The archive specification definition being saved has failed definition rules. You can return to the definition to fix the errors. Name: JUMBO#PART#ARCHIVE ENTER to exit and save the definition CANCEL or END to return to definition panel Validation messages: Message AHXJ071: Parent table has a child table defined with the CASCADE or SET NULL foreign key delete rule. Change this archive unit definition so that the parent table delete rule is set to N. Error Tokens: PAOLOR3.PART Figure 10-9 Error due to on delete cascade This error is the result of the way Data Archive Expert applies Perform orphan row/changed data detection and the rule to delete rows in a parent table where the on delete is cascade. Example 10-1 shows the ALTER statements used to enforce the RI on delete cascade. Example 10-1 RI using on delete cascade ALTER TABLE PAOLOR3.PARTSUPP ADD FOREIGN KEY PART1 (PS_PARTKEY) REFERENCES PAOLOR3."PART" (P_PARTKEY) ON DELETE CASCADE ; ALTER TABLE PAOLOR3.PARTSUPP ADD FOREIGN KEY SUPP1 (PS_SUPPKEY) REFERENCES PAOLOR3.SUPPLIER (S_SUPPKEY) ON DELETE CASCADE ; Limitations due to on delete cascade rule Where there is a DB2 enforced relationship using on delete cascade, when a row in the parent table is deleted, all related rows in the child table are deleted too. It is possible that a parent row may be selected for archiving and deleting, but not all of the related child rows are selected. In this case, Data Archive Expert does not archive all of the rows. To prevent such a situation from arising, Data Archive Expert asks the question Perform orphan row/changed data detection? 
in the first panel of the define archive specification: If using Y (default), Data Archive Expert will not allow deletions from the parent table. The Perform orphan row/changed data detection is a complicated and expensive part of the data integrity checking performed by Data Archive Expert. Basically, Data Archive 198 DB2 Data Archive Expert for z/os
Expert performs a number of checks, identifying rows that may be orphaned, and checks that the rows archived match what will now be deleted. If any of these checks fail, the process fails. If using N, then you may delete from the parent table. However, be aware that you are also turning off all orphan row checking, and all insert/delete/update checking too. With the checking turned off, you are in the driver's seat, and Data Archive Expert just does as it is told. It allows parent rows to be deleted, which may cause DB2 to cascade deletes and, because no checking takes place, create orphans. In addition, if any inserts or updates take place to rows within the scope of the row filter, then these rows will be lost. Attention: Always build archive units thoughtfully: If orphan checking is off and your archive unit excludes a related child table where an on delete cascade rule applies, then Data Archive Expert does not take this into account. As a result, rows in this external child table will not be archived, but DB2 will enforce the delete rule, causing rows to be lost. See the end of , Table archive job validation on page 201, where the number of rows to be archived and the consequences of an on delete cascade are discussed. Solution to the On delete cascade rule There are two ways to proceed when faced with this situation: Cautiously: In response to the AHXJ071 message, press PF3 and return to the define archive specification panel. Select option 1 to allow the archive unit to be changed, and use the R command to reset the delete rule to N for table PART. There is a second on delete cascade rule, so the delete rule for table ORDER will also have to be reset to N. The result of this is shown in Example Less cautiously: Press PF3 and return to the define archive specification panel. Change the Perform orphan row/changed data detection? field to N. This will turn off all checking. We have used the cautious approach, keeping the orphan row and changed data detection option turned on. 
As a result, we will not be able to delete any rows from the PART or ORDER tables. This of course is not a problem in our test scenario, but it serves to highlight some very important issues. If this were a real life scenario, then it would be wise to re-think what you were trying to achieve, and tackle the problem differently. Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 199
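If you do take the less cautious route and turn detection off, the integrity checking becomes your responsibility. Under that assumption, simple NOT EXISTS queries of the following kind (a sketch using this scenario's tables, not anything Data Archive Expert generates) can confirm after a completion run that no child rows have been orphaned:

```sql
-- LINEITEM rows whose parent ORDER row no longer exists
SELECT COUNT(*)
  FROM PAOLOR3.LINEITEM L
 WHERE NOT EXISTS
       (SELECT 1
          FROM PAOLOR3."ORDER" O
         WHERE O.O_ORDERKEY = L.L_ORDERKEY);

-- PARTSUPP rows whose parent PART row no longer exists
SELECT COUNT(*)
  FROM PAOLOR3.PARTSUPP PS
 WHERE NOT EXISTS
       (SELECT 1
          FROM PAOLOR3."PART" P
         WHERE P.P_PARTKEY = PS.PS_PARTKEY);
```

Both counts should be zero; a non-zero count means the archive unit and delete rules left orphans behind.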
226 AHXV Archive Unit Definition Row 1 to 4 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : JUMBO#PART#ARCHIVE Starting point table: PART Creator. : PAOLOR3 Creator..... : PAOLOR3 DB2 system: DB2G Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ PART PAOLOR3 PAOLODB SP N P_CONTAINER LIKE 'JUMBO% _ LINEITEM PAOLOR3 PAOLODB Y _ ORDER PAOLOR3 PAOLODB N _ PARTSUPP PAOLOR3 PAOLODB Y ******************************* Bottom of data ******************************** Figure Final delete rules for archiving Jumbo parts Running the table archive Since there is going to be a lot of processing, the archive will take a while to run, so it will be run as a job. There are five versions of the jobs to perform archiving in batch. The job required here executes the REXX Exec AHXIVPAR used to perform table archives. The REXX Exec uses DSNREXX to execute the Data Archive Expert stored procedure AHXTOOLS.ARCHIVEEXECSP. Full details of running these jobs are covered elsewhere in this redbook. Example 10-2 shows the JCL that was used to run the archive specification to archive the jumbo sized parts. This run will not delete any rows because the specification had the Complete archive run (delete source data)? field set to N. Working this way allows us to take the archive and inspect the output, and delete the rows at a later date. 200 DB2 Data Archive Expert for z/os
227 //* */ //* LICENSED MATERIALS - PROPERTY OF IBM */ //* 5655-I95 */ //* (C) COPYRIGHT IBM CORPORATION 2003 ALL RIGHTS RESERVED. */ //* US GOVERNMENT USERS RESTRICTED RIGHTS - USE, DUPLICATION OR */ //* DISCLOSURE RESTRICTED BY GSA ADP SCHEDULE CONTRACT WITH IBM CORP. */ //* */ //STEP010 EXEC PGM=IKJEFT01,DYNAMNBR=20,TIME=500 //STEPLIB DD DSN=DB2G7.SDSNEXIT,DISP=SHR // DD DSN=DB2G7.SDSNLOAD,DISP=SHR //SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP //SYSTSPRT DD SYSOUT=* //SYSTSIN DD * %AHXIVPAR DB2G,A,*,JUMBO#PART#ARCHIVE,SYSTOOLS,- P_CONTAINER LIKE 'JUMBO%' AND P_RETAILPRICE > /* Figure JCL used to run the Jumbo archive The archive REXX is executed with the parameters. Full details of using the stored procedures in batch are documented elsewhere in the book Table archive job validation Once the job has ended, inspect the output and ensure it has run correctly. You will notice that no rows have been deleted. Example 10-2 contains statistics for one of the tables in the archive unit, printed in the SYSTSPRT output. Example 10-2 Job statistics for one of the archived tables Archive execution ran successfully with return code 0 Archive run statistics row 1 follows: - Spec id = Spec status = FINISHED - Spec state = PENDING - Spec version = 1 - Version status = FINISHED - Version state = PENDING - Executed by = PAOLOR3 - Executed timestamp = Number of rows deleted for this version = 0 - Number of rows inserted for this version = Source table name = LINEITEM - Source table creator = PAOLOR3 - Target table name = JUMBO_ARC_LINEITEM - Target table creator = PAOLOR3 - Number of rows deleted for this target = 0 - Number of rows inserted for this target = Spec last updated by = PAOLOR3 - Spec last update timestamp = Tip: An archive specification may be run many times without completing (deleting source rows). Take care that the same rows are not being processed twice. Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 201
228 Archiving to tables may be run several times, but care must be taken to ensure the same rows will never be selected for archiving. Otherwise, when the archive process is to be completed, there will be a mismatch in the number of rows archived and rows deleted. Figure shows the details displayed in the archive specification history. Archive specification : JUMBO#PART#ARCHIVE DB2 system. : DB2G Creator : PAOLOR3 Description : Strip out all JUMBO sized packaged parts +RI Row filter : P_CONTAINER LIKE 'JUMBO%' AND P_RETAILPRICE > =============================================================================== Run: 1 Source table: LINEITEM Creator: PAOLOR3 Del: 3096 Act: Target table: JUMBO_ARC_LINEITEM Creator: PAOLOR3 Ins: 3096 =============================================================================== Run: 1 Source table: ORDER Creator: PAOLOR3 Del: 0 Act: Target table: JUMBO_ARC_ORDER Creator: PAOLOR3 Ins: 3087 =============================================================================== Run: 1 Source table: PART Creator: PAOLOR3 Del: 0 Act: Target table: JUMBO_ARC_PART Creator: PAOLOR3 Ins: 455 =============================================================================== Run: 1 Source table: PARTSUPP Creator: PAOLOR3 Del: 1820 Act: Target table: JUMBO_ARC_PARTSUPP Creator: PAOLOR3 Ins: 1820 Figure Results of archiving Jumbo sized parts We may check the results of the archive or, in advance of running this archive, we can run some queries to check the number of rows expected from each table. For example, run a query to count the rows in PART matching the row filter. Then count the rows in PARTSUPP using the previous query as a subselect, and so on. Table 10-1 shows the results of the queries. You will notice that there are about the same number of rows in tables ORDER and LINEITEM, where a typical order will have multiple order lines. 
Table 10-1 Result of queries to count the rows due for archiving

Table name   Number of rows
PART                    455
PARTSUPP              1,820
LINEITEM              3,096
ORDER                 3,087

The apparent error is in fact a reflection of the orders in the test data, where orders do not normally have more than one part in a jumbo sized container. Importantly, this is why the parent rows must not be deleted from the ORDER table. In our test data, the consequences of deleting the 3,087 ORDER rows would have been to delete 13,261 LINEITEM rows because of the on delete cascade rule, when only 3,096 rows were archived. 202 DB2 Data Archive Expert for z/os
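The counts in Table 10-1 can be predicted before the archive is run with nested queries of the following kind. This is a sketch: the retail price threshold does not survive in the transcribed row filter, so it is written here as a parameter marker rather than a literal.

```sql
-- Parent rows matching the row filter (expected: 455)
SELECT COUNT(*)
  FROM PAOLOR3."PART"
 WHERE P_CONTAINER LIKE 'JUMBO%'
   AND P_RETAILPRICE > ?;            -- threshold elided in the text

-- PARTSUPP rows reached through the filtered PART keys (expected: 1,820)
SELECT COUNT(*)
  FROM PAOLOR3.PARTSUPP
 WHERE PS_PARTKEY IN
       (SELECT P_PARTKEY
          FROM PAOLOR3."PART"
         WHERE P_CONTAINER LIKE 'JUMBO%'
           AND P_RETAILPRICE > ?);

-- LINEITEM rows for those parts (expected: 3,096); counting the
-- distinct orders they belong to instead gives the ORDER figure (3,087)
SELECT COUNT(*)
  FROM PAOLOR3.LINEITEM
 WHERE L_PARTKEY IN
       (SELECT P_PARTKEY
          FROM PAOLOR3."PART"
         WHERE P_CONTAINER LIKE 'JUMBO%'
           AND P_RETAILPRICE > ?);
```

Comparing these counts with the run statistics is a cheap sanity check before any source rows are deleted.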
229 10.3 Step 2 - Completing the table archive When you are satisfied with the archive results, you can delete the rows from the archive source tables. This step is known by Data Archive Expert as completing the archive. Completion may be done on-line or in batch. The first step is to identify which version of the archive is to run by listing the table archive specifications and using the H command for History. Figure shows that there is only one archive version here. AHXV Archive Specification History Row 1 to 2 of 2 Command ===> Scroll ===> CSR Specification name: JUMBO#PART#ARCHIVE Creator: PAOLOR3 Description: Strip out all JUMBO sized packaged parts +RI Line commands are: S - Show statistics Cmd Run State Run by Date Row filter _ 0 Defined P_CONTAINER LIKE 'JUMBO%' AND P 1 Pending PAOLOR P_CONTAINER LIKE 'JUMBO%' AND P_ ******************************* Bottom of data ******************************** Figure Archive history for the Jumbo archive If the table archive had been run more than once, then archive versions will appear in the list. We now run the table archive job again, but this time adjust the parameter list, so that the Action is C for complete (it was A) and specify the archive version as a string value. So, for this example, version number will be 1. Note that the row filter is not used, and need not be specified. Figure shows the job step and archive parameters. //* */ //* LICENSED MATERIALS - PROPERTY OF IBM */ //* 5655-I95 */ //* (C) COPYRIGHT IBM CORPORATION 2003 ALL RIGHTS RESERVED. */ //* US GOVERNMENT USERS RESTRICTED RIGHTS - USE, DUPLICATION OR */ //* DISCLOSURE RESTRICTED BY GSA ADP SCHEDULE CONTRACT WITH IBM CORP. 
*/ //* */ //STEP010 EXEC PGM=IKJEFT01,DYNAMNBR=20,TIME=500 //STEPLIB DD DSN=DB2G7.SDSNEXIT,DISP=SHR // DD DSN=DB2G7.SDSNLOAD,DISP=SHR //SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP //SYSTSPRT DD SYSOUT=* //SYSTSIN DD * %AHXIVPAR DB2G,C,'1',JUMBO#PART#ARCHIVE,SYSTOOLS, /* Figure Completing the Jumbo archive - Deleting source rows If there were multiple archives to be completed, then the version would be specified as a list with the version numbers separated by the vertical bar character, e.g. versions 1 to 3 would be Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 203
230 10.4 Step 3 - Defining archiving a table archive to file This section describes the process of defining, and running in batch, an archive specification to archive from a table archive to a file archive. Attention: Archiving from a table archive to a file archive is a special type of archive. So, do not copy the table archive specification, or use the N command. This type of archive is different from what we have seen so far. This is because the archive unit (input) is an existing archive. If we were to copy the table archive specification used for the first archive, or define a new archive using the N command, then the save process will fail, displaying the message AHXJ034 as shown in Figure The reason for the error is that Data Archive Expert adds an extra column called AHXEXECUTEDTS, which contains an archive timestamp, and Data Archive Expert is unable to add a second column of the same name. AHXVLMSG Archive Specification Validation Errors --- Row 1 to 1 of 8 Command ==> Scroll ===> CSR The archive specification definition being saved has failed definition rules. You can return to the definition to fix the errors. Name: JUMBO#PART#FILE ENTER to exit and save the definition CANCEL or END to return to definition panel Validation messages: Message AHXJ034: Source table cannot contain a column with a name of AHXEXECUTEDTS. Error Tokens: PAOLOR3.JUMBO_ARC_PART Figure AHXJ034 error message when using an archive table as input Nothing can be done to fix this archive specification, so do not press PF3; instead, press the Enter key and save the specification. You see that it has a PDEF or pending definition status, so delete it using the D command. 
Now proceed to , Defining the file archive specification on page 204, to correctly define a table archive to a file archive specification Defining the file archive specification To correctly define a file archive using a table archive as its source, start by listing the archive specifications (filter as required), and find the table archive specification used to create the table archive. Now use the F command to build a specification based upon the first archive. As before, type in the name (uppercase) and supply a suitable description. As shown in Figure 10-16, proceed to define the required options. 204 DB2 Data Archive Expert for z/os
AHXSDEFF Archive Specification Definition Command ==> Archive specification: Name......==> JUMBO#PART#FILEARC DB2 system. : DB2G Creator....==> PAOLOR3 Description..==> Strip out all JUMBO sized packaged parts +RI Starting point table: PART Creator : PAOLOR3 Database name... : PAOLODB Select an archive definition activity ==> 2 1. Browse archive unit (completed) 2. Select archive runs for archive to file (required) 3. Set age criteria for table archive data (required) 4. Update/define archive data set targets (active) 5. Save archive specification

Figure Defining the table to file archive specification

The following sections demonstrate the available archive definition activity options.

Browsing the jumbo archive unit

Option 1 shows the source archive unit. The details may only be browsed, not amended in any way.

AHXAUNIT Archive Unit Definition Row 1 to 4 of 4 Command ==> Scroll ===> CSR Archive specification: Name... : JUMBO#PART#FILEARC Starting point table: PART Creator. : PAOLOR3 Creator..... : PAOLOR3 DB2 system: DB2G Database name.. : PAOLODB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ LINEITEM PAOLOR3 PAOLODB Y _ ORDER PAOLOR3 PAOLODB N _ PART PAOLOR3 PAOLODB SP N P_CONTAINER LIKE 'JUMBO% _ PARTSUPP PAOLOR3 PAOLODB Y ******************************* Bottom of data ********************************

Figure Definition option 1 - Browsing the archive unit

Chapter 10. Scenario 4: Archiving from RI related tables and deleting from them 205
Select archive runs for archive to file

Selecting option 2 will allow the table archive version to be selected. The figure below shows that we only have one version. Type S to select the version and press the Enter key.

Important: For the type of archive we are running, the table archive version being processed must be in a Completed state. That is, the rows archived must have been deleted from the source tables.

AHXSOFFV Select Archive Data Runs for File Archive Row 1 to 2 of 2 Command ===> Scroll ===> CSR Specification name: JUMBO#PART#FILEARC Creator: PAOLOR3 Description: Strip out all JUMBO sized packaged parts +RI Line commands are: S - Select for file archive D - Deselect Cmd * Run State Run by Date Row filter _ 0 Defined P_CONTAINER LIKE 'JUMBO%' AN _ 1 Completed PAOLOR P_CONTAINER LIKE 'JUMBO%' AN ******************************* Bottom of data ********************************

Figure Definition option 2 - Selecting the archive version

Set age criteria for table archive data

Attention: Do not use Option 3, Set age criteria for table archive data, unless you are using the age of an archive as the archiving criterion.

This option allows the selection of previous archives dependent upon their age. Do not use this option if age is not going to be the criterion.

Important: Be sure to understand the differences between explicitly selecting table archive runs and setting an age criterion. When explicitly selecting table archive runs, the archive specification is a one-time run, and successive runs yield no archiving activity to file. If you specify an age criterion, you can run the archive specification multiple times. On each run, the age criterion is applied to identify any runs that qualify to be archived to files. If you know that you want to use an age criterion in determining the runs to be archived, you can create the file archive specification in advance of the run of the data archive specification.
Select archive runs by choosing explicit runs from the list of runs or by specifying an age criterion. You cannot select them using both methods. The method that you most recently used sets the current method for selecting the data versions. The age option is shown here just for completeness; it is not used in this example.
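For illustration only (the product performs this selection itself, not through user SQL): the effect of a 30-day age criterion can be thought of as a predicate on the AHXEXECUTEDTS timestamp column that Data Archive Expert adds to each table archive, shown here as a sketch against the user-named archive table from our scenario.

```sql
-- Sketch: archive runs older than 30 days, identified by the
-- AHXEXECUTEDTS run timestamp in the table archive.
SELECT DISTINCT AHXEXECUTEDTS
  FROM PAOLOR3.JUMBO_ARC_PART
 WHERE AHXEXECUTEDTS < CURRENT TIMESTAMP - 30 DAYS;
```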
AHXSOAGE Table Archive Age Specification Command ==> Archive specification: JUMBO#PART#FILEARC Creator : PAOLOR3 DB2 system: DB2G Archive table archive data to file every: Days ==> 30 Hours ==> 0 Select preset age ==> 4 1. User specified age. 2. Archive data daily (24 hours or older). 3. Archive data weekly (7 days or older). 4. Archive data monthly (30 days or older). 5. Archive data quarterly (90 days or older). 6. Archive data semi-annually (180 days or older).

Figure Definition option 3 - Age options, shown for information only

Define archive data set targets

On selecting option 4, you are able to define the target data set names. In this case you must use a template because you are writing to a tape. As shown in Figure 10-20, the source archive tables are shown without any target details.

AHXSOFLN File Archive Targets Row 1 to 3 of 4 Command ==> Scroll ===> CSR Archive specification: JUMBO#PART#FILEARC Creator : PAOLOR3 DB2 system: DB2G Select an option for creating targets: 3 1. Default target data set generation with high-level qualifier. High-level qualifier ==> PAOLOR3 2. Specify data set name for each source table. 3. Specify utility templates for each source table. Source table Target data set =============================================================================== Name : LINEITEM Creator: PAOLOR3 =============================================================================== Name : PARTSUPP Creator: PAOLOR3 =============================================================================== Name : PART Creator: PAOLOR3

Figure Definition option 4: Defining the file archive target data sets
Selecting Specify utility templates for each source table (option 3) displays the list of source tables that need to be mapped with the required template. The mapping process needs to be repeated for each table, by typing the M (map) row command next to the tables. The figure below shows the first template selection screen, which shows the default template creator and table names used by the DB2 Administration Tool.

AHXTMPLT Utility Template Command ==> Archive specification: JUMBO#PART#FILEARC Select saved DB2 Admin template? ==> Y (Yes/No) Templates table ==> UTTEMPLATE Creator ==> DSNACC DB2 system ==> DB2G Template name ==> Template creator ==>

Figure Selecting the template for the target data sets on tape

Press the Enter key to move to the list of templates, where the required template is selected. As stated earlier, repeat the mapping for all four tables. Finally, save the archive specification using the save option.

Running the file archive specification

The file archive specification is run in batch, and executes the stored procedure AHXTOOLS.OFFLINEONLARCSP.

//* LICENSED MATERIALS - PROPERTY OF IBM */
//* 5655-I95 */
//* (C) COPYRIGHT IBM CORPORATION 2003 ALL RIGHTS RESERVED. */
//* US GOVERNMENT USERS RESTRICTED RIGHTS - USE, DUPLICATION OR */
//* DISCLOSURE RESTRICTED BY GSA ADP SCHEDULE CONTRACT WITH IBM CORP. */
//* */
//STEP010 EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNEXIT,DISP=SHR
// DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXTB2FI DB2G,JUMBO#PART#FILEARC,SYSTOOLS
/*

Figure JCL to run the table archive to file archive stored procedure

Important: Check that unique indexes exist for all table archives, and that RUNSTATS has been run.
If we had used default table names, then Data Archive Expert would have created unique indexes on the table archives. These indexes would be based upon the unique indexes, or columns, which collectively identify the source rows uniquely. However, we supplied our own names for the table archives, so no indexes were built automatically; unique indexes must be created, and RUNSTATS run, that is, with INDEX(ALL).

To run the job, only three parameters are needed: the DB2 subsystem ID, the archive specification name, and the schema name. After the job completes, check the SYSTSPRT output. The stored procedure generates statistics for each table being processed, and displays the number of rows processed.
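As an illustration, the missing index and statistics could be supplied as follows. This is a sketch only: it uses the user-named table archive PAOLOR3.JUMBO_ARC_PART seen in the AHXJ034 example, but the index name, the P_PARTKEY key column, and the table space name are our assumptions, not taken from the product.

```sql
-- Sketch, with assumed names: a unique index over the source table's
-- unique key columns plus the AHXEXECUTEDTS timestamp column that
-- Data Archive Expert adds, so rows from different runs stay unique.
CREATE UNIQUE INDEX PAOLOR3.XJUMBOARCPART
  ON PAOLOR3.JUMBO_ARC_PART
     (P_PARTKEY, AHXEXECUTEDTS);

-- Utility control statement (run through your normal utility JCL or
-- DSNUTILS), gathering statistics on the archive table space and all
-- its indexes; the table space name is an assumption:
-- RUNSTATS TABLESPACE PAOLODB.TSARCPART TABLE(ALL) INDEX(ALL)
```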
Chapter 11. Scenario 5: Archiving Grouper discovered related tables

In this chapter we describe the process of defining and running archives from non-DB2-enforced RI. We show how to use DB2 Grouper to define non-DB2-enforced referential integrity. We then show a scenario where we use the Grouper maintained relationship to build and execute a table archive specification.

This chapter contains the following: Using the Grouper Client with Data Archive Expert Introduction and some definitions Using Grouper to manage non-enforced relationships Using Grouper to discover application relationships

Copyright IBM Corp All rights reserved. 211
11.1 Using the Grouper Client with Data Archive Expert

DB2 Grouper is a tool that can be used to group DB2 tables based on relationships between those tables. Grouper discovers, records, and allows the administration of these groups. In addition, Grouper provides this group relationship information to other IBM tools, including DB2 Data Archive Expert.

DB2 Grouper is called by DB2 Data Archive Expert as part of the archive specification process whenever you select Find Related Tables. You are not required to define groups ahead of time in Grouper in order for DB2 Data Archive Expert to use them. While this use of Grouper by DB2 Data Archive Expert is transparent, you might wish to include referential constraints that are not enforced by DB2 RI. The Grouper Client allows you to add these table relationships, which are then included whenever DB2 Data Archive Expert calls DB2 Grouper.

The Grouper Client and its installation are described in Chapter 6, Optionally defining DB2 Grouper Client on page 65. For more information on Grouper refer to DB2 Grouper User's Guide, SC.

11.2 Introduction and some definitions

In order to help understand the following set of examples, we define a few terms and describe some of the main components of Grouper before we start using the product.

Group discovery is a stored procedure that runs on the server and discovers relationships between various tables in the database. This discovery can be based on: Definitions in the DB2 Catalog, also known as DB2 enforced RI; Non-enforced referential relationships input through the DB2 Grouper Client and, from DB2 UDB for z/OS Version 8, using a new feature called informational referential constraints; Parameters input from DB2 Data Archive Expert and the DB2 Grouper Client to guide the discovery. Examples of this include specification of a starting point table, already introduced, and boundary objects, which we will discuss later on.
Results from using the DB2 Grouper Unit of Work discovery

Unit of work discovery searches the DB2 archive log and examines the individual log records for relationships that are not stored in DB2. Tables identified by the unit of work discovery are considered to be related if they were updated within the same unit of work.

Note: The DB2 Grouper Unit of Work discovery relationships are not considered when Grouper is called by DB2 Data Archive Expert. Therefore, we do not discuss the Unit of Work discovery topic in this redbook.

A set is an object that is created by DB2 Grouper, either explicitly by using the Grouper Client, or implicitly by a call from DB2 Data Archive Expert to perform the find related tables function. Sets created by the DB2 Data Archive Expert call are created with a set name that matches the DB2 Data Archive Expert archive specification name. Sets created through the Client interface are named explicitly. Sets are stored in the DB2 Grouper metadata and constitute a place to hold versions of groups.

A group is a collection of related tables, found either through group discovery or by using the DB2 Grouper Client to relate them by defining a non-enforced relationship.

A version is a collection of groups resulting from a single run of Grouper Discovery. There can be multiple groups in a single version; there can also be multiple versions in a single set.
11.3 Using Grouper to manage non-enforced relationships

To demonstrate this functionality, we have added an additional table to our existing test data environment. This table, named CONTAIN, has a primary key column named CONTAIN_CONTAINER. We want to relate this table to the PART table using the P_CONTAINER column as a parent key. We do not want to have this relationship maintained through DB2 RI enforcement.

As we have seen earlier, we can use the add additional tables feature of the DB2 Data Archive Expert retrieve specification function and define the relationship using connection keys. However, this table connection is only defined for that particular archive specification, and has to be re-established with each new archive specification. A better and easier approach is to use DB2 Grouper and build this application relationship with the DB2 Grouper Client. Then, each archive specification that refers to the PART table automatically includes the CONTAIN table.

Defining a non-enforced relationship

Here are the steps necessary to build this non-enforced relationship.

1. Start the Grouper Client from the Programs menu on your Windows desktop. Click Start --> Programs --> Grouper Client --> Grouper Start.

2. Log on to the Client using a valid RACF user ID and password for the selected z/OS server using the logon screen seen in Figure 11-1.

Figure 11-1 Grouper Client signon

3. Close the launchpad and from the taskbar select Grouper --> Edit Non-enforced RI. This presents the non-enforced referential constraints panel shown in Figure 11-2.

Chapter 11. Scenario 5: Archiving Grouper discovered related tables 213
Figure 11-2 Non-enforced referential constraints window

4. Clicking the button Add relationship launches the Add Table Relationship window. This window allows us either to explicitly identify the source and target tables, or to be prompted with a table list based on the filters that we provide. In our example we specified the source table PART, but allowed Grouper to prompt us with the target table list. Figure 11-3 shows that we specified the source owner and table name, but only provided the owner for the target.
Figure 11-3 Add table relationships window

5. This gave us a list of tables from which we selected the table PART, as shown in Figure 11-4.

Figure 11-4 Select table drop down
6. Next, we are shown the unique keys of the parent table. In our case we have a primary key on the CONTAIN table called CONTAIN_CONTAINER, as shown in Figure 11-5.

Figure 11-5 Link parent column to child column window

7. We want to link the container column between the two tables, so we clicked the Unique key shown in the window, and the Link unique key to dependent button becomes visible. Clicking this button then shows us the link key columns window. See Figure 11-6.

Figure 11-6 Link key columns window

We want to see the list of dependent table columns that are available. We clicked the select field underneath the dependent table column heading. This presents us with the dependent table column drop down shown in Figure 11-7.
Figure 11-7 Dependent columns drop down

8. We selected the requested column. In our scenario we chose P_CONTAINER and clicked OK. This shows us the confirmation screen where we can see the new non-enforced relationship we just built. See Figure 11-8.

Figure 11-8 Non-enforced referential constraints confirmation

9. We clicked Close. This completes the entry of the non-enforced relationship. Keep in mind that this added relationship will be used by all tools, and in particular by all archive specifications, just as if it had been found in the DB2 catalog.
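On DB2 UDB for z/OS Version 8, the same relationship could alternatively be recorded in the DB2 catalog as an informational referential constraint, the discovery input mentioned earlier in this chapter. A sketch, using the creators shown in our panels (PAOLOR2.CONTAIN, PAOLOR3.PART); the constraint name is illustrative:

```sql
-- Sketch (DB2 UDB for z/OS Version 8 or later): the CONTAIN-to-PART
-- relationship declared as an informational referential constraint.
-- DB2 records it in the catalog but does not enforce it.
ALTER TABLE PAOLOR3.PART
  ADD CONSTRAINT FKCONTAIN
      FOREIGN KEY (P_CONTAINER)
      REFERENCES PAOLOR2.CONTAIN (CONTAIN_CONTAINER)
      NOT ENFORCED;
```

The Grouper Client approach shown above remains the way to record such relationships on earlier DB2 versions.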
Data Archive Expert archive specification

Now that we have created this non-enforced relationship, we next created a Data Archive Expert archive specification that uses the relationship. In our scenario, we chose to remove a single container type, and through the relationship coded in the Grouper constraint, we expected to see a number of parts affected by this as well.

We used DB2 Data Archive Expert to build a new table archive specification. We specified that CONTAIN is the starting point table. Figure 11-9 shows this.

AHXV Archive Specification Definition Command ==> Archive specification: Name......==> Container#Part DB2 system. : DB2G Creator....==> PAOLOR2 Description..==> Container Archive as starting point table Complete archive run (delete source data)? ==> N (Yes/No) Perform orphan row/changed data detection? ==> Y (Yes/No) Select an archive definition activity ==> 1 1. Define archive unit (required) 2. Define table targets (active) 3. Define data set targets 4. Save archive specification

Figure 11-9 Archive specification to use Grouper

Next, we specified the filters to allow DB2 Data Archive Expert to locate a list of tables. See the following figure.

AHXV Archive Specification Definition Command ==> Archive specification: Name......==> Container#Part DB2 system. : DB2G Creator....==> Essssssss Specify Starting Point Table ssssssssN Description..==> e e Complete archive ru e Command ==> e Perform orphan row/ e e e Provide table selection list? ==> Y (Y/N) e Select an archive def e e e Table name. ==> % e 1. Define archive e Creator.. ==> PAOLOR2 e 2. Define table t e Database.. ==> % e 3. Define data se e DB2 system.. : DB2G e 4. Save archive s e e e ( % or blank indicates all ) e e e e e DssssssssssssssssssssssssssssssssssssssssssssssM

Figure Data Archive Expert table selection list
We specified that we want Data Archive Expert to provide us with a list of tables from which we define the table to use as our starting point. The following figure shows that we chose to use CONTAIN as our starting point for this archive.

AHXV Select Starting Point Table Row 1 of 21 Command ==> Scroll ===> CSR Archive specification: Container#Part DB2 system..... : DB2G Line commands are: S - Select table S* - Select all D - Deselect table D* - Deselect all Cmd * Table name Creator Database Table space ACT PAOLOR2 PAOLOR2D DSN8S71P s CONTAIN PAOLOR2 GROUPDB TSCONTNR D_IP PAOLOR2 DSQDBDEF DSQTSDEF D_LOC PAOLOR2 DSQDBDEF DSQTSDEF D_USE PAOLOR2 DSQDBDEF DSQTSDEF DEPT PAOLOR2 PAOLOR2D DSN8S71D DEPTDEL PAOLOR2 DSNDB04 DEPTDEL

Figure Data Archive Expert select starting point table

Next, we specified that we want Data Archive Expert to call DB2 Grouper to find related tables. See the following figure.

Command ==> Scroll ===> CSR Archive specification: Container#Part DB2 system..... : DB2G Line commands are: Esssssssssss Search for related Tables? ssssssssssssN S - Select table S* e e D - Deselect table e Command ==> e e e Cmd * Table name e Find related tables? ==> Y (Yes/No) e e e ACT e Starting point table: CONTAIN e S CONTAIN e Creator : PAOLOR2 e D_IP e Database name... : GROUPDB e D_LOC e DB2 system.... : DB2G e D_USE e e DEPT e e DEPTDEL e e DEPTDEL1 DsssssssssssssssssssssssssssssssssssssssssssssssssssM DEPTNEW PAOLOR2 DSNDB04 DEPTNEW EACT PAOLOR2 PAOLOR2D DSN8S71R EDEPT PAOLOR2 PAOLOR2D DSN8S71R EEMP PAOLOR2 PAOLOR2D DSN8S71R

Figure Data Archive Expert related tables window
Notice that DB2 Grouper presents back a list of tables that is larger than our original archive scenario, which only included CONTAIN and PART. However, because we have specified a non-enforced relationship between CONTAIN and PART in Grouper, PART is selected first; then Grouper follows the enforced relationships in the DB2 catalog between PART and the other tables in our sample application.

Grouper has found a number of other tables that are referentially related to the tables we want to archive. These are presented here for our consideration. For this example, suppose we know it is safe to archive the intended tables without including these other referentially related tables. We can then specify N for Search for related tables in the pop-up. See the following figure.

AHXV Select Related Tables Row 1 of 8 Command ==> Scroll ===> CSR Archive Specification : Container#Part Starting point table : CONTAIN Creator : PAOLOR2 DB2 system.... : DB2G Line commands are: S - Select table for archive unit S* - Select all tables D - Deselect table D* - Deselect all tables Cmd * Table name Creator Database Table space CUSTOMER PAOLOR3 PAOLODB TSCUST LINEITEM PAOLOR3 PAOLODB TSLINEI NATION PAOLOR3 PAOLODB TSNATION ORDER PAOLOR3 PAOLODB TSORDER s PART PAOLOR3 PAOLODB TSPART PARTSUPP PAOLOR3 PAOLODB TSPSUPP REGION PAOLOR3 PAOLODB TSREGION SUPPLIER PAOLOR3 PAOLODB TSSUPLY ******************************* Bottom of data ********************************

Figure Data Archive Expert select related tables panel

To support our scenario, select PART. Then the table archive unit specification will look like that shown in the following figure.
AHXV Archive Unit Definition Row 1 of 2 Command ==> Scroll ===> CSR Archive specification: Name... : Container#Part Creator. : PAOLOR2 DB2 system: DB2G Starting point table: CONTAIN Creator..... : PAOLOR2 Database name.. : GROUPDB Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ CONTAIN PAOLOR2 GROUPDB SP N _ PART PAOLOR3 PAOLODB N ******************************* Bottom of data ********************************

Figure Data Archive Expert archive unit specification

We then put a row filter specification on the starting point table. In this case, we chose to archive the container named JUMBO BOX. The following figure shows how to add this specification.

AHXV Starting Point Table Row Filter Row 1 of 4 Command ==> Scroll ===> CSR Archive specification : Container#Part Starting point table: CONTAIN Creator : PAOLOR2 DB2 system: DB2G Row filter ==> CONTAIN_CONTAINER = 'JUMBO BOX' Columns Num Type Length Scale CONTAIN_CONTAINER 1 CHAR 10 0 CONTAIN_SHIPPER 2 CHAR 55 0 CONTAIN_SHIP_ADR 3 CHAR 25 0 CONTAIN_SHIP_BILL 4 CHAR 23 0 ******************************* Bottom of data ********************************

Figure Starting point table row filter

We pressed PF3 to save, and then we specified that we want to pick our archive target tables. We have decided to let DB2 Data Archive Expert build our archive table into the default table spaces, and also to allow Data Archive Expert to choose the table name. The following figure shows how we map the archive targets.
AHXV Table Archive Targets Row 1 of 2 Command ==> Scroll ===> CSR Archive specification: Container#Part Creator : PAOLOR2 DB2 system: DB2G Select an option for creating targets: 1 1. Default table/default table spaces (one table per table space) 2. Specify table/default table spaces (one table per table space) 3. Default tables/specify table space (all tables in one table space) 4. Specify tables/specify table space (all tables in one table space) 5. Default tables/specify table spaces (any combination) 6. Specify tables/specify table spaces (any combination) Target table space Source table Target table Database =============================================================================== Name : CONTAIN Name : Name : Creator: PAOLOR2 Creator: Database: DSNDB04 =============================================================================== Name : PART Name : Name : Creator: PAOLOR3 Creator: Database: DSNDB04

Figure Table archive targets

We pressed PF3 to confirm and exit, then we saved the specification. Next, from the archive specifications list, select the specification that we just completed building and execute it. See the following figure.

AHXV Archive Specifications List Specification saved Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR2 RE - Refresh archive specification list Time...
: 22:02 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type r Container#Part Container Archive as PAOLOR DEF TABL Container Archive PAOLOR PCOM TABL container archive first container archi PAOLOR PCOM TABL ARCHFORFILERET Archive to file for f PAOLOR COM FILE ARCHIVEFORBATCHRET Table Archive for Bat PAOLOR PCOM TABL BATCHFILETOTAPE Batch file archive to PAOLOR DEF FILE IVPNEWFILE PAOLOR COM FILE batcharchivetest PAOLOR PCOM TABL

Figure Archive specification list

Running this specification results in the archival of the CONTAIN row JUMBO BOX, and all of the PART table rows that contain that container type. This is shown in the following figure.
AHXV Archive Run Statistics Archive run successful Command ==> Scroll ===> CSR Archive specification : Container#Part DB2 system. : DB2G Creator : PAOLOR2 Description : Container Archive as starting point table Row filter : CONTAIN_CONTAINER = 'JUMBO BOX' =============================================================================== Run: 1 Source table: CONTAIN Creator: PAOLOR2 Del: 0 Act: R Target table: AHXA_ Creator: ARCHIVED Ins: 1 =============================================================================== Run: 1 Source table: PART Creator: PAOLOR3 Del: 0 Act: R Target table: AHXA_ Creator: ARCHIVED Ins: 4150 ******************************* Bottom of data ********************************

Figure Archive run statistics

With a successful archive run, go back to DB2 Grouper. See that a Grouper RI set was built when we performed our DB2 Data Archive Expert related tables discovery request. The following figure shows us this new set.

Figure Grouper RI set created from DAE archive specification
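The run statistics above show that the default target tables were created with creator ARCHIVED and names beginning AHXA_. A simple catalog query can locate such archive tables; this is a sketch assuming that default naming (note the ESCAPE clause, since the underscore is a LIKE wildcard):

```sql
-- Sketch: find archive tables created with Data Archive Expert's
-- default naming (creator ARCHIVED, names starting with AHXA_).
SELECT CREATOR, NAME, DBNAME, TSNAME
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'ARCHIVED'
   AND NAME LIKE 'AHXA\_%' ESCAPE '\';
```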
11.4 Using Grouper to discover application relationships

We also want to show how to use the group discovery component of Grouper to help us understand how to discover and use information obtained from sources outside of the DB2 catalog. Consider the following scenario: We have a series of application objects that are not related with DB2 enforced RI. We do not know how many objects are related, but we do know there are packages bound in the DB2 catalog that reference all of our objects of interest. We also know at least one of the table names.

For the scenario described above, we have created three tables in the DB2 catalog, none of them with DB2 enforced RI. These tables are SYS248.RESTAURANTS, SYS248.STATES, and SYS248.PGM_CONTROL. We have also bound a plan that has SQL which references these three tables; the plan name is SYS248.TJHPGM01.

1. Start the Grouper Client, Start --> Grouper --> Grouper Client.

2. Create a new set. We call this set GROUPERPACKAGE.

3. Using the configure group discovery options window, specify one of our known tables as our starting point table. The following figure shows this. Also, notice that the radio button All relationship types is enabled. This tells DB2 Grouper to look in the catalog for other relationships, including packages.

Figure Configure group discovery options

4. Click OK to close the configure group discovery options window. Next, click the run group discovery option on the task bar. This shows us the run group discovery window shown in the following figure.
Figure Run group discovery

5. Note that in this window we have one starting point table and that there is one non-enforced RI relationship that will be used. Clicking OK to confirm our options for this group discovery run takes us to the group discovery confirmation window. Here, enter a unique name for this discovery run. We chose to call this run GROUPERPACK1, as shown in the following figure.

Figure Group discovery run confirmation

6. Clicking OK, we then went to the send group discovery job window. A quick click causes this job to be submitted. The following figure shows this window.
Figure Send group discovery job confirmation

7. Once the discovery job execution completes, we select Selected --> View group discovery results from the Grouper Client task bar. We can now see the group icon that appears under our version in the Grouper Client tree. Clicking the group, we also see that the other tables referenced in our plan TJHPGM01 were also discovered. The following figure shows the results.

Figure Discovery results displayed in group tree

8. Navigating using Selected --> Show relationships from the Grouper taskbar will then show us the relationships used to perform the object groupings. Notice that the object named
is TJHPGM01, and that the relationship type is Package. The following figure shows the relationships details.

Figure Relationships results window

9. Remember, when Data Archive Expert calls DB2 Grouper as a result of creating an archive specification, only enforced and explicitly defined unenforced RI relationships are used to determine what tables are passed back. Now we can use the information found in this group discovery run to guide us in looking for additional unenforced RI information. Keep in mind that unenforced means not enforced by DB2. We may very well have referential relationships that are indeed enforced by our applications.

In our example, we examine the state table and the restaurant table, and determine that they are linked by unenforced RI, which is maintained by our application program. So, we want to explicitly declare this relationship using Grouper. First, open the edit non-enforced relationship window by selecting Grouper --> Edit Non-enforced Relationship from the Grouper task bar. The following figure shows this window; then click the Add Relationship button to begin building your new relationship.
Figure Non-enforced referential constraints - Add relationship

10. We then use the Add Table Relationship window to find the tables we are interested in. In this case, we specify both the source and target table explicitly, but we could have had the Add Table Relationship window prompt us with a list of eligible tables. We see in the following figure that we are going to link the STATE table as the parent to the RESTAURANTS table.

Figure Add table relationships
11. Click OK. Next, you need to specify a column from the parent table to link to the child table. You do this in the link key columns window. In this case, we elected to link the ABR column in STATE to the STATE column in RESTAURANTS. See the following figure.

Figure Link key columns

Click OK and save the new non-enforced relationship.

Next, we wanted to build a Data Archive Expert table archive specification. We specified STATE as our starting point table, to see what Grouper returns. By now, we have seen a number of examples of how to build archive specifications, so we just show a few windows. Select a starting point table of STATE. See the following figure.

AHXV Select Starting Point Table Row 1 of 3 Command ==> Scroll ===> CSR Archive specification: STATE#RESTAURANT DB2 system..... : DB2G Line commands are: S - Select table S* - Select all D - Deselect table D* - Deselect all Cmd * Table name Creator Database Table space PROGRAM_CONTROL SYS248 DSNDB04 PROGRAMR RESTAURANTS SYS248 DSNDB04 RESTAURA S STATE SYS248 DSNDB04 STATE

Figure Data Archive Expert select starting point tables window
From here, we specified that we want Data Archive Expert to return a list of related tables through the find related tables option, which we specified as Y. We can see that the RESTAURANTS table was returned to us in the following figure.

AHXV Select Related Tables Row 1 of 1 Command ==> Scroll ===> CSR Archive Specification : STATE#RESTAURANT Starting point table : STATE Creator : SYS248 DB2 system.... : DB2G Line commands are: S - Select table for archive unit S* - Select all tables D - Deselect table D* - Deselect all tables Cmd * Table name Creator Database Table space s RESTAURANTS SYS248 DSNDB04 RESTAURA ******************************* Bottom of data ********************************

Figure Data Archive Expert select related tables panel

Now, when we go back to the archive unit definition panel, we see that both STATE and RESTAURANTS have been selected. Remember, when we ran our group discovery, we also had the PROGRAM_CONTROL table in the group, but without any non-enforced relationships, we will not have that table returned by the Data Archive Expert find related tables call. The following figure shows us the final list of tables for this archive specification.

AHXV Archive Unit Definition Row 1 of 2 Command ==> Scroll ===> CSR Archive specification: Name... : STATE#RESTAURANT Creator. : PAOLOR2 DB2 system: DB2G Starting point table: STATE Creator..... : SYS248 Database name.. : DSNDB04 Line commands are: A - Add C - Columns K - Connection keys D - Delete R - Rules P - Starting point table U - Index cols W - Row filter Rules Cmd Table name Creator Database StP Jct Del Row filter _ STATE SYS248 DSNDB04 SP N _ RESTAURANTS SYS248 DSNDB04 N ******************************* Bottom of data ********************************

Figure Archive unit definition from Grouper discovery

Summary

We have demonstrated how to use Grouper to find relationships between objects by examining catalog elements not related to relationship enforcement, in our case packages.
Using this information, we can then build non-enforced RI and store this information in the Grouper metadata tables. This then allows Grouper to share information about these relationships with Data Archive Expert for the purpose of building archive specifications.
12 Chapter 12. Additional archive considerations
Archiving rows from a single table is pretty simple. But in real life a table is very often logically related to others. At a minimum, you might need data from other tables to explain the contents of the one being archived, due to its level of normalization. And often there are referential integrity relationships. They can be seen in the DB2 catalog if they are DB2 enforced, but even more frequently they are implicitly defined in the application. So, logically grouping your tables for archiving purposes becomes mandatory.
In this chapter we stress the need to have detailed knowledge of your application, if archiving of real life data is to be done with confidence and safety. This chapter contains the following:
- Knowing your data
- DB2 enforcement
- Orphaned rows
- REXX exec to locate archived tables
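As noted above, DB2 enforced relationships are recorded in the catalog. A quick way to list them, before grouping tables for archiving, is to query SYSIBM.SYSRELS. The following query is only a sketch: the creator name is from our test environment, and the delete rule codes are as documented for the DB2 for z/OS catalog.

```sql
-- List DB2-enforced relationships and their delete rules.
-- DELETERULE: A = NO ACTION, C = CASCADE, N = SET NULL, R = RESTRICT.
-- The creator PAOLOR1 is from our test tables; substitute your own.
SELECT CREATOR, TBNAME, RELNAME,
       REFTBCREATOR, REFTBNAME, DELETERULE
  FROM SYSIBM.SYSRELS
 WHERE CREATOR = 'PAOLOR1'
 ORDER BY REFTBNAME, TBNAME;
```

Anything the query does not return is, by definition, application RI that only your developers know about.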
12.1 Knowing your data
The primary purpose of archiving is to remove inactive rows from tables and place them in storage in case they are needed again. To do this effectively requires that you know your data, and that means you must know your application. Some of the relationships can be seen in the DB2 catalog as DB2 enforced RI, but most of them live in your applications and out of sight.
Important: You must know your data, and to do that you must understand your application.
Some of the data will be easily identifiable, because it has a date or timestamp, a status flag, or other easily recognizable indicators. The bulk of the data will not be so clearly identifiable, and may only be identified indirectly through other tables.
When defining an archive specification, Data Archive Expert needs to know where to start. This starting point table is the first table, or the only table, of your archive specification. Then related tables may be found or added to the starting point, and these tables together form your archive unit. If the archive unit is too small (that is, related tables have been excluded), then there is a risk of deleting rows from tables in the archive unit and producing orphan rows. Orphan rows are rows in a child table, or tables, that do not have a related row in the parent table, or tables.
Consider the situation where there are relationships between tables inside and outside of the archive unit. Where this relationship is known to DB2, Data Archive Expert can anticipate when orphans may be detected and alert you. However, if this relationship is only known to the application, Data Archive Expert is oblivious to it. So, under the second condition, parent rows may be deleted, causing orphan rows in other tables. Where there is a table that has two parent tables, and the child is being processed with one of the parents, rows may be deleted from the child that will no longer be related to the other parent.
This situation is allowed by DB2's RI rule enforcement, so Data Archive Expert allows it as well. However, it may be unwanted by your application. Consider our test data and the tables PART and SUPPLIER, which are both parents of PARTSUPP. In turn, PARTSUPP and ORDER are parents of LINEITEM; see Chapter 2, The evaluation test data.
12.2 DB2 enforcement
Where a table is related by an ON DELETE CASCADE rule, when rows are deleted from the parent table, DB2 automatically deletes the related child rows. Data Archive Expert alerts you to this when saving an archive specification. The tool errs on the side of safety and does not allow deleting from the parent table. If, after the warnings have been given, you elect to turn off orphan checking, then Data Archive Expert allows you to perform the archive with deletion from the parent table.
However, if the parent is not the starting point table, you can get very different results with orphan checking turned off. Consider the scenario in Chapter 10, Scenario 4: Archiving from RI related tables and deleting from them on page 191, where PART is the starting point table, and the other related tables are PARTSUPP, ORDER, and LINEITEM. The archive chain started with PART, a parent of PARTSUPP, which is a parent of LINEITEM. Archiving down this chain collects all of the related rows that will be deleted. Looking at the row counts, an almost one-to-one relationship is found between LINEITEMs and ORDERs. This is because an order will typically have only one jumbo sized container on it.
However, the order may well have several other items in many other container sizes and types. So, when going from the child table LINEITEM up to the parent table ORDER, too few LINEITEM rows will be archived, and when the parent's rows are deleted, a lot of unarchived rows will also be deleted. This situation is not desirable.
Attention: To err is human, to really foul things up takes a computer!
If the exercise in scenario 4 had been a real life situation, the problem should have been tackled differently. A few rules will help overcome problems:
- Parent tables should be selected as the starting point.
- Avoid changes in direction in a relationship chain.
- Where archive units have to be split into two archive specifications, perform all archiving before any deletes are carried out.
- Multiple archive specifications should not delete the same rows from the same table.
12.3 Orphaned rows
As we have seen, turning off orphan checking may not be wise. If the archive unit inhibits you from achieving the desired results, look at the problem again: it may be better to redesign the archive than to turn off orphan checking.
Important: Turning off orphan checking also turns off change detection.
Even when an apparently safe approach to archiving has been used, the biggest risk of creating orphan rows is where there is no obvious relationship between tables. Application knowledge is essential to prevent problems. When saving an archive specification, if there is a risk of orphans being produced, Data Archive Expert alerts you as shown in Figure 12-1.
AHXV Archive Specification Validation Errors Row 1 of 1 Command ==> Scroll ===> CSR The archive specification definition being saved has failed definition rules. You can return to the definition to fix the errors. Name: O6C ENTER to exit and save the definition CANCEL or END to return to definition panel Validation messages: Message AHXJ116: The rows in the parent table cannot be deleted because the archive unit contains a related child table that is not being deleted. Press ENTER to save the specification as a valid specification and accept potential child orphans. Error Tokens: Parent:PAOLOR1.NATION_O6 Child:PAOLOR1.CUSTOMER_O6
Figure 12-1 Orphan detection warning at definition time
If orphans are going to be produced at run time, the archive fails as shown in Figure 12-2.
AHXV Unexpected AE Engine Error Command ==> An unexpected validation or AE engine error was encountered while processing your request. DB2 system : DB2G More: + Return code: 121 Message key: AHXJ121 Message: While archiving data, one or more rows in a child table will become orphans. PAOLOR1.NATION_O6,PAOLOR1.CUSTOMER_O6,2 SQLCODE: n/a SQLSTATE: n/a SQL ERROR TOKENS: n/a SQL PROCEDURE DETECTING ERROR: n/a
Figure 12-2 Orphan detection error at run time
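If you do decide to turn off orphan checking, you can run your own orphan check with SQL before and after the archive run. The following sketch uses the parent/child pair from the warning panels above; the join columns are an assumption based on our TPC-H style test data, so substitute your own key columns.

```sql
-- Count child rows that have no matching parent row.
-- Table names are from the panels above; the N_NATIONKEY and
-- C_NATIONKEY join columns are assumed, not taken from the tool.
SELECT COUNT(*) AS ORPHAN_CANDIDATES
  FROM PAOLOR1.CUSTOMER_O6 C
 WHERE NOT EXISTS
       (SELECT 1
          FROM PAOLOR1.NATION_O6 N
         WHERE N.N_NATIONKEY = C.C_NATIONKEY);
```

A non-zero count after the archive run tells you that the deletion created orphans that Data Archive Expert was no longer checking for.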
12.4 REXX exec to locate archived tables
Currently, the DB2 Data Archive Expert panels provide a list of archive and retrieve specifications based on the specification name. However, sometimes you might need to see specifications in a different order. For example, you might want to find all the archive and retrieve specifications for a particular table. This information is stored in the Data Archive Expert metadata, and can be accessed through SQL. As an example, we coded a simple select statement, which shows all archive and retrieve specifications for a qualified table. Example 12-1 shows a sample query.
Example 12-1 Metadata query example
SELECT A.CREATOR, A.TBNAME, A.SPECID, B.SPECTYPE, B.SPECNAME, B.SPECID
FROM SYSTOOLS.AHXTBSOURCES A, SYSTOOLS.AHXSPECS B
WHERE A.SPECID = B.SPECID
AND A.CREATOR = 'PAOLOR3'   -- CREATOR
AND A.TBNAME = 'LINEITEM'   -- TABLE NAME
ORDER BY A.SPECID;
Using SPUFI, we can then see what our materialized result set looks like, and identify the specification names that are of interest. See Example 12-2.
Example 12-2 Metadata query results
CREATOR  TBNAME    SPECID  SPECTYPE  SPECNAME
PAOLOR3  LINEITEM  103     ARCHIVE   CUST#ORD#LINEITEM
PAOLOR3  LINEITEM  105     ARCHIVE   CUST#ORD#LINEITEM2
PAOLOR3  LINEITEM  106     RETRIEVE  RET#CUST#ORD#ITEM
PAOLOR3  LINEITEM  107     RETRIEVE  RET#CUST#ORD#ITEM2
This then allows us to specify these specifications using the specification name filters on the archive and retrieve main panels. The format, content, and key relationships in the DB2 Data Archive Expert metadata tables are documented in the DB2 Data Archive Expert User's Guide and Reference, SC.
Part 4 Data retrieval
We have archived some data. Now, an inactive account that is in the archive needs to be reactivated for exceptional processing. Or, you are running a yearly procedure that processes all your monthly archived line items. The reactivation of archived data requires their retrieval.
Retrieval is the process of locating archived data and making it available for periodical or exceptional processing. Usually this process happens on demand, is more selective than the archival, and does not require insertion into the original tables. It should, however, be programmatically controlled and pre-verified, because it can produce or encounter a schema different from the original one, and may require ad hoc queries rather than the pre-existing programs. Flexibility in the destination of the retrieval is also required. Data Archive Expert helps you with retrieve specifications as well as archive specifications.
In this part we look at:
- Scenario 1: Retrieving a single table from a table archive
- Scenario 2: Retrieving multiple tables from a file archive on tape
- Scenario 3: Retrieving into the original tables
13 Chapter 13. Scenario 1: Retrieving a single table from a table archive
In this scenario we need to retrieve the rows from a table archive of the LINEITEM table taken some time ago. We retrieved a subset of the archived rows into a named retrieve table. The intent is to give you background information about retrieving data from table archives, as distinct from file archives, preparing your defaults, and defining and running a retrieve specification from a table archive.
This chapter contains the following:
- Retrieve overview
- Retrieve preparation
- Building and running a single table retrieve specification
- Additional tasks
13.1 Retrieve overview
The retrieve specification is generally easier to define than an archive specification, since there are fewer variables and decisions involved. Here are the basic facts:
- An archive specification must have been run successfully before the data can be retrieved (indicated by PCOM or COM status, that is, the specification is partly completed or completed).
- Specifications may include retrieve from a table or file archive, into a retrieve table or the source tables.
- A retrieve specification may retrieve a subset of the archived tables, but the tables retrieved must include the starting point table.
- Row filters can only be used if retrieving from a table archive.
- The row filter of a retrieve specification may span multiple archive versions of the same archive specification.
13.2 Retrieve preparation
We started by checking the defaults, identifying the archive specification, and then defining and running the retrieve specification. Figure 13-1 shows the Data Archive Expert primary option panel, where option 0 is selected to update the retrieve defaults. We discuss the details in Default retrieve creator name on page 242.
AHXV IBM DB2 Data Archive Expert for z/os Select Archive Expert Action ==> DB2 system : DB2G Schema : SYSTOOLS User ID : PAOLOR3 0 View and set Archive Expert settings Time : 14:11 1 Work with archive specifications 2 Work with retrieve specifications X Exit IBM* Licensed Materials - Property of IBM 5655-I95 (c) Copyright IBM Corp All Rights Reserved. *Trademark of International Business Machines
Figure 13-1 Data Archive Expert primary option panel
Default retrieve creator name
The default creator name for retrieve tables is the creator name set up on the Data Archive Expert Settings panel, accessed using option 0, View and set Archive Expert settings, from the Data Archive Expert primary panel. See Figure 13-2.
During this project, and while writing this redbook, a number of fixes to minor problems and improvements to functionality were being developed and introduced. Make sure the PTF for APAR PQ78705 is applied, because there are some important changes to the Data Archive Expert option 0 panel, which allow more control over where intermediate file archive objects can be placed, and more flexibility on the DB2 Authorization ID to be used.
In the middle of the screen, under Default owner for, there are two options: archive target tables and retrieve target tables. The default setting for retrieve target tables is RETRIEVE, which may not be desirable, so ensure this is changed to the user ID or RACF group name you wish to use. At the bottom of the screen, the user can change the authorization ID by entering a value different from the default. The default is the user profile ID.
AHXV Data Archive Expert Settings Command ===> Scroll ===> CSR_ DB2: DSN7 Metadata Schema: SYSTOOLS User ID: PAOLOR3 Time: 14:19 Set or change the following settings, press <ENTER> then <END>. Log data sets qualifier (xxxxxxxx.ahxcirc.log) (xxxxxxxx.ahxlog).... PAOLOR3 Level of logging (1 - Info 2 - Warning 3 - Error) COMMIT level (1 to rows) Default owner for archive target tables. ARCHIVED retrieve target tables. RETRIEVE Grouper schema names metadata SYSTOOLS stored procedures... EGFTOOLS File archive names working database.... AHXFLWDB working storage group. AHXFLWSG Archive Expert schema name stored procedures.... AHXTOOLS DB2 Authorization ID... PRDDBA
Figure 13-2 The improved Data Archive Expert settings panel
The file archive, working database, and STOGROUP names, and the DB2 Authorization ID fields, have been highlighted in Figure 13-2.
Retrieve authorizations
To use the retrieve function, you need to consider the DB2 and RACF authorities. The minimum level of DB2 authorization for a DBA is CREATEDBA to create new databases, and DBADM authority on all existing databases being processed.
To access file archives, the minimum data set authority required is READ access on the data set or generic profile. Details are found in 17.1, DB2 authorizations on page 278.
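As an illustration, the DB2 authorities just described could be granted along the following lines. DBAGRP is a hypothetical RACF group, AHXFLWDB is the file archive working database from our settings panel, and the DBADM grant must be repeated for each database being processed.

```sql
-- DBAGRP is a hypothetical RACF group for database administrators.
GRANT CREATEDBA TO DBAGRP;

-- Repeat for each existing database the retrieve will touch:
GRANT DBADM ON DATABASE AHXFLWDB TO DBAGRP;
```

Granting to the group, rather than to individual user IDs, keeps the authorizations in one place as DBAs join and leave the team.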
Tip: To simplify archiving and retrieving authorizations, ensure authorities in DB2 and for data set profiles are granted to a RACF group used to authorize all database administrators, rather than to individual user IDs.
13.3 Building and running a single table retrieve specification
In this scenario, our requirement is to retrieve rows from an archived copy of the LINEITEM table, for all order lines with shipping dates in February 1992. The rows will be retrieved and placed into a new table for use by an auditor. Afterwards, the table and table space can be dropped.
Important: Data Archive Expert does not enforce uppercase for retrieve specification names or DB2 object names, so use the Caps Lock key.
Currently, the Data Archive Expert tool does not enforce, or give an option to work with, uppercase characters when typing the names of specifications or DB2 objects. This also applies to column names and the contents of text in quotes used in row filters (predicates). As a result, ABC is not the same as abc. This may require some adjustment from hardened ISPF users.
Tip: All screens work in 24x80 format by default; however, it may be better to work in the 32x80 or larger format. Working with a longer screen will allow more columns to be displayed, which can be of assistance when displaying a list of column names.
Building a new retrieve specification
For the purpose of this scenario, we define the retrieve specification from the Retrieve Specification List. This is done on purpose, to show how to select the original archive specification. In the other scenarios, a quicker method will be used. The first step is to go into the Data Archive Expert dialog, choose option 1 to work with archive specifications, and find the archive specification used to archive rows from the production tables. Once we found the archive, we returned to the Data Archive Expert primary panel (Figure 13-1 on page 242).
In order to work with retrieve specifications, we typed 2 on the command line of the Data Archive Expert primary panel, and pressed the Enter key. If you have not created any retrieve specifications before, <empty> will be displayed (see Figure 13-3), otherwise, a list of specifications is displayed. As with the archive panels, the contents of the list may be controlled by the filter (FI) primary command.
AHXRLIST Retrieve Specifications List Row 1 to 1 of 1 Command ===> Scroll ===> CSR Primary Commands are: DB2 system: DB2G FI - Filter retrieve specification list User ID : PAOLOR3 RE - Refresh retrieve specification list Time : 14:11 Line Commands are: N - New E - Edit C - Copy D - Delete B - Browse details I - Info summary R - Run H - History of runs Cmd Name Description Creator Updated State Type N <empty> ******************************* Bottom of data ********************************
Figure 13-3 Defining a new single table retrieve specification
Defining a single table retrieve specification
We typed N on the command row of the Retrieve Specification List panel, and pressed Enter to define a new retrieve specification. We can also supply a suitable name and description for the specification, as shown in Figure 13-4. As with archive specifications, it is useful to follow a consistent naming standard, and to add meaningful comments in order to make future identification easier. Keep in mind that once a specification has been run, it cannot be deleted or renamed.
AHXDFRET --- Define a New Retrieve Specification Command ===> Scroll ===> CSR_ DB2 system: DB2G User ID : PAOLOR3 Retrieve specification: Time : 14:11 Name LINEITEM#RET#TAB1 Description... Retrieve Feb 1992 data from archive For Archive specification: Name Type (TABL,FILE) Description.. : Archive LINEITEM to a table More details?.. (Yes) Select a retrieve definition activity ==> 1 1. Select an archive specification to retrieve.. : (required or enter name) 2. Select archived source tables to retrieve data : (optional) 3. Select archived data versions to be retrieved. : (optional) 4. Add a row filter to retrieve subset of rows.. : (optional) 5. Specify target tables for retrieve : (optional) 6. Specify target table spaces for retrieve... : (optional) 7. Save definition of new retrieve : (required) 8. Cancel definition of new retrieve (exit, no save)
Figure 13-4 Naming the single retrieve specification
With the Archive specification Name and Type left blank, we used the Select a retrieve definition activity field (middle of the screen) to build up the specification details. The options used for this example are options 1, 3, 4, 5, and 7.
Option 1 - Select an archive specification to retrieve
By selecting 1 for the Select a retrieve definition activity option, we displayed the pop-up panel shown in Figure 13-5. The panel gives the opportunity to filter (manage) the list of available archive specifications. In this example, no changes were made to the screen.
AHXDFRET --- Define a New Retrieve Specification Command ===> Scroll ===> CSR_ DB2 system: DB2G User ID : PAOLOR3 Retrie 4:12 Name AHXAFLTR --- Filter for Archives List Desc Command ===> Scroll ===> CSR For Ar Name Type Specify criteria for listing known archive specifications, Desc press Enter. Some criteria is case sensitive. More (% or blank indicates all) Select Archive name.. % Creator... PAOLOR3 1. Se Type.... % (TABL,FILE) 2. Se Source table.. % 3. Se Creator... % 4. Ad 5. Sp 6. Sp 7. Save definition of new retrieve : (required) 8. Cancel definition of new retrieve (exit, no save)
Figure 13-5 Filter for an archive list
Once the optional filter is set up, we pressed the Enter key to display the selection list shown in Figure 13-6.
AHXALIS Archive Spec List for Retrieve Row 1 to 1 of 1 Command ===> Scroll ===> CSR_ Filter of list complete Primary commands are: DB2 system: DB2G FI - Filter archive specification list User ID : PAOLOR3 RE - Refresh archive specification list Time : 14:12 Line commands are: S - Select for retrieve B - Browse details I - Info summary H - History of runs Cmd Name Description Creator Updated State Type S LINEITEM#ARC#TAB1 Archive LINEITEM to a tab PAOLOR PCOM TABL ******************************* Bottom of data ********************************
Figure 13-6 Selecting a single archive specification
To select an archive, we placed an S command alongside the required archive specification and pressed the Enter key. This returns you to the Define a New Retrieve Specification screen again. Here we entered option 3 and selected the archive version, as described in Option 3 - Select archived data versions to be retrieved.
Note: The selection process is the same if the archive is a file archive.
Option 3 - Select archived data versions to be retrieved
We selected option 3 on the Define a New Retrieve Specification panel (see Figure 13-7), selected the desired version of the archive using the S command, and then pressed the Enter key.
AHXV Select Archived Data Versions Row 1 to 2 of 2 Command ===> Scroll ===> CSR Current selections shown Line commands are: DB2 system: DB2G S - Select S* - Select all User ID : PAOLOR3 D - Deselect data version D* - Deselect all Time : 14:12 W - Show row filter T - Show target archive info Cmd S Ver Archive Timestamp Row filter used to archive rows NOT RUN YET L_SHIPDATE < ' ' *_ S L_SHIPDATE < ' ' ******************************* Bottom of data ********************************
Figure 13-7 Selecting the single table archive version
Option 4 - Add a row filter to retrieve a subset of rows
We selected option 4 on the Define a New Retrieve Specification panel, and pressed the Enter key.
This displays the Specify Row Filter for Retrieve panel shown in Figure 13-8.
AHXV Specify Row Filter for Retrieve Row 1 to 3 of 16 Command ===> Scroll ===> CSR_ To retrieve a subset of rows from the selected DB2 system: DB2G archived data versions, specify the row filter SQL User ID : PAOLOR3 predicates to be applied to the starting point table Time : 14:13 of the retrieve unit. Enter the SQL predicates below without the WHERE keyword and without the ending SQL terminator. The columns of the starting point table are shown below. Starting point table name.. : LINEITEM table owner : PAOLOR1 Row filter ==> L_SHIPDATE BETWEEN ' ' AND ' ' Data Column name Type Length Scale L_ORDERKEY INTEGER 4 0 L_PARTKEY INTEGER 4 0 L_SUPPKEY INTEGER 4 0
Figure 13-8 Specifying a row filter for a single table
In order to limit the rows being retrieved, you need to define a row filter (WHERE predicate). Our example requires only rows for February 1992 to be retrieved, so we specified a row filter of L_SHIPDATE BETWEEN ' ' AND ' '. We pressed the Enter key to confirm the row filter as input on the panel, then pressed the End key (PF3) to return to the Define a New Retrieve Specification panel.
Attention: If retrieving from a file archive, the row filter is not available. All rows will be loaded and will need to be filtered later.
Option 5 - Specify target tables for retrieve
Option 5 is used next. It allows our own table name to be specified, instead of using the default retrieve table name. We typed the required table name, in this case RETRIEVED#LINEITEM, into the Retrieve Target Table Name field, as shown in Figure 13-9. We pressed the Enter key to confirm the target table information as input on the panel, then the End key (PF3) to validate the target table information and return to the Define a New Retrieve Specification panel. Note that the rules about specifying target tables are explained in the Help for panel AHXRTGTS.
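Coming back to the row filter for a moment: since a table archive is an ordinary DB2 table, a predicate can be sanity-checked with a plain SELECT before the retrieve is run. This sketch assumes the archive table kept the source table name under our default archive owner ARCHIVED, and writes out the February 1992 dates in ISO format; adjust all three to your environment.

```sql
-- Hypothetical check against the table archive.
-- ARCHIVED is our default owner for archive target tables; the
-- archive table is assumed to mirror the source table name.
SELECT COUNT(*) AS ROWS_TO_RETRIEVE
  FROM ARCHIVED.LINEITEM
 WHERE L_SHIPDATE BETWEEN '1992-02-01' AND '1992-02-29';
```

If the count looks wrong, it is far cheaper to fix the predicate here than after a retrieve run has populated the target table.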
AHXRTGTS ---- Specify Target Tables for Retrieve Row 1 to 1 of 1 Command ===> Scroll ===> CSR Primary command is: DB2 system: DB2G C - Clear current table mappings User ID : PAOLOR3 Time : 14:13 Specify names and owners of target tables below. If you specify names, for each run of the retrieve spec new rows will be appended to pre-existing target tables. Press End when you are done. Retr Retrieve Archived Table Name Owner StPt Target Table Name Owner LINEITEM PAOLOR1 SP RETRIEVED#LINEITEM PAOLOR3 ******************************* Bottom of data ********************************
Figure 13-9 Specifying a single target table
Option 7 - Save the definition of the new retrieve
On return to the Define a New Retrieve Specification panel, we checked that completed is placed alongside each retrieve definition activity option we have just processed (1, 3, 4, and 5), as shown in Figure 13-10. You may, at any time prior to running the retrieve specification, return to these options and update them as required, even after saving the retrieve specification. However, the changes must be completed before running the retrieve specification.
AHXDFRET --- Define a New Retrieve Specification Command ===> Scroll ===> CSR_ Target tables complete DB2 system: DB2G User ID : PAOLOR3 Retrieve specification: Time : 20:00 Name LINEITEM#RET#TAB1 Description... Retrieve Feb 1992 data from archive For Archive specification: Name LINEITEM#ARC#TAB1 Type TABL (TABL,FILE) Description.. : Archive LINEITEM to a table More details?.. (Yes) Select a retrieve definition activity ==> 7 1. Select an archive specification to retrieve.. : (completed) 2. Select archived source tables to retrieve data : (optional) 3. Select archived data versions to be retrieved. : (completed) 4. Add a row filter to retrieve subset of rows.. : (completed) 5. Specify target tables for retrieve : (completed) 6. Specify target table spaces for retrieve... : (optional) 7. Save definition of new retrieve : (required or cancel) 8.
Cancel definition of new retrieve (exit, no save)
Figure 13-10 Saving the single table retrieve specification
We typed 7 and pressed Enter to save the specification. The next step is to execute, or run, the retrieve specification which has just been created. Running the retrieve is discussed in 13.4, Running a retrieve specification.
13.4 Running a retrieve specification
Having saved the retrieve specification, you have the option of going back and editing the specification (command E on the Define a New Retrieve Specification panel). You might want to do this in case you need to make any last changes.
Preparing to run the retrieve specification
To run the retrieve specification, place the row command letter R next to the specification, and press the Enter key, as shown in Figure 13-11.
AHXRLIST Retrieve Specifications List Row 1 to 1 of 1 Command ===> Scroll ===> CSR Primary Commands are: DB2 system: DB2G FI - Filter retrieve specification list User ID : PAOLOR3 RE - Refresh retrieve specification list Time : 14:14 Line Commands are: N - New E - Edit C - Copy D - Delete B - Browse details I - Info summary R - Run H - History of runs Cmd Name Description Creator Updated State Type R LINEITEM#RET#TAB1 Retrieve Feb 1992 data fr PAOLOR DEF TABL ******************************* Bottom of data ********************************
Figure 13-11 Running the single table retrieve specification
Defining the single table row filter
Once you have entered the R command, a panel is displayed to allow the row filter (that is, the WHERE predicate) to be changed. In our case there is no need to change it, so we accepted the confirmation N and pressed the Enter key on the panel shown in Figure 13-12.
AHXRLIST Retrieve Specifications List Row 1 to 2 of 2 Command ===> Scroll ===> CSR Primary Commands are: DB2 system: DB2G FI - Filter retrieve specification list User ID : PAOLOR3 RE - Refresh retrieve specification list Time : 14:14 Line Commands are: N - New E - Edit C - Copy D - Delete B - Browse details I - Info summary R - Run H - History of runs AHXRUNWQ -- Confirm Row Filter before Run Retrieve Command ===> Scroll ===> CSR Retrieve Name.. : LINEITEM#RET#TAB1 Do you want to confirm or modify the retrieve row filter before running this retrieve specification?.. N (Yes,No) (Default is No. Press <enter> to continue)
Figure 13-12 Confirm row filter before running retrieve
As soon as the Enter key is pressed, the retrieve process begins. For a moderate number of rows, this will not take long to process.
Attention: If retrieving from a FILE archive, the panel of Figure 13-12 is not displayed, and the retrieve process starts immediately, resulting in all rows being retrieved without any filtering.
Confirming the retrieve
When the running of the retrieve specification ends, the Retrieve Run Statistics panel of Figure 13-13 is displayed. This panel shows the source and target object names, with the number of rows retrieved.
AHXV Retrieve Run Statistics Row 1 to 1 of 1 Command ===> Scroll ===> CSR DB2 system: DB2G User ID : PAOLOR3 Retrieve specification: Time : 14:16 Name.... : LINEITEM#RET#TAB3 Description. : Retrieve Feb 1992 data from archive Defined by. : PAOLOR3 Run version. : 1 run by. : PAOLOR3 run on. : Archived table : LINEITEM Creator: PAOLOR1 Target retrieve table : RETRIEVE#LINEITEM Creator: PAOLOR3 Rows retrieved: 5112 ******************************* Bottom of data ********************************
Figure 13-13 Retrieve run statistics
We pressed PF3 to exit and return to the Retrieve Specification List, where the status has now changed from DEF (defined) to COM (completed), as shown in Figure 13-14.
AHXV Retrieve Specifications List Row 1 to 1 of 1 Command ===> Scroll ===> CSR Run retrieve complete Primary Commands are: DB2 system: DB2G FI - Filter retrieve specification list User ID : PAOLOR3 RE - Refresh retrieve specification list Time : 14:17 Line Commands are: N - New E - Edit C - Copy D - Delete B - Browse details I - Info summary R - Run H - History of runs Cmd Name Description Creator Updated State Type LINEITEM#RET#TAB1 Retrieve Feb 1992 data fr PAOLOR COM TABL ******************************* Bottom of data ********************************
Figure 13-14 Return to retrieve specification list panel
Important: Keep in mind that when data is retrieved to a retrieve table, the table will contain an additional column, AHXEXECUTEDTS, containing the archive timestamp. Once you have run a retrieve spec, you can only change the row filter and the list of archived data versions selected for retrieval.
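The AHXEXECUTEDTS column can be useful after the fact, for example to see how many rows each archive run contributed to the retrieve target table. A sketch, using the owner and target table name from this scenario:

```sql
-- Rows per archive run in the retrieve target table.
-- PAOLOR3.RETRIEVED#LINEITEM is the target table from this scenario;
-- substitute your own owner and table name.
SELECT AHXEXECUTEDTS, COUNT(*) AS ROWS_RETRIEVED
  FROM PAOLOR3.RETRIEVED#LINEITEM
 GROUP BY AHXEXECUTEDTS
 ORDER BY AHXEXECUTEDTS;
```

Because repeated runs append to a pre-existing target table, this is a quick way to confirm which archive versions have already been retrieved.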
13.5 Additional tasks
Depending upon the type of data you have retrieved, the number of rows, and how the data is to be used, you may need to perform additional tasks. All of these tasks fall outside the scope of the Data Archive Expert tool. They may include:
- Creating one or more indexes
- Running housekeeping: RUNSTATS, COPY
- Creating views
- Granting access to the tables/views
When the table is no longer required, dropping the table and table space, and deleting any Image Copy data sets, has to be managed outside of Data Archive Expert.
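For example, the index, view, and grant tasks might look like the following. All names here are illustrative (the index column is from our LINEITEM test table, and AUDITOR is a placeholder authorization ID), and none of this is generated by Data Archive Expert.

```sql
-- Index the retrieved table for the auditor's likely access path.
CREATE INDEX PAOLOR3.XRETLINE1
    ON PAOLOR3.RETRIEVED#LINEITEM (L_ORDERKEY);

-- Present the data through a view with a meaningful name.
CREATE VIEW PAOLOR3.V_FEB92_LINEITEM AS
    SELECT * FROM PAOLOR3.RETRIEVED#LINEITEM;

-- AUDITOR is a hypothetical authorization ID or RACF group.
GRANT SELECT ON PAOLOR3.V_FEB92_LINEITEM TO AUDITOR;
```

RUNSTATS and COPY are utilities rather than SQL, so they would be run through your normal utility JCL or DSNUTILS procedures.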
281
14
Chapter 14. Scenario 2: Retrieving multiple tables from a file archive on tape
In this chapter we describe the process of retrieving multiple data sets from tape to a set of retrieve tables. The data used is the file archives on tape containing the data associated with the archiving of jumbo-sized containers, as described in Chapter 10, Scenario 4: Archiving from RI related tables and deleting from them on page 191.
This chapter contains the following:
- The tables involved
- Define retrieve specification for multiple tables
- Job to run retrieve stored procedure
- Additional tasks
Copyright IBM Corp All rights reserved. 255
282 14.1 The tables involved Figure 14-1 shows the tables used in this scenario, with their DB2 enforced RI rules. The tables not included in the original archive unit are shaded out. REGION on delete no action NATION TIME on delete no action on delete no action PART SUPPLIER CUSTOMER on delete cascade on delete no action on delete no action PARTSUPP ORDER on delete no action on delete cascade Application RI DB2 Enforced RI LINEITEM Figure 14-1 Tables and relationships used in this scenario 256 DB2 Data Archive Expert for z/os
283 In this retrieve scenario, data is being retrieved to named tables. We had only archived data out of PART, PARTSUPP, ORDER, and LINEITEM. This data was archived in 10.4, Step 3 - Defining archiving a table archive to file on page 204, where rows from four tables were moved to a file archive from a table archive. In this scenario the four tables are being retrieved from data sets on tape to tables, which we will name explicitly, rather than leaving them to default. In our case, we were naming the tables so that the names are meaningful. If you are planning to allow access through views, then it may be better just to use default names, as they would not matter to the user, and instead create views with meaningful names. The same basic rules for retrieving tables described in 13.1, Retrieve overview on page 242 apply here as well Define retrieve specification for multiple tables Before setting up a retrieve specification, we selected option 0 on the Data Archive Expert primary option menu and verified the settings. Set the names of default retrieve table creator, database, table space, and SQLID. Then press the Enter key to commit changes and press PF3 to leave the panel. To set up the retrieve specification, select option 1 on the Data Archive Expert primary option menu and display the archive specifications. You can either scroll through the list, or adjust the filter to find the archive specification. In this example, the specification is JUMBO#PART#FILEARC, which was used in Chapter 10, Scenario 4: Archiving from RI related tables and deleting from them on page 191. This time we used an easy way to define the retrieve specification. As shown in Figure 14-2, type the T command next to the archive specification. AHXSLIST Archive Specifications List Row 1 to 2 of 2 Command ===> Scroll ===> CSR Primary commands are: DB2 system : DB2G FI - Filter archive specification list User ID. : PAOLOR3 RE - Refresh archive specification list Time... 
: 20:21 Line commands are: N - New C - Copy D - Delete R - Run B - Browse details E - Edit H - History of runs I - Info summary F - Table archive to file T - Define retrieve specification L - List retrieve specifications Cmd Name Description Creator Updated State Type T JUMBO#PART#FILEARC Write Jumbo packaged PAOLOR COM FILE JUMBO#PART#ARCHIVE Strip out all JUMBO s PAOLOR PCOM TABL ******************************* Bottom of data ******************************** Figure 14-2 Defining a new retrieve specification modelled upon a file archive On the new Define Retrieve Specification panel, we typed a specification name (in uppercase) and gave it a suitable description. Then we selected the required Select a retrieve definition activity options to complete the definition. Notice from Figure 14-3 that option 1 is already flagged as being completed. This is because we are generating this Chapter 14. Scenario 2: Retrieving multiple tables from a file archive on tape 257
284 retrieve specification based upon the correct archive specification. As a result, no changes should be made. For this example, select option 5 to specify target tables for retrieve. The other options 2 and 3 are not needed. Option 2 allows a subset of the tables to be retrieved, but we needed all of them, and option 3 allows us to select an archive run or version number. However, we already know that there is only one version, so no selections need to be made. As you can see, option 4 is unavailable. Row filtering is not permitted by Data Archive Expert when retrieving from a file archive because Data Archive Expert uses the LOAD utility and the WHEN clause, which is more restrictive than the SQL WHERE predicate. So we jumped straight to option 5, so that we could specify the retrieve table names, rather than leaving the names to the defaults. AHXDFRET --- Define a New Retrieve Specification Command ===> Scroll ===> CSR_ Archive info copied DB2 system: DB2G User ID : PAOLOR3 Retrieve specification: Time : 20:28 Name JUMBO#PART#RET Description... Retrieve Delete Jumbo Sized Containers for AUDITORS For Archive specification: Name JUMBO#PART#FILEARC Type FILE (TABL,FILE) Description.. : Write Jumbo packaged parts etc. to tape More details?.. (Yes) Select a retrieve definition activity ==> 5 1. Select an archive specification to retrieve.. : (completed) 2. Select archived source tables to retrieve data : (optional) 3. Select archived data versions to be retrieved. : (optional) 4. Add a row filter to retrieve subset of rows.. : (not allowed) 5. Specify target tables for retrieve : (optional) 6. Specify target table spaces for retrieve... : (optional) 7. Save definition of new retrieve : (required) 8. Cancel definition of new retrieve (exit, no save) Figure 14-3 Selecting the specify target tables for retrieve In our example, we were using the default database and table space name, but we were using our own target table names. 
Remember that the defaults are set up in option 0 of the Data Archive Expert primary option menu.
Tip: The default names for DB2 objects are set up in option 0 of the Data Archive Expert primary option panel. This panel allows the database, table space, and table creator names to be set, and the SQLID to be used.
We typed in the table names as shown in Figure 14-4 under the Retrieve Target Table Name heading on the Specify Target Tables For Retrieve panel. Notice that the table creator column has already been pre-filled with the default retrieve creator name.
258 DB2 Data Archive Expert for z/os
285
AHXRTGTS ---- Specify Target Tables for Retrieve Row 1 to 4 of 4
Command ===> Scroll ===> CSR
Primary command is: DB2 system: DB2G
C - Clear current table mappings User ID : PAOLOR3
Time : 20:30
Specify names and owners of target tables below. If you specify names, for each run of the retrieve spec new rows will be appended to pre-existing target tables. Press End when you are done.
Retr Retrieve
Archived Table Name Owner StPt Target Table Name Owner
LINEITEM PAOLOR3 JUMBO_LINEITEM PAOLOR3
ORDER PAOLOR3 JUMBO_ORDER PAOLOR3
PART PAOLOR3 SP JUMBO_PART PAOLOR3
PARTSUPP PAOLOR3 JUMBO_PARTSUPP PAOLOR3
******************************* Bottom of data ********************************
Figure 14-4 Specifying the retrieve table names for the jumbo containers
Press the Enter key to confirm the target table information, including any revised creator names, as input on the panel. Then press PF3 to validate the target table information and return to the Define a New Retrieve Specification panel. Option 6, which can be used to specify the database and table space, is not needed, since we are leaving these to the default. With no further updates to make to our definition, we used option 7 to save the retrieve specification.
Job to run retrieve stored procedure
Now, set up a job like the one shown in Figure 14-5 to run the retrieve from file archive stored procedure AHXTOOLS.OFFLINERETEXECSP.
Chapter 14. Scenario 2: Retrieving multiple tables from a file archive on tape 259
286
//* */
//*MODULE: AHXCRDB */
//* */
//* LICENSED MATERIALS - PROPERTY OF IBM */
//* 5655-I95 */
//* (C) COPYRIGHT IBM CORPORATION 2003 ALL RIGHTS RESERVED. */
//* US GOVERNMENT USERS RESTRICTED RIGHTS - USE, DUPLICATION OR */
//* DISCLOSURE RESTRICTED BY GSA ADP SCHEDULE CONTRACT WITH IBM CORP. */
//* */
//STEP010 EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNEXIT,DISP=SHR
// DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXFIRET DB2G,JUMBO#PART#RET,SYSTOOLS
/*
Figure 14-5 Job to run the retrieve from file archive stored procedure
There are only three parameters needed: the DB2 subsystem ID, the retrieve specification name, and the stored procedure schema name. Once the JCL and parameters are adjusted, submit the job. Upon completion of the job, we checked the condition code and the SYSTSPRT job output. This output is written in blocks, one per table. It is important to ensure that the same number of rows have been retrieved as were archived, and that the job ended without error. The row count information is also available in the history information for both the archive and retrieve specifications.
Additional tasks
Depending upon the setup of your site, the creator of the retrieve tables, and so on, you may have to grant SELECT authority to the group of user IDs needing access, as in Example 14-1.
Example 14-1 Granting select authority to user RACF group
GRANT SELECT ON JUMBO_LINEITEM, JUMBO_ORDER, JUMBO_PART, JUMBO_PARTSUPP TO USRGRP;
If required, a view can be created to join the retrieved data with the current live data in the production tables.
A sample view for joining the parts tables is shown in Example 14-2.
Example 14-2 View joining the retrieved and live parts tables
CREATE VIEW ALLPARTS
(P_PARTKEY, P_NAME, P_MFGR, P_BRAND, P_TYPE,
P_SIZE, P_CONTAINER, P_RETAILPRICE, P_COMMENT)
AS SELECT P_PARTKEY, P_NAME, P_MFGR, P_BRAND, P_TYPE,
P_SIZE, P_CONTAINER, P_RETAILPRICE, P_COMMENT
FROM PAOLOR3.JUMBO_PART
UNION ALL
SELECT P_PARTKEY, P_NAME, P_MFGR, P_BRAND, P_TYPE,
P_SIZE, P_CONTAINER, P_RETAILPRICE, P_COMMENT
260 DB2 Data Archive Expert for z/os
287
FROM PAOLOR3.PART;
Be aware that you cannot create a view using SELECT * from the retrieve table. This is because retrieve tables have a column, in addition to the original ones in the source tables, called AHXEXECUTEDTS, which holds the retrieve timestamp.
If data in the retrieved tables is to be kept for any length of time, back up the table spaces by taking an image copy. Also, create suitable indexes and run RUNSTATS. These activities are all outside the scope of the Data Archive Expert tool; however, other tools such as DB2 Administration Tool can assist you.
When the tables are no longer required, drop the tables. This task is also outside the control and scope of the Data Archive Expert tool. Currently, Data Archive Expert does not manage the dropping of tables or the deletion of retrieve specifications, so if you drop a retrieve table, Data Archive Expert will not be aware of it. As a result, if you rerun the retrieve specification using the same row filter, no rows will be retrieved. If a row filter was previously used and a retrieve is now rerun without a row filter, it will only retrieve the rows not previously processed. Also, if you are using explicitly named tables, Data Archive Expert will expect the tables to exist.
Chapter 14. Scenario 2: Retrieving multiple tables from a file archive on tape 261
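As noted above, a view over a retrieve table must list its columns explicitly so that AHXEXECUTEDTS is omitted. A minimal sketch, assuming a view name of JUMBO_PART_V and using the column list from Example 14-2:

```sql
-- Sketch: expose the JUMBO_PART retrieve table without the
-- AHXEXECUTEDTS column appended by Data Archive Expert.
-- The view name is an assumption.
CREATE VIEW PAOLOR3.JUMBO_PART_V AS
  SELECT P_PARTKEY, P_NAME, P_MFGR, P_BRAND, P_TYPE,
         P_SIZE, P_CONTAINER, P_RETAILPRICE, P_COMMENT
  FROM PAOLOR3.JUMBO_PART;
```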
289
15
Chapter 15. Scenario 3: Retrieving into the original tables
In this scenario the ORDER and LINEITEM tables have been archived a total of three times to table archives. Each archive holds a full year's worth of data, from the 1st of January to the 31st of December. The requirement is to retrieve rows back into the original tables for the date range April 6, 1992 to April 5, 1993. Basically, we need to implement the retrieval of two tables, linked by RI, back into their original source tables.
This chapter contains the following:
- Defining retrieve specification to replace rows
- Running the retrieve specification
- Additional tasks
Copyright IBM Corp All rights reserved. 263
290 15.1 Defining retrieve specification to replace rows Figure 15-1 shows the tables and relationships. The tables excluded from the archive unit have been shaded out. We were only interested in archiving tables ORDER and LINEITEM. REGION on delete no action NATION TIME on delete no action on delete no action PART SUPPLIER CUSTOMER on delete cascade on delete no action on delete no action PARTSUPP ORDER on delete no action on delete cascade Application RI DB2 Enforced RI LINEITEM Figure 15-1 Relationship between tables 264 DB2 Data Archive Expert for z/os
291
The same basic rules for retrieving tables (as described in 13.1, Retrieve overview on page 242) apply here too. From the introduction to this scenario, you have seen that the data was originally archived in units of whole years, and we needed to recall data spanning two years, April 6, 1992 to April 5, 1993. This may be tackled in two ways, for example:
- Run two retrieves: first, use a row filter to select rows after 5th April for the 1992 archive; second, use a row filter to select rows before 6th April for the 1993 archive.
- Define one retrieve, selecting dates between 6th April 1992 and 5th April 1993, and select both the 1992 and 1993 archives. This is the method we will be using.
Preparing to retrieve from the archive
As before, we start with option 0 on the Data Archive Expert primary panel, and check that the default database, table space, creator names, and SQLID have been set up to suit this task. We pressed the Enter key to confirm the updates, and pressed PF3 to return to the Data Archive Expert primary option panel again.
Now, use option 1 to list the archive specifications. If required, you can use the filter or the FI page command to limit the display. Having found the archive specification, we typed H for history, and displayed the details shown in Figure 15-2. The listed runs or versions may be selected and the details checked if required.
AHXV Archive Specification History Row 1 to 4 of 4
Command ===> Scroll ===> CSR
Specification name: ORDER#ARCHIVE Creator: PAOLOR3
Description: Same as ORDER#ARCHIVE but this time set up delete rule
Line commands are: S - Show statistics
Cmd Run State Run by Date Row filter
Defined O_ORDERDATE BETWEEN ' '
1 Complete PAOLOR O_ORDERDATE BETWEEN ' '
2 Complete PAOLOR O_ORDERDATE BETWEEN ' '
3 Complete PAOLOR O_ORDERDATE BETWEEN ' '
******************************* Bottom of data ********************************
Figure 15-2 Order archive history details
Defining the retrieve specification
After leaving the archive history dialog, but still listing the archive specifications, we used the T command to define a new retrieve specification. We gave the new specification a name (uppercase) and a description, selected option 3, Select archived data versions to be retrieved, and pressed the Enter key. Options 1 and 2 are not needed because we have already selected the archive, and we will be retrieving rows for all of the archived tables.
Chapter 15. Scenario 3: Retrieving into the original tables 265
292
Option 3 lists all of the archive versions produced by running this archive specification. As shown in Figure 15-2, there are three versions. We typed the S (select) row command next to each version being selected. In this case, we selected versions 1 and 2, for orders placed between 1st January and 31st December for the years 1992 and 1993. Pressing the Enter key confirms the selection.
AHXV Select Archived Data Versions Row 1 to 4 of 4
Command ===> Scroll ===> CSR
Line commands are: DB2 system: DB2G
S - Select S* - Select all User ID : PAOLOR3
D - Deselect data version D* - Deselect all Time : 17:15
W - Show row filter T - Show target archive info
Cmd S Ver Archive Timestamp Row filter used to archive rows
NOT RUN YET O_ORDERDATE BETWEEN ' ' AND '
S_ O_ORDERDATE BETWEEN ' ' AND '
S_ O_ORDERDATE BETWEEN ' ' AND '
O_ORDERDATE BETWEEN ' ' AND '
******************************* Bottom of data ********************************
Figure 15-3 Selecting the required order archive versions for retrieving
After checking that the selection is correct, we pressed PF3 to leave this panel.
Add row filter to span archive versions
We selected option 4 and supplied the row filter. As discussed, this must select the rows spanning the two archive versions. See Figure 15-4.
DB2 Data Archive Expert for z/os
293
AHXV Specify Row Filter for Retrieve Row 1 to 3 of 9
Command ===> Scroll ===> CSR_
Continue or press End To retrieve a subset of rows from the selected DB2 system: DB2G
archived data versions, specify the row filter SQL User ID : PAOLOR3
predicates to be applied to the starting point table Time : 17:16
of the retrieve unit.
Enter the SQL predicates below without the WHERE keyword and without the ending SQL terminator. The columns of the starting point table are shown below.
Starting point table name.. : ORDER table owner : PAOLOR3
Row filter ==> O_ORDERDATE BETWEEN ' ' AND ' '
Data
Column name Type Length Scale
O_ORDERKEY INTEGER 4 0
O_CUSTKEY INTEGER 4 0
O_ORDERSTATUS CHAR 1 0
Figure 15-4 Row filter spanning two archive versions
Press the Enter key to confirm the row filter before returning. If the Enter key is not pressed, the row filter will be lost.
Updating the target table names
On return to the define retrieve archive panel, select option 5 to list the target tables. The list is shown in Figure 15-5, where the target table names are highlighted. When typing the target table and creator names, exactly match the source names. We suggest that you use the mouse to copy and paste the details to avoid errors.
Chapter 15. Scenario 3: Retrieving into the original tables 267
294
AHXV Specify Target Tables for Retrieve Row 1 to 2 of 2
Command ===> Scroll ===> CSR
Primary command is: DB2 system: DB2G
C - Clear current table mappings User ID : PAOLOR3
Time : 17:17
Specify names and owners of target tables below. If you specify names, for each run of the retrieve spec new rows will be appended to pre-existing target tables. Press End when you are done.
Retr Retrieve
Archived Table Name Owner StPt Target Table Name Owner
LINEITEM PAOLOR3 LINEITEM PAOLOR3_
ORDER PAOLOR3 SP ORDER PAOLOR3_
******************************* Bottom of data ********************************
Figure 15-5 Specifying target table names that are the same as the source tables
Once satisfied with the names, press the Enter key to ensure the table and creator names have been accepted, and then press the PF3 key to return from this panel. Option 6 is not required, as we have just identified the target tables, and there is no need to name the database and table spaces. Save the retrieve specification using option 7.
Running the retrieve specification
Now, set up a job to execute a REXX exec that invokes the stored procedure to retrieve table archives, AHXTOOLS.RETRIEVEEXECSP, with a suitable row filter. As Figure 15-6 shows, four parameters are required: the DB2 subsystem ID, the retrieve specification name, the schema, and a row filter. Once set up, run the job.
//* */
//* LICENSED MATERIALS - PROPERTY OF IBM */
//* 5655-I95 */
//* (C) COPYRIGHT IBM CORPORATION 2003 ALL RIGHTS RESERVED. */
//* US GOVERNMENT USERS RESTRICTED RIGHTS - USE, DUPLICATION OR */
//* DISCLOSURE RESTRICTED BY GSA ADP SCHEDULE CONTRACT WITH IBM CORP. */
//* */
//STEP010 EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2G7.SDSNEXIT,DISP=SHR
// DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//SYSEXEC DD DISP=SHR,DSN=AHX110.SAHXSAMP.IVP
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%AHXTARET DB2G,ORDER#RETRIEVE2,SYSTOOLS,-
O_ORDERDATE BETWEEN ' ' AND ' '
/*
Figure 15-6 Running retrieve of orders from the table archives with a row filter
268 DB2 Data Archive Expert for z/os
295
Upon completion, check the SYSTSPRT job output and ensure the rows have been correctly retrieved. For additional information on running retrieve specifications in batch, and on continuation of parameters and multi-line row filters, see Chapter 5, Stored procedures and batch execution.
Additional tasks
Here are some tasks that may be required after the retrieve:
- Run an image copy (COPY) of the updated table spaces.
- Depending upon your application and how the retrieved rows are to be accessed, an additional index may be required.
- If access by QMF users is to be limited just to these rows, views can be created.
- Additional accesses may have to be granted.
- Table spaces and indexes may now be in multiple extents, so REORG INDEX or REORG TABLESPACE may be required.
- Run RUNSTATS.
Chapter 15. Scenario 3: Retrieving into the original tables 269
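As an illustration of the COPY and RUNSTATS steps listed above, a utility job might look like the following sketch. The job card, utility ID, image copy data set name, and the database and table space names (DBORD.TSORDER) are all assumptions; substitute the names used at your site.

```jcl
//HOUSEKP  JOB (ACCT),'POST RETRIEVE',CLASS=A,MSGCLASS=X
//* Illustrative post-retrieve housekeeping; all names are assumptions.
//UTIL     EXEC DSNUPROC,SYSTEM=DB2G,UID='RETHKP'
//SYSCOPY  DD DSN=PAOLOR3.ORDER.ICOPY1,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSIN    DD *
  COPY TABLESPACE DBORD.TSORDER FULL YES
  RUNSTATS TABLESPACE DBORD.TSORDER TABLE(ALL) INDEX(ALL)
/*
```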
297
Part 5
Operational considerations
In this part we add considerations of an operational nature. We look at:
- Planning for and managing change in an archive environment
- Security and authorizations
- Performance
Copyright IBM Corp All rights reserved. 271
299
16
Chapter 16. Planning for and managing change in an archive environment
In this chapter we provide a brief discussion of source object schema modification, and show how DB2 Data Archive Expert detects and allows you to manage the impact of these changes in your archive environment.
This chapter contains:
- Schema changes and impact on archival
Copyright IBM Corp All rights reserved. 273
300
16.1 Schema changes and impact on archival
After you have initially archived active data, you can expect that, over time, application changes that modify your source object data schema become increasingly likely. Undetected and unmanaged data schema changes may cause problems that might make archived data unusable. To manage this situation, as each table or file archive specification is created, DB2 Data Archive Expert saves the data schema for each object listed in the archive unit.
We ran a brief scenario to validate and understand how this process works on archive, and the impact on subsequent retrievals. For this test, we ran a table archive against our test data. Once the table archive completed, we added a column to the source table. For the purposes of our test, we simply added a single character column to the end of the table. Next, we attempted to execute a second run using the original archive specification. We observed the results shown in Figure 16-1.
AHXV Unexpected AE Engine Error
Command ==>
An unexpected validation or AE engine error was encountered while processing your request.
DB2 system : DB2G More: +
Return code: 33
Message key: AHXJ033
Message: The source table definition has changed since the prior archive. A column has been added to the source table. NEWCOLUMN,DEPTDEL
SQLCODE: n/a
SQLSTATE: n/a
SQL ERROR TOKENS: n/a
SQL PROCEDURE DETECTING ERROR: n/a
SQL DIAGNOSTIC INFORMATION: n/a
Figure 16-1 Schema change to source table
We then created a new archive specification and ran it successfully. This archive specification was then used to build a retrieve specification. The retrieve target table in the specification points to an existing table with yet another column appended to the end of the table. When this retrieve specification was run, we observed the results shown in Figure 16-2.
DB2 Data Archive Expert for z/os
301
AHXV Unexpected AE Engine Error
Command ==>
An unexpected validation or AE engine error was encountered while processing your request.
DB2 system : DB2G More: +
Return code: 107
Message key: AHXJ107
Message: DB2JDBCSection Received Error in Method prepare: SQLCODE ==> -206 SQLSTATE ==> Error Tokens ==> <<DB2 7.1 SQLJ/JDBC>>. <IDENTIFIER>
SQLCODE: n/a
SQLSTATE: n/a
SQL ERROR TOKENS: n/a
SQL PROCEDURE DETECTING ERROR: n/a
Figure 16-2 Retrieve attempt into a target table with changed schema
As described in DB2 Version 7 UDB for z/OS and OS/390 Messages and Codes, GC , an SQL code -206 is an error of the type defined in Example 16-1. This is what we would expect to see, since the original source table has additional columns defined over it.
Example 16-1 SQL code -206
column-name IS NOT A COLUMN OF AN INSERTED TABLE, UPDATED TABLE, OR ANY TABLE IDENTIFIED IN A FROM CLAUSE, OR IS NOT A COLUMN OF THE TRIGGERING TABLE OF A TRIGGER
To resolve this situation, we built a new retrieve specification to allow DB2 Data Archive Expert to generate default tables, which are then built according to the stored source table schema.
Our conclusion is that DB2 Data Archive Expert detects when the source table is changed on archive specification execution. You cannot continue to run additional archives from the original specification, but you can create a new archive specification, which will reflect the source table schema change in the metadata. When attempting to retrieve into an existing source or retrieve table changed since the archive run, DB2 Data Archive Expert does not allow you to retrieve into the original tables, but does allow you to retrieve into a new retrieve table, which is created with the schema stored in the metadata.
Additional considerations for source table schema changes
Views are not allowed on archive specifications, only tables.
Retrieval from an old archive can be made back to a changed source table if the new column is declared as nullable, or as not null with default. Chapter 16. Planning for and managing change in an archive environment 275
302
Retrieval from an old archive can be made back to a changed source table if a column is altered to be longer than declared when the archive was made.
There is no validation of schema changes. If a table has changed, two specifications have been defined, and two archivals now exist corresponding to the old and new table layouts, then direct retrieval of rows from the archive across the two definitions (not knowing whether a row was archived before or after the table change) is not supported. You can retrieve directly using the definition that matched the data. For the older archived data, if the schema change is incompatible (it is up to DB2 to verify this), you must define an intermediate table.
Data Archive Expert does not yet handle schema changes. Rows can go back to the source table after it has changed, by specifying the table that was the source of the archive as the target of the retrieve, as long as the schema change in the source table does not prevent it. DB2 errors are reported during retrieve execution.
276 DB2 Data Archive Expert for z/os
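As an illustration of a compatible column addition of the kind described in this chapter, the change could be made as follows. The table and column names are taken from the error tokens in Figure 16-1; the creator and data type are assumptions.

```sql
-- Sketch: add a column in a way that keeps older archives retrievable
-- into the changed source table (nullable, or NOT NULL WITH DEFAULT).
-- Creator and data type are assumptions.
ALTER TABLE PAOLOR3.DEPTDEL
  ADD NEWCOLUMN CHAR(1) NOT NULL WITH DEFAULT;
```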
303
17
Chapter 17. Security and authorizations
In this chapter we discuss the DB2 and data set security considerations when installing and using the Data Archive Expert tool.
This chapter contains the following:
- DB2 authorizations
- Data set authorizations
- Granting access to others
Copyright IBM Corp All rights reserved. 277
304
17.1 DB2 authorizations
In this section we look at the DB2 authorities needed to perform Data Archive Expert's tasks.
DB2 installation authorities
SYSADM authority, and RACF authority for the libraries, will be required by the DB2 systems programmer who is installing Data Archive Expert.
DB2 archive and retrieve authorities
In this section we consider the DB2 authorizations required by the Database Administrator (DBA) to run an archive or a retrieve using Data Archive Expert. If the DBA has SYSADM authority, then the remainder of this section may be bypassed. The minimum level of authority required to use this tool is DBADM on the databases to be used in the archive and retrieve processes. We recommend that the DBA also has CREATEDBA system authority. The summary of authorizations is:
- CREATEDBA system privilege: to create intermediate work databases and to be able to create new databases for table archives
- DBADM on existing databases: the source database containing the live and production tables, archive target databases, and the retrieve target database
- SELECT access: on source tables (live tables) to be archived, and on archive tables required for a retrieve
If DBAs use their own user IDs, then authorizations should be granted to a single RACF group (the DB2 Authorization ID) used by the DBAs to create new application databases. In this way all DBAs will have the required accesses and will be able to retrieve from each other's archives without any further grants. Basically, with PQ78705 applied, the installer will need to issue:
GRANT ALL ON <list of dae table names> TO <dba-user-id>;
or
GRANT DBADM ON <dae db name list> TO <dba- >;
Important: Ensure PTF PQ78705 has been applied to Data Archive Expert, as this allows the DB2 Authorization ID to be specified. It also allows specification of the file archive working database and storage group names.
With the application of PQ78705 the user will also have to reference DB2 PQ51163 (secondary authorization ID does not take effect for stored procedure DSNUTILS invoking DB2 Utility). To enable secondary authorization ID support, the identify exit may be changed so that the task level ACEE is used. 278 DB2 Data Archive Expert for z/os
305
17.2 Data set authorizations
We describe the authorizations needed for archive data sets.
Access to archive data sets
When running a file archive specification, the DBA will require ALTER access to the generic data set profiles for the data sets being created. Using the RACF dialog to give access, do the following:
1. Select option 1 for DATA SET PROFILES.
2. Select option 4 for ACCESS to maintain the access lists.
3. Supply the data set profile (in single quotes) and G for Generic.
4. Choose option 1 to ADD.
5. Set Copy to NO and Specify to YES.
6. Give authority of ALTER to a RACF group name.
7. Leave the RACF dialog.
8. Type TSO SETR REFRESH GENERIC(DATASET) on the command line to refresh the generic data set profiles in RACF.
9. Give authorities to a RACF group used to authorize DBAs, rather than to individual user IDs.
Provided all DBAs share the same RACF authorities, no special authorizations will be required to retrieve DB2 data from table or file archives.
Granting access to others
In this section we discuss how to authorize other users to access archives, and archive or retrieve tables.
Access to table archives
When a table or group of related tables has been archived, there may be a need to grant access to an application or to user IDs. These functions are not part of the Data Archive Expert archive process, and must be performed by the DBA after the archive has completed. The creator and name of the archive table are shown in the history of the archive specification. To find the archive creator and table name, list the archive specifications and use the H (HISTORY) row command, and then S to SELECT the archive version. Figure 17-1 shows three tables in an archive group; the creator and table names of the archives have been highlighted.
Chapter 17. Security and authorizations 279
306
AHXOLSTS Archive Run Statistics Row 1 to 3 of 3
Command ==> Scroll ===> CSR
Archive specification : CUST#ORD#LINEITEM DB2 system. : DB2G
Creator : PAOLOR3
Description : Archive RI group Customer, Order and Lineitem
Row filter : C_MKTSEGMENT IN ('BUILDING','AUTOMOBILE') AND C_ACCTBAL BETWEEN 0.01 AND
===============================================================================
Run: 1 Source table: CUSTOMER Creator: PAOLOR3 Del: 0 Act:
Target table: AHXA_ Creator: PAOLOR3 Ins: 2662
===============================================================================
Run: 1 Source table: LINEITEM Creator: PAOLOR3 Del: Act:
Target table: AHXA_ Creator: PAOLOR3 Ins:
===============================================================================
Run: 1 Source table: ORDER Creator: PAOLOR3 Del: 0 Act:
Target table: AHXA_ Creator: PAOLOR3 Ins: 5885
******************************* Bottom of data ********************************
Figure 17-1 Archive table creator and name
Having identified the archive table, execute a GRANT table privilege command. Typically, SELECT access will be granted to a RACF group rather than to individual user IDs. If there are several archives of the same source table, then it may be better to create a view to join all of the tables and GRANT access to the view. A benefit of using a view, even for a single archive table, is that it removes the archive timestamp column, AHXEXECUTEDTS. See Example 17-1, which creates the view and grants access to ordering users.
Example 17-1 View of a single archive table

CREATE VIEW LINEITEM_ARCV01
  (L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
   L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
   L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
   L_SHIPMODE, L_COMMENT)
AS SELECT
   L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
   L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
   L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
   L_SHIPMODE, L_COMMENT
FROM PAOLOR3.AHXA_ ;
GRANT SELECT ON LINEITEM_ARCV01 TO ORDUSR;

In the next example two archive tables are joined in one view. Example 17-2 returns rows from archives taken at different dates, where AHXA_ was run first (version 1), and AHXA_ was taken a month later (version 2).

Example 17-2 View of two single table archives

CREATE VIEW LINEITEM_ARCV02
  (L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
   L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
   L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
   L_SHIPMODE, L_COMMENT)
AS SELECT
   L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
   L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
   L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
   L_SHIPMODE, L_COMMENT
FROM PAOLOR3.AHXA_       -- FIRST ARCHIVE
UNION ALL
SELECT
   L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
   L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
   L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
   L_SHIPMODE, L_COMMENT
FROM PAOLOR3.AHXA_ ;     -- SECOND ARCHIVE
GRANT SELECT ON LINEITEM_ARCV02 TO ORDUSR;

Access to retrieve tables
Where it is undesirable, or there is no need, to make all of the archive rows available, the required rows can be retrieved from a table archive into a retrieve table. If the archive is a data set, it must be retrieved in full (no filtering is allowed) to a retrieve table. To identify the creator and table name of the retrieved data, check the output of the specification run. Otherwise, list the retrieve specifications and type H for the HISTORY row command, and then S to SELECT the retrieve version. Figure 17-2 shows the history details; the table creator and name have been highlighted.

AHXRSTAT                Retrieve Run Statistics            Row 1 to 1 of 1
Command ===>                                               Scroll ===> CSR
DB2 system : DB2G                        User ID : PAOLOR3
Retrieve specification :                 Time : 15:40
Name  . . . . : LINEITEM#RET#FILE1
Description . : Retrieve Feb 92 data from file archive
Defined by  . : PAOLOR3
Run version . : 1    run by : PAOLOR3    run on :
Archived table        : LINEITEM            Creator: PAOLOR1
Target retrieve table : RETRIEVE#FEB92ITEM  Creator: PAOLOR3
Rows retrieved: 0
******************************* Bottom of data ********************************
Figure 17-2 Retrieve table creator and name

If all of the rows have been retrieved, or there are multiple retrieves, a view of joined tables with a predicate to limit the rows visible may be created.
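As an illustration, a view over a retrieve table might look like the following sketch. The view name, the predicate, and the grantee are hypothetical; the retrieve table name RETRIEVE#FEB92ITEM is the one shown in Figure 17-2.

   CREATE VIEW LINEITEM_RETV01 AS
     SELECT * FROM PAOLOR3.RETRIEVE#FEB92ITEM
     WHERE L_SHIPDATE >= '1992-02-01';
   GRANT SELECT ON LINEITEM_RETV01 TO ORDUSR;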
Then the SELECT privilege on the view or table may be granted to a RACF group or user ID.

Access to file archives
If there is a need to grant other persons access to the archived data set, consider setting up another generic data set profile in RACF. Doing this limits access to a small number of data sets.
To use the RACF dialog to give access, do the following:
1. Select option 1 for DATA SET PROFILES.
2. Select option 4 for ACCESS - Maintain the access lists.
3. Supply the data set profile (in single quotes) and G for Generic.
4. Choose option 1 to ADD.
5. Set Copy to NO and Specify to YES.
6. Give authority of READ to a RACF name or user ID.
7. Leave the RACF dialog.
8. Issue TSO SETR REFRESH GENERIC(DATASET) on the command line to refresh the generic data set profiles in RACF.
Chapter 18. Performance

From the DB2 performance perspective, Data Archive Expert is a DB2 application that uses dynamic SQL, Java stored procedures, and global temporary tables; depending on the size and options of your archiving, it can select, update, and delete large numbers of rows. General performance considerations for similar applications apply. In this chapter we discuss some general observations regarding the performance of the Data Archive Expert archive environment, made during our use of the product.
Copyright IBM Corp. All rights reserved. 283
18.1 Performance considerations
As part of a general overview of the performance characteristics of running Data Archive Expert, we ran a series of archive and retrieve specifications with DB2 accounting and SQL activity trace classes active. We then ran DB2 Performance Expert accounting long reports and SQL activity trace reports to discover and validate some of the performance characteristics of the product.

Attention: We ran a specific set of scenarios on a lab machine with limited contention for resources from other concurrent workloads. Your workloads, data, environment, and hardware configurations are going to be unique. We did not try to draw any specific performance conclusions from these scenarios, but attempted to gain a general understanding of the performance characteristics of the product.

Logging considerations
We observed certain situations with a noticeable increase in the amount of DB2 logging occurring as part of a specification execution. Let us discuss one scenario in a little more detail. We set up a table archive specification with a single target archive table. There are three indexes on the source (original) table, and by default we allowed Data Archive Expert to build a unique index over the archive table. We also defined a delete rule on the archive specification and set the complete archive specification flag to Y. This ensures that once the archive operation is complete, rows are deleted from the original table based on the row filter specification. Example 18-1 shows the various SQL counts for this archive specification execution.
Example 18-1 DB2 Performance Expert accounting report - SQL attribution

SQL DML      AVERAGE   TOTAL
SELECT
INSERT
UPDATE
DELETE
DESCRIBE
DESC.TBL
PREPARE
OPEN
FETCH
CLOSE

We then reviewed the logging activity captured during the archive specification execution, and observed the activity shown in Example 18-2.

Example 18-2 DB2 Performance Expert accounting report - logging activity

LOGGING              AVERAGE   TOTAL
LOG RECORDS WRITTEN
TOT BYTES WRITTEN

At first glance, we thought this was an excessive amount of logging, but after some analysis, which included looking at some of the log records using the DSN1LOGP utility, we made the following observations:
- We can account for 13,941 records that resulted from the compensation records written for the inserts and deletes on the respective source and archive target tables.
- In addition, since we have one unique index on the target table, approximately 7,000 index inserts are logged as well. When Data Archive Expert deletes the source table rows, because there are three indexes over this table, we see 3 x 6971, or about 21,000, index deletes being logged.
- We then reconciled the remaining 20% as being related to page formatting, space map page updates, and index leaf page split processing.

In this example, we did not have referential integrity defined on our original source table; this would also have an influence on the amount of logging. So, whenever a table archive specification is run without deleting the source table rows, the logging level is fairly predictable and is a function of the number of target archive tables in the specification and the presence of the unique index created by Data Archive Expert. If the archive specification is completed, careful consideration needs to be given to the amount of logging that will occur due to the number of deletes performed on the source tables, the number of associated indexes, and updates relating to DB2-enforced RI.

File archives work a little differently. In the case of a file archive, a set of tables, which we refer to as transient tables, is created with a CREATE TABLE ... LIKE clause. As rows are materialized from the source table by application of the row filter, they are inserted into the transient table. Once the row filter materialization is complete, this table is unloaded through an invocation of the DB2 V7 Unload utility via a call to DSNUTILS.
So, while at first glance one might assume that an archive to file would reduce the number of log records generated, the characteristics we observed showed equivalent logging between the file and table archive executions. Logging overhead associated with retrieve specifications depends on the choice of retrieve target table or original source table. A retrieve specification from a table archive into a retrieve table or the original source table works through SQL inserts, and the logging associated with this has been described above. For a retrieve specification from a file archive, DB2 Data Archive Expert uses the DB2 V7 Load utility with the LOG YES specification.

Dynamic SQL tuning
During our tracing we observed a significant number of dynamic SQL statements being executed by the product. Using DB2 Performance Expert, we were seeing a dynamic SQL statement global cache hit ratio of about 80%. This indicates that for a reasonable percentage of the time, DB2 was able to find a skeleton copy of prepared statements in the cache and perform a short prepare. However, since there was little competition for the cache on this particular system, we were hesitant to quantify the benefit obtained from the dynamic statement cache. For information about this topic as well as other general suggestions regarding performance tuning of dynamic applications, refer to the redbook Squeezing the Most out of Dynamic SQL with DB2 for z/OS and OS/390, SG24-6418.

Data Archive Expert logging level
In the DB2 Data Archive Expert settings panel, AHXSETTS, you can specify the level of logging that the tool produces during product execution. The information logged can be used to help with problem determination and diagnosis. There are three levels of logging that can be specified:
- Level one includes definition and execution information, warning messages, and error information.
- Level two logs warning and error messages.
- Level three logs only error information.

Our recommendation is to set the logging level to three, and then specify a more granular level of logging in the event of problems, to assist in diagnosis. This information is kept in a data set named xxxxxxx.AHXLOG, where xxxxxxx is a value that you specify in the settings panel. By default, this is the RACF TSO user identifier used to run DB2 Data Archive Expert.

Data Archive Expert commit level
Also specified in the AHXSETTS settings panel, the COMMIT LEVEL field can influence the commit frequency used by certain types of archive and retrieve specifications. Currently, the commit level is not used during the complete phase of a deferred delete, or for an archive with immediate complete specified. APAR PQ79107, currently open, will apply the commit interval to both archiving and deleting. The specified commit level, which refers to the number of inserts per commit, is used when archiving into an archive table or on a retrieve from an archive table. We ran a trace against a table archive with the same row filter as our earlier example. The DB2 Performance Expert accounting long report SQL statement section shows the information in Example 18-3.

Example 18-3 DB2 Performance Expert accounting report SQL section - Commit frequency

SQL DML      AVERAGE   TOTAL
SELECT
INSERT
UPDATE
DELETE
DESCRIBE
DESC.TBL
PREPARE
OPEN
FETCH
CLOSE

We set the value of COMMIT LEVEL to 100 in the profile, and looked at the highlights section of the DB2 Performance Expert accounting report; see Example 18-4. You see that the number of commits is 78, which, when combined with some of the additional metadata updates, confirms the effect of COMMIT LEVEL on table archive specification execution.

Example 18-4 DB2 PE accounting report highlight section - Commit frequency

HIGHLIGHTS
#OCCURRENCES    : 1
#ALLIEDS        : 1
#ALLIEDS DISTRIB: 0
#DBATS          : 0
#DBATS DISTRIB.
: 0
#NO PROGRAM DATA: 0
#NORMAL TERMINAT: 1
#ABNORMAL TERMIN: 0
#CP/X PARALLEL. : 0
#IO PARALLELISM : 0
#INCREMENT. BIND: 1
#COMMITS        : 78
#ROLLBACKS      :
#SVPT REQUESTS  : 0

In a similar fashion, we traced a second execution of a table archive retrieve into a retrieve table and observed a similar impact on the commit level. For immediate complete and deferred complete, delete yes archive specification executions, the current implementation performs all the insert and delete processing in a single unit of work. This approach was chosen for recovery purposes.

DSNUTILS and utility performance considerations
We observed some of the utility characteristics in the execution of the file archive and the subsequent file archive retrieval to a retrieve table. In the Data Archive Expert AHXLOG data set mentioned above, you can find information that describes how the DB2 V7 Unload and Load utilities are invoked. For reference, the utility parameters currently used by Data Archive Expert are shown in Example 18-5.

Example 18-5 Examples of unload and load utility control statements

UNLOAD DATA FROM TABLE ARCHIVED.AHXA_ NOPAD

LOAD DATA INDDN U LOG YES RESUME YES EBCDIC
  INTO TABLE "RETRIEVE"."AHXR_

The AHXLOG also contains all of the DSNU*-prefixed messages from the DSNUTILS invocation of the DB2 V7 utilities. In the event of a utility failure, this information is available to assist you in debugging and problem determination. Because the utilities are run under control of the DSNUTILS stored procedure, you might want to review the WLM performance parameters defined by your z/OS system programmer. In particular, the service class association given to your WLM environment for DSNUTILS can play a significant role in the performance of the Unload and Load utilities. DSNUTILS should be assigned to a dedicated WLM environment with NUMTCB of 1.
For more information about WLM definitions and their relationship to the performance of DB2-supplied stored procedures, refer to the redbook DB2 for z/OS Stored Procedures: Through the CALL and Beyond, SG24-7083.

Index creation
When you archive data with table archives, DB2 Data Archive Expert creates indexes on the target tables. The sequence of columns on the created index can include the timestamp column inserted by DB2 Data Archive Expert, followed by the columns that make up row uniqueness. Creating these indexes improves performance when retrieving the archived data for subsequent DB2 Data Archive Expert processes, such as retrieve table creation. No indexes are created by DB2 Data Archive Expert on the retrieve archive tables. When accessing archive tables or retrieve tables, you may find that you need to create additional indexes to help optimize the SQL being executed against these tables. Weigh the frequency of data access, the performance of the application requesting the data, and the additional maintenance overhead of creating and maintaining these indexes to help you determine an indexing strategy. DB2 Administration Tool V4.2 can help you create and maintain these additional indexes. In addition, DB2 Performance Expert (5655-I21) and DB2 SQL Performance Analyzer (5655-I22) are both tools that can assist you in understanding and tuning application SQL.

Locking considerations
We have discussed in an earlier section the COMMIT LEVEL and its impact on table archive and retrieve specifications. We also observed some evidence of locking and lock escalation during some of our archive and retrieve scenarios. Because of the mechanism used for the
implementation of the delete processing during specification completion, we anticipate that archive specifications containing row filters that materialize large result sets can cause lock escalation to occur on a relatively frequent basis. Because of this, our observation is that large archive units should be treated as any other application that generates long-running uncommitted units of work. The usual rules apply: schedule these archive requests to run at points in your application window where there is minimal conflicting workload; break the archive specification into smaller pieces by modifying the row filter; and review the LOCKSIZE specification on the source tables.
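To illustrate breaking an archive specification into smaller pieces, a row filter that materializes a full year of LINEITEM rows could instead be run as four separate executions. The column and date ranges below are illustrative only:

   -- Single large run:
   L_SHIPDATE BETWEEN '1992-01-01' AND '1992-12-31'

   -- Four smaller runs:
   L_SHIPDATE BETWEEN '1992-01-01' AND '1992-03-31'
   L_SHIPDATE BETWEEN '1992-04-01' AND '1992-06-30'
   L_SHIPDATE BETWEEN '1992-07-01' AND '1992-09-30'
   L_SHIPDATE BETWEEN '1992-10-01' AND '1992-12-31'

Each smaller run materializes a smaller result set and holds fewer uncommitted changes, reducing the chance of lock escalation.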
Part 6
Appendixes
Appendix A. REXX sample programs

This appendix provides sample REXX source code for the batch execution of archive and retrieve specifications. To download the source code described in this appendix, refer to Appendix B, Additional material on page 333.

This appendix contains:
- ARCHIVEEXECSP table archive REXX
- OFFLINEARCHSP file archive REXX
- RETRIEVEEXECSP table retrieve REXX
- OFFLINERETEXECSP file retrieve REXX
- OFFLINEONLARCSP table to file archive REXX
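These execs are typically driven from a batch TSO job. The following JCL is a sketch, not taken from the product libraries: the exec library name, DB2 subsystem ID, and specification name are placeholders, and the positional parameters follow the parse arg order in Example A-1 (subsystem, action, versions, spec name, metadata schema, row filter; the row filter is left empty here, in which case the sample passes a null indicator).

   //RUNARCH  EXEC PGM=IKJEFT01,DYNAMNBR=20
   //SYSEXEC  DD DSN=PAOLOR3.AHX.EXEC,DISP=SHR
   //SYSTSPRT DD SYSOUT=*
   //SYSTSIN  DD *
    %AHXIVPAR DB2G ,A ,* ,IVPARCHIVE ,AHXTOOLS ,
   /*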
A.1 ARCHIVEEXECSP table archive REXX
This is the complete sample REXX invoked to run the ARCHIVEEXECSP stored procedure. It is used to execute table archive specifications in batch.

Example: A-1 ARCHIVEEXECSP REXX

/* Module: AHXIVPAR                                                   */
/**********************************************************************/
/* Licensed materials - Property of IBM                               */
/* 5655-I95                                                           */
/* (c) Copyright IBM Corporation 2003 All Rights Reserved.            */
/* US Government Users Restricted Rights - Use, duplication or        */
/* Disclosure restricted by GSA ADP Schedule Contract with IBM Corp.  */
/*                                                                    */
/* Copy this exec to a PDS and name it AHXIVPAR. Edit the IVPRUN job, */
/* alter the SYSEXEC DD statement to be the PDS where this exec       */
/* resides.                                                           */
/*                                                                    */
/* This sample DB2 REXX/SQL application will call the archive         */
/* execution stored procedure ARCHIVEEXECSP to run the archive spec   */
/* named IVPARCHIVE defined previously during the installation        */
/* verification scenario.                                             */
/**********************************************************************/
Signal on novalue
Signal on failure
x = time('r')                          /* reset elapsed time          */
say '*** Begin AHXIVPAR exec' date('u') time('n') '***'
say ''
/**********************************************************************/
/* Parse out the input parameters which are separated by commas.      */
/**********************************************************************/
parse arg ssid ',',                /* DB2 subsystem to connect to     */
      action ',',                  /* A=Archive, C=Complete           */
      versions ',',                /* * for archive, # for compl      */
      spec_name ',',               /* name of spec to run             */
      schema ',',                  /* qualifier for AE metadata tables*/
      where_clause                 /* row filter/where clause         */
/**********************************************************************/
/* Rest of input parameters to archive execution stored procedure     */
/**********************************************************************/
proc_name = "AHXTOOLS.ARCHIVEEXECSP"   /* execution stored proc name  */
user = USERID()                        /* Userid in                   */
say "Input parameters to execution procedure" proc_name "will be:"
say "- User id =" user
say "- SUBSYSTEM =" ssid
say "- Action =" action
say "- Versions =" versions
say "- Spec name =" spec_name
say "- Row filter =" where_clause
say "- Metadata schema =" schema
say "length of whereclause =" length(where_clause)
say ""
/**********************************************************************/
/* Add host command environment for REXX/SQL.                         */
/**********************************************************************/
Address TSO "SUBCOM DSNREXX"
if rc Then do
  s_rc = RXSUBCOM('ADD','DSNREXX','DSNREXX')
  if s_rc \= 0 Then do
    say 'RXSUBCOM failed with' s_rc
    exit s_rc
  end
end
/**********************************************************************/
/* Connect to DB2.                                                    */
/**********************************************************************/
Address DSNREXX "CONNECT" ssid
if rc \= 0 then do
  say 'CONNECT to DB2 subsystem' ssid 'failed with rc=' rc
  call sqlca_exit 'CONNECT'
  signal end_of_exec
end
/**********************************************************************/
/* Initialize storage for output parameters.                          */
/**********************************************************************/
return_code = 0              /* return code from execution stored procedure */
return_msg = copies(' ',4096) "'"      /* output message from execution */
indret = -1                            /* null indicator for output msg */
/***************************REDBOOK MODIFICATION***********************/
nullid = -1
/***************************REDBOOK MODIFICATION ENDS******************/
/**********************************************************************/
/* Call the archive execution stored procedure.                       */
/**********************************************************************/
/***************************REDBOOK MODIFICATION***********************/
if length(where_clause) = 0 then
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":action,",
    ":versions,",
    ":spec_name,",
    ":where_clause :nullid,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
else
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":action,",
    ":versions,",
    ":spec_name,",
    ":where_clause,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
/***************************REDBOOK MODIFICATION ENDS******************/
/**********************************************************************/
/* Check if result sets were returned or if procedure was called.     */
/**********************************************************************/
if sqlcode = 466 | sqlcode = 0 then do
  select
    /******************************************************************/
    /* Return_code <= 4 indicates archive spec ran successfully.      */
    /******************************************************************/
    when return_code <= 4 then do
      /* Display success message and return code. */
      say 'Archive execution ran successfully with',
          'return code' return_code
      /* Successfully running an archive spec will return a           */
      /* statistics referenced by result set locator 1.               */
      if sqlcode = 466 then do
        "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)",
        "WITH PROCEDURE :proc_name"
        if sqlcode < 0 then call SQLCA_EXIT 'ASSOCIATE LOCATOR - STATISTICS'
        "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1"
        if sqlcode < 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - STATISTICS'
        statsrows = 0
        /* Fetch and display every row of statistics result set. */
        do until(sqlcode \= 0)
          "EXECSQL FETCH C101 INTO",
            ":statsspecid,",
            ":statsspstatus,",
            ":statsspstate,",
            ":statsspversion,",
            ":statsverstatus,",
            ":statsverstate,",
            ":statsverexby,",
            ":statsverexts,",
            ":statsverdelrows,",
            ":statsverinsrows,",
            ":statssrctable,",
            ":statssrccrtr,",
            ":statstartable,",
            ":statstarcrtr,",
            ":statstbldelrows,",
            ":statstblinsrows,",
            ":statsupdby,",
            ":statsupdts"
          if sqlcode = 0 then do
            statsrows = statsrows + 1
            say 'Archive run statistics row' statsrows 'follows:'
            say "- Spec id =" statsspecid
            say "- Spec status =" STATUS(statsspstatus)
            say "- Spec state =" STATE(statsspstate)
            say "- Spec version =" statsspversion
            say "- Version status =" STATUS(statsverstatus)
            say "- Version state =" STATE(statsverstate)
            say "- Executed by =" statsverexby
            say "- Executed timestamp =" statsverexts
            say "- Number of rows deleted for this version =",
                statsverdelrows
            say "- Number of rows inserted for this version =",
                statsverinsrows
            say "- Source table name =" statssrctable
            say "- Source table creator =" statssrccrtr
            say "- Target table name =" statstartable
            say "- Target table creator =" statstarcrtr
            say "- Number of rows deleted for this target =",
                statstbldelrows
            say "- Number of rows inserted for this target =",
                statstblinsrows
            say "- Spec last updated by =" statsupdby
            say "- Spec last update timestamp =" statsupdts
            say ""
          end
          else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - STATISTICS'
        end
        "EXECSQL CLOSE C101"
        /* Check result set 2 for any warning messages. */
        "EXECSQL ALLOCATE C102 CURSOR FOR RESULT SET :LOC2"
        if sqlcode \= 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - WARNINGS'
        /* Fetch and display every row of warnings result set. */
        do until(sqlcode \= 0)
          /* Pre-init values for display in case null. */
          error_rc = 'n/a'
          error_key = 'n/a'
          error_msg = 'n/a'
          error_code = 'n/a'
          error_errmc = 'n/a'
          error_state = 'n/a'
          "EXECSQL FETCH C102 INTO",
            ":error_rc :i1,",
            ":error_key :i2,",
            ":error_code :i3,",
            ":error_state :i4,",
            ":error_msg :i5,",
            ":error_errmc :i6"
          if sqlcode = 0 then do
            say 'Warnings result set row:'
            say '- Return code =' error_rc
            say '- Message key =' error_key
            say '- Message =' error_msg
            say '- SQLCODE =' error_code
            say '- SQLERRMC =' error_errmc
            say '- SQLSTATE =' error_state
          end
          else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - WARNINGS'
        end
        "EXECSQL CLOSE C102"
      end
    end
    /******************************************************************/
    /* Return_code = 8 indicates the execution of an archive spec     */
    /* failed.                                                        */
    /******************************************************************/
    when return_code = 8 then do
      /* Display return code and any error message. */
      say 'Archive execution failed with return code' return_code
      say 'Error message:' return_msg
      /* Check if an errors result set was returned, in which case    */
      /* it will be referenced by locator 1.                          */
      if sqlcode = 466 then do
        "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)",
        "WITH PROCEDURE :proc_name"
        if sqlcode < 0 then call SQLCA_EXIT 'ASSOCIATE LOCATOR - ERRORS'
        "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1"
        if sqlcode \= 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - ERRORS'
        /* Fetch and display every row of errors result set. */
        do until(sqlcode \= 0)
          /* Pre-init values for display in case null. */
          error_rc = 'n/a'
          error_key = 'n/a'
          error_msg = 'n/a'
          error_code = 'n/a'
          error_errmc = 'n/a'
          error_state = 'n/a'
          "EXECSQL FETCH C101 INTO",
            ":error_rc :i1,",
            ":error_key :i2,",
            ":error_code :i3,",
            ":error_state :i4,",
            ":error_msg :i5,",
            ":error_errmc :i6"
          if sqlcode = 0 then do
            say 'Error result set row:'
            say '- Return code =' error_rc
            say '- Message key =' error_key
            say '- Message =' error_msg
            say '- SQLCODE =' error_code
            say '- SQLERRMC =' error_errmc
            say '- SQLSTATE =' error_state
          end
          else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - ERRORS'
        end
        "EXECSQL CLOSE C101"
      end
    end
    /* Handle in case procedure returns unexpected return code. */
    otherwise do
      say 'Archive execution failed with return code' return_code
      say 'Error message:' return_msg
    end
  end
end
/**********************************************************************/
/* Execution stored procedure could not be called.                    */
/**********************************************************************/
else call SQLCA_EXIT 'CALL'
end_of_exec:
Address DSNREXX 'DISCONNECT'       /* Disconnect from DB2             */
Address TSO                        /* When done with DSNREXX, remove it*/
s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
say '*** End AHXIVPAR' date('u') time('n') '***'
say '*** Processing time is' time('e') 'seconds ***'
/**********************REDBOOK MODIFICATION****************************/
exit return_code
/**********************REDBOOK MODIFICATION ENDS***********************/
/**********************************************************************/
/* Return string representation of status.                            */
/**********************************************************************/
status:
parse arg status .
select
  when status = 0 then return 'STARTED'
  when status = 1 then return 'FINISHED'
  otherwise return 'UNDEFINED STATUS' status
end
return
/**********************************************************************/
/* Return string representation of state.                             */
/**********************************************************************/
state:
parse arg state .
select
  when status = 0 then return 'INVALID'
  when status = 1 then return 'PENDING'
  when status = 2 then return 'IN PROGRESS'
  when status = 3 then return 'COMPLETED'
  when status = 4 then return 'FILED'
  otherwise return 'UNDEFINED STATE' state
end
return
/**********************************************************************/
/* Display SQLCA then exit                                            */
/**********************************************************************/
sqlca_exit:
SAY 'ERROR SQL STATEMENT - ' ARG(1)
SAY 'SQLCODE ='SQLCODE
SAY 'SQLSTATE='SQLSTATE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
             SQLERRD.2',',
             SQLERRD.3',',
             SQLERRD.4',',
             SQLERRD.5',',
             SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
             SQLWARN.1',',
             SQLWARN.2',',
             SQLWARN.3',',
             SQLWARN.4',',
             SQLWARN.5',',
             SQLWARN.6',',
             SQLWARN.7',',
             SQLWARN.8',',
             SQLWARN.9',',
             SQLWARN.10
signal end_of_exec
/**********************************************************************/
/* Display any caught uninitialized variables and exit exec           */
/**********************************************************************/
novalue:
say 'Uninitialized variable in exec at line' sigl'.'
say sigl':' sourceline(sigl)
signal end_of_exec
/**********************************************************************/
/* Display any command failures and exit exec                         */
/**********************************************************************/
failure:
say 'Negative RC='rc 'at line' sigl'.'
say sigl':' sourceline(sigl)
Address TSO
s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
exit 12
A.2 OFFLINEARCHSP file archive REXX
This is the complete sample REXX invoked to run the OFFLINEARCHSP stored procedure. It is used to execute file archive specifications in batch.

Example: A-2 OFFLINEARCHSP REXX

/* Module: AHXARFIL                                                   */
/**********************************************************************/
/* Licensed materials - Property of IBM                               */
/* 5655-I95                                                           */
/* (c) Copyright IBM Corporation 2003 All Rights Reserved.            */
/* US Government Users Restricted Rights - Use, duplication or        */
/* Disclosure restricted by GSA ADP Schedule Contract with IBM Corp.  */
/*                                                                    */
/* Copy this exec to a PDS and name it AHXARFIL. Edit the IVPRUN job, */
/* alter the SYSEXEC DD statement to be the PDS where this exec       */
/* resides.                                                           */
/*                                                                    */
/* This sample DB2 REXX/SQL application will call the archive         */
/* execution stored procedure OFFLINEARCHSP to run the archive spec   */
/* named IVPARCHIVE defined previously during the installation        */
/* verification scenario.                                             */
/**********************************************************************/
Signal on novalue
Signal on failure
x = time('r')                          /* reset elapsed time          */
say '*** Begin AHXARFIL exec' date('u') time('n') '***'
say ''
/**********************************************************************/
/* Parse out the input parameters which are separated by commas.      */
/**********************************************************************/
parse arg ssid ',',                /* DB2 subsystem to connect to     */
      spec_name ',',               /* name of spec to run             */
      schema ',',                  /* qualifier for AE metadata tables*/
      where_clause                 /* row filter/where clause         */
/**********************************************************************/
/* Rest of input parameters to archive execution stored procedure     */
/**********************************************************************/
/*********************REDBOOK MODIFICATION*****************************/
proc_name = "AHXTOOLS.OFFLINEARCHSP"   /* execution stored proc name  */
/*********************REDBOOK MODIFICATION ENDS************************/
user = USERID()                        /* Userid in                   */
say "Input parameters to execution procedure" proc_name "will be:"
say "- User id =" user
say "- SUBSYSTEM =" ssid
say "- Spec name =" spec_name
say "- Row filter =" where_clause
say "- Metadata schema =" schema
say "length of whereclause =" length(where_clause)
say ""
/**********************************************************************/
/* Add host command environment for REXX/SQL.                         */
/**********************************************************************/
Address TSO "SUBCOM DSNREXX"
if rc Then do
  s_rc = RXSUBCOM('ADD','DSNREXX','DSNREXX')
  if s_rc \= 0 Then do
    say 'RXSUBCOM failed with' s_rc
    exit s_rc
  end
end
/**********************************************************************/
/* Connect to DB2.                                                    */
/**********************************************************************/
Address DSNREXX "CONNECT" ssid
if rc \= 0 then do
  say 'CONNECT to DB2 subsystem' ssid 'failed with rc=' rc
  call sqlca_exit 'CONNECT'
  signal end_of_exec
end
/**********************************************************************/
/* Initialize storage for output parameters.                          */
/**********************************************************************/
return_code = 0              /* return code from execution stored procedure */
return_msg = copies(' ',4096) "'"      /* output message from execution */
indret = -1                            /* null indicator for output msg */
/***************************REDBOOK MODIFICATION***********************/
nullid = -1
/***************************REDBOOK MODIFICATION ENDS******************/
/**********************************************************************/
/* Call the archive execution stored procedure.                       */
/**********************************************************************/
/***************************REDBOOK MODIFICATION***********************/
if length(where_clause) = 0 then
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":spec_name,",
    ":where_clause :nullid,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
else
  "EXECSQL" "CALL" proc_name "(",
    ":user,",
    ":spec_name,",
    ":where_clause,",
    ":schema,",
    ":return_code,",
    ":return_msg :indret",
    ")"
/***************************REDBOOK MODIFICATION ENDS******************/
328 /**********************************************************************/ /* Check if result sets were returned or if procedure was called. */ /**********************************************************************/ if sqlcode = 466 sqlcode = 0 then do select /****************************************************************/ /* Return_code <= 4 indicates archive spec ran successfully. */ /****************************************************************/ when return_code <= 4 then do /************************************************************/ /* Display success message and return code. */ /************************************************************/ say 'Archive execution ran successfully with', 'return code' return_code /************************************************************/ /* Successfully running an archive spec will return a */ /* statistics referenced by result set locator 1. */ /************************************************************/ if sqlcode = 466 then do "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)", "WITH PROCEDURE :proc_name" if sqlcode < 0 then call SQLCA_EXIT 'ASSOCIATE LOCATOR - STATISTICS' "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1" if sqlcode < 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - STATISTICS' statsrows = 0 /********************************************************/ /* Fetch and display every row of statistics result */ /* set. */ /********************************************************/ do until(sqlcode \= 0) "EXECSQL FETCH C101 INTO", ":statsspecid,", ":statsspstatus,", ":statsspstate,", ":statsspversion,", ":statsverstatus,", ":statsverstate,", ":statsverexby,", ":statsverexts,", ":statsverdelrows,", ":statsverinsrows,", ":statssrctable,", ":statssrccrtr,", ":statstartable,", ":statstarcrtr,", ":statstbldelrows,", ":statstblinsrows,", ":statsupdby,", ":statsupdts" 302 DB2 Data Archive Expert for z/os
329 if sqlcode = 0 then do statsrows = statsrows + 1 say 'Archive run statistics row' statsrows 'follows:' say "- Spec id =" statsspecid say "- Spec status =" STATUS(statsspstatus) say "- Spec state =" STATE(statsspstate) say "- Spec version =" statsspversion say "- Version status =" STATUS(statsverstatus) say "- Version state =" STATE(statsverstate) say "- Executed by =" statsverexby say "- Executed timestamp =" statsverexts say "- Number of rows deleted for this version =", statsverdelrows say "- Number of rows inserted for this version =", statsverinsrows say "- Source table name =" statssrctable say "- Source table creator =" statssrccrtr say "- Target table name =" statstartable say "- Target table creator =" statstarcrtr say "- Number of rows deleted for this target =", statstbldelrows say "- Number of rows inserted for this target =", statstblinsrows say "- Spec last updated by =" statsupdby say "- Spec last update timestamp =" statsupdts say "" end else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - STATISTICS' end "EXECSQL CLOSE C101" /********************************************************/ /* Check result set 2 for any warning messages. */ /********************************************************/ "EXECSQL ALLOCATE C102 CURSOR FOR RESULT SET :LOC2" if sqlcode \= 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - WARNINGS' /********************************************************/ /* Fetch and display every row of warnings result */ /* set. */ /********************************************************/ do until(sqlcode \= 0) /******************************************************/ /* Pre-init values for display in case null. */ /******************************************************/ error_rc = 'n/a' error_key = 'n/a' error_msg = 'n/a' error_code = 'n/a' error_errmc = 'n/a' error_state = 'n/a' "EXECSQL FETCH C102 INTO", ":error_rc :i1,", Appendix A. REXX sample programs 303
330 ":error_key :i2,", ":error_code :i3,", ":error_state :i4,", ":error_msg :i5,", ":error_errmc :i6" if sqlcode = 0 then do say 'Warnings result set row:' say '- Return code =' error_rc say '- Message key =' error_key say '- Message =' error_msg say '- SQLCODE =' error_code say '- SQLERRMC =' error_errmc say '- SQLSTATE =' error_state end else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - WARNINGS' end "EXECSQL CLOSE C102" end end /****************************************************************/ /* Return_code = 8 indicates the execution of an archive spec */ /* failed. */ /****************************************************************/ when return_code = 8 then do /************************************************************/ /* Display return code and any error message. */ /************************************************************/ say 'Archive execution failed with return code' return_code say 'Error message:' return_msg /************************************************************/ /* Check if an errors result set was returned, in which */ /* case it will be referenced by locator 1. */ /************************************************************/ if sqlcode = 466 then do "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)", "WITH PROCEDURE :proc_name" if sqlcode < 0 then call SQLCA_EXIT 'ASSOCIATE LOCATOR - ERRORS' "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1" if sqlcode \= 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - ERRORS' /********************************************************/ /* Fetch and display every row of errors result */ /* set. */ /********************************************************/ do until(sqlcode \= 0) 304 DB2 Data Archive Expert for z/os
331 /******************************************************/ /* Pre-init values for display in case null. */ /******************************************************/ error_rc = 'n/a' error_key = 'n/a' error_msg = 'n/a' error_code = 'n/a' error_errmc = 'n/a' error_state = 'n/a' "EXECSQL FETCH C101 INTO", ":error_rc :i1,", ":error_key :i2,", ":error_code :i3,", ":error_state :i4,", ":error_msg :i5,", ":error_errmc :i6" if sqlcode = 0 then do say 'Error result set row:' say '- Return code =' error_rc say '- Message key =' error_key say '- Message =' error_msg say '- SQLCODE =' error_code say '- SQLERRMC =' error_errmc say '- SQLSTATE =' error_state end else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - ERRORS' end "EXECSQL CLOSE C101" end end /****************************************************************/ /* Handle in case procedure returns unexpected return code. */ /****************************************************************/ otherwise do say 'Archive execution failed with return code' return_code say 'Error message:' return_msg end end end /**********************************************************************/ /* Execution stored procedure could not be called. */ /**********************************************************************/ else call SQLCA_EXIT 'CALL' end_of_exec: Address DSNREXX 'DISCONNECT' /* Disconnect from DB2 */ Address TSO /* When done with DSNREXX, remove it*/ s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX') Appendix A. REXX sample programs 305
say '*** End AHXARFIL' date('u') time('n') '***'
say '*** Processing time is' time('e') 'seconds ***'
/**********************REDBOOK MODIFICATION****************************/
exit return_code
/**********************REDBOOK MODIFICATION ENDS***********************/
/**********************************************************************/
/* Return string representation of status.                            */
/**********************************************************************/
status:
  parse arg status .
  select
    when status = 0 then return 'STARTED'
    when status = 1 then return 'FINISHED'
    otherwise return 'UNDEFINED STATUS' status
  end
return
/**********************************************************************/
/* Return string representation of state.                             */
/**********************************************************************/
state:
  parse arg state .
  select
    when state = 0 then return 'INVALID'
    when state = 1 then return 'PENDING'
    when state = 2 then return 'IN PROGRESS'
    when state = 3 then return 'COMPLETED'
    when state = 4 then return 'FILED'
    otherwise return 'UNDEFINED STATE' state
  end
return
/**********************************************************************/
/* Display SQLCA then exit                                            */
/**********************************************************************/
sqlca_exit:
  SAY 'ERROR SQL STATEMENT - ' ARG(1)
  SAY 'SQLCODE ='SQLCODE
  SAY 'SQLSTATE='SQLSTATE
  SAY 'SQLERRMC ='SQLERRMC
  SAY 'SQLERRP ='SQLERRP
  SAY 'SQLERRD ='SQLERRD.1',',
                SQLERRD.2',',
                SQLERRD.3',',
                SQLERRD.4',',
                SQLERRD.5',',
                SQLERRD.6
  SAY 'SQLWARN ='SQLWARN.0',',
                SQLWARN.1',',
                SQLWARN.2',',
                SQLWARN.3',',
                SQLWARN.4',',
                SQLWARN.5',',
                SQLWARN.6',',
                SQLWARN.7',',
                SQLWARN.8',',
                SQLWARN.9',',
                SQLWARN.10
  signal end_of_exec
/**********************************************************************/
/* Display any caught uninitialized variables and exit exec           */
/**********************************************************************/
novalue:
  say 'Uninitialized variable in exec at line' sigl'.'
  say sigl':' sourceline(sigl)
  signal end_of_exec
/**********************************************************************/
/* Display any command failures and exit exec                         */
/**********************************************************************/
failure:
  say 'Negative RC='rc 'at line' sigl'.'
  say sigl':' sourceline(sigl)
  Address TSO
  s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
  exit 12
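For reference, the AHXARFIL exec above can be run in batch under the TSO terminal monitor program IKJEFT01, passing the four comma-separated parameters the exec parses (subsystem, spec name, metadata schema, row filter). The sketch below is illustrative only: the data set names, the DB2A subsystem, and the ACCT_DATE row-filter column are hypothetical, and the IVPRUN job shipped with the product should be used as the actual base.

```jcl
//AHXARFIL JOB (ACCT),'RUN ARCHIVE SPEC',CLASS=A,MSGCLASS=X
//* Hypothetical invocation sketch; adapt from the supplied IVPRUN job.
//RUN      EXEC PGM=IKJEFT01,DYNAMNBR=20
//* SYSEXEC points at the PDS where the AHXARFIL exec was copied.
//SYSEXEC  DD DSN=USER.REXX.EXEC,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 %AHXARFIL DB2A,IVPARCHIVE,AHXTOOLS,ACCT_DATE < '2002-01-01'
/*
```

Leaving the fourth parameter empty makes the exec pass a null row filter to OFFLINEARCHSP, as handled by the REDBOOK MODIFICATION block in the listing.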
A.3 RETRIEVEEXECSP table retrieve REXX

This is the complete sample REXX exec that invokes the RETRIEVEEXECSP stored procedure. It is used to execute table archive retrieve specifications in batch.

Example: A-3 RETRIEVEEXECSP REXX

/*                                                                    */
/* Module: AHXTARET                                                   */
/*                                                                    */
/**********************************************************************/
/*                                                                    */
/* Licensed materials - Property of IBM                               */
/* 5655-I95                                                           */
/* (c) Copyright IBM Corporation 2003 All Rights Reserved.            */
/* US Government Users Restricted Rights - Use, duplication or        */
/* Disclosure restricted by GSA ADP Schedule Contract with IBM Corp.  */
/*                                                                    */
/* Copy this exec to a PDS and name it AHXTARET. Edit the IVPRUN job, */
/* alter the SYSEXEC DD statement to be the PDS where this exec       */
/* resides.                                                           */
/*                                                                    */
/* This sample DB2 REXX/SQL application will call the retrieve        */
/* execution stored procedure RETRIEVEEXECSP to run the archive spec  */
/* named IVPARCHIVE defined previously during the installation        */
/* verification scenario.                                             */
/**********************************************************************/
Signal on novalue
Signal on failure
x = time('r')                                  /* reset elapsed time  */
say '*** Begin AHXTARET exec' date('u') time('n') '***'
say ''
/**********************************************************************/
/* Parse out the input parameters which are separated by commas.
*/ /**********************************************************************/ parse arg ssid ',', /* DB2 subsystem to connect to*/ spec_name ',', /* name of spec to run */ schema ',', /* qualifier for AE metadata tables*/ where_clause /* row filter/where clause */ /**********************************************************************/ /* Rest of input parameters to archive execution stored procedure */ /********************REDBOOK MODIFICATION******************************/ proc_name = "AHXTOOLS.RETRIEVEEXECSP" /* execution stored proc name*/ /********************REDBOOK MODIFICATION END**************************/ user = USERID() /* Userid in */ say "Input parameters to execution procedure" proc_name "will be:" say "- User id =" user say "- SUBSYSTEM =" ssid say "- Spec name =" spec_name say "- Row filter =" where_clause say "- Metadata schema =" schema say "length of whereclause =" length(where_clause) say "" 308 DB2 Data Archive Expert for z/os
335 /**********************************************************************/ /* Add host command environment for REXX/SQL. */ /**********************************************************************/ Address TSO "SUBCOM DSNREXX" if rc Then do s_rc = RXSUBCOM('ADD','DSNREXX','DSNREXX') if s_rc \= 0 Then do say 'RXSUBCOM failed with' s_rc exit s_rc end end /**********************************************************************/ /* Connect to DB2. */ /**********************************************************************/ Address DSNREXX "CONNECT" ssid if rc \= 0 then do say 'CONNECT to DB2 subsystem' ssid 'failed with rc=' rc call sqlca_exit 'CONNECT' signal end_of_exec end /**********************************************************************/ /* Initialize storage for output parameters. */ /**********************************************************************/ return_code = 0 /* return code from execution stored procedure*/ return_msg = copies(' ',4096) "'" /* output message from execution*/ indret = -1 /* null indicator for output msg*/ /***************************REDBOOK MODIFICATION***********************/ nullid = -1 /***************************REDBOOK MODIFICATION ENDS******************/ /* Call the archive execution stored procedure. */ /**********************************************************************/ /**********************************************************************/ /***************************REDBOOK MODIFICATION***********************/ if length(where_clause) = 0 then "EXECSQL" "CALL" proc_name "(", ":user,", ":spec_name,", ":where_clause :nullid,", ":schema,", ":return_code,", ":return_msg :indret", ")" else "EXECSQL" "CALL" proc_name "(", ":user,", ":spec_name,", ":where_clause,", ":schema,", ":return_code,", ":return_msg :indret", ")" /***************************REDBOOK MODIFICATION ENDS******************/ /**********************************************************************/ Appendix A. REXX sample programs 309
336 /* Check if result sets were returned or if procedure was called. */ /**********************************************************************/ if sqlcode = 466 sqlcode = 0 then do select /****************************************************************/ /* Return_code <= 4 indicates archive spec ran successfully. */ /****************************************************************/ when return_code <= 4 then do /************************************************************/ /* Display success message and return code. */ /************************************************************/ say 'Archive execution ran successfully with', 'return code' return_code /************************************************************/ /* Successfully running an archive spec will return a */ /* statistics referenced by result set locator 1. */ /************************************************************/ if sqlcode = 466 then do "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)", "WITH PROCEDURE :proc_name" if sqlcode < 0 then call SQLCA_EXIT 'ASSOCIATE LOCATOR - STATISTICS' "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1" if sqlcode < 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - STATISTICS' statsrows = 0 /********************************************************/ /* Fetch and display every row of statistics result */ /* set. */ /********************************************************/ do until(sqlcode \= 0) "EXECSQL FETCH C101 INTO", ":statsspecid,", ":statsspstatus,", ":statsspstate,", ":statsspversion,", ":statsverstatus,", ":statsverstate,", ":statsverexby,", ":statsverexts,", ":statsverdelrows,", ":statsverinsrows,", ":statssrctable,", ":statssrccrtr,", ":statstartable,", ":statstarcrtr,", ":statstbldelrows,", ":statstblinsrows,", ":statsupdby,", ":statsupdts" if sqlcode = 0 then 310 DB2 Data Archive Expert for z/os
337 do statsrows = statsrows + 1 say 'Archive run statistics row' statsrows 'follows:' say "- Spec id =" statsspecid say "- Spec status =" STATUS(statsspstatus) say "- Spec state =" STATE(statsspstate) say "- Spec version =" statsspversion say "- Version status =" STATUS(statsverstatus) say "- Version state =" STATE(statsverstate) say "- Executed by =" statsverexby say "- Executed timestamp =" statsverexts say "- Number of rows deleted for this version =", statsverdelrows say "- Number of rows inserted for this version =", statsverinsrows say "- Source table name =" statssrctable say "- Source table creator =" statssrccrtr say "- Target table name =" statstartable say "- Target table creator =" statstarcrtr say "- Number of rows deleted for this target =", statstbldelrows say "- Number of rows inserted for this target =", statstblinsrows say "- Spec last updated by =" statsupdby say "- Spec last update timestamp =" statsupdts say "" end else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - STATISTICS' end "EXECSQL CLOSE C101" /********************************************************/ /* Check result set 2 for any warning messages. */ /********************************************************/ "EXECSQL ALLOCATE C102 CURSOR FOR RESULT SET :LOC2" if sqlcode \= 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - WARNINGS' /********************************************************/ /* Fetch and display every row of warnings result */ /* set. */ /********************************************************/ do until(sqlcode \= 0) /******************************************************/ /* Pre-init values for display in case null. */ /******************************************************/ error_rc = 'n/a' error_key = 'n/a' error_msg = 'n/a' error_code = 'n/a' error_errmc = 'n/a' error_state = 'n/a' "EXECSQL FETCH C102 INTO", ":error_rc :i1,", ":error_key :i2,", Appendix A. REXX sample programs 311
338 ":error_code :i3,", ":error_state :i4,", ":error_msg :i5,", ":error_errmc :i6" if sqlcode = 0 then do say 'Warnings result set row:' say '- Return code =' error_rc say '- Message key =' error_key say '- Message =' error_msg say '- SQLCODE =' error_code say '- SQLERRMC =' error_errmc say '- SQLSTATE =' error_state end else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - WARNINGS' end "EXECSQL CLOSE C102" end end /****************************************************************/ /* Return_code = 8 indicates the execution of an archive spec */ /* failed. */ /****************************************************************/ when return_code = 8 then do /************************************************************/ /* Display return code and any error message. */ /************************************************************/ say 'Archive execution failed with return code' return_code say 'Error message:' return_msg /************************************************************/ /* Check if an errors result set was returned, in which */ /* case it will be referenced by locator 1. */ /************************************************************/ if sqlcode = 466 then do "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)", "WITH PROCEDURE :proc_name" if sqlcode < 0 then call SQLCA_EXIT 'ASSOCIATE LOCATOR - ERRORS' "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1" if sqlcode \= 0 then call SQLCA_EXIT 'ALLOCATE CURSOR - ERRORS' /********************************************************/ /* Fetch and display every row of errors result */ /* set. */ /********************************************************/ do until(sqlcode \= 0) /******************************************************/ 312 DB2 Data Archive Expert for z/os
339 /* Pre-init values for display in case null. */ /******************************************************/ error_rc = 'n/a' error_key = 'n/a' error_msg = 'n/a' error_code = 'n/a' error_errmc = 'n/a' error_state = 'n/a' "EXECSQL FETCH C101 INTO", ":error_rc :i1,", ":error_key :i2,", ":error_code :i3,", ":error_state :i4,", ":error_msg :i5,", ":error_errmc :i6" if sqlcode = 0 then do say 'Error result set row:' say '- Return code =' error_rc say '- Message key =' error_key say '- Message =' error_msg say '- SQLCODE =' error_code say '- SQLERRMC =' error_errmc say '- SQLSTATE =' error_state end else if sqlcode \= 100 then call SQLCA_EXIT 'FETCH - ERRORS' end "EXECSQL CLOSE C101" end end /****************************************************************/ /* Handle in case procedure returns unexpected return code. */ /****************************************************************/ otherwise do say 'Archive execution failed with return code' return_code say 'Error message:' return_msg end end end /**********************************************************************/ /* Execution stored procedure could not be called. */ /**********************************************************************/ else call SQLCA_EXIT 'CALL' end_of_exec: Address DSNREXX 'DISCONNECT' /* Disconnect from DB2 */ Address TSO /* When done with DSNREXX, remove it*/ s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX') Appendix A. REXX sample programs 313
say '*** End AHXTARET' date('u') time('n') '***'
say '*** Processing time is' time('e') 'seconds ***'
/**********************REDBOOK MODIFICATION****************************/
exit return_code
/**********************REDBOOK MODIFICATION ENDS***********************/
/**********************************************************************/
/* Return string representation of status.                            */
/**********************************************************************/
status:
  parse arg status .
  select
    when status = 0 then return 'STARTED'
    when status = 1 then return 'FINISHED'
    otherwise return 'UNDEFINED STATUS' status
  end
return
/**********************************************************************/
/* Return string representation of state.                             */
/**********************************************************************/
state:
  parse arg state .
  select
    when state = 0 then return 'INVALID'
    when state = 1 then return 'PENDING'
    when state = 2 then return 'IN PROGRESS'
    when state = 3 then return 'COMPLETED'
    when state = 4 then return 'FILED'
    otherwise return 'UNDEFINED STATE' state
  end
return
/**********************************************************************/
/* Display SQLCA then exit                                            */
/**********************************************************************/
sqlca_exit:
  SAY 'ERROR SQL STATEMENT - ' ARG(1)
  SAY 'SQLCODE ='SQLCODE
  SAY 'SQLSTATE='SQLSTATE
  SAY 'SQLERRMC ='SQLERRMC
  SAY 'SQLERRP ='SQLERRP
  SAY 'SQLERRD ='SQLERRD.1',',
                SQLERRD.2',',
                SQLERRD.3',',
                SQLERRD.4',',
                SQLERRD.5',',
                SQLERRD.6
  SAY 'SQLWARN ='SQLWARN.0',',
                SQLWARN.1',',
                SQLWARN.2',',
                SQLWARN.3',',
                SQLWARN.4',',
                SQLWARN.5',',
                SQLWARN.6',',
                SQLWARN.7',',
                SQLWARN.8',',
                SQLWARN.9',',
                SQLWARN.10
  signal end_of_exec
/**********************************************************************/
/* Display any caught uninitialized variables and exit exec           */
/**********************************************************************/
novalue:
  say 'Uninitialized variable in exec at line' sigl'.'
  say sigl':' sourceline(sigl)
  signal end_of_exec
/**********************************************************************/
/* Display any command failures and exit exec                         */
/**********************************************************************/
failure:
  say 'Negative RC='rc 'at line' sigl'.'
  say sigl':' sourceline(sigl)
  Address TSO
  s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
  exit 12
A.4 OFFLINERETEXECSP file retrieve REXX

This is the complete sample REXX exec that invokes the OFFLINERETEXECSP stored procedure. It is used to execute file archive retrieve specifications in batch.

Example: A-4 OFFLINERETEXECSP REXX

/*                                                                    */
/* Module: AHXFIRET                                                   */
/*                                                                    */
/**********************************************************************/
/*                                                                    */
/* Licensed materials - Property of IBM                               */
/* 5655-I95                                                           */
/* (c) Copyright IBM Corporation 2003 All Rights Reserved.            */
/* US Government Users Restricted Rights - Use, duplication or        */
/* Disclosure restricted by GSA ADP Schedule Contract with IBM Corp.  */
/*                                                                    */
/* Copy this exec to a PDS and name it AHXFIRET. Edit the IVPRUN job, */
/* alter the SYSEXEC DD statement to be the PDS where this exec       */
/* resides.                                                           */
/*                                                                    */
/* This sample DB2 REXX/SQL application will call the retrieve        */
/* execution stored procedure OFFLINERETEXECSP to run the archive     */
/* spec named IVPARCHIVE defined previously during the installation   */
/* verification scenario.                                             */
/**********************************************************************/
Signal on novalue
Signal on failure
x = time('r')                                  /* reset elapsed time  */
say '*** Begin AHXFIRET exec' date('u') time('n') '***'
say ''
/**********************************************************************/
/* Parse out the input parameters which are separated by commas.
*/ /**********************************************************************/ parse arg ssid ',', /* DB2 subsystem to connect to*/ spec_name ',', /* name of spec to run */ schema /* qualifier for AE metadata tables*/ /**********************************************************************/ /* Rest of input parameters to archive execution stored procedure */ /**********************************************************************/ /*********************REDBOOK MODIFICATION*****************************/ proc_name = "AHXTOOLS.OFFLINERETEXECSP" /* execution stored proc name*/ /*********************REDBOOK MODIFICATION ENDS************************/ user = USERID() /* Userid in */ say "Input parameters to execution procedure" proc_name "will be:" say "- User id =" user say "- SUBSYSTEM =" ssid say "- Spec name =" spec_name say "- Metadata schema =" schema say "" /**********************************************************************/ /* Add host command environment for REXX/SQL. */ 316 DB2 Data Archive Expert for z/os
343 /**********************************************************************/ Address TSO "SUBCOM DSNREXX" if rc Then do s_rc = RXSUBCOM('ADD','DSNREXX','DSNREXX') if s_rc \= 0 Then do say 'RXSUBCOM failed with' s_rc exit s_rc end end /**********************************************************************/ /* Connect to DB2. */ /**********************************************************************/ Address DSNREXX "CONNECT" ssid if rc \= 0 then do say 'CONNECT to DB2 subsystem' ssid 'failed with rc=' rc call sqlca_exit 'CONNECT' signal end_of_exec end /**********************************************************************/ /* Initialize storage for output parameters. */ /**********************************************************************/ return_code = 0 /* return code from execution stored procedure*/ return_msg = copies(' ',4096) "'" /* output message from execution*/ indret = -1 /* null indicator for output msg*/ /***************************REDBOOK MODIFICATION***********************/ /***************************REDBOOK MODIFICATION ENDS******************/ /* Call the archive execution stored procedure. */ /**********************************************************************/ /**********************************************************************/ /***************************REDBOOK MODIFICATION***********************/ "EXECSQL" "CALL" proc_name "(", ":user,", ":spec_name,", ":schema,", ":return_code,", ":return_msg :indret", ")" /***************************REDBOOK MODIFICATION ENDS******************/ /**********************************************************************/ /* Check if result sets were returned or if procedure was called. */ /**********************************************************************/ if sqlcode = 466 sqlcode = 0 then do select /****************************************************************/ /* Return_code <= 4 indicates archive spec ran successfully. 
                                                              */
/****************************************************************/
when return_code <= 4 then do
  /************************************************************/
  /* Display success message and return code.                 */
  /************************************************************/
  say 'Archive execution ran successfully with',
      'return code' return_code
  /************************************************************/
  /* Successfully running an archive spec will return a       */
  /* statistics referenced by result set locator 1.           */
  /************************************************************/
  if sqlcode = 466 then do
    "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)",
      "WITH PROCEDURE :proc_name"
    if sqlcode < 0 then
      call SQLCA_EXIT 'ASSOCIATE LOCATOR - STATISTICS'
    "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1"
    if sqlcode < 0 then
      call SQLCA_EXIT 'ALLOCATE CURSOR - STATISTICS'
    statsrows = 0
    /********************************************************/
    /* Fetch and display every row of statistics result     */
    /* set.                                                 */
    /********************************************************/
    do until(sqlcode \= 0)
      "EXECSQL FETCH C101 INTO",
        ":statsspecid,",
        ":statsspstatus,",
        ":statsspstate,",
        ":statsspversion,",
        ":statsverstatus,",
        ":statsverstate,",
        ":statsverexby,",
        ":statsverexts,",
        ":statsverdelrows,",
        ":statsverinsrows,",
        ":statssrctable,",
        ":statssrccrtr,",
        ":statstartable,",
        ":statstarcrtr,",
        ":statstbldelrows,",
        ":statstblinsrows,",
        ":statsupdby,",
        ":statsupdts"
      if sqlcode = 0 then do
        statsrows = statsrows + 1
        say 'Archive run statistics row' statsrows 'follows:'
        say "- Spec id =" statsspecid
        say "- Spec status =" STATUS(statsspstatus)
        say "- Spec state =" STATE(statsspstate)
        say "- Spec version =" statsspversion
        say "- Version status =" STATUS(statsverstatus)
        say "- Version state =" STATE(statsverstate)
        say "- Executed by =" statsverexby
        say "- Executed timestamp =" statsverexts
        say "- Number of rows deleted for this version =",
            statsverdelrows
        say "- Number of rows inserted for this version =",
            statsverinsrows
        say "- Source table name =" statssrctable
        say "- Source table creator =" statssrccrtr
        say "- Target table name =" statstartable
        say "- Target table creator =" statstarcrtr
        say "- Number of rows deleted for this target =",
            statstbldelrows
        say "- Number of rows inserted for this target =",
            statstblinsrows
        say "- Spec last updated by =" statsupdby
        say "- Spec last update timestamp =" statsupdts
        say ""
      end
      else if sqlcode \= 100 then
        call SQLCA_EXIT 'FETCH - STATISTICS'
    end
    "EXECSQL CLOSE C101"
    /********************************************************/
    /* Check result set 2 for any warning messages.         */
    /********************************************************/
    "EXECSQL ALLOCATE C102 CURSOR FOR RESULT SET :LOC2"
    if sqlcode \= 0 then
      call SQLCA_EXIT 'ALLOCATE CURSOR - WARNINGS'
    /********************************************************/
    /* Fetch and display every row of warnings result       */
    /* set.                                                 */
    /********************************************************/
    do until(sqlcode \= 0)
      /******************************************************/
      /* Pre-init values for display in case null.          */
      /******************************************************/
      error_rc    = 'n/a'
      error_key   = 'n/a'
      error_msg   = 'n/a'
      error_code  = 'n/a'
      error_errmc = 'n/a'
      error_state = 'n/a'
      "EXECSQL FETCH C102 INTO",
        ":error_rc :i1,",
        ":error_key :i2,",
        ":error_code :i3,",
        ":error_state :i4,",
        ":error_msg :i5,",
        ":error_errmc :i6"
      if sqlcode = 0 then do
        say 'Warnings result set row:'
        say '- Return code =' error_rc
        say '- Message key =' error_key
        say '- Message =' error_msg
        say '- SQLCODE =' error_code
        say '- SQLERRMC =' error_errmc
        say '- SQLSTATE =' error_state
      end
      else if sqlcode \= 100 then
        call SQLCA_EXIT 'FETCH - WARNINGS'
    end
    "EXECSQL CLOSE C102"
  end
end
/****************************************************************/
/* Return_code = 8 indicates the execution of an archive spec   */
/* failed.                                                      */
/****************************************************************/
when return_code = 8 then do
  /************************************************************/
  /* Display return code and any error message.               */
  /************************************************************/
  say 'Archive execution failed with return code' return_code
  say 'Error message:' return_msg
  /************************************************************/
  /* Check if an errors result set was returned, in which     */
  /* case it will be referenced by locator 1.                 */
  /************************************************************/
  if sqlcode = 466 then do
    "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)",
      "WITH PROCEDURE :proc_name"
    if sqlcode < 0 then
      call SQLCA_EXIT 'ASSOCIATE LOCATOR - ERRORS'
    "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1"
    if sqlcode \= 0 then
      call SQLCA_EXIT 'ALLOCATE CURSOR - ERRORS'
    /********************************************************/
    /* Fetch and display every row of errors result         */
    /* set.                                                 */
    /********************************************************/
    do until(sqlcode \= 0)
      /******************************************************/
      /* Pre-init values for display in case null.          */
      /******************************************************/
      error_rc    = 'n/a'
      error_key   = 'n/a'
      error_msg   = 'n/a'
      error_code  = 'n/a'
      error_errmc = 'n/a'
      error_state = 'n/a'
      "EXECSQL FETCH C101 INTO",
        ":error_rc :i1,",
        ":error_key :i2,",
        ":error_code :i3,",
        ":error_state :i4,",
        ":error_msg :i5,",
        ":error_errmc :i6"
      if sqlcode = 0 then do
        say 'Error result set row:'
        say '- Return code =' error_rc
        say '- Message key =' error_key
        say '- Message =' error_msg
        say '- SQLCODE =' error_code
        say '- SQLERRMC =' error_errmc
        say '- SQLSTATE =' error_state
      end
      else if sqlcode \= 100 then
        call SQLCA_EXIT 'FETCH - ERRORS'
    end
    "EXECSQL CLOSE C101"
  end
end
/****************************************************************/
/* Handle in case procedure returns unexpected return code.     */
/****************************************************************/
otherwise do
  say 'Archive execution failed with return code' return_code
  say 'Error message:' return_msg
end
end
end
/**********************************************************************/
/* Execution stored procedure could not be called.                    */
/**********************************************************************/
else
  call SQLCA_EXIT 'CALL'

end_of_exec:
Address DSNREXX 'DISCONNECT'            /* Disconnect from DB2        */
Address TSO                             /* When done with DSNREXX,    */
s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')  /* remove it           */
say '*** End AHXFIRET' date('u') time('n') '***'
say '*** Processing time is' time('e') 'seconds ***'
/**********************REDBOOK MODIFICATION****************************/
exit return_code
/**********************REDBOOK MODIFICATION ENDS***********************/

/**********************************************************************/
/* Return string representation of status.                            */
/**********************************************************************/
status:
parse arg status .
select
  when status = 0 then return 'STARTED'
  when status = 1 then return 'FINISHED'
  otherwise return 'UNDEFINED STATUS' status
end
return

/**********************************************************************/
/* Return string representation of state.                             */
/**********************************************************************/
state:
parse arg state .
select
  when state = 0 then return 'INVALID'
  when state = 1 then return 'PENDING'
  when state = 2 then return 'IN PROGRESS'
  when state = 3 then return 'COMPLETED'
  when state = 4 then return 'FILED'
  otherwise return 'UNDEFINED STATE' state
end
return

/**********************************************************************/
/* Display SQLCA then exit                                            */
/**********************************************************************/
sqlca_exit:
SAY 'ERROR SQL STATEMENT - ' ARG(1)
SAY 'SQLCODE ='SQLCODE
SAY 'SQLSTATE='SQLSTATE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
             SQLERRD.2',',
             SQLERRD.3',',
             SQLERRD.4',',
             SQLERRD.5',',
             SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
             SQLWARN.1',',
             SQLWARN.2',',
             SQLWARN.3',',
             SQLWARN.4',',
             SQLWARN.5',',
             SQLWARN.6',',
             SQLWARN.7',',
             SQLWARN.8',',
             SQLWARN.9',',
             SQLWARN.10
signal end_of_exec

/**********************************************************************/
/* Display any caught uninitialized variables and exit exec           */
/**********************************************************************/
novalue:
say 'Uninitialized variable in exec at line' sigl'.'
say sigl':' sourceline(sigl)
signal end_of_exec

/**********************************************************************/
/* Display any command failures and exit exec                         */
/**********************************************************************/
failure:
say 'Negative RC='rc 'at line' sigl'.'
say sigl':' sourceline(sigl)
Address TSO
s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
exit 12
A.5 OFFLINEONLARCSP table to file archive REXX

This is the complete sample REXX exec invoked to run the OFFLINEONLARCSP stored procedure. It is used for batch execution of a special type of file archive specification: archiving from table archives to file archives.

Example: A-5 OFFLINEONLARCSP REXX

/*                                                                    */
/* Module: AHXTB2FI                                                   */
/*                                                                    */
/**********************************************************************/
/*                                                                    */
/* Licensed materials - Property of IBM                               */
/* 5655-I95                                                           */
/* (c) Copyright IBM Corporation 2003 All Rights Reserved.            */
/* US Government Users Restricted Rights - Use, duplication or        */
/* Disclosure restricted by GSA ADP Schedule Contract with IBM Corp.  */
/*                                                                    */
/* Copy this exec to a PDS and name it AHXTB2FI. Edit the IVPRUN job, */
/* alter the SYSEXEC DD statement to be the PDS where this exec       */
/* resides.                                                           */
/*                                                                    */
/* This sample DB2 REXX/SQL application will call the archive         */
/* execution stored procedure OFFLINEONLARCSP to run the archive spec */
/* named IVPARCHIVE defined previously during the installation        */
/* verification scenario.                                             */
/**********************************************************************/
Signal on novalue
Signal on failure

x = time('r')                           /* reset elapsed time         */
say '*** Begin AHXIVPAR exec' date('u') time('n') '***'
say ''
/**********************************************************************/
/* Parse out the input parameters which are separated by commas.      */
/**********************************************************************/
parse arg ssid ',',              /* DB2 subsystem to connect to       */
          spec_name ',',         /* name of spec to run               */
          schema                 /* qualifier for AE metadata tables  */
/**********************************************************************/
/* Rest of input parameters to archive execution stored procedure     */
/**********************************************************************/
/*********************REDBOOK MODIFICATION*****************************/
proc_name = "AHXTOOLS.OFFLINEONLARCSP"  /* execution stored proc name */
/*********************REDBOOK MODIFICATION ENDS************************/
user = USERID()                         /* Userid in                  */

say "Input parameters to execution procedure" proc_name "will be:"
say "- User id =" user
say "- SUBSYSTEM =" ssid
say "- Spec name =" spec_name
say "- Metadata schema =" schema
say ""
/**********************************************************************/
/* Add host command environment for REXX/SQL.                         */
/**********************************************************************/
Address TSO "SUBCOM DSNREXX"
if rc Then do
  s_rc = RXSUBCOM('ADD','DSNREXX','DSNREXX')
  if s_rc \= 0 Then do
    say 'RXSUBCOM failed with' s_rc
    exit s_rc
  end
end
/**********************************************************************/
/* Connect to DB2.                                                    */
/**********************************************************************/
Address DSNREXX "CONNECT" ssid
if rc \= 0 then do
  say 'CONNECT to DB2 subsystem' ssid 'failed with rc=' rc
  call sqlca_exit 'CONNECT'
  signal end_of_exec
end
/**********************************************************************/
/* Initialize storage for output parameters.                          */
/**********************************************************************/
return_code = 0               /* return code from execution stored proc*/
return_msg = copies(' ',4096) /* output message from execution        */
indret = -1                   /* null indicator for output msg        */
/***************************REDBOOK MODIFICATION***********************/
nullid = -1
/***************************REDBOOK MODIFICATION ENDS******************/
/**********************************************************************/
/* Call the archive execution stored procedure.                       */
/**********************************************************************/
"EXECSQL" "CALL" proc_name "(",
  ":user,",
  ":spec_name,",
  ":schema,",
  ":return_code,",
  ":return_msg :indret",
  ")"
/**********************************************************************/
/* Check if result sets were returned or if procedure was called.     */
/**********************************************************************/
if sqlcode = 466 | sqlcode = 0 then do
  select
  /****************************************************************/
  /* Return_code <= 4 indicates archive spec ran successfully.    */
  /****************************************************************/
  when return_code <= 4 then do
    /************************************************************/
    /* Display success message and return code.                 */
    /************************************************************/
    say 'Archive execution ran successfully with',
        'return code' return_code
    /************************************************************/
    /* Successfully running an archive spec will return a       */
    /* statistics referenced by result set locator 1.           */
    /************************************************************/
    if sqlcode = 466 then do
      "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)",
        "WITH PROCEDURE :proc_name"
      if sqlcode < 0 then
        call SQLCA_EXIT 'ASSOCIATE LOCATOR - STATISTICS'
      "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1"
      if sqlcode < 0 then
        call SQLCA_EXIT 'ALLOCATE CURSOR - STATISTICS'
      statsrows = 0
      /********************************************************/
      /* Fetch and display every row of statistics result     */
      /* set.                                                 */
      /********************************************************/
      do until(sqlcode \= 0)
        "EXECSQL FETCH C101 INTO",
          ":statsspecid,",
          ":statsspstatus,",
          ":statsspstate,",
          ":statsspversion,",
          ":statsverstatus,",
          ":statsverstate,",
          ":statsverexby,",
          ":statsverexts,",
          ":statsverdelrows,",
          ":statsverinsrows,",
          ":statssrctable,",
          ":statssrccrtr,",
          ":statstartable,",
          ":statstarcrtr,",
          ":statstbldelrows,",
          ":statstblinsrows,",
          ":statsupdby,",
          ":statsupdts"
        if sqlcode = 0 then do
          statsrows = statsrows + 1
          say 'Archive run statistics row' statsrows 'follows:'
          say "- Spec id =" statsspecid
          say "- Spec status =" STATUS(statsspstatus)
          say "- Spec state =" STATE(statsspstate)
          say "- Spec version =" statsspversion
          say "- Version status =" STATUS(statsverstatus)
          say "- Version state =" STATE(statsverstate)
          say "- Executed by =" statsverexby
          say "- Executed timestamp =" statsverexts
          say "- Number of rows deleted for this version =",
              statsverdelrows
          say "- Number of rows inserted for this version =",
              statsverinsrows
          say "- Source table name =" statssrctable
          say "- Source table creator =" statssrccrtr
          say "- Target table name =" statstartable
          say "- Target table creator =" statstarcrtr
          say "- Number of rows deleted for this target =",
              statstbldelrows
          say "- Number of rows inserted for this target =",
              statstblinsrows
          say "- Spec last updated by =" statsupdby
          say "- Spec last update timestamp =" statsupdts
          say ""
        end
        else if sqlcode \= 100 then
          call SQLCA_EXIT 'FETCH - STATISTICS'
      end
      "EXECSQL CLOSE C101"
      /********************************************************/
      /* Check result set 2 for any warning messages.         */
      /********************************************************/
      "EXECSQL ALLOCATE C102 CURSOR FOR RESULT SET :LOC2"
      if sqlcode \= 0 then
        call SQLCA_EXIT 'ALLOCATE CURSOR - WARNINGS'
      /********************************************************/
      /* Fetch and display every row of warnings result       */
      /* set.                                                 */
      /********************************************************/
      do until(sqlcode \= 0)
        /******************************************************/
        /* Pre-init values for display in case null.          */
        /******************************************************/
        error_rc    = 'n/a'
        error_key   = 'n/a'
        error_msg   = 'n/a'
        error_code  = 'n/a'
        error_errmc = 'n/a'
        error_state = 'n/a'
        "EXECSQL FETCH C102 INTO",
          ":error_rc :i1,",
          ":error_key :i2,",
          ":error_code :i3,",
          ":error_state :i4,",
          ":error_msg :i5,",
          ":error_errmc :i6"
        if sqlcode = 0 then do
          say 'Warnings result set row:'
          say '- Return code =' error_rc
          say '- Message key =' error_key
          say '- Message =' error_msg
          say '- SQLCODE =' error_code
          say '- SQLERRMC =' error_errmc
          say '- SQLSTATE =' error_state
        end
        else if sqlcode \= 100 then
          call SQLCA_EXIT 'FETCH - WARNINGS'
      end
      "EXECSQL CLOSE C102"
    end
  end
  /****************************************************************/
  /* Return_code = 8 indicates the execution of an archive spec   */
  /* failed.                                                      */
  /****************************************************************/
  when return_code = 8 then do
    /************************************************************/
    /* Display return code and any error message.               */
    /************************************************************/
    say 'Archive execution failed with return code' return_code
    say 'Error message:' return_msg
    /************************************************************/
    /* Check if an errors result set was returned, in which     */
    /* case it will be referenced by locator 1.                 */
    /************************************************************/
    if sqlcode = 466 then do
      "EXECSQL ASSOCIATE LOCATOR (:LOC1,:LOC2)",
        "WITH PROCEDURE :proc_name"
      if sqlcode < 0 then
        call SQLCA_EXIT 'ASSOCIATE LOCATOR - ERRORS'
      "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :LOC1"
      if sqlcode \= 0 then
        call SQLCA_EXIT 'ALLOCATE CURSOR - ERRORS'
      /********************************************************/
      /* Fetch and display every row of errors result         */
      /* set.                                                 */
      /********************************************************/
      do until(sqlcode \= 0)
        /******************************************************/
        /* Pre-init values for display in case null.          */
        /******************************************************/
        error_rc    = 'n/a'
        error_key   = 'n/a'
        error_msg   = 'n/a'
        error_code  = 'n/a'
        error_errmc = 'n/a'
        error_state = 'n/a'
        "EXECSQL FETCH C101 INTO",
          ":error_rc :i1,",
          ":error_key :i2,",
          ":error_code :i3,",
          ":error_state :i4,",
          ":error_msg :i5,",
          ":error_errmc :i6"
        if sqlcode = 0 then do
          say 'Error result set row:'
          say '- Return code =' error_rc
          say '- Message key =' error_key
          say '- Message =' error_msg
          say '- SQLCODE =' error_code
          say '- SQLERRMC =' error_errmc
          say '- SQLSTATE =' error_state
        end
        else if sqlcode \= 100 then
          call SQLCA_EXIT 'FETCH - ERRORS'
      end
      "EXECSQL CLOSE C101"
    end
  end
  /****************************************************************/
  /* Handle in case procedure returns unexpected return code.     */
  /****************************************************************/
  otherwise do
    say 'Archive execution failed with return code' return_code
    say 'Error message:' return_msg
  end
  end
end
/**********************************************************************/
/* Execution stored procedure could not be called.                    */
/**********************************************************************/
else
  call SQLCA_EXIT 'CALL'

end_of_exec:
Address DSNREXX 'DISCONNECT'            /* Disconnect from DB2        */
Address TSO                             /* When done with DSNREXX,    */
s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')  /* remove it           */
say '*** End AHXIVPAR' date('u') time('n') '***'
say '*** Processing time is' time('e') 'seconds ***'
/**********************REDBOOK MODIFICATION****************************/
exit return_code
/**********************REDBOOK MODIFICATION ENDS***********************/

/**********************************************************************/
/* Return string representation of status.                            */
/**********************************************************************/
status:
parse arg status .
select
  when status = 0 then return 'STARTED'
  when status = 1 then return 'FINISHED'
  otherwise return 'UNDEFINED STATUS' status
end
return

/**********************************************************************/
/* Return string representation of state.                             */
/**********************************************************************/
state:
parse arg state .
select
  when state = 0 then return 'INVALID'
  when state = 1 then return 'PENDING'
  when state = 2 then return 'IN PROGRESS'
  when state = 3 then return 'COMPLETED'
  when state = 4 then return 'FILED'
  otherwise return 'UNDEFINED STATE' state
end
return

/**********************************************************************/
/* Display SQLCA then exit                                            */
/**********************************************************************/
sqlca_exit:
SAY 'ERROR SQL STATEMENT - ' ARG(1)
SAY 'SQLCODE ='SQLCODE
SAY 'SQLSTATE='SQLSTATE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
             SQLERRD.2',',
             SQLERRD.3',',
             SQLERRD.4',',
             SQLERRD.5',',
             SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
             SQLWARN.1',',
             SQLWARN.2',',
             SQLWARN.3',',
             SQLWARN.4',',
             SQLWARN.5',',
             SQLWARN.6',',
             SQLWARN.7',',
             SQLWARN.8',',
             SQLWARN.9',',
             SQLWARN.10
signal end_of_exec

/**********************************************************************/
/* Display any caught uninitialized variables and exit exec           */
/**********************************************************************/
novalue:
say 'Uninitialized variable in exec at line' sigl'.'
say sigl':' sourceline(sigl)
signal end_of_exec

/**********************************************************************/
/* Display any command failures and exit exec                         */
/**********************************************************************/
failure:
say 'Negative RC='rc 'at line' sigl'.'
say sigl':' sourceline(sigl)
Address TSO
s_rc = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
exit 12
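The STATUS and STATE subroutines in the samples above translate the numeric codes returned in the statistics result set into readable text. The same mapping, sketched here in Python purely for reference (only the code values shown in the REXX listings are assumed; the function names are our own):

```python
# Status and state code translations used when displaying the
# Data Archive Expert statistics result set (see the REXX samples).
STATUS_NAMES = {0: "STARTED", 1: "FINISHED"}
STATE_NAMES = {
    0: "INVALID",
    1: "PENDING",
    2: "IN PROGRESS",
    3: "COMPLETED",
    4: "FILED",
}

def status_name(code):
    """Return the text for a spec or version status code."""
    return STATUS_NAMES.get(code, "UNDEFINED STATUS %s" % code)

def state_name(code):
    """Return the text for a spec or version state code."""
    return STATE_NAMES.get(code, "UNDEFINED STATE %s" % code)

print(status_name(1))   # FINISHED
print(state_name(3))    # COMPLETED
```

As in the REXX routines, an unrecognized code falls through to an "UNDEFINED" string that echoes the raw value.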
Appendix B. Additional material

This redbook refers to additional material that can be downloaded from the Internet as described below.

Locating the Web material

The Web material associated with this redbook is available in softcopy on the Internet from the IBM Redbooks Web server. Point your Web browser to:

ftp://

Alternatively, you can go to the IBM Redbooks Web site at:

ibm.com/redbooks

Select the Additional materials and open the directory that corresponds with the redbook form number, SG

Using the Web material

The additional Web material that accompanies this redbook includes the following files:

File name           Description
Examples1to5.zip    Five zipped REXX samples, listed and described in Appendix A, "REXX sample programs" on page 291.

System requirements for downloading the Web material

The following system configuration is recommended:

Hard disk space:    1 MB
Operating System:   Windows
Processor:          Intel 386 or higher
Memory:             16 MB

How to use the Web material

Create a subdirectory (folder) on your workstation, and unzip the contents of the Web material zip file into this folder.
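The step above can also be scripted. This small Python sketch (not part of the redbook material; the zip and folder names below are placeholders) creates the folder and unpacks an archive such as Examples1to5.zip into it:

```python
import os
import zipfile

def unpack_samples(zip_path, dest_dir):
    """Create dest_dir if needed, extract every member of the zip
    file into it, and return the list of extracted member names."""
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()

# Example with placeholder paths:
# unpack_samples("Examples1to5.zip", "C:/redbook/samples")
```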
Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks

For information on ordering these publications, see "How to get IBM Redbooks" on page 336. Note that some of the documents referenced here may be available in softcopy only.

DB2 for z/OS Stored Procedures: Through the CALL and Beyond, SG
DB2 for z/OS Tools for Database Administration and Change Management, SG
Squeezing the Most out of Dynamic SQL with DB2 for z/OS and OS/390, SG
Cross-Platform Stored Procedures: Building and Debugging, SG

Other publications

These publications are also relevant as further information sources:

DB2 Data Archive Expert Program Directory, GI
DB2 Data Archive Expert User's Guide and Reference, SC
DB2 Grouper User's Guide, SC
DB2 UDB for OS/390 and z/OS V7 Application Programming Guide and Reference for Java, SC
z/OS V1R4.0 MVS Planning Workload Management, SA
DB2 UDB for OS/390 and z/OS V7 Installation Guide, SC
DB2 magazine, Quarter 4, 2003, Vol. 8, Issue 4, "Take a Load Off: Archive Inactive Data" by Bryan F. Smith and Thomas A. Vogel, available from:

Online resources

These Web sites and URLs are also relevant as further information sources:

DB2 and IMS tools Web site
Retain Web site

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:

ibm.com/redbooks

Help from IBM

IBM Support and downloads
ibm.com/support

IBM Global Services
ibm.com/services
Abbreviations and acronyms

AIX       Advanced Interactive executive from IBM
APAR      authorized program analysis report
ARM       automatic restart manager
ASCII     American National Standard Code for Information Interchange
BLOB      binary large objects
CCSID     coded character set identifier
CCA       client configuration assistant
CFCC      coupling facility control code
CTT       created temporary table
CEC       central electronics complex
CD        compact disk
CF        coupling facility
CFRM      coupling facility resource management
CLI       call level interface
CLP       command line processor
CPU       central processing unit
CSA       common storage area
DASD      direct access storage device
DB2 PM    DB2 performance monitor
DBAT      database access thread
DBD       database descriptor
DBID      database identifier
DBRM      database request module
DCL       data control language
DDCS      distributed database connection services
DDF       distributed data facility
DDL       data definition language
DLL       dynamic load library
DML       data manipulation language
DNS       domain name server
DRDA      distributed relational database architecture
DSC       dynamic statement cache, local or global
DSN       data set name
DTT       declared temporary tables
EA        extended addressability
EBCDIC    extended binary coded decimal interchange code
ECS       enhanced catalog sharing
ECSA      extended common storage area
EDM       environment descriptor management
ERP       enterprise resource planning
ESA       Enterprise Systems Architecture
ESP       Enterprise Solution Package
ETR       external throughput rate, an elapsed time measure, focuses on system capacity
FTD       functional track directory
FTP       File Transfer Program
GB        gigabyte (1,073,741,824 bytes)
GBP       group buffer pool
GRS       global resource serialization
GUI       graphical user interface
HPJ       high performance Java
IBM       International Business Machines Corporation
ICF       integrated catalog facility
ICF       integrated coupling facility
ICMF      internal coupling migration facility
IFCID     instrumentation facility component identifier
IFI       instrumentation facility interface
IPLA      IBM Program Licence Agreement
IRLM      internal resource lock manager
ISPF      interactive system productivity facility
ISV       independent software vendor
I/O       input/output
IT        Information Technology
ITR       internal throughput rate, a processor time measure, focuses on processor capacity
ITSO      International Technical Support Organization
IVP       installation verification process
JDBC      Java Database Connectivity
JFS       journaled file systems
JNDI      Java Naming and Directory Interface
JVM       Java Virtual Machine
KB        kilobyte (1,024 bytes)
LOB       large object
LPL       logical page list
LPAR      logical partition
LRECL     logical record length
LRSN      log record sequence number
LUW       logical unit of work
LVM       logical volume manager
MB        megabyte (1,048,576 bytes)
NPI       non-partitioning index
ODB       object descriptor in DBD
ODBC      Open Data Base Connectivity
OS/390    Operating System/390
PAV       parallel access volume
PDS       partitioned data set
PIB       parallel index build
PSID      pageset identifier
PSP       preventive service planning
PTF       program temporary fix
PUNC      possibly uncommitted
QMF       Query Management Facility
QA        Quality Assurance
RACF      Resource Access Control Facility
RBA       relative byte address
RECFM     record format
RI        referential integrity
RID       record identifier
RRS       resource recovery services
RRSAF     resource recovery services attach facility
RS        read stability
RR        repeatable read
SDK       software developers kit
SMIT      System Management Interface Tool
SMP/E     System Modification Program Extended
SU        Service Unit
UOW       unit of work
Index

Symbols
_CEE_ENVFILE 33

Numerics
, tape compression
I I I E98 24

A
a 151
add a column 274
add data to the archive 139
adding a child table 161
adding another table 151
age criteria 206
AHX
AHX110.SAHXSAMP 25
AHXARFIL 50
AHXCIRC.LOG 10
AHXCRMET 25
AHXCRSTP 29
AHXDEFPR 33
AHXEXECUTEDTS 130, 252, 261, 280
AHXEXECUTEDTS column 204
AHXISMKD 30
AHXIVPAR 46-47, 50, 58
AHXIVPRN 35, 46
AHXJ034 message 204
AHXJ
AHXJ071 message 199
AHXLOG 11
AHXMKDIR 30
AHXTOOLS.ARCHIVEEXECSP 46-47, 292
AHXTOOLS.OFFLINEARCHSP 46, 49, 300
AHXTOOLS.OFFLINEONLARCSP 46, 61, 324
AHXTOOLS.OFFLINERETEXECSP 46, 60, 259, 316
AHXTOOLS.RETRIEVEEXECSP 46, 58, 308
AHXV11 REXX exec 34
ALTER access 279
APAR PQ PQ PQ PQ PQ , 32 PQ , 32
API 10
APPLENV 27
application-enforced referential relationship 139
architecture 9
archive and retrieve authorities 278
archive considerations 233
archive data sets 279
archive process 8
archive specification 55
Archive Specification Definition 119
archive tables, default naming convention 129
archive target 166
archived 134
ARCHIVEEXECSP 292
archives 4
archiving, benefits 5
archiving Grouper discovered related tables 211
archiving basics 4
archiving criteria 124
archiving data 4
archiving from a table to a file 95
archiving from a table to a table 117
archiving from RI related tables 137, 191
archiving into tables 166
archiving pitfalls 7
archiving strategy 6
ATL2 52
authorizations 278

B
batch programs 46
batcharchivetest 47
batchtableretrieve 59
boundary objects 212
buffer pool 25

C
c 161
CAPS OFF 29, 47
case sensitive 120
CEE.SCEERUN 29
child connection 161
child table 18
circular trace 10
CLASSPATH 33, 42
CLASSPATH environment variable 32
COM 182
commit level 286
completed 182
CONTAIN 213
Control Center Template tables 25
CREATEDBA 243, 278

D
D WLM command 23
Data Archive Expert stored procedures 46
data schema 274
DB2 Admin TEMPLATEs panel 52
DB2 Administration Tool 23-25, 29, 51
DB2 Control Center 24, 46
DB2 Data Archive Expert installation 19
DB2 Data Archive Expert for z/OS 7
DB2 Grouper 11, 65, 212
  installation 37
DB2 Grouper Client 65
DB2 Grouper Java components 40
DB2 Grouper prerequisites 40
DB2 Grouper server 37
DB2 Grouper stored procedures 10
DB2 Management Clients Package feature code
DB2 Management Clients package feature 25
DB2 V7 Utilities Suite 24
DB2_HOME environment 30
DB2GGRPP 41
DB2SQLJPROPERTIES 42
DB2SQLJPROPERTIES environment variable 33
DBADM 243, 278
DDL 29
debugging 10
DEF 173
default DDL 25
default properties 33
definition activities 120
Del=N 152
delete rule 153
deleting from an archived table 137
deleting from archived tables 191
disk 4
DSNACC.UTLIST 25
DSNACC.UTTEMPLATE 25
DSNJDBC 23, 43
DSNJDBC plan 23
DSNREXX 43
DSNUTILS 11, 24, 34, 287
dynamic SQL tuning 285

E
EGFCPRC 41
EGFDDLC 40
EGFDDLD 40
EGFDSCVY.JCL 87
EGFFISH 40
EGFJBND 43
EGFTOOLS 79
EGFV1R1 42
environment variable EGFCLIENTHOME 77
environment variable TZ 33, 42
environment variable WORK_DIR 42
event table 11
example batch jobs 46
exclude tables from the given list 139
execution stored procedures 10

F
fi 146
field UTILITIES 28
file archive in batch 46
file archive in batch using a template 51
file archive specification 49, 55
file archives access 281
file retrieval from batch 60
find related tables 149
fixed blocked library 34
foreign key 18
FTP 66

G
global temporary tables 10
GMT 33
goal mode
GRANT 280
granting access 279
Group 212
group discovery 78, 86, 212
Grouper 65
  bind packages and plans 43
  installation verification 43
Grouper Client 65
  connectivity to server 68
  download 66
  install shield 67
  installation verification 79
  JDBC bind 79
  setting up 66
Grouper client host and directory name 67
Grouper Java stored procedures 41
Grouper JCL started task 41
Grouper metadata 40
Grouper Server 38
Grouper stored procedures 41
groups 212

H
H 112, 133, 186, 203
HFS 30
history 4, 112, 133, 186
history tables 4
hlq.SAHXCLST 34

I
IBM Personal Communications 66
IEFUSI 30
inactive data 4
index creation 287
indexes on the target tables 287
informational referential constraints
installation authority 278
installation verification 35
intermezzo 176
introduction 1
ISPF panels 10
IVP 35-36, 43, 79

J
jar files 32
Java 22
Java environment variable settings 42
Java environment variable TZ 42
JAVA_HOME environment variable 32
JAVAENV 30, 33, 42
Jct=J 154
JDBC 10
JDBC type 2 23
JDK 32
JSPDEBUG 29, 33
junction 139
junction table 139
JVM 21
JVM

K
k 158
key columns 154, 179, 216, 229

L
L_COMMITDATE 18
L_RECEIPTDATE 18
L_SHIPDATE 18
LINKLST 29
LISTDEF 51
LOAD 11
Load utility 24
locking consideration 287
logging 11
logging considerations 284
logging level 285
LOGLEVEL 33

M
managing non-enforced relationships 213
metadata 9, 25
metadata database 25
modified version of TEMPLATE.JCL 78
modify the calling parameter list 50
MSGFILE 33

N
NUMBTCB 27
NUMTCB 22, 287

O
O_ORDERDATE 18
OFFLINEARCHSP 300
OFFLINEONLARCSP 324
offlineonlarcsp 61
OFFLINERETEXECSP 316
OI 31
on delete cascade 198
Open Edition MVS 30
optical 4

P
parent connection 161
parent table 18
PART 214
PCOM 180
PDEF 171
pending completion 180
pending definition 171
performance 5, 284
PQ PQ PQ PQ PQ , 278 PQ PQ , 32 PQ , 32
Preventive Service Planning 20, 38
preventive service planning 20
primary key 18
product number 8
product prerequisites 20
program directory 20
Program Temporary Fixes 20
PSP 38
PSP bucket 20, 38

Q
qualified column names 157

R
R 24, 99, 102, 119, 146, 175
RACF dialog 279
READ access 243
reclaim disk space 5
Redbooks Web site 336
  Contact us xxiii
referential integrity 15-16, 165, 211, 233, 285
RETAIN 20, 38
retrieve authorization 243
retrieve creator name 242
retrieve specification 242
retrieve specification for multiple tables 257
retrieve stored procedure 259
retrieve tables access 281
retrieve timestamp 261
RETRIEVEEXECSP 308
retrievefilebatch 60
retrieving a single table from a file archive 241
retrieving archived data 7
retrieving into the original tables 263
REXX 24
  feature code
REXX exec samples 333
REXX execs 10
REXX sample 292, 300, 308, 316, 324
REXX samples zipped 333
REXX source code 291
REXX to execute archive retrieval 58
REXX to execute archive specifications in batch 61
RI 16, 90
row filter 156
ROWFILTER parameters continuation 62

S
S 150
SAHXCLST 34
SAHXSAMP 25, 47
SAHXSAMP library 46
schema changes 274
SDK 21
SDSNLOAD 29
SDSNLOD2 29
security considerations 277
SEGFSAMP 40
SELECT access 278
set 212
SP 151
SQLCODE
started task procedure 29, 41
starting point table 43, 151
statistics 133
stored procedures 9, 25, 41
  DB2 Grouper 10
  DSNUTILS 11
  wrapper 9-10
subselect 172
SYS248.PGM_CONTROL 224
SYS248.STATES 224
SYSADM 66, 278
SYSTOOLS 40
SYSTPRT 209
SYSTSPRT output 201

T
T_ALPHA 18
table archive specification 47
table archives access 279
table to file archive specification in batch 61
tables discovered by Grouper 88
tables in view 135
tape 4, 52
target table 118
TEMPLATE 51
Template 24
Template control tables 24
templates for file allocation 56
temporary database 33
test cases 15
test data
  application enforced relationships 18
  DB2 enforced relationships 17
  referential integrity 16
test tables 16
  CUSTOMER 16
  DB2 and application enforced RI 16
  foreign keys 18
  LINEITEM 16
  NATION 16, 18
  number of rows 16
  ORDER 16, 18
  PART 16, 18
  PARTSUPP 16, 18
  primary key 18
  primary keys 16
  REGION 16, 18
  SUPPLIER 16, 18
  TIME 16, 18
  unique keys 16
timestamp 33, 42, 130, 180, 252, 261, 280
ts 169
TSO OMVS command 21
TZ 33, 42

U
UNION 134
UNION ALL 130
UNION ALL view 187
unit of work discovery 212
UNLOAD 11
Unload utility 24
UPGRADE 20
uppercase S 193
UQ UQ , 32 UQ , 32
utility performance 287
UTTEMPLATE 25

V
VARCHAR(2000) column 62
variable blocked library 34
view for archived and the non-archived rows 134
view with UNION ALL 130

W
W 104, 123, 195, 230, 266
w 103, 123, 156
wildcard 147
WLM 26
WLM environment 26, 41
WLM service policy 28
Workload Manager 22
wrapper stored procedures
369 Z z/os GMT 33 Index 343
370 344 DB2 Data Archive Expert for z/os
Back cover

IBM DB2 Data Archive Expert for z/OS: Put Your Data in Its Place

Reduce disk occupancy by removing unused data. Streamline operations and improve performance. Filter and associate data with DB2 Grouper.

Databases are growing tremendously. Legal requirements, trend analysis, and the need for historical data mean that terabytes of data are kept online, causing performance and operational problems. Yet not all data is frequently accessed, nor does all of it need to be kept on fast media. This is where archiving can help. Archiving is the process of moving selected inactive data to another location, where it is accessed only when necessary.

Data Archive Expert for z/OS, Version 1 is a comprehensive data archiving tool that enables you to move seldom-used data to a less costly storage medium without any programming. With this solution, you can save storage space and its associated costs, and also improve the performance of your IBM DB2 UDB for z/OS environment. By helping you quickly define and access inactive data, DB2 Data Archive Expert enhances your backup and recovery processes. Because it selects data for archive at the row level, you can control precisely which aged data is archived, with an optimal level of granularity.

This IBM Redbook will help you understand, install, tailor, and configure DB2 Data Archive Expert for z/OS in a DB2 for z/OS and OS/390 Version 7 system. By showing several archive and retrieve scenarios, it will also help you design an archive solution for your infrequently used data.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION: BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers, and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.
For more information: ibm.com/redbooks SG ISBN
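The row-level archive approach described above, moving inactive rows to another location and removing them from the active table in a single unit of work, can be sketched conceptually. This is not how DB2 Data Archive Expert is implemented (the product uses its own stored procedures and DB2 utilities); it is only an illustration of the general archive-then-delete pattern, shown here with SQLite and hypothetical table names:

```python
import sqlite3

# Conceptual sketch of row-level archiving: copy rows older than a cutoff
# to an archive table, then delete them from the active table, committing
# both statements together so they succeed or fail as one unit of work.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)")
cur.execute("CREATE TABLE orders_archive (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "1998-03-01", 10.0), (2, "2003-11-15", 20.0), (3, "1997-06-30", 5.0)],
)

cutoff = "2000-01-01"
# Archive step: the WHERE predicate gives row-level control over which
# aged data is moved, analogous to a row filter in an archive specification.
cur.execute("INSERT INTO orders_archive SELECT * FROM orders WHERE order_date < ?", (cutoff,))
cur.execute("DELETE FROM orders WHERE order_date < ?", (cutoff,))
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM orders").fetchone()[0])          # active rows remaining
print(cur.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0])  # rows archived
```

The key design point is that the copy and the delete run in one transaction: an interrupted archive run leaves no rows half-moved.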
