 


Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.

 

July 27, 2016


Craig Mullins

The Tao of DB2 - Part 1: Achieving Balance and Understanding with DB2

This blog post begins a multi-part series of posts as we look in on a new DB2 DBA learning from a long-time DB2 DBA who is preparing to retire... Our expert DB2 DBA sure has his work cut out for him, but he understands the Tao of DB2 and is armed with database Te... He recognizes the Li of the application programs and how they work with DB2... He can remove tweaks that work against the...

(Read more)

Dave Beulke

3 Big Data Performance: Building Analytical Consensus

Working with big data performance every day takes the “big” out of it. Big data is just a bigger data beast that your procedures and processes need to plan for and deal with. Every big data decision is just another decision, but a bit more interesting and intensified because the outcome is more...

(Read more)

Willie Favero

A step back: A DB2 optimization term, then and now (revised/replaced)

(Posted Wednesday, July 27, 2016 - Revised) The terms “sargable” and “non-sargable” became commonplace in the very first releases of DB2. However, the terms most commonly used today are indexable, stage 1, and stage 2. Since I have been dying to write about something (anything) since retiring,...

(Read more)
 

July 26, 2016


Data and Technology

Designing Applications for Relational Access

Application design is an important component of assuring relational database performance and efficiency. When performance problems persist it may become necessary to revisit application design. But...

(Read more)
Big Data University

This Week in Data Science (July 26, 2016)

Here’s this week’s news in Data Science and Big Data. Light Based Communication

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

The post This Week in Data Science (July 26, 2016) appeared first on Big Data University.


DB2utor

Why You Should Move to Universal Table Space

Almost six years ago, I wrote that the future for all DB2 data storage on z/OS will be Universal Table Space (UTS). The purpose of this post isn't to pat myself on the back, but to note that the future is here.
 

July 24, 2016


Willie Favero

A step back: A DB2 optimization term, then and now

(Posted Sunday, July 24, 2016) The terms “sargable” and “non-sargable” became commonplace in the very first releases of DB2. However, the terms most commonly used today are indexable, stage 1, and stage 2. Since I have been dying to write about something (anything) since retiring, this seemed...

(Read more)
 

July 22, 2016

Big Data University

Welcome to the new BDU!

After several months of hard work, we are proud to unveil our web site with a new look and feel!  We hope you like it!

Though you may have already seen it through our beta.bigdatauniversity.com site, we have now completely migrated all user information (including information about completion certificates previously earned), and we have now enabled final exams and badges.

Here is a summary of all changes made:

  • Migration from the Moodle learning management system to OpenEdx
  • Migration of all user information
  • Improved course structure to include Review Questions at the end of Modules
  • New badges offered, and the ability to claim level-1 badges immediately after satisfying minimum requirements
  • New courses!
  • New learning paths!

The old Moodle platform will still be available as our archive (archive.bigdatauniversity.com) for at least one year, though NO new registrations will be accepted. If you were working on a course on the old platform and were close to finishing it, you will have until Friday, August 5th, 2016 (12 noon EST) to complete the course in order to get a badge and a completion certificate. After that, you can still see the course content, and even take the Final Exam; however, the corresponding badge and completion certificate will not be issued. Badges offered in the old system will still be valid, but will no longer be issued. (See our FAQs for answers to questions about the old and new sites.)

More changes to come

We are not done with the site. We are continually improving it, with enhancements to all our courses, more learning paths, better user profiles, and more!

We want to hear from you!

Feedback or issues can be provided through the Support button, on the right side of the screen.

Happy Learning!

Big Data University Team

The post Welcome to the new BDU! appeared first on Big Data University.

 

July 19, 2016

Big Data University

This Week in Data Science (July 19, 2016)

Here’s this week’s news in Data Science and Big Data. Prisma App

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

The post This Week in Data Science (July 19, 2016) appeared first on Big Data University.


ChannelDB2 Videos

DB2 Tips n Tricks Part 92 - How LOB Values are not logged for Non Recoverable Databases



How LOB values are not logged for non-recoverable databases, and how to configure a database to be recoverable so that LOB values are logged in the transaction log files. Happy Learn...
 

July 15, 2016


Willie Favero

An update on the DB2 for z/OS Diagnosis Guide and Reference for DB2 10 and DB2 11

(Posted Friday, July 15, 2016) If you have a product license for DB2 11 for z/OS or DB2 10 for z/OS, it’s time to get your updated copy of the Diagnosis Guide and Reference. DB2 11 for z/OS Diagnostics Guide and Reference (LY37-3222-04) and DB2 10 for z/OS Diagnosis Guide and Reference...

(Read more)

Data and Technology

Transforming Mainframe Data Management

Just a quick post today to let my readers, specifically those in the US Midwest, know that I will be presenting at an executive briefing at the River Forest Country Club in Elmhurst, IL...

(Read more)
 

July 13, 2016


Henrik Loeser

Data, Dialogs, and Databases

Watson Dialog Specification I recently wrote about managing IBM Watson dialogs from the command line and that I wanted to bring database records into a dialog, combining different IBM Bluemix...

(Read more)
DB2 Programmers Guide

Indexes in DB2 - Part I

In order to get to the bottom of any topic, I always prefer answering three important questions.

What is it?
Why do we need it?
What if we don't have it?

So today we are going to understand indexes in DB2 using these questions and try to learn as much as possible.

What is it?

An index is a very powerful concept: it uses pointers to the actual data to access specific data items more efficiently.

DB2 indexes are DB2 objects that can be created on the columns of a table to speed up the processing of SQL queries, and sometimes they can also help in uniquely identifying each row in a table.

To give one very practical example of what indexes are, think of the index in a book, which you often see either at the start or the end of the book.




Why do we need it?

As mentioned above, an index helps speed up query processing by allowing the DB2 optimizer to choose the most efficient path to the data you are looking for.

It can also help (depending on the type of index) to uniquely identify each single row in your DB2 table.

Let's understand this using our example of books.

Consider a book of 1000 pages, and suppose you don't want to read the whole book but only a specific piece of information or a specific chapter. What will you do?

Will you start reading from page 1, each and every page and line, until you find the information you are looking for? If yes, what is the problem here?

Obviously, it will be very time consuming to find a small piece of information in a big book. This is where the book's index helps: you just look up a specific keyword in the index, which tells you which pages contain that keyword, and then you read only those specific pages to find the exact information you are looking for.

This greatly reduces the time needed to get to the data and speeds up the process of looking up information.

DB2 indexes are the same: the book is the DB2 table, the pages in the book are DB2 pages, the lines are rows in the table, and the specific data you are looking for is the value of the columns of the DB2 table.

What if we don't have it?

In the above example of books, imagine you don't have an index. You get the drill, right?

Suppose there is a DB2 table with a large amount of data in it, there are no indexes defined on any of its columns, and you fire a simple SELECT query against it:

SELECT EMP_NAME FROM EMP_TABLE
WHERE EMP_NO = '123456'

DB2 will read each and every page sequentially until it finds the data matching the predicates in your query, consuming a lot of CPU and elapsed time. With time come other consequences: if the query is not defined with UR (uncommitted read), locks will be held on tables/pages/rows, and other processes/programs trying to access the same table/data at around the same time will have to wait, and quite likely a waiting process will fail with a TIMEOUT.
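As a sketch (the index name is hypothetical, and this assumes EMP_NO uniquely identifies a row), an index like the following would let DB2 go straight to the qualifying row instead of scanning every page:

CREATE UNIQUE INDEX EMP_IX1
    ON EMP_TABLE (EMP_NO);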

So proper indexing is very important, and in most situations the pros of having indexes far outweigh the cons.

But in some rare cases, indexing a table is not the way to go.

Consider a table with very little data, maybe 100 or even 1000 rows. Here a query can get you the data faster when there are no indexes.

This is because reading through an index is a two-step process: first the index pages have to be read, and then the data pages the index points to. For a table with very few rows, it is faster to read the data sequentially than to go through an index.

So consider indexing when tables have a large number of rows.

In the next post we will cover the next set of questions: how indexes work, what the types of indexes are, and so on.




 
Jack Vamvas

ADM5530W - NOT LOGGED INITIALLY

Question: I was reviewing messages in db2diag.log and identified some recurring ADM5530W NOT LOGGED INITIALLY messages

ADM5530W  The COMMIT processing of table "myschema.mytable" that used
          NOT LOGGED INITIALLY has been initiated. It is recommended
          that you take a backup of this table's table space(s).

What do they mean?

Answer: The main purpose of NOT LOGGED INITIALLY is to restrict logging for a transaction.

If NOT LOGGED INITIALLY is activated, then CREATE INDEX, DROP INDEX, ALTER TABLE, INSERT, DELETE, and UPDATE operations in that unit of work will not be logged.

The message doesn't mean there is an error; it is the expected behaviour for NOT LOGGED INITIALLY.

As a developer, you need to weigh up the benefits of using NOT LOGGED INITIALLY against the potential disadvantages.

One of the main benefits of NOT LOGGED INITIALLY is improved performance for certain operations and a decreased amount of transaction logging.

One of the main disadvantages is that if the server crashes and there has not been a COMMIT on the transaction, then a ROLLBACK to the DB2 savepoint cannot work. Also, in an HADR environment, where logs are shipped from one server to another, non-logged operations will not be reflected on the STANDBY server.

Another disadvantage arises with archival logging.

Some thought needs to be given as to when NOT LOGGED INITIALLY is appropriate. These are some suggestions:

1) There is a data load process with a range of sources.

2) There are complex calculations where temporary tables are required and can be discarded afterwards.

3) There is limited disk space supporting the DB2 database transaction logs.
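As a sketch of how the attribute is used (the table and column names are hypothetical), note that ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY only works for tables created with the NOT LOGGED INITIALLY attribute, and it must be issued in the same unit of work as the unlogged changes:

CREATE TABLE myschema.stage_load
    (id  INTEGER      NOT NULL,
     txt VARCHAR(100))
    NOT LOGGED INITIALLY;

-- Later, at the start of a unit of work:
ALTER TABLE myschema.stage_load ACTIVATE NOT LOGGED INITIALLY;
INSERT INTO myschema.stage_load VALUES (1, 'bulk-loaded row');  -- not logged
COMMIT;  -- this COMMIT triggers ADM5530W; back up the table space afterwards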

 

Read More

How to report a DB2 performance problem (DBA DB2)

 

July 12, 2016

Big Data University

This Week in Data Science (July 12, 2016)

Here’s this week’s news in Data Science and Big Data. Musicmap

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

The post This Week in Data Science (July 12, 2016) appeared first on Big Data University.


Craig Mullins

DB2 for z/OS Webinar: Who Did What to Which Data When?

Today's blog post is to let you know about an upcoming webinar (July 21, 2016) that I am conducting with CorreLog on the topic of database auditing for DB2 for z/OS.  Unless you have been living under a rock these past few years you will have heard at least something about the many data breaches, hacks, and security issues that have been prevalent in the news. These issues have resulted...

(Read more)

DB2utor

RESTful APIs Unlock the Value of z/OS Assets

If you work on the mainframe using z/OS and haven't heard of RESTful APIs, you need to wake up and smell the coffee. Seriously, even though RESTful APIs are still a fairly new thing, they're already essential. If you don't know about them, ask the people on the distributed side of your business -- who I can assure you already rely upon them -- to help bring you up to speed.
 

July 11, 2016

DB2 Programmers Guide

DB2 Image Copy

In a production environment, there are constant updates happening to your DB2 tables as a result of the work done by various jobs and programs. These updates are nothing but the results of the execution of INSERT/UPDATE/DELETE statements against your various DB2 tables.

What if I want to take a look at the data in the tables as it was before today's batch pass ran? This may be for some analysis or understanding, or, worse, something inadvertently went wrong and you want to recover the data back to where it was before the job ran. Is there a way? Wouldn't it be nice if someone took a backup of my DB2 tables before the batch pass ran, so I could always go back to where we started?

Well, it's already happening; the only thing is that, as a programmer/developer, you may not know about it. It is done by your database administrator using a DB2 utility.

This process of taking full backups of your data objects is called a DB2 full image copy. It is achieved with the DB2 COPY utility.

You can make full image copies of a variety of data objects. Data objects include table spaces, table space partitions, data sets of nonpartitioned table spaces, index spaces, and index space partitions.

The following statement specifies that the COPY utility is to make a full image copy of the TABSPACE table space in database DATABASE (note that the database name qualifies the table space name):

COPY TABLESPACE DATABASE.TABSPACE

The COPY utility writes pages from the table space or index space to the output data sets.

So, if you want to take a look at the old data from before the pass, all you have to do is dump that image copy data set produced by the COPY utility into a normal DASD file on disk.

There is also something called an incremental image copy. As its name suggests, this does not take a complete copy, but copies only the pages that have changed (through inserts/updates/deletes) since the last run.
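As a sketch (same hypothetical names as above), an incremental copy is requested by turning off the FULL option in the COPY utility statement:

COPY TABLESPACE DATABASE.TABSPACE FULL NO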

Information about the copies taken by the COPY utility can be found in the DB2 catalog table SYSIBM.SYSCOPY.
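For example, a query along these lines (the name values are the hypothetical ones from above) lists the copies taken, with ICTYPE 'F' for a full copy and 'I' for an incremental copy:

SELECT DBNAME, TSNAME, ICTYPE, DSNAME, TIMESTAMP
  FROM SYSIBM.SYSCOPY
 WHERE DBNAME = 'DATABASE'
   AND TSNAME = 'TABSPACE'
 ORDER BY TIMESTAMP DESC;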

More on what the JCL for a DB2 utility looks like and how it works will be covered in a DB2 utility post which I will be doing shortly.

 
DB2 Programmers Guide

Is it a PLAN? Is it a PACKAGE? NO, It's a CONFUSION !!!

No matter how long programmers have worked with DB2 for z/OS, they still get confused about the difference between a plan and a package, and what the heck that collection is anyway.

In this post I will be specific about the PLAN/PACKAGE difference and the things related to it (such as COLLECTION and PKLIST). If you want the basics and details of what a PLAN/PACKAGE is, how it gets formed, and why we need it, then THIS is a great place to start with; in fact, I would recommend you read it before going further with this post.

OK, so here it goes.

A PLAN or PACKAGE is nothing but a container holding the logic of how the SQL statements in your program should be executed. [Remember, when we write a query in SQL, we say what we want and NOT how it is to be done (we don't care how DB2 does it internally).] So the PLAN/PACKAGE contains the LOGIC for how it is to be done. This PLAN/PACKAGE is then used with the load module (for COBOL) to run your application program smoothly.

Now, why two things, PLAN and PACKAGE, if both are the same (both contain the logic for executing your SQL)? And what exactly is the difference between the two?


Binding DBRM to a PLAN :

  • A PLAN contains (the logic of) one or more DBRMs, OR one or more PACKAGES, OR a combination of both.

  • A PLAN is executable. When I say executable, it means it can be executed with the COBOL load module to run your application program.

  • Now let's consider there are 10 programs named A to J, so there will be 10 DBRMs (1 DBRM for each program). I bind all 10 DBRMs to a PLAN (named 'PLANA'). Now let's say I get a requirement to change one of the programs (say program 'B'). Then I not only need to bind that DBRM to plan PLANA again, but all the remaining 9 DBRMs too (even though they didn't undergo any change), and that is the biggest disadvantage of binding a DBRM to a PLAN. Not only this: if there are various other PLANs in my system which use this DBRM (the one corresponding to program 'B'), then all those PLANs need to be rebound along with all the other existing DBRMs they may contain.

  • There are various parameters, called bind parameters, available when you bind your DBRM to a PLAN. These bind parameters help decide various important aspects of your program, such as when to acquire a lock on a particular resource, when to release the lock, what qualifier to use for the objects referred to in your application program, the owner of the plan, and so on. When you bind all your DBRMs (10 DBRMs in this example) to a PLAN, you have only one combination of these bind parameters for all 10 DBRMs (for example, you cannot set a different isolation level for different programs/DBRMs; it has to be the same).

Binding DBRM to a PACKAGE:

  • A PACKAGE contains (the logic of) ONLY one DBRM. It's a single bound DBRM.

  • A PACKAGE is not executable; that means it cannot be used with the COBOL load module to run your application program. In order for your PACKAGE to be executed, it has to be bound to a PLAN. So you bind your DBRM to a PACKAGE and then, in turn, this PACKAGE to a PLAN. I know what you must be thinking now: why go this way, isn't it better to bind the DBRM straight to the PLAN? NO IT IS NOT, and I will explain why. When I say you have to bind the DBRM to a PACKAGE and then this PACKAGE to a PLAN, you need not do it for every package. What I mean is, let's say I have 50 PACKAGES to bind to a PLAN: there is no need to bind 50 times. Instead, bind once using a PKLIST (package list) that names the collection, and that's it; the remaining 49 PACKAGES will automatically be picked up by your plan (for this, all 49 packages need to be in the same collection named in the PKLIST we bound to the PLAN earlier).

  • Let's take the same example we used above for the PLAN. I have 10 programs A to J with 10 DBRMs. In the case of PACKAGEs, I bind each DBRM to one PACKAGE, so I will have 10 packages; then I include all 10 PACKAGES in a single PKLIST (package list) and bind the package list to a PLAN. Now, if one of my programs changes, I have to rebind only the DBRM corresponding to the changed program and NOT the remaining DBRMs/programs (which didn't undergo the change).

  • Regarding bind parameters: since I can bind the 10 DBRMs to 10 different PACKAGES, I can have a different combination of bind parameters for each PACKAGE (for example, an isolation level of CS for one PACKAGE and RR for another).
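As a sketch of the package route (plan, collection, and program names are hypothetical), rebinding the changed program B touches only its own package, while the plan's PKLIST picks up every package in the collection:

BIND PACKAGE(COLLA) MEMBER(PROGB) ISOLATION(CS)
BIND PLAN(PLANA) PKLIST(COLLA.*)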

Having said that, you will wonder: binding a DBRM to a PACKAGE has only advantages, so there is no reason left to bind a DBRM directly to a PLAN.

That is the reason IBM is removing the facility of binding a DBRM directly to a PLAN from DB2 V9 onwards.
For more detail on the same, refer HERE.

 

July 08, 2016


DB2Night Replays

The DB2Night Show #182: DB2 LUW V11.1 Upgrade Best Practices and Tips!

Follow @mstopfer1. Special Guest: Melanie Stopfer, IBM DB2 LUW Brain Surgeon. DB2 LUW V11.1 UPGRADE Best Practices plus TIPS and Hidden Gems! 100% of our audience...

(Read more)

ChannelDB2 Videos

DB2 Tips n Tricks Part 91 - How To Add or Drop Containers from SMS Temporary Tablespaces



Description: How To Add or Drop Containers from SMS Temporary Tablespaces without Redirected Restore. Happy Learning & Sharing

Craig Mullins

Good Old Partitioned Table Space Confusion

Partitioned table spaces have been with us in the world of DB2 forever, but even as we embark on moving to Universal table spaces, there still exists some confusion "out there" in terms of the behavior of partitioning...  For example, consider this question: If a table is in a partitioned table space, say four partitions, for example, then to process the table in batch can I run four...

(Read more)

Henrik Loeser

Bluemix: Where Python and Watson are in a Dialog

Right now I am working on a side project to hook up the Watson Dialog Service on Bluemix with dashDB and DB2. The idea is to dynamically feed data from DB2 into a conversation bot. To register and...

(Read more)
 

July 05, 2016

DB2 Programmers Guide

Understanding Locking in DB2 - I

Locking in DB2 is the least understood concept among programmers.
There is a lot of confusion, ranging from what a lock is, to why we need it, to what exactly an isolation level is, and I don't blame anyone for it.

Some programmers who have some understanding of it have their own set of questions to ask, such as: we have something called the LOCKSIZE parameter when creating a tablespace, and then we have bind parameters such as ISOLATION, ACQUIRE, and RELEASE. What exactly is the function of these parameters? Do we need all of them to decide the kind of lock DB2 will take on a resource for my particular application program?

Well, to answer these questions, let me first clarify that locking is not a subject simple enough to be covered in a single post; if you want to understand it in detail, it may take reading a fully dedicated book on it.

I will try to explain an overview of it in this post and will cover its various aspects in coming posts.

I will not dwell much on the definition of a lock and why we need it; you can read that anywhere (better to go to the IBM manuals).


So read on


There are three things which you need to understand first in order to get a grip on the concept of locking.

-- The object on which the lock is to be taken (table, page, row, etc.).
   This can also be called the size of the lock.

-- The mode or type of locking (share, update, etc.).
   This controls the degree to which the resource can be accessed concurrently
   by multiple processes.

-- The duration of the lock. This refers to the length of time that the lock is held.

Now lets discuss the above mentioned point one by one.


Lock Size

The size of a lock determines how much data is within the scope of the lock (hence the term lock size).

For example, a page lock includes all rows on a page, while a row lock covers just a single row.

The lock size is determined by the LOCKSIZE parameter of the CREATE and ALTER TABLESPACE data definition language statements. The options are ANY, TABLESPACE, TABLE, PAGE, and ROW.
ANY is the default. TABLE should only be used for segmented table spaces.
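For example (the table space and database names are hypothetical):

CREATE TABLESPACE TS1
    IN DBNAME1
    LOCKSIZE PAGE;

ALTER TABLESPACE DBNAME1.TS1
    LOCKSIZE ROW;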

DB2 uses locks of the following sizes:

– Table space

– Partition

– Table

– Page

– Row

Of these, the first three (table space, partition, table) are high level locks, and the last two (page and row) are called low level locks. A low level lock cannot be taken before the corresponding high level lock has been taken, while high level locks can be taken directly (except for intent share, intent exclusive, and share with intent exclusive). I know this statement is a bit confusing here, but I had to go with the flow; don't worry, it is explained below.

Lock state or lock mode

The state of a lock, also called the mode, determines the type of access to the locked object permitted to the lock owner and to any other concurrent application processes. The lock states available for an object depend on the lock size.

There are three possible states for a low level lock:

- Share
- Update
- Exclusive

These three modes of locking are available for low level locks (page and row), while the high level lock sizes have these three modes plus three additional intent modes, referred to as intent share, intent exclusive, and share with intent exclusive.

The primary difference is that whenever an intent lock is held, DB2 also has to request a lock at the page or row level, hence the reference to low level locks.

The share, update, and exclusive locks at the table space, partition, and table level are also referred to as gross locks, because they do not require low level locks to be taken.

Now, I think you see what I was trying to say previously.

When to take a lock ?

The low level locks, that is, page and row locks, are acquired as necessary while a process is being executed.
When the high level locks, that is, table space, partition, or table locks, are acquired depends on the ACQUIRE BIND option.

For ACQUIRE(ALLOCATE), the high level locks are acquired at the first SQL call.

For ACQUIRE(USE), the high level locks are acquired at the first actual use of the table.

The ACQUIRE bind options have no influence on when high level locks are acquired for dynamic SQL, which is always when the object is first accessed.
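The ACQUIRE and RELEASE options are specified when the plan is bound. As a sketch (the plan and collection names are hypothetical):

BIND PLAN(PLANA) PKLIST(COLLA.*) ACQUIRE(USE) RELEASE(COMMIT)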


So, until now we are clear on the sizes and modes of locks. Let's revise.

We can have high level locks (lock sizes) which cover the table, table space, and partition, and we have low level locks on rows and pages. Then we have various modes of locking, namely intent share, intent exclusive, share with intent exclusive, share, update, and exclusive. Of these six modes, all six are applicable to high level locks, while for low level locks only three modes (share, update, and exclusive) are applicable.

Now, I must stop this post here so you can grasp it. Don't worry if you don't get the concept in one reading; go through the post a number of times until you get a clear picture of it.

In the next post I will discuss what these six modes of locking mean in detail and their compatibility with each other. Then we have the third attribute remaining to discuss, the duration of locks (maybe in the post after the next one).

>> Read Part II
 
DB2 Programmers Guide

Checkpoint Restart in DB2 Part - I



>> Click here for Part II <<

Before getting into what checkpoint restart is and how to implement it, you need to understand when and why you should go for it.

HERE’S THE SCENARIO:

Suppose a batch program that reads an input file and posts updates/inserts/deletes to DB2 tables in the database abended before the end of the job for some reason. Is it possible to tell how many input records were processed? Were any of the updates committed to the database, or can the job be restarted from the beginning?

Assume that COMMIT logic was not coded for large batch jobs that process millions of records. If an ABEND occurs, all database updates will be rolled back and the job can be resubmitted from the beginning. If the ABEND occurs near the end of the process, a rollback of all the updates is performed. Also, DB2 will maintain a large number of locks for a long period of time, reducing concurrency in the system. In fact, the program may ABEND if it tries to acquire more than the installation-defined maximum number of locks.

The solution involves setting up a batch-restart control table (CHECKPOINT_RESTART in our case) to store the last input record processed and other control information. The restart control table can also be used as an instrumentation table to control the execution, commit frequency, locking protocol, and termination of batch jobs.
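A minimal sketch of what such a control table might look like (the column layout here is illustrative, not the actual DDL from this post):

CREATE TABLE CHECKPOINT_RESTART
    (PROGRAM_NAME    CHAR(8)      NOT NULL,
     LAST_REC_KEY    CHAR(40)     NOT NULL WITH DEFAULT,
     COMMIT_FREQ     INTEGER      NOT NULL,
     ROWS_PROCESSED  INTEGER      NOT NULL WITH DEFAULT,
     LAST_RUN_TS     TIMESTAMP    NOT NULL WITH DEFAULT);

CREATE UNIQUE INDEX CHKPT_IX1
    ON CHECKPOINT_RESTART (PROGRAM_NAME);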

One of the problems with restart is synchronizing DB2 tables and output files (if your program writes to output files). DB2 will roll back all work on DB2 tables to the last commit point, but for output files we have to delete all the records written after the last commit point. (One option for doing this is via a global temporary table, FILE_POSITION_GTT; see the FILE RE-POSITIONING section for further details.)

COMMIT Function

The COMMIT statement ends a unit of recovery and commits the relational database changes that were made in that unit of recovery. If relational databases are the only recoverable resources used by the application process, COMMIT also ends the unit of work. The unit of recovery in which the statement is executed is ended and a new unit of recovery is effectively started for the process. All changes made by statements such as ALTER, COMMENT ON, CREATE, DELETE, DROP, EXPLAIN, GRANT, INSERT, and LABEL ON are committed. SQL connections are ended when any of the following apply:

► The connection is in the release pending state
► The connection is not in the release pending state but it is a remote connection and:

  • The DISCONNECT(AUTOMATIC) bind option is in effect, or
  • The DISCONNECT (CONDITIONAL) bind option is in effect and an open WITH HOLD cursor is not associated with the connection.
For existing connections,

► All open cursors that were declared with the WITH HOLD option are preserved, along with any SELECT statements that were prepared for those cursors.

► All other prepared statements are destroyed unless dynamic caching is enabled.

► If dynamic caching is enabled, then all prepared SELECT, INSERT, UPDATE, and DELETE statements that are bound with KEEPDYNAMIC(YES) are kept past the commit. Prepared statements cannot be kept past a commit if:

      * SQL RELEASE has been issued for that site, or
      * Bind option DISCONNECT(AUTOMATIC) was used, or
      * Bind option DISCONNECT(CONDITIONAL) was used and there are no hold cursors.

All implicitly acquired locks are released, except for those required for the cursors that were not closed.

What do we need in order to implement checkpoint restart?

  • A place to store the details (the CHECKPOINT-COMMIT record) pertaining to the current execution of the program, like various counts (number of inserts/deletes/updates/selects), the number of records processed, processing dates, and other details which are needed in the program after a RESTART.
  • A reliable FILE RE-POSITIONING logic with minimal changes to the existing PROC/JCL.
  • Flexibility to modify the commit frequency without changing the program code.

Where can we store this CHECKPOINT-COMMIT record?

We can store the CHECKPOINT-COMMIT record, the COMMIT-FREQUENCY, and other relevant information in a DB2 table.

CHECKPOINT_RESTART TABLE DESCRIPTION

FILE RE-POSITIONING
At restart, all records written to the output file since the last commit will have to be removed. To accomplish this, the FILE_POSITION_GTT global temporary table is used. SQL statements that use global temporary tables can run faster because:

• DB2 does not log changes to global temporary tables
• Global temporary tables do not experience lock contention

DB2 creates an instance of the temporary table when an OPEN, SELECT, INSERT, or DELETE statement first refers to it. The instance exists at the current server until one of the following actions occurs:

• The remote server connection under which the instance was created terminates.
• A ROLLBACK statement is issued (DB2 deletes the instance of the temporary table).
• A COMMIT statement is issued (DB2 deletes the instance of the temporary table unless a cursor for accessing the temporary table is defined WITH HOLD and is open).
• The application process ends.



In the next part we will discuss the step-by-step process of implementing checkpoint restart, along with sample code.


    
DB2 Programmers Guide

Cursor in DB2 ---> Part III (Multirow fetch)


    ==> Click Here to Read Part I <==


    ==> Click Here to Read Part II <==


    In this post we are going to discuss how to fetch and process multiple rows at a time (multirow fetch) using a single FETCH statement in a DB2 cursor.

    Multirow fetch was introduced in DB2 version 8.

    Why use it?

    By fetching multiple rows at once, your request can become more efficient, and it can improve performance by reducing CPU time.

    How to use it?

    As we know, with a regular cursor FETCH statement we can fetch one row at a time; values are fetched into host variables and processed later in the program.

    Ex -

    EXEC SQL
          FETCH CURSOR_NAME
          INTO :A, :B
    END-EXEC.


    Now, if we want to use the multirow fetch feature, first of all our cursor should be defined with something called "rowset positioning".

    A rowset is a group of rows that are operated on as a set. Such a cursor enables your program to retrieve more than one row with a single FETCH statement.

    Define Cursor:

    EXEC SQL
        DECLARE CURSOR_NAME CURSOR
        WITH ROWSET POSITIONING
        FOR
        SELECT EMP_NAME FROM EMPTABLE
    END-EXEC.

    Now, with this cursor, a FETCH statement returns multiple rows at a time, which the normal host variable structure cannot handle, as it was designed to receive only one row at a time.

    So, to use a multirow fetch, you must define the appropriate structures to receive the multirow data. This means you must define an array of host variables into which the fetched rows can be placed. Each column fetched requires its own host variable array into which its values will be placed. If a column is nullable, make sure a separate host variable array is defined for its null indicator.

    Be sure to match the array size to the rowset size. This way, a single FETCH statement can be written to retrieve more than a single row from the result set.
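    As a minimal sketch (assuming EMP_NAME is CHAR(30) and a rowset size of 10; all names here are hypothetical), the receiving structures in WORKING-STORAGE could look like this:

    01  HOST-VAR-ARRAY.
        05  HVA-EMP-NAME     PIC X(30) OCCURS 10 TIMES.       *> matches FOR 10 ROWS
    01  HVA-EMP-NAME-IND     PIC S9(4) COMP OCCURS 10 TIMES.  *> null indicators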

    Fetch Cursor :

    EXEC SQL
         FETCH NEXT ROWSET FROM CURSOR_NAME
         FOR 10 ROWS
         INTO :HOST-VAR-ARRAY
    END-EXEC.

    So it looks exactly like a normal FETCH statement, except for the rowset orientation keyword (NEXT ROWSET here) and the FOR 10 ROWS clause.

    The ROWSET keyword indicates that this is a cursor with rowset positioning.
    The FOR 10 ROWS clause specifies the size of the rowset to be returned. The maximum rowset size is 32,767.

    Rowset cursors are very useful when you need to retrieve many rows or large amounts of data.
    By retrieving multiple rows with a single FETCH, multiple trips between the application and the database can be eliminated, thereby improving network performance.


     
    DB2 Programmers Guide

Cursor in DB2 ---> Part II

    ==> Click Here to Read Part III <==


    In the first part, we discussed what a cursor is and why we need it.


    In this part, we will discuss the types of cursors. Yes, there are different types of cursors, and they work differently depending on their type.


    With a normal or simple cursor, the data can only be retrieved one row at a time, in the forward direction, using the FETCH statement.



    Scrollable Cursor:

    With a scrollable cursor, you can scroll forward and backward through the data (the result set of the SELECT statement) in an application program.

    OK, but before everything else, how do you tell whether the cursor defined in a program is scrollable or not?

    The keyword SCROLL is used to specify that the cursor is scrollable.

    Example:

    DECLARE Cursor1 (cursor name)
        SCROLL CURSOR FOR <your SQL query>

    OK, so this is how you define it. Now, how does it work? How do you move through the data?

    Similar to a normal cursor, the FETCH statement is still used, but it is combined with a keyword that moves the fetch in a specific direction.

    Example:

    FIRST    : fetches the first row in the result set.
    CURRENT  : fetches the current row from the result set.

    Similarly, there are other keywords like LAST, NEXT, BEFORE, AFTER, PRIOR, etc.

    So, for the cursor defined above, the fetch statement would be:

    FETCH FIRST FROM Cursor1 INTO .....

    The FETCH FIRST statement positions the cursor on the first row of the result set and then fetches that row into the defined host variables.

    A scrollable cursor makes it easy to scroll through the data, thus saving a lot of time and effort.

    There are two types of scrollable cursors.

    Yes, there are again types of scrollable cursors, but it makes sense to understand the factor on which they differ.

    1. Sensitive   : With this type of scrollable cursor, the program can see
                     data changed/updated by others. So if, while the data is
                     being fetched from the table using the cursor, some other
                     process updates/deletes the data in the table, that gets
                     reflected in the result set.
                     However, inserts will not be reflected in the result set.

    2. Insensitive : This type of scrollable cursor will not show any changes
                     made to the data at the time of the fetch, so changes to
                     the underlying data will not be reflected in the result set.
                                

    So the cursor definition would finally look like:

    DECLARE Cursor1 (cursor name)
        SENSITIVE SCROLL CURSOR FOR <your SQL query>

    Things to remember:

    Though it is easier to use, you should not define every cursor as scrollable, as it requires more overhead than a normal/conventional cursor.
    To be specific, if you do not need to move forward/backward through the result set, do not use a scrollable cursor.

    Until now, we have seen that using a cursor (of any type) we can fetch only one row at a time. We can also fetch multiple rows using a single FETCH statement in a cursor.

    How do you do it? We will see in the next part.

    ==> Click Here to Read Part III <==


    DB2 Programmers Guide

Cursor in DB2 ---> Part I


    ==> Click Here to Read Part II <==


    When we write an SQL query in an application program and it returns more than one row (more than one qualifying row), DB2 cannot handle it and throws SQLCODE -811.
    What if I have to select/update/delete more than one row from a table via an application program? DB2 cursors are all you need.

    A cursor processes all the qualifying rows one by one, sequentially.
    To have a cursor in your program, you need to follow the four steps below.


     1. Declare (Define) cursor
     2. Open Cursor
     3. Fetch Cursor
     4. Close Cursor

    1. Declare (Define) Cursor:

    You declare it in the WORKING-STORAGE SECTION with the syntax below:

    DECLARE <cursor name> CURSOR FOR <your SQL query>

    You can also define your cursor in the PROCEDURE DIVISION; the only condition is that it must be declared before the OPEN cursor statement.

    If you want to update/delete the qualifying rows from the cursor, then you have to add the FOR UPDATE OF clause at the end of the query in the DECLARE CURSOR statement:

    DECLARE <cursor name> CURSOR FOR <your SQL query>
    FOR UPDATE OF <column> (or FOR UPDATE, if not sure about the columns being updated)

    The FOR UPDATE OF clause takes a lock on the rows being updated so that no one can change them before you update them. If you don't add FOR UPDATE OF but update the rows anyway, the lock will not be taken, and there is a danger of the rows being updated by another process between your SELECT and UPDATE statements, so the result can be unpredictable.

    The DECLARE (Define) CURSOR statement is non-executable.

    2. Open Cursor:

    This is an executable statement, and it executes the query from the DECLARE statement. Qualifying rows are identified and the cursor pointer is positioned just before the first qualifying row. If there are no qualifying rows (the predicates in your query yield no rows), the pointer is positioned after the last row.

    3. Fetch Cursor:

    FETCH first moves the pointer to the next row and then fetches that row from the buffer, until the rows are exhausted. Once the end of the rows is reached, SQLCODE +100 is returned.

    4. Close Cursor:

    This closes the cursor which you have opened.
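    Putting the four steps together, here is a minimal sketch (the table, column, and host variable names are hypothetical, not from this post):

    EXEC SQL
        DECLARE EMPCUR CURSOR FOR
        SELECT EMP_NAME
          FROM EMP_TABLE
         WHERE DEPT_NO = :WS-DEPT-NO
    END-EXEC.

    EXEC SQL OPEN EMPCUR END-EXEC.

    EXEC SQL FETCH EMPCUR INTO :WS-EMP-NAME END-EXEC.
    PERFORM UNTIL SQLCODE NOT = 0
        DISPLAY WS-EMP-NAME                             *> stand-in for real row processing
        EXEC SQL FETCH EMPCUR INTO :WS-EMP-NAME END-EXEC
    END-PERFORM.

    EXEC SQL CLOSE EMPCUR END-EXEC.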

    Points to remember:

    1. If you want to handle more than one row in your application program, use a cursor.

    2. If multiple rows are returned by the query and you didn't use a cursor, then the first row will be fetched from the table and DB2 will throw SQLCODE -811.

    3. If you want only one (any) of the multiple qualifying rows from your query, you can avoid using a cursor; use FETCH FIRST 1 ROW ONLY (depending on your DB2 version).

    4. A cursor can be defined either in the WORKING-STORAGE SECTION or in the PROCEDURE DIVISION. The condition for declaring it in the PROCEDURE DIVISION is that it should be defined prior to the OPEN cursor statement.

    5. DB2 can process a cursor in two different ways:

    a> It can create the result table (all qualifying rows) during the execution of the OPEN statement.

    b> It can derive the result table rows as they are needed during the execution of later FETCH statements. If the result table is not read-only, DB2 uses the latter method. If the result table is read-only, either method could be used.

    6. All the cursors used in a program are automatically closed at the end of the program, or at COMMIT and ROLLBACK statements (if used in the program).

    If you don't want the cursor to be closed at COMMIT or ROLLBACK, define the CURSOR WITH HOLD; this prevents the cursor from being closed and keeps the position of the pointer.

    In the next part we will discuss the different types of cursors and their use.


    ==> Click Here to Read Part II <==
    Big Data University

    This Week in Data Science (July 05, 2016)

    Here’s this week’s news in Data Science and Big Data. Global Immigration

    Don’t forget to subscribe if you find this useful!

    Interesting Data Science Articles and News

    Upcoming Data Science Events

    The post This Week in Data Science (July 05, 2016) appeared first on Big Data University.


    DB2utor

    Moving to the DB2 Connect Client

    Recently I’ve fielded a number of questions about the best use of DB2 Connect. Specifically, customers want to know if they should continue to use the DB2 Connect gateway or move applications toward a direct connection using the DB2 Connect client with IBM DB2 data server drivers.
