
Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.


May 28, 2016

Omer Brandis

Scale out your existing MySQL landscape with Scalebase

In a nutshell, ScaleBase is to MySQL what Greenplum DB is to PostgreSQL: it makes it possible to create an MPP database based on MySQL. You can use it to scale out your existing MySQL applications without changing your code and/or create new MPP databases for handling big data.

May 27, 2016

Modern SQL

filter — Selective Aggregates

The filter clause extends aggregate functions (sum, avg, count, …) by an additional where clause. The result of the aggregate is built from only the rows that satisfy the additional where clause too.


The filter clause follows an aggregate function:

SUM(<expression>) FILTER(WHERE <condition>)

With the exception of subqueries and window functions, the <condition> may contain any expression that is allowed in regular where clauses.

The filter clause works for any aggregate function: besides the well-known functions such as sum and count, it also works for array_agg and sorted set functions (e.g., percentile_cont).

If an aggregate function is used as a window function (over clause), the syntactic order is: aggregate function, filter clause, over clause:

SUM(...) FILTER(WHERE ...) OVER (...)

However, the filter clause is not generally allowed before over: it may only follow an aggregate function, not other window functions. It is not allowed before ranking functions (rank, dense_rank, etc.), for example.
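Since the excerpt is about SQL syntax, a small runnable illustration may help. The sketch below uses Python's sqlite3 module (SQLite supports the filter clause as of version 3.30, so a reasonably recent Python build is assumed; the payments table and its data are invented for the example):

```python
import sqlite3

# In-memory toy database; payments is a made-up example table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (region TEXT, year INT, amount INT);
    INSERT INTO payments VALUES
        ('EU', 2015, 100), ('EU', 2016, 200),
        ('US', 2015, 300), ('US', 2016, 400);
""")

# One pass over the table, two aggregates: an unconditional total
# and one restricted by the additional filter condition.
total, total_2016 = conn.execute("""
    SELECT SUM(amount),
           SUM(amount) FILTER (WHERE year = 2016)
    FROM payments
""").fetchone()
print(total, total_2016)  # 1000 600
```

The point of the clause is visible here: both aggregates are computed in a single scan, without a where clause that would remove the 2015 rows from the unconditional total.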

Use Cases

The following articles describe common use cases of filter:


SQL:2003 introduced the filter clause as part of the optional feature “Advanced OLAP operations” (T612). It is barely supported today, but is easy to emulate using case (see Conforming Alternatives).

Availability of FILTER

Conforming Alternatives

Generally, the filter clause can be implemented as a case expression inside the aggregate function: the filter condition has to be put into the when clause, the value to be aggregated into the then clause. Because aggregate functions generally skip over null values, the implicit else null clause is enough to ignore non-matching rows. The following two expressions are equivalent:

SUM(<expression>) FILTER(WHERE <condition>)
SUM(CASE WHEN <condition> THEN <expression> END)

Count(*) needs some special treatment because “*” cannot be put into the then clause. Instead, it is enough to use a non-null constant value. This ensures that every matching row is counted. The implicit else null clause maps non-matching rows to null, which is ignored by count too.

COUNT(*) FILTER (WHERE <condition>)
COUNT(CASE WHEN <condition> THEN 1 END)

When using a set quantifier (distinct or all), it must remain in the aggregate function, before the case expression.
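The claimed equivalence is easy to sanity-check in a database that supports both spellings. The following Python sqlite3 sketch (table t and its data are invented; SQLite 3.30+ is assumed for native filter support) compares the filter form with its case emulation, including the count(*) variant:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (grp TEXT, val INT);
    INSERT INTO t VALUES ('a', 1), ('a', 2), ('b', 3), ('b', NULL);
""")

# Native filter clause.
filtered = conn.execute("""
    SELECT grp,
           SUM(val) FILTER (WHERE val > 1),
           COUNT(*) FILTER (WHERE val > 1)
    FROM t GROUP BY grp
""").fetchall()

# Conforming emulation: the implicit ELSE NULL makes non-matching
# rows invisible to SUM and COUNT alike.
emulated = conn.execute("""
    SELECT grp,
           SUM(CASE WHEN val > 1 THEN val END),
           COUNT(CASE WHEN val > 1 THEN 1 END)
    FROM t GROUP BY grp
""").fetchall()

print(filtered == emulated)  # True
```

Note that the row with a NULL val is excluded by both forms: `val > 1` is unknown for NULL, so the filter rejects it, and the case expression yields NULL, which count skips.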

Proprietary Extensions

PostgreSQL: Subqueries Allowed

The PostgreSQL database supports subqueries inside the filter clause (e.g., via exists).

“filter — Selective Aggregates” by Markus Winand was originally published at modern SQL.


May 25, 2016

Robert Catterall

DB2 for z/OS: When Did You Last Check Your Checkpoint Frequency?

I have written in this blog a few entries about various DB2 for z/OS ZPARM parameters, including a post on DSMAX and another on EDM_SKELETON_POOL. Time for another such entry, this time focused on the parameters that determine when a DB2 subsystem will perform checkpoint processing. Checkpointing is an availability-enhancing mechanism that is related to DB2's deferred write functionality. DB2 externalizes changed pages to disk in a deferred manner so as to boost the efficiency of insert,...

(Read more)

May 24, 2016

DB2 Guys

DB2 Direct: A new way of consuming your Database

by Phillip Downey, WW Program Director, IBM Analytics Platform Hybrid Cloud Strategy. In DB2 11.1, we introduced two new and easy-to-consume DB2 Direct editions: DB2 Direct Advanced and DB2 Direct Standard. Both editions bring a new dimension to the database offerings for the small and larger enterprise clients that are looking for […]


Using Trusted Context on a Local Connection

Back in August 2011, I wrote about the use of roles and trusted context in auditing DBA activity. Just recently, two separate customers have asked me about how and when to use this. Given that interest -- not to mention the increased attention on enterprise security, the growing demand for remote access through mobile devices, and the ongoing need to audit all management activities -- this topic is well worth revisiting.
Big Data University

This Week in Data Science (May 24, 2016)

Here’s this week’s news in Data Science and Big Data. Smart City

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming Data Science Events

The post This Week in Data Science (May 24, 2016) appeared first on Big Data University.


May 23, 2016

Jack Vamvas

How to put TSAMP into maintenance mode

A common problem when managing TSAMP DB2 high availability is how to deal with non-DB2 maintenance. An example is OS patching.

Let's say there is a 2-node clustering architecture set up. The OS administrators will want to patch the servers, but there is a requirement to maintain uptime on DB2.

If using TSAMP, a straightforward plan could be based on the following steps. The basic idea is to change the automation mode to manual.

Step 1: Switch automation to manual mode:

# samctrl -M T

Step 2: TSAMP will continue to monitor the resources. It will execute the monitoring scripts related to the resources.

Step 3: Check the current automation mode:

# lssamctrl

You can also use the lssam command. A status line appears indicating Automation=manual.

Step 4: Once maintenance is complete, re-enable automation mode (Automation=Auto):

# samctrl -M F


Read More on DB2 high availability

TSAMP maintenance and diagnostics (DBA DB2)

How to view harvested user defined storage resources for TSAMP ...




May 22, 2016

DB2Night Replays

The DB2Night Show #Z70: Advanced and Complex SQL Refresher

Presented by: Sheryl Larsen, Sr. DB2 Product Specialist, BMC Software. "The DB2Night Show z70: Advanced and Complex SQL Refresher" Replays available in WMV and M4V formats! 98% of our studio audience learned something! Sheryl discussed how to improve SQL coding skills, specifically knowing when to use which SQL feature. Watch the replay...

(Read more)

May 18, 2016

Dave Beulke

Learn from Everyone’s Sessions at IDUG

Another IDUG conference is coming up, and the agenda of presentations looks great. The IDUG conference planning Committee always does a great job, and the agenda shows many extra educational sessions. This year’s IDUG conference addresses the latest DB2 releases and business issues with many great...

(Read more)

May 17, 2016

Kim May

Chicago IBM DB2 Analytics Accelerator (IDAA) Class – Seats Still Available!

There are still seats available for the June 6th and 7th *FREE* hands-on IDAA training class, scheduled for delivery in the IBM Chicago offices.  DB2 Gold Consultant Frank Fillmore will be delivering...

(Read more)

Craig Mullins

Come See Me at IDUG!

Just a quick blog post today to let all of my readers know my schedule for the IDUG DB2 Technical Conference being held next week in Austin, TX. With this information you can track me down in multiple places if you want to ask me a question, catch up, or just say “Hi!” First of all, you can find me on Tuesday at the Expo Hall in the CorreLog booth (#300) from 12:00 to 1:00 and from 5:00 to 6:00....

(Read more)
Big Data University

This Week in Data Science (May 17, 2016)

Here’s this week’s news in Data Science and Big Data. Wild Data

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming & Recent Data Science Events

Cool Data Science Videos

The post This Week in Data Science (May 17, 2016) appeared first on Big Data University.


Using Predicate Selectivity Overrides

Back in 2011, I wrote a series of posts about the evolution of optimization hints and the new DB2 10 statement-level optimization hints.
Now with DB2 11, the statement-level optimization hint can override the selectivity of predicates for matching statements. This is needed because DB2 cannot estimate filter factors for certain types of predicates. It must rely on inaccurate estimates or even default filter factors to select access paths for statements that use such predicates.

May 16, 2016

Data and Technology

Data Summit: A Trip Report

Last week I had the pleasure of attending, and speaking at, the annual Data Summit event in New York City. The event, sponsored by Database Trends & Applications, boasted knowledge-packed days of...

(Read more)
Jack Vamvas

The case of the missing numeric ip format in TSM TCPCLIENTADDRESS

As part of the DB2 clustering solution using Tivoli System Automation (TSA), there were some customisations required to make the monitoring and backup procedures “cluster aware”. One of the customisations was to enable the TSM scheduler to point to the active DB2 node.

Read more on How to customise TSAMP start and stop scripts to make monitoring ...

Therefore, if there is a DB2 failover from Server 1 to Server 2, with all the associated resources failing over (such as IP address, DB2 instance and storage), the TSM scheduler can point to the current active node and continue with backups as scheduled.

My initial idea was to use a CNAME as the address for the TSM scheduler. It turned out that using the hostname caused some problems. The fix was to adopt the TCPCLIENTADDRESS and TCPCLIENTPORT parameters in dsm.sys.

According to the IBM documentation, TCPCLIENTADDRESS is defined as:

“The tcpclientaddress option specifies a TCP/IP address if your client node has more than one address, and you want the server to contact an address other than the one that was used to make the first server contact.”

When the value associated with TCPCLIENTADDRESS is resolved to a numeric TCP/IP address, the address is sent by the client to the Tivoli Storage Manager server. TSM records the address in the Schedules.Node.Addresses table.

Initially I used the hostname with TCPCLIENTADDRESS. When a hostname is used with TCPCLIENTADDRESS, an attempt is made to resolve it to a numeric TCP/IP address using standard DNS TCP/IP calls.

If the value does not resolve, the scheduler cannot find the server, and the incorrect name is stored on the TSM server. The fix is to use the numeric IP format.
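One way to avoid this failure mode is to verify up front that the value intended for TCPCLIENTADDRESS is already in numeric form. The Python sketch below is only an illustration of that check; the helper name is mine, not part of TSM or the dsm.sys tooling:

```python
import ipaddress

def is_numeric_ip(value: str) -> bool:
    """Return True if value is already a numeric IP address,
    i.e. no DNS resolution would be needed for it."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

print(is_numeric_ip("10.20.30.40"))  # True  -> safe for TCPCLIENTADDRESS
print(is_numeric_ip("dbnode1"))      # False -> would require DNS resolution
```

A check like this could be scripted into the cluster failover customisations before dsm.sys is updated.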

TSAMP Cheat Sheet for DBA managing DB2 clustering (DBA DB2)


May 15, 2016

DB2Night Replays

The DB2Night Show #179: DB2 LUW V11.1 Deep Dive on OLTP and pureScale

What's New in DB2 LUW V11.1: DEEP DIVE on OLTP and pureScale Guests: Steve Rees, STSM Keri Romanufa, STSM Xun Xue, STSM IBM Canada Lab 100% of our audience learned something! As a follow up to our 15 April 2016 show Episode #177 that provided a broad overview of "What's New" in DB2 LUW V11.1, our special guests @db2steve, @xun_ca, and @RomanufaKeri gave us a deep dive that f ..

(Read more)

May 14, 2016

Willie Favero


(Posted Monday, April 25, 2016) I think that misquote from the Apollo 13 movie is perfect for today's blog post. You see, here in Houston, we have a DB2 User Group whose attendance has been dropping off. The solution, of course, is to get more folks from around the Houston area to attend our meetings. However, I just can't seem to come up with a way to do that.
Agent DB2

Taking IBM Data Studio 4.1.x for a test ride.

IBM Data Studio has come a long way from the days of DB2 Control Center. I wonder if DB2 administrators still miss working with Control Center? The Data Studio client is an IDE used to manage DB2 instances and databases through a GUI, and it provides facilities like visual explain and routine and Java stored procedure development. For instance and database administration, I still prefer the DB2 command line. Newer DBAs would find the Data Studio web console, or its newer avatar Data Server Manager, much more appealing.


Requirements:

  • About 2 GB free disk space for the Data Studio client and about 500 MB for the web console.
  • IBM Installation Manager version 1.7. The Data Studio installer will install this first if it is not already installed.

What’s new

As of this writing there is a newer version, 4.1.2, but it contains mostly incremental changes. Version 4.1.1 introduced a lot of changes, which I am going to talk about.

  • Support for DB2 LUW v10.5 Fixpak 4
    • Support to add columns to BLU tables.
    • Supports Columnar MQT.
    • Supports Shadow tables.
  • More options to show related objects for a database object. This one feature was much needed.
    • Table -> Related views, triggers, views, UDTs, UDFs and Packages.
    • View -> Related Triggers, UDFs and Views.
    • Alias -> Related Views.
    • Trigger -> Views, UDFs.
    • Index -> Tablespaces.
    • Tablespace -> Indexes.
    • UDF -> Packages.
    • Stored Procedure -> Packages.
  • The DB2 command line can now be run from inside the SQL editor, without ever leaving Data Studio! This is a great feature for DBAs, because earlier one needed to open a separate db2 command line window.

  • Another useful feature: view SQL results and history in a split pane or in individual panes.

  • Where supported, the “CREATE OR REPLACE” clause can be used for ALIAS, VIEWS, NICKNAMES, SEQUENCES, VARIABLES, TRIGGERS, TYPES, etc.

  • Enhancements to use BigInsights
    • Browse/create/alter/list data of Hadoop tables and views
    • Create BIG SQL Physical Data Models
    • Run DDL/DML for Hadoop tables/views
    • Use Visual Explain on Big SQL
  • More information displayed for data objects.

  • The routine debugger now detects and advises a fix if there is an issue connecting to a DB2 server.

Download Steps

For this purpose I used a Windows 10 64-bit virtual machine (VM) with about 15 GB of free disk space.

  • Visit the IBM website to download the latest version (4.1.2).

  • Choose either the Data Studio client or the bundle with the Analytics Accelerator. The bundle is only available for Windows.

  • Agree to the license conditions. If not already signed in, you will be asked to sign in with an IBM id or sign up for a free account.

  • Next is the download page. It offers two ways to download: Download Director, which allows restarting interrupted downloads, or a normal HTTP download. The download image size is about 1.5 GB.

  • I tried the download using Download Director on Windows 10. However, Download Director failed to load. I thought the Java version could be the issue.

  • Clicking the link to download Java showed the real problem: the new Edge browser in Windows 10 does not support the Java plugin, so another browser, such as Firefox, needs to be installed for Download Director to work.

  • I chose the alternative HTTP route.

  • The image takes some time to extract, depending on CPU speed.

  • Go inside the extracted folder, right-click launchpad.exe, and choose Run as administrator.

  • The welcome screen gives a brief introduction to Data Studio and the Analytics Accelerator.

  • There are two ways to install: as administrator, or a restricted installation. If you do not have admin rights on the computer, go for the restricted installation. This is useful for enterprises where group policy does not allow users to install software as administrators.

  • Since Installation Manager is not yet installed, it will be installed first. The installer comes bundled with the code.

  • Accept the license agreement.

  • Review the install directory location.

  • The IM installation will take about 200 MB on disk. Confirm to begin the installation.

  • After the IM installation completes, click the restart Installation Manager button to continue with the Data Studio installation.

  • Click the Install button.

  • Check only the Data Studio client if you are not going to use the Analytics Accelerator.

  • Review and select the APAR fix to be applied.

  • Review and agree to the license agreement.

  • Review the Shared Resources Directory. This is different from the Data Studio install directory and holds all the shared components for different IBM products.

  • Review and choose the Data Studio installation directory. At this point you can choose between 32-bit and 64-bit.

  • Select the language.

  • Select the product features. For DBAs, the most useful are Administration and Query Tuning. I have not tried the Simplified product option, but it sounds like a lightweight version of the Data Studio client.

  • Review the install instructions and click Install. Approximate disk space needed is about 900 MB.

  • The Windows firewall will ask whether to allow access for the installer. Click Allow Access.

  • On the install-finish screen, review any errors and view the log file if needed.

  • The first time you launch IBM Data Studio, it asks you to create a workspace directory.

  • The Windows firewall might also block Java. Ensure Java is allowed access.

Data Studio client is now installed! Feel free to try out different options within DS.


IBM Data Studio Quick Start Guide


May 10, 2016


DB2 11 REORG Optimum Parallel Processing

I recently worked with a customer who hadn't been using the PARALLEL parameter for REORG TABLESPACE. They'd set the LISTDEF parameter at the database level, which qualifies a large number of table spaces with hundreds of partitions per table space. Because the customer lacked the disk space to process hundreds of partitions simultaneously, they set the REORG_LIST_PROCESSING system parameter to SERIAL, which defaults the value of PARALLEL to (NO).
Big Data University

This Week in Data Science (May 10, 2016)

Here’s this week’s news in Data Science and Big Data. Immigration

Don’t forget to subscribe if you find this useful!

Interesting Data Science Articles and News

Upcoming & Recent Data Science Events

The post This Week in Data Science (May 10, 2016) appeared first on Big Data University.


May 09, 2016

Agent DB2

DB2 UDB 11.1 - A database with something for everyone.


Let us talk about the latest announcement, DB2 UDB 11.1, announced in April 2016. This offering is based on the latest trends in Big Data, Cloud, Analytics, IoT and, of course, startups. Big Blue has delivered yet another punch. You can download the full content from the IBM website.


  • Leverage disparate data sources, be it cloud, web, mobile applications or sensors.

  • BLU Acceleration now with clustered server support. Not sure if this cluster is similar to the earlier DPF/ISAS clusters or something completely new.

  • HADR is now integrated into pureScale, giving peace of mind with a highly available cluster. I felt this was a much-needed feature for companies looking to implement pureScale technology. A standby pureScale cluster can be configured using the HADR capability.

  • Subscription-based licensing for cloud-based applications. Startups can now easily deploy an enterprise-class database in production.

  • Advanced recovery tools are now bundled as a separate package. A trial version of this bundle will be available separately on developerWorks.

  • Three new database editions, added to the list of existing DB2 UDB editions:

    1. DB2 Direct Standard edition, with a subscription-based licensing model, ideal for cloud.
    2. DB2 Direct Advanced edition, with a subscription-based licensing model, ideal for cloud.
    3. DB2 for Big Data - enables you to deploy Hadoop-based IBM BigInsights with IBM DB2. It also provides built-in data virtualization capabilities to unify relational and Hadoop data for applications and users. The data virtualization capability, I believe, will be an improved version of DB2 Information Integrator.
  • Data Server Manager - a smart solution to administer, monitor, manage, and optimize the performance of DB2 for Linux, UNIX, and Windows databases across the enterprise. It is easy to deploy and cloud-ready, and it simplifies database administration tasks for both expert and novice users. New features include more robust pureScale monitoring and database administration enhancements.

  • Enhancements to the DB2 federation feature, including homogeneous and heterogeneous federation.


All big database vendors seem to be threatened by the new wave of open-source and NoSQL databases. The least they can do is allow data integration and more flexible licensing. Overall, IBM has tried to address current trends in a way that existing customers are not left high and dry when it comes to integrating new startup-type capabilities. I like the licensing feature for cloud-based models and the Big Data offering.


May 07, 2016

Manohar Visnawatha

Browsers and Me

If there's anything inevitable in our daily lives, it is the browser.

Choosing the right browser for the kind of work we do is a really important decision.

What options do we have?

  1. Internet explorer
  2. Microsoft Edge
  3. Google chrome
  4. Mozilla Firefox
  5. Opera
Read more »

May 06, 2016

DB2Night Replays

Z69: Connecting DB2 Applications with Data on z Systems

Presented by: John Iczkovits and Paul Wirth - Senior IT Specialist - IBM Connecting DB2 Applications, Including Mobile and Cloud, with Data on z Systems EVERYBODY (100%) in our audience learned something! In the polls we found out that 80% of the audience is using mainframe DB2 data in some web or mobile application. How can you connect these applications in a secure way to your mainframe data? Watch, learn and enjoy...

(Read more)

May 05, 2016

Frank Fillmore

Kim May Named #IBMChampion @IBMAnalytics for 2016

My colleague, Kim May, Vice President for Business Development of The Fillmore Group, has been named an IBM Champion for 2016.  Kim’s IBM Analytics community outreach includes: blogging at...

(Read more)

Craig Mullins

I'm Co-Presenting with a Couple of DB2 Experts at IDUG

I have the absolute pleasure of being able to co-present on a couple of great DB2 topics with two very knowledgeable and great speakers at this year's North American IDUG conference in Austin, TX.  The first session is on Tuesday, May 24, 2016 at 1:00 PM (Session V02). In this session I will be co-presenting with Ulf Heinrich, the Director of Solutions Delivery at Software Engineering GmbH....

(Read more)


DB2 SQL: Rewriting a Distinct with a Correlated Sub-Query to a Group By for Performance Improvement

Sometimes a client calls for help with a performance problem. On one particular Tuesday, a client called about a long-running query. Actually, they basically said, “What’s wrong with DB2? My query is running long! Are there lock timeouts happening?”
The query came from a new application implemented just the weekend before. When I asked how long the query was running, the answer was “more than the three-minute time-out”. This is a transaction processing database, so three minutes is generally not acceptable.

The Query

The query in this case was amazingly simple – joining only three tables:

select distinct(driver_id),
    (select max(pos_timestamp) from schema1.position as pos2
            where pos2.driver_id = pos.driver_id) as pos_timestamp,
    (select max(pos_timestamp) from schema1.breadcrumb as bc
            where bc.driver_id = pos.driver_id) as last_breadcrumb_ts
    from schema1.position as pos where pos_timestamp > current_timestamp - 6 hours
    order by pos_timestamp

Explaining this statement gives a fairly ugly explain plan, although still simple:

Access Plan:
	Total Cost: 		1.16818e+006
	Query Degree:		1

                                              (   1)
                                              (   2)
                                              (   3)
                                              (   4)
                                 27300.6                      1 
                                 NLJOIN                    GRPBY 
                                 (   5)                    (  12)
                              1.08987e+006                 529.777 
                                 125115                      245 
                           /-------+--------\                |
                       27300.6                 1           1.42729 
                       FETCH                GRPBY          TBSCAN
                       (   6)               (  10)         (  13)
                       2612.27              39.9434        529.777 
                       871.255              4.55094          245 
                    /----+----\               |              |
                27300.6    9.61044e+007     19091.1         7185 
                RIDSCN   TABLE: SCHEMA1     IXSCAN    TABLE: SCHEMA1
                (   7)       POSITION       (  11)       BREADCRUMB
                905.859         Q7           1534            Q1
                102.546                     174.777 
                  |                           |
                27300.6                  9.61044e+007 
                SORT                   INDEX: SCHEMA1
                (   8)                  POS_DRIVER_NDX
                905.859                       Q4
                (   9)
           INDEX: SCHEMA1

Looking at this explain plan, we can see that most of the expense comes in with operator #5 – an NLJOIN that is joining the POSITION table to ITSELF.


I immediately thought that rewriting might help this particular query significantly. If I could just make that join more efficient somehow. Both accesses to the table were through indexes, and one of them through index-only access.

I first tried to break out the correlated subqueries into Common Table Expressions (CTEs). This bumped up my timeron count to over 5 million, five times worse than the original. We can’t all come up with the perfect answer the first time. Then, as I was continuing to look at the query, I realized that I could write the distinct as a group by instead. I rewrote the query to this:

select pos.driver_id
        , max(pos_timestamp) as max_pos_timestamp
        ,(select max(pos_timestamp) from schema1.breadcrumb as bc
            where bc.driver_id = pos.driver_id) as last_breadcrumb_ts
    from schema1.position as pos
    where pos_timestamp > current_timestamp - 6 hours
    group by pos.driver_id
    order by max_pos_timestamp

In every scenario that I could come up with, the results from the two queries were the same. I asked the application owner to verify that the results of this other way of writing the query were indeed what he needed.
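The equivalence of the two formulations can also be sanity-checked on toy data. The following Python sqlite3 sketch uses a simplified, single-table version of the query (the table contents are invented, and the 6-hour predicate and breadcrumb subquery are dropped for brevity):

```python
import sqlite3

# Toy stand-in for SCHEMA1.POSITION.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE position (driver_id INT, pos_timestamp TEXT);
    INSERT INTO position VALUES
        (1, '2016-05-05 10:00'), (1, '2016-05-05 11:00'),
        (2, '2016-05-05 09:30');
""")

# Original style: DISTINCT plus a correlated subquery (a self-join).
q1 = conn.execute("""
    SELECT DISTINCT driver_id,
           (SELECT MAX(pos_timestamp) FROM position p2
             WHERE p2.driver_id = p.driver_id) AS max_ts
    FROM position AS p
    ORDER BY max_ts
""").fetchall()

# Rewritten style: GROUP BY with MAX, one pass over the table.
q2 = conn.execute("""
    SELECT driver_id, MAX(pos_timestamp) AS max_ts
    FROM position
    GROUP BY driver_id
    ORDER BY max_ts
""").fetchall()

print(q1 == q2)  # True
```

On real data the correlated form forces the table to be joined to itself, which is what the expensive NLJOIN in the explain plan above reflected; the group-by form needs only a single scan and aggregation.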

The cost of the second query was just 17,489 Timerons! That was a 98.5% reduction in the cost of the query. Here’s what the explain plan looks like for the rewritten query:

Access Plan:
	Total Cost: 		17489.1
	Query Degree:		1

                              (   1)
                              (   2)
                        5034             1 
                       TBSCAN         GRPBY 
                       (   3)         (  12)
                       2622.28        529.777 
                       871.255          245 
                         |              |
                        5034          1.42729 
                       SORT           TBSCAN
                       (   4)         (  13)
                       2622.12        529.777 
                       871.255          245 
                         |              |
                        5034           7185 
                       GRPBY     TABLE: SCHEMA1
                       (   5)       BREADCRUMB
                       2620.41          Q1
                       (   6)
                       (   7)
                       (   8)
                27300.6    9.61044e+007 
                RIDSCN   TABLE: SCHEMA1
                (   9)       POSITION
                905.859         Q4
                (  10)
                (  11)
           INDEX: SCHEMA1

According to the DB2 index advisor, there are also indexes that I can add to reduce each side of the explain to index-only access, further reducing the cost by about 80%.


Some say that the DB2 optimizer is so good that you don’t have to rewrite queries. A significant portion of the time I find that to be true, but there are some edge cases like this type of query that are the exception. An SQL code review with a talented DB2 DBA can cut minutes or hours off of query execution time.

The post DB2 SQL: Rewriting a Distinct with a Correlated Sub-Query to a Group By for Performance Improvement appeared first on Xtivia.


May 04, 2016

Dave Beulke

DB2 LUW Version 11- 5 Great New Features and Many More to Come

DB2 LUW Version 11 was unveiled last Tuesday, April 12th. The announcement described the many tremendous items which are included in the new DB2 LUW 11. Being part of the DB2 LUW 11 Beta program has been exciting. Now that IBM has announced it, I’ve highlighted some of the new features in DB2 LUW...

(Read more)

Leons Petrazickis

Preserve bash history across multiple terminals and sessions

Is it possible to configure the terminal to preserve bash history? Yes, it's easy to configure bash so that it preserves history across sessions and tabs. The post Preserve bash history across...

(Read more)

Dave Beulke

DB2 LUW Version 11: 5 BLU Performance Improvements for Applications

The DB2 LUW Version 11 announcement described many DB2 BLU performance improvements and below are five important ones that will help your DB2 LUW applications improve performance. What is good about most of these DB2 LUW Version 11 BLU performance improvements is that they are built into DB2 LUW...

(Read more)
