
Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.


November 23, 2017

Craig Mullins

Happy Thanksgiving 2017

Today, November 23rd, in the United States of America, we celebrate Thanksgiving by gathering together with our loved ones and giving thanks for what we have.  Typically, this involves celebrations with food, traditionally a big turkey dinner with stuffing, mashed potatoes and gravy, as we watch a parade and football games. I plan to follow this tradition to the letter this year and I wish...

(Read more)

November 21, 2017

Leons Petrazickis

Python Library of the Day: retrying

I’ve learned through extensive experience that Bash is the wrong choice for anything longer than a few lines. I needed to write a command line app, so I put one together in Python —...

(Read more)

Robert Catterall

Db2 12 SQL Enhancement: Temporal Logical Transactions

Temporal data support, introduced with Db2 10 for z/OS, is one of the more interesting SQL-related Db2 enhancements delivered in recent releases of the DBMS. Temporal data support comes in two flavors (which can both be utilized for a single table): business-time temporal and system-time temporal. With business-time temporal support enabled for a table, an organization can put future changes into the table (e.g., price changes for products or services that will not go into effect until...

(Read more)


The Evolution of Db2 Compression

You don't hear much about data compression these days, but recently I encountered a customer who was curious about it. He said his company never used compression due to the belief that the CPU overhead was too high, but he was wondering if the feature has improved, and if so, how you can determine which tables will benefit from compression.

November 17, 2017

Data and Technology

SQL Coding and Tuning for Efficiency

Coding and tuning SQL is one of the most time consuming tasks for those involved in coding, managing and administering relational databases and applications. There can be literally thousands of...

(Read more)

November 14, 2017

Leons Petrazickis

Kudos to Firefox team on Quantum release

The new Firefox Quantum release is incredibly fast. It feels faster than Chrome, faster than old Firefox, and faster than all the other browsers on my MacBook. Impressively, despite Firefox ditching...

(Read more)

Kim May

Webinar Thursday: Calling IBM Sellers with Q4 Deals!

Sirius/XM already has the Holly channel playing Christmas music and Halloween was just a couple weeks ago. It’s way too early!  But…the clock IS ticking and there really are only 4 solid weeks left...

(Read more)


The Future of Db2 Documentation: More PDFs, More Frequent Updates.

As I've noted a few times, I maintain a PDF library of Db2 for z/OS documentation. I just find it easier to search PDF docs as opposed to looking up information online.

Henrik Loeser

Latest News on Bluemix and IBM Cloud

IBM Cloud News Sometimes it's quite hard to keep an overview of what is going on with the IBM Cloud. I had been out on vacation and needed to catch up. Want to learn with me? Here is some of the...

(Read more)

November 13, 2017

Craig Mullins

The Db2 12 for z/OS Blog Series - Part 19: Profile Monitoring Improvements

The ability to monitor Db2 using profile tables is a newer, though by no means brand new capability for Db2 DBAs. You can use profile tables to monitor and control various aspects of Db2 performance such as remote connections and certain DSNZPARMs. But this blog post is not intended to describe what profile monitoring is, but to discuss the new capabilities added in Db2 12 to enhance profile...

(Read more)

November 11, 2017

Henrik Loeser

Use Db2 as Cloud SQL Database with Python

Load Data into IBM Db2 on Cloud Over the Summer I learned that Python is top in the IEEE programming languages ranking. It is also my favorite language for quickly coding tools, web apps and...

(Read more)

November 10, 2017

DB2Night Replays

The DB2Night Show #198: The 5 W's of Db2 HADR with guest Dale McInnis, IBM

@dalemmcinnis Special Guest: Dale McInnis STSM / NA Data Server Tech Sales IBM Canada The 5 W's of Db2 HADR 98% of our audience learned something! What is HADR? Why should you use HADR? Where should HADR be deployed? When should you use HADR? Who is using HADR? And, bonus, How is HADR being used by Db2 Customers? Our live audience was HUGE- clearly HADR is an important, and popular, topic of concern for the Db2 community! Watch and...

(Read more)

November 09, 2017

Leons Petrazickis

Cryptocurrency and irreversible transactions

There’s a current news story about a wallet blunder freezing up $280,000,000 of Ether, a cryptocurrency. I try to avoid posting too much opinion on my blog, but I do have a view on this....

(Read more)

November 08, 2017

Henrik Loeser

EU Cloud: IBM gives client full control over their data

IBM Cloud: Have full control over your data Today, IBM announced for December the roll-out of a new support model and capabilities for IBM Cloud. Based on the announcement IBM is in the process of...

(Read more)

November 07, 2017


IBM Champions: Nomination Period Ends Soon

I've been an IBM employee for the past two years. Prior to that, I was an IBM Champion.

November 06, 2017

Use the Index, Luke

Big News In Databases — Fall 2017

Don’t fall behind: Here’s the most important database news from the last six months.

SQL on the Rise

NoSQL pioneer Google writes that their Spanner database is becoming a SQL system (summary). Salesforce uses Artificial Intelligence to translate natural language to SQL. Even some ORM vendors say it is better to use SQL. Seems like SQL has a positive vibe.

The highly popular article “Why SQL is beating NoSQL, and what this means for the future of data” is a very nice write-up of why SQL had become frowned upon, and how it is now gaining popularity again.

The Cloud War Continues

In the previous edition, I reported about the sudden spike in license costs for the Oracle database in Amazon’s AWS and Microsoft’s Azure cloud environments.

In July the opposite happened for users of Microsoft SQL Server: the cost of running the Standard Edition in Amazon’s AWS cloud was reduced by between 29 and 52 percent.

IBM’s Renaming Insanity

There is a saying that there are only two hard things in computer science: (0) cache invalidation, (1) naming things, (2) and off-by-one errors. In June, IBM renamed the products in the Db2 family and thereby demonstrated how to cause great harm by choosing poor names.

Old Name           New Name
DB2 for LUW        Db2
DB2 for z/OS       Db2 for z/OS
DB2 for iSeries    Db2 for i

Note the subtle but groundbreaking innovation to write the “b” in Db2 as a lower case letter.

All sarcasm aside: Previously, DB2 was a common element in the names of different products of the same family. The “for” addendum made a distinction between each product. As these products offer different features, the distinction is quite important.

The new name Db2 doesn’t allow this distinction anymore because it represents the whole family as well as one specific product. This lack of differentiation becomes a real problem when searching the internet: webpages about the former product DB2 for LUW might not contain “LUW” anymore. Reducing the iSeries addendum to i doesn’t improve searching either.

Wikipedia says “A name is a term used for identification.” I doubt the new names fulfill this purpose sufficiently.

One thing is for sure: the new naming is an upgrade for the LUW version. The reverse of this conclusion shines an interesting light on the other variants. IBM is also losing ground in the just-published Gartner Magic Quadrant for Operational Databases 2017—it’s now far behind SAP and Amazon Web Services (AWS).

Other Vendors Rethink Release Numbering

I have already mentioned that the next MySQL major release after 5.7 will be MySQL 8.0. In the meantime, a release candidate is available.

The next major release of the Oracle Database will be 18c. New releases will be annual, and the version number will be the last two digits of the release year (see also: release roadmap).

Starting with the just released PostgreSQL 10, there is no dot in the major versions of PostgreSQL anymore.

New Database Releases

In the past six months there were three major releases among the most popular SQL databases.

SQL Server 2017 (October 2017)

My personal picks from the “new features” list:

PostgreSQL 10 (October 2017)

My favorite new features:

MariaDB 10.2 (May 2017)

MariaDB 10.2 introduces two very important features that will also appear in MySQL 8.0:

News on my Sites

Articles I published or updated:

From Twitter, in Great Brevity (follow me on Twitter)

“Big News In Databases — Fall 2017” by Markus Winand was originally published at Use The Index, Luke!.


November 03, 2017

DB2Night Replays

The DB2Night Show #Z81: Experiences using Db2 z Transparent DS Encryption

Presented by: James Pickel DB2 for z/OS SWAT team "The DB2Night Show #Z81: Early experiences using Db2 for z/OS Transparent Data Set Encryption" Replays available in WMV and M4V formats! 100% of our studio audience learned something! Jim described pervasive encryption for DB2 and the steps to implement it. Watch the replay...

(Read more)

Triton Consulting

IBM Db2 12 for z/OS Technology Workshop & Migration Planning

Join Tom Crocker, Rob Gould and Karen Wilkins for a Db2 12 Technology Workshop and Migration Planning event. Run by IBM, the one day, face to face event will take place on Monday, 13th November at South Bank, London. Spaces … Continue reading →

(Read more)

November 01, 2017

Adam Gartenberg

IBM Docs 2.0 CR3 Now Available - Track Changes and Linux Conversion Server

The IBM Docs team shipped Docs 2.0 CR3 yesterday, with a couple of much sought-after features, notably a conversion server on Linux and track changes functionality: Support for the Conversion Server...

(Read more)

October 31, 2017


Planning for a Db2 for z/OS Upgrade: Application Testing

I recently received this question about upgrading to Db2 12:
"Our shop will be upgrading to DB2 V12 z/OS and typically we do extensive testing with any new release in both CM and NFM. If we will not be using any of the new functionalities in our applications, and we test in CM (function level 100), do we really need to test again in NFM (function level 500)?"

October 29, 2017

Big Data University

From Python Nested Lists to Multidimensional numpy Arrays

Dealing with multiple dimensions is difficult, and this is compounded when working with data. This blog post acts as a guide to help you understand the relationship between different dimensions, Python lists, and numpy arrays, as well as some hints and tricks for interpreting data in multiple dimensions. We provide an overview of Python lists and numpy arrays, clarify some of the terminology, and give some helpful analogies for dealing with higher-dimensional data.


Before you create a deep neural network in TensorFlow, build a regression model, predict the price of a car, or visualize terabytes of data, you’re going to have to learn Python and deal with multidimensional data. This blog post expands on our introductory course on Python for Data Science to help you deal with nested lists in Python and give you some ideas about numpy arrays.

Nesting involves placing one or more Python lists inside another Python list. You can apply it to other data structures in Python, but we will stick to lists. Nesting is a useful feature, but sometimes the indexing conventions can get a little confusing, so let’s clarify the process, expanding on our courses on Applied Data Science with Python. We will review nesting lists to create 1-, 2-, 3-, and 4-dimensional lists, and then convert them to numpy arrays.

Lists and 1-D Numpy Arrays

Lists are a useful datatype in Python; they can be written as comma-separated values. You can change the size of a Python list after you create it, and lists can contain integers, strings, floats, Python functions, and much more. Indexing for a one-dimensional (1-D) list in Python is straightforward; each index corresponds to an individual element of the list. Python’s list convention is shown in figure 1, where each item is accessed using the name of the list followed by square brackets. For example, A[0] returns "0", meaning that the zeroth element of the list contains the string "0". Similarly, the value of A[4] is the integer 4. For the rest of this blog, we are going to stick with integer values and lists of uniform size, as you may see in many data science applications.

Figure 1:  Indexing Conventions for a list “A”
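The list from figure 1 isn’t reproduced here, but a hypothetical list matching the values mentioned in the text (the middle elements are placeholders) illustrates the convention:

```python
# Hypothetical list matching the text: A[0] is the string "0",
# A[4] is the integer 4; the middle elements are placeholders.
A = ["0", 1, "two", "3", 4]

print(A[0])  # "0" -- the zeroth element is the string "0"
print(A[4])  # 4   -- the fifth element is the integer 4
```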

Lists are useful, but for numerical operations such as the ones you will use in data science, Python has many useful libraries; one of the most commonly used is numpy.

From Lists to 1-D Numpy Arrays

Numpy is a fast Python library for performing mathematical operations. The key class in this framework is the ndarray; we will refer to objects of this class as numpy arrays. Some key differences from lists: numpy arrays have a fixed size, and they are homogeneous, i.e. they can contain only one type, such as floats or strings. You can easily convert a list to a numpy array; for example, if you would like to perform vector operations, you can cast a list to a numpy array. In example 1 we import numpy and then cast the two lists to numpy arrays:


import numpy as np
u = np.array([1, 0])
v = np.array([0, 1])



Example 1: casting list [1,0] and [0,1] to a numpy array u and v.


If you check the type of u or v (type(v)) you will get "numpy.ndarray". Although u and v are points in a 2-D space, their dimension is one; you can verify this using the attribute ndim. For example, v.ndim will output one. In numpy, dimensions or axes are better understood in the context of nesting, which will be discussed in the next section. It should be noted that sometimes the attribute shape is referred to as the dimension of the numpy array.

The numpy array has many useful properties, for example vector addition; we can add the two arrays as follows:
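The code for this example appears only as an image in the original post; a minimal sketch, assuming u and v from Example 1, is:

```python
import numpy as np

u = np.array([1, 0])
v = np.array([0, 1])

# Vector addition: each component is added independently.
z = u + v
print(z)  # array([1, 1])
```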




Example 2: add numpy arrays u and v to form a new numpy array z.


The term "z: array([1, 1])" means the variable z contains an array. The actual vector operation is shown in figure 2, where each component of the vector has a different color.

Figure 2:  Example of vector addition


Numpy arrays also follow similar conventions for vector-scalar multiplication; for example, you can multiply a numpy array by an integer or float:
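The code for this example is also an image in the original; a sketch, assuming the array y = np.array([1, 2]), is:

```python
import numpy as np

y = np.array([1, 2])

# Scalar multiplication: every component is multiplied by 2.
z = 2 * y
print(z)  # array([2, 4])
```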





Example 3.1: multiplying the numpy array y by the scalar 2.


The equivalent vector operation is shown in figure 3:

Figure 3: Vector scalar multiplication, shown in Example 3.1



As with lists, you can access the elements individually; for example, u[0] returns 1. Many numpy array operations differ from vector operations: in numpy, multiplication does not correspond to the dot product or matrix multiplication but to element-wise multiplication, like the Hadamard product. We can multiply two numpy arrays as follows:
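A sketch of this operation, again assuming u and v from Example 1:

```python
import numpy as np

u = np.array([1, 0])
v = np.array([0, 1])

# The * operator performs element-wise (Hadamard) multiplication,
# not a dot product.
z = u * v
print(z)  # array([0, 0])
```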






Example 3.2: multiplying two numpy arrays u and v


The equivalent operation is shown in figure 4:

Figure 4: multiplication of two numpy arrays expressed as a Hadamard product.


Nesting Lists and 2-D Numpy Arrays

Nesting lists is where things get interesting, and a little confusing; this 2-D representation is important, as tables in databases, matrices, and grayscale images follow this convention. When each of the nested lists is the same size, we can view it as a 2-D rectangular table as shown in figure 5. The Python list “A” has three lists nested within it; each is represented as a different color. Each list is a different row in the rectangular table, and each column represents a separate element in the list. In this case, we index the elements of the list by row and column number respectively.

Figure 5: List “A” with nested lists represented as a table



In Python, to access an element of a nested list, we use two brackets: the first bracket corresponds to the row number and the second to the column. This indexing convention is shown in figure 6; the top part of the figure corresponds to the nested list, and the bottom part to the rectangular representation.

Figure 6: Index conventions for list  “A” also represented as a table


Let’s see some examples in figure 7. Example 1 shows the syntax to access element A[0][0], example 2 shows the syntax to access element A[1][2], and example 3 shows how to access element A[2][0].
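The actual list in the figures isn’t reproduced here, but with a hypothetical 3x3 nested list whose values encode their row and column, the three accesses read:

```python
# Hypothetical 3x3 nested list; the value ij sits at row i-1, column j-1.
A = [[11, 12, 13],
     [21, 22, 23],
     [31, 32, 33]]

print(A[0][0])  # 11: first row, first column
print(A[1][2])  # 23: second row, third column
print(A[2][0])  # 31: third row, first column
```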

Figure 7: Example of indexing elements of a list.



We can also view the nesting as a tree, as we did in Python for Data Science, as shown in figure 8. The first index corresponds to the first level of the tree, and the second index to the second level.

Figure 8: Representing the nested list as a tree


2-D numpy arrays

It turns out we can cast nested lists into a 2-D array, with the same index conventions. For example, we can convert the following nested list into a 2-D array:


V = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

Example 4: creating a 2-D array, or an array with two axes


The convention for indexing is exactly the same; we can represent the array in table form, as in figure 5. In numpy, the dimension of this array is 2. This may be confusing, as each column contains linearly independent vectors; in numpy, the dimension can be seen as the number of nested lists. 2-D arrays share similar properties with matrices, like scalar multiplication and addition. For example, adding two 2-D numpy arrays corresponds to matrix addition.
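The code for Example 5.1 is an image in the original post; a sketch with hypothetical 2-D arrays:

```python
import numpy as np

# Hypothetical matrices for illustration.
X = np.array([[1, 0], [0, 1]])
Y = np.array([[2, 1], [1, 2]])

# Adding two 2-D numpy arrays corresponds to matrix addition.
Z = X + Y
print(Z.tolist())  # [[3, 1], [1, 3]]
```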






Example 5.1: the result of adding two numpy arrays


The resulting operation corresponds to matrix addition as shown in figure 9:

Figure 9: An example of matrix addition.


Similarly, multiplication of two arrays corresponds to an element-wise product:
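The code for Example 5.2 is also an image in the original; a sketch with hypothetical 2-D arrays:

```python
import numpy as np

# Hypothetical matrices for illustration.
X = np.array([[1, 0], [0, 1]])
Y = np.array([[2, 1], [1, 2]])

# The * operator gives the element-wise (Hadamard) product,
# not matrix multiplication.
Z = X * Y
print(Z.tolist())  # [[2, 0], [0, 2]]
```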






Example 5.2: the result of multiplying two numpy arrays


Or Hadamard product:

Figure 10: An example of the Hadamard product.


To perform standard matrix multiplication you would use, Y). In the next section, we will review some strategies to help you navigate your way through arrays in higher dimensions.

Nesting a List within a List within a List, and 3-D Numpy Arrays

We can nest three lists, each of which in turn has nested lists that have their own nested lists, as shown in figure 11. List “A” contains three nested lists, each color-coded. You can access the first, second, and third lists using A[0], A[1], and A[2] respectively. Each of these lists contains three nested lists. We can represent these nested lists as rectangular tables, as shown in figure 11. The same indexing conventions apply to these lists as well; we just add a third bracket. This is also demonstrated at the bottom of figure 11, where the three rectangular tables contain the syntax to access the values shown in the tables above.

Figure 11: List with three nested lists; each nested list has three nested lists.



Figure 12 shows an example of accessing the element at index A[0][2][1], which contains the value 132. The first index, A[0], contains a list of three lists, which can be represented as a rectangular table. We use the second index, i.e. A[0][2], to access the last list contained in A[0]; in the table representation, this corresponds to the last row of the table. The list A[0][2] is [131, 132, 133]. As we are interested in accessing the second element, we simply append the index [1]; the final result is A[0][2][1].
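A sketch of this access; only the list A[0][2] = [131, 132, 133] is stated in the text, so the rest of the structure is a plausible reconstruction following the same value pattern:

```python
# Hypothetical 3-D nested list; the value at A[i][j][k] is
# (i+1)*100 + (j+1)*10 + (k+1).
A = [[[111, 112, 113], [121, 122, 123], [131, 132, 133]],
     [[211, 212, 213], [221, 222, 223], [231, 232, 233]],
     [[311, 312, 313], [321, 322, 323], [331, 332, 333]]]

print(A[0][2])     # [131, 132, 133] -- the last list in A[0]
print(A[0][2][1])  # 132 -- its second element
```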

Figure 12: Visualization of obtaining A[0][2][1]




A helpful analogy is finding a room in an apartment building on a street, as shown in figure 13. The first index of the list represents the address on the road, shown in figure 13 as depth. The second index represents the floor where the room is situated, depicted by the vertical direction in figure 13. To keep consistent with our table representation, the lower levels have a larger index. Finally, the last index corresponds to the room number on a particular floor, represented by the horizontal arrow.

Figure 13: Street analogy for list indexing



For example, in figure 14 the element A[2][2][1] corresponds to building 2, first floor, middle room; the actual element is 332.

Figure 14: Example of list indexing using the street analogy



3D Numpy Arrays

The mathematical operations for 3-D numpy arrays follow similar conventions, i.e. element-wise addition and multiplication, as shown in figures 15 and 16. In the figures, the first index or dimension of X and Y corresponds to an element in the square brackets, but instead of a number we have a rectangular array. When we add or multiply X and Y, each element is added or multiplied independently. More precisely, each 2-D array (represented as a table) in X is added to or multiplied with the corresponding array in Y, as shown on the left; within those arrays, the same conventions of 2-D numpy addition are followed.
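A sketch of these element-wise operations with two small hypothetical 3-D arrays:

```python
import numpy as np

# Two hypothetical 2 x 2 x 2 arrays.
X = np.array([[[1, 0], [0, 1]],
              [[2, 0], [0, 2]]])
Y = np.ones((2, 2, 2), dtype=int)

# Addition and multiplication are element-wise, exactly as in 2-D.
print((X + Y)[0].tolist())  # [[2, 1], [1, 2]]
print((X * Y)[1].tolist())  # [[2, 0], [0, 2]]
```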

Figure 15: Add two 3D numpy arrays X and Y.


Figure 16: Multiplying two 3D numpy arrays X and Y.



Beyond 3D Lists

Adding another layer of nesting gets a little confusing; you can’t really visualize it, as it can be seen as a 4-dimensional problem, but let’s try to wrap our heads around it. Examining figure 17, we see that list “A” has three lists; each list contains two lists, which in turn contain two lists nested in them. Let’s go through the process of accessing the element that contains 3122. The third element, A[2], contains two lists; in figure 17 we use depth to distinguish them. We can access the second list using the second index, as follows: A[2][1]. This can be viewed as a table, and from this point we follow the table conventions of the previous example, as illustrated in figure 17.

Figure 17: Example of an element in a list, within a list, within a list nested in list “A”



We can also use the apartment analogy, as shown in figure 18; this time the new list index is represented by the street name, 1st Street or 2nd Street. As before, the second list index represents the address, the third list index represents the floor number, and the fourth index represents the apartment number. The analogy is summarized in figure 18. For example, directions to element A[2][1][0][0] would be: 2nd Street, Building 1, Floor 0, Room 0.
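The indexing path itself can be sketched with a hypothetical 4-D nested list (3 streets, 2 buildings, 2 floors, 2 rooms; the values are placeholders):

```python
import numpy as np

# Hypothetical 4-D structure built from a range of placeholder values.
A = np.arange(24).reshape(3, 2, 2, 2).tolist()

# 2nd Street (index 2), Building 1, Floor 0, Room 0:
print(A[2][1][0][0])  # 20
```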

Figure 18: Street analogy for figure 17



We have seen that you can store multiple dimensions of data in a Python list, and that a numpy array is a more widely used way to store and process data. In both cases, you can access each element using square brackets. Although numpy arrays behave like vectors and matrices, there are some subtle differences in many of the operations and in terminology. Finally, when navigating your way through higher dimensions, it’s helpful to use analogies.

The post From Python Nested Lists to Multidimensional numpy Arrays appeared first on Cognitive Class.


October 27, 2017

Big Data University

Data Science Survey: The Results Are In!

Last week we ran a Data Science survey asking four simple questions to our community. In this post, I’ll show you the results of our survey and provide you with a Jupyter notebook; just in case you want to play with the data yourself.


2,233 people participated in the survey. This is a statistically significant sample of our students, but not of the Data Science community in general. Among other factors, Cognitive Class’s catalog of courses influences who we attract to our site and ultimately who responded to the survey.

Data Science Survey Q1: What’s your level of interest for the following technologies?

We presented respondents with eight data-related technologies and asked them to express their level of interest for each of them. The chart below shows the results.


Data Science Survey - Technologies

As expected, there is a high degree of interest (green bars) for Data Science, Big Data, and AI. Virtually everyone showed some degree of interest for these three categories.

Participants showed relatively low interest in hot technologies such as Blockchain, Virtual Reality, and Chatbots. I was somewhat surprised by this result. Though, as the author of our first Chatbot course and an enthusiast of cutting edge technology, I might be biased. 😉

Perhaps, our learners are primarily professionals who might not have yet a concrete business application for these emerging, but still green, technologies. But this is just speculation, of course.

Data Science Survey Q2: What’s your level of interest for the following areas of Data Science?

Our second question drilled down to the Data Science field, asking about the level of interest for specific areas of Data Science.


Data Science Survey - Areas of Data Science

The data shows a strong interest in all areas of Data Science, except for Data Journalism, which received a lukewarm response. If you are interested in this topic, I highly recommend taking our Data Journalism course. Storytelling is underrated, and I think it will benefit your Data Science career even if you aren’t a journalist.

Data Science Survey Q3: Which programming language for Data Science are you most interested in?

Our third question narrowed the scope further to the programming language of choice for Data Science.


Data Science Survey - Programming Languages

Almost half of the respondents use or have an interest in Python for Data Science. R and SQL sit strong at 20.96% and 12.4%, respectively. No huge surprises here, but I was expecting Scala to take fourth place. Instead, Java appears to be ahead of it, with JavaScript in sixth place, beating Julia by a wide margin.

Julia is actually a fantastic language for Data Science and I’d love to see it grow in popularity. Its performance characteristics alone are noteworthy. Unfortunately, it’s still somewhat niche in the Data Science community in general, and clearly among our students. (If you’d like to change this by authoring a course on the subject, feel free to get in touch with us.)

What’s interesting about this question is that we allowed an open-ended Other option. As a result, we truly experienced the diversity of languages people adopt to perform Data Science. Our respondents also mentioned C#, Clojure, Perl, C, and a few other programming languages.

Data Science Survey Q4: Which Data Science tool are you most interested in?

Finally, we asked about the primary tool or IDE of choice.


Data Science Survey - Tools

Respondents could only pick their most used tool, so it’s not surprising to see Hadoop and Spark do so well among our respondents, who showed a clear inclination for Big Data.

RStudio is also fairly popular at 15.99%, a figure somewhat in line with the results of the previous question. The primary R tool is more popular than any single Python tool among our respondents.

Please note that there is no contradiction here. Python users simply had more choices available, splitting the vote between IBM DataScience Experience (IBM DSX for short), Anaconda, and Jupyter. Combined, over 35% of respondents selected Python tools as their primary tool for Data Science, confirming that Python is at least twice as popular as R among our users.

There you have it. It will be interesting to see how these results change over time. In the meantime, feel free to play with the data yourself by using the Jupyter notebook created by my colleague Alex Aklson, author of the excellent Data Visualization with Python course.

If you enroll in his course, you’ll have access to our Labs environment to run the Data Science Survey notebook in the cloud, without having to install anything on your machine. Alternatively, you can sign up with a professional Data Science tool like IBM Data Science Experience.

Where to learn more

Since most of our respondents showed a great deal of interest in Data Science with Python and Big Data, allow me to recommend a couple of resources useful to learn more about these topics:

And if your interest lies elsewhere, feel free to check out our other learning paths and courses. All available for free.

The post Data Science Survey: The Results Are In! appeared first on Cognitive Class.

Triton Consulting

DB2 11 Performance: BLU Hits and Misses on the DB2Night Show

A forthcoming date for your diary that you won’t want to miss! On Friday 15th December, Mark Gillis, IBM Champion and Principal Consultant at Triton Consulting will be the guest presenter once again on DBI Software’s DB2Night Show. Based on … Continue reading →

(Read more)

Henrik Loeser

Cloud Foundry Logging Sources Deciphered

Ever deployed a cloud foundry app and ran into errors? I did and still do. My typical reaction is to request the recent app logs and to analyse them for the root cause. The logs contain those strange...

(Read more)


October 24, 2017

Data and Technology

IT Through the Looking Glass

Sometimes I look for inspiration in what may seem — at first glance — to be odd places.  For example, I think the Lewis Carroll “Alice in Wonderland” books offer sage advice for the IT...

(Read more)

ChannelDB2 Videos

Tutorial Part 1 - Transaction Logging and Buffer Pool Page Cleaning


Tutorial Part 1 - Transaction Logging and Buffer Pool Page Cleaning Happy Learning & Sharing


Contest Introduces Students to the Mainframe

For years I've discussed the aging mainframe workforce and the need to get young IT pros onto the platform. I've written about zNextGen, the great SHARE program that helps recent graduates who are entering the workforce connect with one another. I've also covered the IBM Academic Initiative, which works with high schools and universities to provide the curriculum and access to systems that educators need to teach mainframe skills to the next generation. The Academic Initiative has provided mainframe training and resources to students at more than 1,000 schools in 70 countries.

Triton Consulting

The IDUG Buzz

In the lead up to an IDUG EMEA Technical Conference the Triton office can seem a bit fraught. The technical team are usually tweaking their presentations whilst I’m finalising plans for the Triton and DBI Software drinks reception. This year … Continue reading →

(Read more)

October 20, 2017

DB2Night Replays

The DB2Night Show #197: Db2 IoT, Project Pollinator, and GWLM

Follow @Roger_E_Sanders Follow @IBM_Paul_Bird Special Guests: Roger Sanders and Paul Bird, IBM Db2 Science Projects: IoT Project Pollinator and GWLM Tool 100% of our audience learned something! It's a special show when we have two guests! Roger Sanders introduced us to Project Pollinator and showed us his pet Internet of Things (IoT) Db2 project - Fascinating! And Paul Bird introduced us to a rather new "free" Graphical Workload Manager...

(Read more)



