IBM Db2 for IBM z/OS and IBM WebSphere Play Critical Roles in the World of Middleware
by IBM Systems Magazine | Nov 26, 2020 | Middleware | 1 comment
This week, I am continuing this series on middleware. Two weeks ago, in the first post of the series, I concentrated on four middleware categories, from database to web. Last week, my attention was on five additional categories spanning transaction processing monitors…
Hi,
I hope this is the right place to ask questions regarding DB2/400 or Db2.
I have issues with database performance; the IOP seems to have become the
bottleneck.
The machine I'm using is an iSeries P14 with 8 processors, 256 GB of memory, and about 10 TB of internal storage on SSDs.
My application serves banking transactions; it sits between the front-end
and back-end servers.
The application starts experiencing issues after reaching a certain number of transactions per second: the IOP shows heavy read/write usage in the stats, but only for a few minutes, then everything goes back to normal.
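For completeness, this is the kind of quick check I run to watch per-disk activity from SQL. It is only a rough sketch, assuming the QSYS2.SYSDISKSTAT service is available on our release; the column names vary by release, so I just select everything and look at the busy and read/write columns:

-- Assumption: QSYS2.SYSDISKSTAT is available on this IBM i release.
-- Column names differ between releases, so select everything and inspect
-- the busy percentage and read/write request columns per disk unit.
SELECT *
  FROM QSYS2.SYSDISKSTAT
  ORDER BY 1;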
After some discussions with the team, including people from IBM, we suspect a file with a very large number of records (200–300 million) that is accessed heavily. The team suggests splitting this file.
CPU usage is 15–20%.
So we assume that accessing (read/write/update) a very big database file is going to be a serious issue.
My questions:
1. Is our assumption right?
2. If yes, should I split the file into smaller files (which would be a bit difficult; see the sketch after these questions for one way the split could look), or simply replace the internal storage with external storage that may perform better?
3. Does IBM provide external storage types with much better performance?
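For illustration only, below is a rough sketch of what a split via SQL range partitioning might look like, as an alternative to physically separate files. The table name, column names, and date ranges are all made up, and I have not verified that partitioned tables are enabled on our system (I understand they may need the DB2 Multisystem feature), so please treat it purely as a sketch, not something we have tested.

-- Hypothetical sketch: range-partitioning a large transaction table by date.
-- TRANSLIB.TRANSHIST_PART and all column names are invented for illustration.
CREATE TABLE TRANSLIB.TRANSHIST_PART (
  TRANS_ID    BIGINT        NOT NULL,
  TRANS_DATE  DATE          NOT NULL,
  ACCOUNT_NO  CHAR(20)      NOT NULL,
  AMOUNT      DECIMAL(15,2) NOT NULL
)
PARTITION BY RANGE (TRANS_DATE)
(
  STARTING FROM ('2019-01-01') ENDING AT ('2019-12-31') INCLUSIVE,
  STARTING FROM ('2020-01-01') ENDING AT ('2020-12-31') INCLUSIVE
);

-- Existing rows would then have to be copied over, for example:
-- INSERT INTO TRANSLIB.TRANSHIST_PART SELECT * FROM TRANSLIB.TRANSHIST;

The idea would be that reads and writes for recent dates touch only one partition instead of the whole 200–300 million rows.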
Any advice is highly appreciated and thanks in advance.
Warm regards,
Rusdy Heriyanto