Planet DB2 is an aggregator of blogs about the IBM DB2 database server. We combine and republish posts by bloggers around the world. Email us to have your blog included.


September 18, 2018

Kim May

From Florence to IDUG – Thinking Good Thoughts for Charlotte

With Hurricane Florence finally moving north and giving North Carolina a break from the wind and rain, our thoughts are with friends, family and colleagues who live and work in the area.  If you were...

(Read more)

Triton Consulting

What’s coming up at IDUG EMEA 2018

The countdown is on! IDUG EMEA 2018 is just around the corner and it is set to be a great event. With so many fantastic speakers this year it’s going to be tough to plan your diary so we thought … Continue reading → The post What’s coming up at IDUG EMEA 2018 appeared first on Triton...

(Read more)

September 17, 2018

Data and Technology

How Much Data Availability is Enough?

I bet that some of you reading the title of this blog post scoffed at it. I mean, in this day and age, isn’t round-the-clock availability for every application and user just a basic requirement?...

(Read more)

September 15, 2018

Henrik Loeser

Tutorial on how to apply end to end security to a cloud application

Before you head out to the weekend I wanted to point you to a new cloud security tutorial. If you read this at the beginning of your week: What a great start... ;-) Did you ever wonder how different...

(Read more)

September 14, 2018

DB2Night Replays

Db2Night Show #z89: Are you just gonna play in the buffer pool or own it?

Presented by: Adrian Burke, DB2 for z/OS SWAT team. "The DB2Night Show #Z89: Are you just gonna play in the buffer pool or own it?" Replays available in WMV and M4V formats! 100% of our studio audience learned something! Adrian talked about changes to buffer pool behavior, performance, and management across recent DB2 releases. Watch the replay...

(Read more)


September 10, 2018

Craig Mullins

BMC and the Mainframe: An Ongoing Partnership

No, the mainframe is not dead… far from it. And BMC Software continues to be there with innovative tools to help you manage, optimize, and deploy your mainframe applications and systems. BMC, Db2 for z/OS, and Next Generation Technology One place that BMC continues to innovate is in the realm of Db2 for z/OS utilities. Not just by extending what they have done in the past, but by starting fresh...

(Read more)

September 05, 2018

Data and Technology

Database Performance Management Solutions

Performance management, from a database perspective, is comprised of three basic components: Monitoring a database system and applications accessing it to find problems as they arise. This is...

(Read more)

Henrik Loeser

Upcoming Db2 events for the German and European crowd

Upcoming Db2 events Are you back from vacation and need to get an overview of upcoming Db2 related Events? Here we go. Db2 Aktuell: Already this September 24-26 in Berlin, the conference offers...

(Read more)

August 30, 2018

Robert Catterall

How Big is Big? (2018 Update - Db2 for z/OS Buffer Pools and DDF Activity)

Almost 5 years ago, I posted to this blog an entry on the question, "How big is big?" in a Db2 for z/OS context. Two areas that I covered in that blog entry are buffer pool configuration size and DDF transaction volume. Quite a lot has changed since October 2013, and it's time for a Db2 "How big is big?" update. In particular, I want to pass on some more-current information regarding buffer pool sizing and DDF activity. How big is big? (buffer pools) Back in 2013, when I posted the...

(Read more)

August 28, 2018

Craig Mullins

Come See Me Speak at the Heart of America Db2 User Group on 2018-09-10

On September 10, 2018 I will be delivering two Db2 presentations at the Heart of America Db2 User Group (HOADb2UG). The meeting is being held in Kansas City... well, a suburb of Kansas City named Overland Park. Here is the exact address: KU Edwards Campus, Kansas University - Edwards Campus, 12600 Quivira Rd, Overland Park, KS 66213-2402. There are several other speakers at the event,...

(Read more)

August 23, 2018

Kim May

Frank Fillmore to Present on IDAA V7 (“Sailfish”) at the Baltimore/Washington Db2 Users Group

Following his successful hands-on IDAA class in Columbus, Frank Fillmore will be presenting on the IDAA at the September Baltimore/Washington Db2 Users Group, scheduled for Wednesday, September...

(Read more)

Data and Technology

SQL Basics

It is hard to imagine a time when SQL was unknown and not the lingua franca it is today for accessing databases. That said, there are still folks out there who don’t know what SQL is… so...

(Read more)

Scott Hayes

Db2 Performance: Tuning Db2 LUW Explains and db2advis, Part 1

Would you like Db2 LUW Explains and db2advis.exe to run faster and more efficiently? If so, then this blog is for you! Explain and Db2Advis are applications just like any other, and their performance can be improved by adding indexes to EXPLAIN and ADVISE tables! In this new series of Db2 LUW Performance blog posts, we will share with you our performance analysis of Db2 EXPLAIN and Db2Advis processing and SQL with the goal of ultimately...

(Read more)

August 21, 2018

Triton Consulting

How flexible is your DB2 support team?

Flexibility is the name of the game when it comes to DB2 support. If a problem is going to happen you can bet your lunch money on it happening in the middle of the night on a Sunday when your … Continue reading → The post How flexible is your DB2 support team? appeared first on Triton...

(Read more)
Big Data University

Data Science Professional Certificate

Today IBM and Coursera launched an online Data Science Professional Certificate to address the shortage of skills in data-related professions. This certificate is designed for those interested in a career in Data Science or Machine Learning, and equips them to become job-ready through hands-on, practical learning.

IBM Data Science Professional Certificate

In this post we look at

  • why this certificate is being created (the demand),
  • what is being offered,
  • how it differs from other offerings,
  • who it is for,
  • the duration and cost,
  • what outcomes you should expect,
  • and your next steps.

Why this Professional Certificate
Data is collected in every aspect of our existence. The true transformative impact of data is realizable only when we can mine and act upon the insights contained within the data. Thus it is no surprise to see phrases such as “data is the new oil” (Economist).

We see organizations in most spaces seeding data-related initiatives. Companies that leverage and act upon the gems of information contained within data will get ahead of the competition – or even transform their industries. The transformative aspect of data is also applicable to the not-for-profit sector, for the betterment of society and improving our existence.

A variety of data-related professions are relevant: Data Scientist, Data Engineer, Data Analyst, Database Developer, Business Intelligence (BI) Analyst, etc., and the most prominent of these is Data Scientist. It has been called “the sexiest job of the 21st century” by the Harvard Business Review, and Glassdoor calls it the “best job in America”.

Job listings and salary profiles for this profession clearly reflect this. When we talk to our clients we see a common thread: they can’t find enough qualified people to staff their data projects. This has created a tremendous opportunity for data professionals, especially Data Scientists.

In a recent report, IBM projected that “by 2020 the number of positions for data and analytics talent in the United States will increase by 364,000 openings, to 2,720,000”. The global demand is even higher.

Even though Data Science is “hot and sexy” and might enable you to get a great job today, the question is: will it continue to be in demand and important going forward?

I certainly believe so, at least for another decade or more. Data is being created and collected at a rapid pace, and the number of organizations leveraging data is also expected to increase significantly.

“IBM’s Data Science Professional Certificate on Coursera fulfills a massive need for more data science talent in the US and globally,” said Leah Belsky, Vice President of Enterprise at Coursera. “Coursera offers online courses on everything from computer science to literature, but over a quarter of all enrollments from our 7 million users in the US are in data science alone. We expect IBM’s certificate will become a valuable credential for people wanting to start a career in data science.”

What we offer
It is with this in mind that IBM developed the Data Science Professional Certificate. It consists of 9 courses that are intended to arm you with the latest job-ready skills and techniques in Data Science.

The courses cover a variety of data science topics, including open source tools and libraries, methodologies, Python, databases and SQL, data visualization, data analysis, and machine learning. You will practice hands-on in the IBM Cloud (at no additional cost) using real data science tools and real-world data sets.

The courses in the Data Science Professional Certificate include:

  1. What is Data Science
  2. Tools for Data Science
  3. Data Science Methodology
  4. Python for Data Science
  5. Databases and SQL for Data Science
  6. Data Visualization with Python
  7. Data Analysis with Python
  8. Machine Learning with Python
  9. Applied Data Science Capstone

How it is different
This professional certificate has a strong emphasis on applied learning. Except for the first course, all courses include a series of hands-on labs that are performed in the IBM Cloud (without any cost to you).

Throughout this Professional Certificate you are exposed to a series of tools, libraries, cloud services, datasets, algorithms, assignments and projects that will provide you with practical skills with applicability to real jobs that employers value, including:

  • Tools: Jupyter / JupyterLab, Zeppelin notebooks, RStudio, Watson Studio, Db2 database
  • Libraries: Pandas, NumPy, Matplotlib, Seaborn, Folium, ipython-sql, Scikit-learn, SciPy, etc.
  • Projects: random album generator, predict housing prices, best classifier model, battle of neighborhoods

Who this is for
Data Science is for everyone – not just those with a Master’s or Ph.D. Anyone can become a Data Scientist, whether or not you currently have computer science or programming skills. It is suitable for those entering the workforce as well as for existing professionals looking to upskill/re-skill themselves and get ahead in their careers.

In the Data Science Professional Certificate we start small, reinforce applied learning, and build up to more complex topics.

I consider a Data Scientist to be someone who can find the right data, prepare it, analyze and visualize it using a variety of tools and algorithms, build and run data experiments and models, learn from them, adjust and iterate as needed, and ultimately tell the story hidden within the data so it can be acted upon, either by a human or a machine.

If you are passionate about pursuing a career line that is in high demand with above average starting salaries, and if you have the drive and discipline for self-learning, this Data Science Professional Certificate is for you.

“Now is a great time to enter the Data Science profession, and IBM is committed to helping address the skills gap and promote data literacy,” says Leon Katsnelson, CTO and Director, IBM Developer Skills Network. “Coursera, with over 33 million registered learners, is a great platform for us to partner with and help with our mission to democratize data skills and build a pipeline of data-literate professionals.”

Cost and Duration
The courses in this certificate are offered online for self-learning and available for “audit” at no cost. “Auditing” a course gives you the ability to access all lectures, readings, labs, and non-graded assignments at no charge. If you want to learn and develop skills you can audit all the courses for free.

The graded quizzes, assignments, and verified certificates are only available with a low-cost monthly subscription (just $39 USD per month for a limited time). So if you require a verified certificate to showcase your achievement with prospective employers and others, you will need to purchase the subscription. Enterprises looking to skill their employees in Data Science can access the Coursera for Business offering. Financial aid is also available for those who qualify.

The certificate requires completion of 9 courses. Each course typically contains 3-6 modules with an average effort of 2 to 4 hours per module. If learning part-time (e.g. 1 module per week), it would take 6 to 12 months to complete the entire certificate. If learning full-time (e.g. 1 module per day) the certificate can be completed in 2 to 3 months.

Upon completing the courses in this Professional Certificate you will have done several hands-on assignments and built a portfolio of Data Science projects to provide you with the confidence to plunge into an exciting profession in Data Science.

Those pursuing a paid certificate will not only receive a course completion certificate for every course they complete but also receive an IBM open badge. Successfully completing all courses earns you the Data Science Professional Certificate as well as an IBM digital badge recognizing your proficiency in Data Science. These credentials can be shared on your social profiles such as LinkedIn, and also with employers.

Sample of IBM Data Science Professional Certificate

IBM and Coursera are also working together to form a hiring consortium. Learners who obtain the verified Certificate will be able to opt-in to have their resumes sent to employers in the consortium.

Next Steps
All courses in this Data Science Professional Certificate are live and available. Enroll today, start developing skills employers are looking for, and kickstart your career in Data Science.

The post Data Science Professional Certificate appeared first on Cognitive Class.


August 17, 2018

Henrik Loeser

Db2: Some Friday Fun with XML and SQL recursion

Recursion is fun!? Right now, Michael Tiefenbacher  and I have to prepare our joint talk "Some iterations over recursion" for the IDUG Db2 Tech Conference in Malta 2018. The title of our talk...

(Read more)

August 15, 2018

Henrik Loeser

IBM Cloud and Db2 News - August 2018 Edition

Catch up with news This Summer I have been away a few days here and there. Once back, I tried to catch up with changes for IBM Cloud and its services as well as with Db2. Here are links to what I...

(Read more)

Data and Technology

DBA Corner

Just a quick blog post today to remind my readers that I write a regular, monthly column for Database Trends & Applications magazine called DBA Corner. The DBA Corner is geared toward news,...

(Read more)

August 13, 2018

Craig Mullins

A Guide to Db2 Application Performance for Developers - New Book on the Way!

Those of you who are regular readers of my blog know that I have written one of the most enduring books on Db2 called DB2 Developer's Guide. It has been in print for over twenty years in 6 different editions. Well, the time has come for me to write another Db2 book. The focus of this book is on the things that application programmers and developers can do to write programs that perform well from...

(Read more)

August 08, 2018

Big Data University

IBM Partners with edX to launch Professional Certificate Programs

IBM has partnered with edX, the leading online learning destination founded by Harvard and MIT, for the delivery of several Professional Certificate programs. Professional Certificate programs are a series of in-demand courses designed to build or advance critical skills for a specific career.

“We are honored to welcome IBM as an edX partner,” said Anant Agarwal, edX CEO and MIT Professor. “IBM is defined by its commitment to constant innovation and its culture of lifelong learning, and edX is delighted to be working together to further this shared commitment. We are pleased to offer these Professional Certificate programs in Deep Learning and Chatbots to help our learners gain the knowledge needed to advance in these incredibly in-demand fields. Professional Certificate programs, like these two new offerings on edX, deliver career-relevant education in a flexible, affordable way, by focusing on the skills industry leaders and successful professionals are seeking today.”

edX is a great partner for us too, not just because they have an audience of over 17 million students, but because their mission of increasing access to high-quality education for everyone so closely aligns with our own.

“Today we’re seeing a transformational shift in society. Driven by innovations like AI, cloud computing, blockchain and data analytics, industries from cybersecurity to healthcare to agriculture are being revolutionized. These innovations are creating new jobs but also changing existing ones—and require new skills that our workforce must be equipped with. We are therefore taking up our responsibility by partnering with edX to make verified certificate programs available through their platform that will enable society to embrace and develop the skills most in demand,” said IBM Chief Learning Officer Gordon Fuller.

The IBM Skills Network (of which Cognitive Class is a part) also relies on Open edX — the open source platform that powers edX — and we plan to contribute enhancements back as well as support the development of this MOOC project. To learn more about how we use (and scale) Open edX, check out our recent post on the topic.

We are kicking off this collaboration with two Professional Certificate programs that might be of interest to you.

  • Deep Learning (the first course in the program, Deep Learning Fundamentals with Keras, is open for enrollment today and starts September 16)
  • Building Chatbots Powered by AI (the first course in the program, How to Build Chatbots and Make Money, is open for enrollment today and already running)

The chatbot program includes three courses:

1. How to Build Chatbots and Make Money;
2. Smarter Chatbots with Node-RED and Watson AI;
3. Programming Chatbots with Watson Services.

Those of you who are familiar with my chatbot course on Cognitive Class, will recognize the first course on the list. The key difference is that this version on edX includes a module on making money from chatbots.

Node-RED is a really cool visual programming environment based on JavaScript. With little programming skills and the help of this second course, you’ll be able to increase your chatbot’s capabilities and make it interact with other services and tools, including sentiment analysis, speech to text, social media services, and deployment on Facebook Messenger.

The last course in this chatbot program focuses on other Watson services, specifically the powerful combination of Watson Assistant and Watson Discovery to create smarter chatbots that can draw answers from your existing knowledge base.

All in all, this program is still accessible to people with limited programming skills, though you will get the most out of it if you are a programmer.

The Deep Learning program is aimed at professionals and students interested in machine learning and data science. Once completed, it will include five courses:

1. Deep Learning Fundamentals with Keras;
2. Deep Learning with Python and PyTorch;
3. Deep Learning with TensorFlow;
4. Using GPUs to Scale and Speed-up Deep Learning;
5. Applied Deep Learning Capstone Project.

The goal of these programs is to get you ready to use exciting new technologies in the emerging fields of Data Science, Machine Learning, AI, and more. The skills you’ll acquire through these highly practical programs will help you advance your career, whether at your current job or when seeking new employment.

It’s a competitive market out there, and we are confident that these programs will serve you well. If you are an employer looking to re-skill your workforce, these programs are also an ideal way to do so in a structured manner.

The certificates also look quite good on a resume (or LinkedIn) as passing these courses and completing the programs demonstrates a substantial understanding of the topics at hand. This isn’t just theory. You can’t complete these Professional Certificate programs without getting your hands dirty, so to speak.

We also plan to launch more Professional Certificates in collaboration with edX, but if you have an interest in advancing your career in Data Science and AI, we recommend that you start with these two.

The post IBM Partners with edX to launch Professional Certificate Programs appeared first on Cognitive Class.


August 06, 2018

Craig Mullins

Security, Compliance and Data Privacy – GDPR and More!

Practices and procedures for securing and protecting data are under increasing scrutiny from industry, government and your customers. Simple authorization and security practices are no longer sufficient to ensure that you are doing what is necessary to protect your Db2 for z/OS data.  The title of this blog post uses three terms that are sometimes used interchangeably, but they are...

(Read more)

August 03, 2018

Triton Consulting

“Suck it and See” development – A DB2 Support Nightmare!

The set-up   This could be strongly related to the Support Nightmare #1, but not necessarily always. We have many clients who have rigorous change and version control procedures, where nothing goes into a downstream environment until it’s been … Continue reading → The post “Suck...

(Read more)

August 01, 2018

Triton Consulting

Master the Mainframe 2018

What is the Master the Mainframe Competition? The contest is sponsored by the IBM Z Academic Initiative. Designed to help you learn, prepare for a career and win prizes, it’s a great way to get started with the Mainframe. The … Continue reading → The post Master the Mainframe 2018...

(Read more)

Triton Consulting

Six Reasons to Review your Database Availability – Part 1 Lost Sales Revenue

From working with many customers to help keep their critical databases up and running we have come up with the top 6 reasons for putting Database Availability at the top of your priority list. In this blog we look at … Continue reading → The post Six Reasons to Review your Database...

(Read more)

July 29, 2018

Frank Fillmore

IBM Db2 Analytics Accelerator (IDAA) v7 Workshop a Success! #IBMz #IBMAnalytics

On July 23 and 24, 2018 The Fillmore Group delivered a hands-on IBM Db2 Analytics Accelerator (IDAA) v7 Workshop to 12 students across 5 different enterprises at the IBM Dublin, Ohio (USA) Technical...

(Read more)

July 27, 2018

Kim May

New Mid-Atlantic Db2 for LUW Users Group Announced

After a few months of discussion and support from the IBM community-focused Db2 technical team and local Db2 users, a new RUG, the Mid-Atlantic Db2 for LUW Users Group, is here!  The group will kick...

(Read more)

ChannelDB2 Videos

Tips n Tricks Part 132 - Locked by Multiple Log Chains NORETRIEVE rescues when RF


Locked by Multiple Log Chains: NORETRIEVE rescues during Rolling Forward. Happy Learning & Sharing!

Robert Catterall

Db2 for z/OS: Using the Profile Tables to Direct DDF Applications to Particular Package Collections

I've posted several entries to this blog on the topic of high-performance DBATs (for example, one from a few years ago covered a MAXDBAT implication related to high-performance DBAT usage). You may well be aware that a "regular" DBAT (a DBAT being a DDF thread) becomes a high-performance DBAT when a package bound with RELEASE(DEALLOCATE) is allocated to the DBAT for execution. How do you get a RELEASE(DEALLOCATE) package allocated to a DBAT? Well, for a DDF-using application that calls Db2...

(Read more)

July 24, 2018

Big Data University

React on Rails Tutorial: Integrating React and Ruby on Rails 5.2

Users expect a certain level of interactivity and speed when using websites, which can be hard to provide with server-rendered websites. With a regular Rails project, we can sprinkle interactivity on the client side with vanilla JavaScript or jQuery, but it quickly becomes tedious to maintain and work with for complex user interfaces.

In this tutorial, we’re going to look at integrating React into an existing Rails 5.2 app with the React on Rails gem, in order to provide an optimal user experience and keep our codebase clean at the same time.

Suppose you are running your own app store called AppGarage. Users are able to see popular apps, download them, and search for new apps.

Currently, the website is built only with Ruby on Rails so users have to type the whole search term, submit, and wait for a page refresh before seeing the search results.

Users expect content to be loaded as they type so that they can find apps faster. Wouldn’t it be nice to upgrade our search functionality to dynamically fetch and render search results as the user types their query? Let’s do that with React!


This tutorial assumes a basic understanding of Git, Ruby on Rails, and React/JavaScript.

Ensure the following are installed on your device:

  • Ruby on Rails v5.2 or greater
  • Git
  • Node/NPM/Yarn

Initial Setup


We begin by cloning the repository for our project from GitHub, which includes the entire static website built with Ruby on Rails and no React integration.

Use the following command to clone the repository:

$ git clone

After cloning, enter the app-garage folder:

$ cd app-garage

Migrate and Seed the Database

Now that we’ve pulled down the code for our project, we must prepare Rails by migrating and seeding our database:

$ rails db:migrate && rails db:seed

Start Server

Our database now has the correct schema and is seeded with initial sample data so we can easily visualize our code changes. We can now start the Rails server (Note: it may take Rails a while to start the server on its first run):

$ rails server

You can now head over to http://localhost:3000 and you’ll see that our base application is working. We can view the homepage, search for apps, and view specific apps.

Website Screenshot


Now that we have a working web app, we’re ready to improve it by integrating React on Rails and modifying the search functionality.

Installing React on Rails

Note: If you’re following this tutorial using a different existing Rails app or if you’re using a Rails version older than 5.1 you should take a look at the official react-on-rails documentation for integrating with existing rails projects.

Adding and Installing Gems

First, we must add the webpacker, react_on_rails and mini_racer gems. Edit the Gemfile and add the following to the bottom of the file:
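A minimal sketch of the Gemfile addition (versions are left unpinned here; pin them as appropriate for your app):

```ruby
# Gemfile (additions at the bottom)
gem 'webpacker'
gem 'react_on_rails'
gem 'mini_racer', platforms: :ruby
```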

After adding the gems to the Gemfile, install them with the following command:

$ bundle install

Setting up Webpacker and React

Now that the required gems are installed, we can begin configuration. First, we configure Webpacker by running:

$ bundle exec rails webpacker:install

Now that webpacker is configured, we install and configure React:

$ bundle exec rails webpacker:install:react

We should now see the following in our terminal:

Webpacker now supports react.js 🎉

Note: We can delete the autogenerated sample file: app/javascript/packs/hello_react.jsx

Setting up React on Rails

Currently, our project has Webpacker and supports React, but we do not have an integration between React and Ruby on Rails. We need to add our changes to version control, so we stage everything and commit with the following command (Note: it’s important to commit our changes, otherwise we will get warnings when continuing the tutorial):

$ git add . && git commit -m "Add webpacker & react"

Add the react-dom and react_on_rails packages to our package.json by running:

$ yarn add react-dom react-on-rails

Now create config/initializers/react_on_rails.rb with the following content:
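Since this tutorial does no server-side rendering, a minimal initializer can simply disable the server bundle (server_bundle_js_file is part of the react_on_rails configuration API; treat the exact settings as a sketch):

```ruby
# config/initializers/react_on_rails.rb
ReactOnRails.configure do |config|
  # We render components on the client only, so no server bundle is needed
  config.server_bundle_js_file = ""
end
```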

We’re now ready to start writing JavaScript and React components.

Implementing the Search Component

Starting simple, we’re going to take our current search view and have it render as a React component without changing any functionality.

Create the following structure in your application folder: app/javascript/components

We can now create our search component called Search.jsx inside the folder we just created with the following content:
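A sketch of the static component (the field name, CSS classes, and placeholder text are assumptions, not the original markup):

```jsx
// app/javascript/components/Search.jsx
import React from 'react';

export default class Search extends React.Component {
  render() {
    // Static markup only for now; no state or event handlers yet
    return (
      <form action="/search" method="get" autoComplete="off">
        <input
          type="text"
          name="query"
          className="search-field"
          placeholder="Search for apps..."
        />
        <button type="submit" className="search-button">Search</button>
      </form>
    );
  }
}
```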

The above is our markup for searching converted to JSX in order for React to render it as a component. Note that we changed the HTML class and autocomplete attributes to className and autoComplete respectively for JSX to properly render our markup. This is required because we are writing JSX, which is a syntax extension to JavaScript.

We now have a search component but React on Rails knows nothing about it. Whenever we create a new component that we want to use in our Rails app, we must register it with react-on-rails in order to be able to use it with the react_component rails helper. To do so, we edit the app/javascript/packs/application.js file to have the following content:
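Registration uses the ReactOnRails.register call from the react-on-rails package; a minimal version:

```jsx
// app/javascript/packs/application.js
import ReactOnRails from 'react-on-rails';

import Search from '../components/Search';

// Register components so the react_component Rails helper can find them by name
ReactOnRails.register({ Search });
```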

The application.js file now serves as a way for us to register our components with react-on-rails. In our case, it’s acceptable to include our search component on every page load, but for real-life production applications, it’s not very performant to include every component on every page. In real-life applications, components would be split into webpack bundles which are loaded on pages where they are needed.

Now we include our application bundle in our layout on every page by editing app/views/layouts/application.html.erb to have the following content:
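The relevant change is including the pack with Webpacker's javascript_pack_tag helper (only the added line is shown here; the rest of the layout stays as it was):

```erb
<%# app/views/layouts/application.html.erb: add inside <head> %>
<%= javascript_pack_tag 'application' %>
```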

Now, we’ll replace our homepage markup with the react-on-rails react_component helper to render our Search component by editing app/views/home/index.html.erb to have the following content:
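The react_component helper renders a registered component by name; a minimal version of the view:

```erb
<%# app/views/home/index.html.erb %>
<%= react_component("Search") %>
```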

Adding React Functionality to our Replacement

Our search is now rendered as a React component, but all of our functionality has remained the same; the difference is not yet noticeable to users. We’re now ready to start making our search dynamic.

We need to be able to fetch our search data as JSON but we currently don’t have a JSON endpoint for our search controller. To do this, we add the file app/views/search/index.json.jbuilder with the following content:
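A sketch of the jbuilder template, assuming the search controller assigns the matching apps to @apps and that an app exposes name, description, and icon_url (all assumed names):

```ruby
# app/views/search/index.json.jbuilder
json.array! @apps do |app|
  json.extract! app, :id, :name, :description, :icon_url
end
```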

Now our search data is accessible as JSON via /search.json.

To access our search data from client-side JavaScript, we need a library for fetching data asynchronously from the browser. In this tutorial, we’ll use the axios library since it also supports older browsers. To install the library, simply run the following command in your terminal:

$ yarn add axios

Now that we have our dependencies installed, we can begin improving our search component. We must start tracking the text written into the search field, fetching the search results for the current text, and updating the state. Here is the new content for app/javascript/components/Search.jsx:
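A sketch consistent with the summary below (the query parameter name and CSS classes are assumptions):

```jsx
// app/javascript/components/Search.jsx
import React from 'react';
import axios from 'axios';

export default class Search extends React.Component {
  constructor(props) {
    super(props);
    // Track the fetched results and whether a request is in flight
    this.state = { results: [], loading: false };
    this.onChange = this.onChange.bind(this);
  }

  // Called on every keystroke; fetches matching apps as JSON
  onChange(event) {
    const query = event.target.value;
    this.setState({ loading: true });
    axios
      .get('/search.json', { params: { query } })
      .then(response => this.setState({ results: response.data, loading: false }))
      .catch(() => this.setState({ results: [], loading: false }));
  }

  render() {
    return (
      <form action="/search" method="get" autoComplete="off">
        <input
          type="text"
          name="query"
          className="search-field"
          placeholder="Search for apps..."
          onChange={this.onChange}
        />
      </form>
    );
  }
}
```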

  • To start, we defined our component’s state (and initial state) to include our search results and whether or not we’re currently loading/fetching new results.
  • Next, we wrote our onChange function, which gets called each time the value in the search field changes. We use axios to send an HTTP request to our new /search.json endpoint with the current search field text. Axios will either successfully fetch results, in which case we update our state to include them, or it will fail and we update our state to have no results.
  • Our render function stays almost the same. We alter the input field by adding an onChange handler pointing to the onChange function we just wrote.

The updated search component now dynamically stores and fetches the user’s search results based on the current text, but doesn’t render anything related to the results yet.

Rendering the Dynamic Search Results

In order to render the search component’s state, we will create two new components that will make our code easier to manage.

First, we create the SearchResult component, a purely presentational function component with no state that renders declaratively based on its props. The prop we expect is result, a regular app object from our Rails application. Create app/javascript/components/SearchResult.jsx with the following content:
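A sketch of a stateless result component (the app attributes rendered here are assumed names):

```jsx
// app/javascript/components/SearchResult.jsx
import React from 'react';

// Purely presentational: renders one app result based on its props
const SearchResult = ({ result }) => (
  <div className="search-result">
    <img src={result.icon_url} alt={result.name} />
    <div>
      <strong>{result.name}</strong>
      <p>{result.description}</p>
    </div>
  </div>
);

export default SearchResult;
```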

Now, we create a SearchResultList, also a purely presentational function component, to render our result array as SearchResult components. The SearchResultList expects two props: results, an array of our search results, and loading, whether or not we’re currently loading new results. Create app/javascript/components/SearchResultList.jsx with the following content:
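A sketch of the list component (the inline style values are assumptions):

```jsx
// app/javascript/components/SearchResultList.jsx
import React from 'react';
import SearchResult from './SearchResult';

// Maps the results array to SearchResult components; the style attribute
// positions the list directly under the search field
const SearchResultList = ({ results, loading }) => (
  <div className="search-result-list" style={{ position: 'absolute', width: '100%' }}>
    {loading && results.length === 0 && <p>Loading...</p>}
    {results.map(result => (
      <SearchResult key={result.id} result={result} />
    ))}
  </div>
);

export default SearchResultList;
```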

Our SearchResultList iterates through our search results and maps each one to a SearchResult component. We added a style attribute to the container in order to properly display the results under our search field.

Now that we have our two helper components we can modify Search.jsx to render its state when the result array is not empty. Update app/javascript/components/Search.jsx with the following content:
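A sketch of the final component reflecting the changes listed below (field names and classes remain assumptions):

```jsx
// app/javascript/components/Search.jsx
import React from 'react';
import axios from 'axios';
import SearchResultList from './SearchResultList';

export default class Search extends React.Component {
  constructor(props) {
    super(props);
    this.state = { results: [], loading: false };
    this.onChange = this.onChange.bind(this);
  }

  onChange(event) {
    const query = event.target.value;
    this.setState({ loading: true });
    axios
      .get('/search.json', { params: { query } })
      .then(response => this.setState({ results: response.data, loading: false }))
      .catch(() => this.setState({ results: [], loading: false }));
  }

  render() {
    const { results, loading } = this.state;
    return (
      <form action="/search" method="get" autoComplete="off">
        <input
          type="text"
          name="query"
          className="search-field"
          placeholder="Search for apps..."
          onChange={this.onChange}
        />
        {(results.length > 0 || loading) && (
          <SearchResultList results={results} loading={loading} />
        )}
      </form>
    );
  }
}
```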

The changes we made to the Search component were:

  • Imported our SearchResultList component
  • Updated the render function to render the SearchResultList component when we have results or when we are loading.

We’ve now integrated React on Rails into our Rails 5.2 app in order to have a dynamic search component for our users.


We started with a regular Rails application and went through the process of installing and configuring Webpacker, React, and React on Rails. After configuring, we replaced our search with a React component which dynamically fetches and renders search results from a new JSON endpoint in our Rails app.

Initial Application

The original implementation above wasn’t a good user experience, since it involved typing the full query and waiting for the page to load before seeing any results.

Updated Implementation

The new implementation above shows search results as the user types which saves time and provides a much better user experience.

We can now begin adding even more interactivity to our website by implementing additional React components and reusing existing components on other pages.

Preview Final Version

You can preview the final version by following these steps:

  1. Clone the final-version branch of the GitHub Repository
    $ git clone -b final-version
  2. Enter the newly cloned app-garage folder:
    $ cd app-garage
  3. Run the necessary setup commands:
    $ yarn && rails db:migrate && rails db:seed
  4. Start the rails server and navigate to http://localhost:3000 (Note: the initial load may take a while).
    $ rails server

Further Reading

If you want to learn more about integrating React with Ruby on Rails (such as proper state management with Redux or handling bundles for specific pages), the repository and documentation for React on Rails is a great place to look.

The post React on Rails Tutorial: Integrating React and Ruby on Rails 5.2 appeared first on Cognitive Class.

Henrik Loeser

Secure apps on IBM Cloud Kubernetes Service with Let's Encrypt wildcard certificates

In my recent post I explained how I enable SSL for Cloud Foundry apps on IBM Cloud with my own custom domain. Today, I focus on securing apps running in Docker containers in the Kubernetes service on...

(Read more)

July 23, 2018
