Case Studies
 

SAFE MIGRATION OF CRITICAL RESOURCE MANAGEMENT SYSTEMS

“Kernel Software has managed the integration of the different modules of DAYSY, the day-to-day crew management system of Lufthansa. The interface between DAYSY and the legacy mainframe system has become, after the re-design by Kernel Software, a masterpiece of Information Technology.”

Dietmar Schymura – Deutsche Lufthansa AG

Critical Resource Management systems

Critical Resource Management systems are the applications supporting resource planning and tracking for large operators such as railways, airports or airlines.

Migration issues

Critical Resource Management systems are often legacy applications which run on mainframe systems, under the control of transaction monitors. Many are available 24 hours per day, seven days per week. They are usually connected to a large number of other systems.

This web of applications is the result of hundreds of man-years of software development. It has evolved over time under the pressure of the most urgent need of the day, without any consistent architectural vision. The rule has been to avoid any significant redesign, in order to reduce the risk of disrupting operations. As a result, many legacy systems have now reached the point where any attempt to support new business requirements runs into severe problems.

Some projects follow a conventional client-server approach: the mainframe application is enhanced to become the server of clients running on PCs or Unix workstations. However, enhancing a mainframe transaction-based system so that it can support the requirements of graphical interfaces and decision-support tools is next to impossible. These projects achieve little more than revamping the user interface with nice graphics.

Other projects pay little attention to systems integration. There is no plan for ensuring that the new system will be able to communicate with all the applications that the legacy system is linked to. If poor integration does not kill the project at the deployment stage, it leads to double data entry and significant amounts of manual rework.

Almost all projects assume that, as is usual for non-critical systems, there can be a D-day on which the legacy system is switched off and the new system is switched on. Yet this is a very dangerous course: if, for any reason, the new system is not up to its job, it is not possible to go back to the legacy system, because its data is no longer up to date.

Clearly, for truly critical applications, the switch-off / switch-on approach is not acceptable. In case of a failure of the new system, there must be a way back to the old one.

Last but not least, one must take into account that training many users on a new system takes months, and that it is very difficult to provide good training on a system that is not yet operational. From this point of view also, the switch-off / switch-on approach is not realistic.

The basic rule for safe migration

There is a basic rule for the safe migration of critical Resource Management systems:
It must be possible to run a new application in parallel with the legacy application which is being replaced.

This basic rule is a stringent one: it means that a user is free to perform his job on the new system or on the legacy system, depending on the availability of the new system and on the progress of his training. Every update done through the new system must be propagated to the legacy system, and vice versa. Satisfying this requirement is more than a safe migration technique. It also provides a solution to the systems integration issue: since the legacy application remains operational and its data is kept up to date, its connections to other systems can still be active. Over time, it is possible to redirect these connections to the new system, one by one, without time pressure.
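As an illustration of what running the two systems in parallel implies, the Python sketch below shows one way an update entered on either side could be forwarded to the other, so that both stay up to date. The names (SystemAdapter, UpdateBridge) and interfaces are hypothetical; this is a minimal sketch of the principle, not Kernel Software's actual tooling.

    # Hypothetical sketch of bidirectional update propagation between the new
    # system and the legacy system during parallel running.

    class SystemAdapter:
        """Minimal interface that both the new and the legacy system expose."""
        def __init__(self, name):
            self.name = name

        def apply(self, update):
            # In a real deployment this would call the system's update interface.
            print(f"{self.name}: applying {update}")

    class UpdateBridge:
        """Forwards every update made on one side to the other side."""
        def __init__(self, new_system, legacy_system):
            self.peer_of = {new_system.name: legacy_system,
                            legacy_system.name: new_system}

        def on_update(self, origin, update):
            # Propagate the update to the peer so both systems stay up to date.
            self.peer_of[origin].apply(update)

    bridge = UpdateBridge(SystemAdapter("new"), SystemAdapter("legacy"))
    bridge.on_update("new", {"crew": "LH123", "duty": "FRA-JFK"})     # new -> legacy
    bridge.on_update("legacy", {"crew": "LH456", "duty": "MUC-ORD"})  # legacy -> new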

The dangerous switch-off / switch-on approach is replaced with a gradual transition method relying on a controlled attrition of the legacy system.

The need for dedicated safe migration tools

Enabling two applications to manage the same data concurrently is not a trivial proposition.

A major constraint is that it must be achieved with very limited modifications of the legacy application. In particular, it is not practical to require that the legacy application be first migrated to another mainframe-based data management system: in the context of critical Resource Management systems, such a migration takes years, during which no significant business improvement can be delivered because the application remains limited by the mainframe environment.

Moreover, the new system must be capable of supporting a new generation of tools which will create business value: decision-support packages, Internet applications, interfaces with business partners. These tools have specific requirements, such as the need to consider a significant fraction of the total data in order to adapt resource utilization schedules to changing conditions.

Generic middleware products have not been designed to address these issues. They can even make the situation worse, by enshrining an obsolete legacy data management system in an extra carapace of software.

Kernel Software’s Safe Migration tools

Kernel Software has designed and developed tools for the safe migration of critical Resource Management systems.

The first key feature of these tools is to ensure that there is a safe path from the legacy situation to the new architecture. This is achieved by supporting co-operation with the legacy system for as long as it takes to retire it. Links with external applications are reconnected to the new system one by one. Only minor modifications to the legacy system are required.

The second key feature is to support the integration of decision-support tools through what-if simulation:

Each user has a private simulation workspace in which he can work without modifying the common data. The common data is updated only when the user decides that the modifications he has prepared in his workspace are valid and complete.

Decision-support tools (e.g. automatic planning) update the data of the user's private simulation workspace, not the common data. This gives the user the possibility to review the results prepared by the tool, to correct them manually through a graphical interface, or to restart the tool with new instructions.
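A minimal Python sketch of the workspace idea, assuming a simple in-memory data model: decision-support tools modify only the private draft, and the common data changes only when the user commits. The Workspace class and its methods are illustrative assumptions, not the product's actual interface.

    # Illustrative sketch of a private what-if workspace layered over common data.
    import copy

    class Workspace:
        def __init__(self, common_data):
            self.common = common_data
            self.draft = copy.deepcopy(common_data)   # private copy for simulation

        def simulate(self, planner):
            # Decision-support tools only touch the draft, never the common data.
            planner(self.draft)

        def commit(self):
            # The user publishes the draft once it is judged valid and complete.
            self.common.clear()
            self.common.update(copy.deepcopy(self.draft))

        def discard(self):
            # Throw the simulation away and start again from the common data.
            self.draft = copy.deepcopy(self.common)

    common = {"LH123": "unassigned"}
    ws = Workspace(common)
    ws.simulate(lambda draft: draft.update({"LH123": "crew A"}))  # e.g. automatic planning
    print(common)   # {'LH123': 'unassigned'} -- common data still untouched
    ws.commit()
    print(common)   # {'LH123': 'crew A'}     -- published after the user's review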

The third key feature is the capacity to handle the data of the largest companies while satisfying demanding response time requirements.

The fourth key feature is a powerful Publish and Subscribe mechanism for making Resource Management data available in real time to a variety of applications inside and outside the company.
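For illustration, the sketch below shows the general shape of such a publish-and-subscribe exchange in Python: consuming applications register their interest in a topic and are notified whenever matching Resource Management data is published. The Broker class and the topic names are assumptions made for the example, not the mechanism's actual interface.

    # Generic publish/subscribe sketch: consumers register for topics and are
    # notified when Resource Management data on that topic is published.
    from collections import defaultdict

    class Broker:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, event):
            for callback in self.subscribers[topic]:
                callback(event)

    broker = Broker()
    broker.subscribe("crew.assignments", lambda e: print("partner portal:", e))
    broker.subscribe("crew.assignments", lambda e: print("ops dashboard:", e))
    broker.publish("crew.assignments", {"crew": "LH123", "duty": "FRA-JFK"})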