Aircraft IT MRO – Winter 2013


Moving Day, Getting Ready and Afterwards

Author: David A. Marcontell, President & COO, and John F. Smiley III, Aviation Analyst, TeamSAI Inc.

Migrating data from legacy MRO IT systems to modern systems: David A. Marcontell, President & COO, TeamSAI Inc., and John F. Smiley III, Aviation Analyst, TeamSAI Consulting, explain how it should be done.
Challenges of migrating MRO data

The wave of technological advances in software and computing platforms over the past 35 years has produced remarkable gains in productivity, efficiency and capability. Airline MRO IT systems have been no exception: in the late 1980s and 1990s, many airlines' maintenance and engineering organizations transitioned from manual or near-manual maintenance control and record-keeping systems into their first mainframe 'green screen' environments, using both home-grown and enterprise applications such as Scepter, Maxi Merlin, and PMI. The unrelenting advance of technology, however, has now rendered many of these applications expensive to operate, inefficient to use and, at best, functionally limited.

The business case for replacing a dated legacy MRO system has never been stronger, given continued rapid advances in technology both on the ground and in the air, as well as a robust and competitive market among MRO IT providers. The real, often underestimated, challenge is the difficult and complex process of migrating key operations, configuration and compliance history data from the legacy system to the new one, all while the fleet continues to fly. Not only is the data dynamic, but the information contained in the legacy data may be limited and not immediately suitable for transfer to the new system. Effective project management, data validation, and error correction need to be planned for and handled through, and beyond, the new system's go-live point to avoid an extended dual-entry period or even a non-compliance condition.

Creating a migration path

Much like when buying a new car, a new MRO IT system will have considerably greater functionality and capabilities than the legacy system it is intended to replace. Indeed, this enhanced functionality was likely a big part of the business case for the purchase in the first place. However, do not let these shiny new features obscure the primary objective of the data migration effort: moving current data to the new system in the most efficient, expeditious and error-free way possible. Migration teams can otherwise spend considerable time and energy migrating data, or even driving the organization to create data that previously did not exist, just to support the new functionality.

The best way to avoid this common pitfall is to clearly define what data is needed for the new system to go live, while at the same time understanding which system modules or features are required to deliver functionality equivalent to the existing system. Remaining modules and features should then be characterized as optional and prioritized accordingly. A good example of this rationalization and prioritization is Airworthiness Directives (ADs). There is no doubt that the new MRO IT system will need to track currently open and repetitive ADs, but are historical closed and non-applicable ADs also required to be in the system at go-live, especially if a report of these historical and static (i.e. unchanging) items can be delivered on demand? For a variety of reasons, this historical information might not reside in the current system, meaning that there is no data to migrate; only new data entry efforts would populate the desired information. Working with Maintenance and Engineering leadership, the migration team will have to balance the resources required to create the new data against the benefit derived from the new features and functionality. Returning to the example above, some airlines have chosen to migrate only open and active ADs prior to go-live, leaving historical data population for post go-live activities. Very similar discussions should be held regarding historical parts transactions, historical logbook activity, and airframe check non-routine histories.

Once minimum system features and functionalities have been identified and prioritized, the data required for each module, feature or functionality can be documented and assigned to the specific migration team sub-groups best equipped for, or most knowledgeable about, that data. This latter step is particularly important for quality control: data familiarity – seeing the data as meaningful information and not just a series of numbers or text characters – is critical to identifying potential data migration errors or garbage data.

Lastly, a dedicated test environment should be created within the new system to permit the experimentation and multiple migration attempts that will ultimately be necessary for success. This test environment should have all of the features and functionality that are expected at go-live fully activated and available for use. Most importantly, the test environment should be completely independent from the final production environment as the migration team will want to test its migrations thoroughly before moving data to the production environment. As discussed later, deleting erroneous data from the production environment can be risky and have unintended consequences.

With a clear understanding of the data to be migrated, and an environment to migrate the data to, we can now simplify the migration process into three distinct steps: data mapping, moving the data and data validation.

Mapping the data

The first step of data mapping is analysis of both the old and new systems to understand how they work and what information they will accept and output. If available, the original specification and design documentation for each system will be critical to helping the migration team understand how each system functions, what data the old system will output, and what data the new system requires for go-live. This step may sound quick and easy, but it cannot be stressed enough how important it is to accurately understand what data the new system is looking for and what the old system gives you. For example, the new system may want the time when the next maintenance check is due, while the old system outputs the time when the last maintenance check was completed. While the relationship is intuitive if you know the check interval, these are not programmatically the same, of course, and a conversion is necessary. Working through this step slowly and cautiously will save time and headaches in the long run.
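The last-completed-to-next-due conversion above can be sketched in a few lines. This is a hypothetical illustration only: the interval value, field names, and the choice of flight hours as the unit are assumptions, not taken from any particular MRO system.

```python
# Hypothetical conversion: the legacy system exports when the last check was
# completed; the new system wants when the next check is due. The 600-hour
# interval and all names here are illustrative assumptions.
A_CHECK_INTERVAL_HOURS = 600.0  # assumed check interval in flight hours

def next_due(last_completed_hours, interval=A_CHECK_INTERVAL_HOURS):
    """Derive a 'next due at' value from a 'last completed at' value."""
    return last_completed_hours + interval

print(next_due(11400.0))  # prints 12000.0
```

Trivial as it looks, a conversion like this is exactly where a misread field definition (due time versus completion time) would otherwise slip through unnoticed.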

One aspect that needs to be looked at carefully when analyzing the new system is required data fields. Again, if available, the documentation for the new system should make it relatively easy to see what data is required for each module and what data is optional; however, there can easily be exceptions that will be revealed by the data checks in the new system during migration testing. An exception may occur for a number of reasons, ranging from the type of part being transferred into the new system (e.g. a rotable LRU with a subassembly build) to the migration team choosing to fill in an optional data field which, unwittingly, switches three more data fields from 'optional' to 'required'. The majority of these exceptions should be indicated in the documentation for the new system; undoubtedly, however, a few will go unnoticed until surfacing during data validation or final production.

Once analysis of the two systems is complete, the team can move on to the second step and create a detailed document that maps the data output from the old system to the data fields for the new system. This document should be easily editable and show all required and optional data fields for each module in the new system; the data source being used to fill each field; and the specific location of the data in the source file to be used to populate the field. When the data map is complete, the team may move on from the planning stages and start the process of moving data into the new system test environment.
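A data map of the kind described above can live in a spreadsheet, but its core structure is simple enough to sketch in code. Every file name, column name, and field name below is hypothetical, chosen only to show the shape of the mapping.

```python
# Illustrative data map: each new-system field names its source file and the
# column in that file. All names are hypothetical placeholders.
FIELD_MAP = {
    "part_number":   ("legacy_parts.csv", "PN"),
    "serial_number": ("legacy_parts.csv", "SN"),
    "description":   ("legacy_parts.csv", "DESC"),
}

def map_row(legacy_row):
    """Build one new-system record from a legacy row using the field map."""
    return {new_field: legacy_row[src_col]
            for new_field, (_src_file, src_col) in FIELD_MAP.items()}

record = map_row({"PN": "3291-02", "SN": "A1187", "DESC": "FUEL PUMP"})
print(record["part_number"])  # prints 3291-02
```

Keeping the map in one editable structure, rather than scattered through conversion scripts, is what makes it easy to re-run and correct migrations as field-level misunderstandings surface during testing.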

Moving the data

At first glance, moving data from one system to another sounds very simple. In fact, this can be the riskiest, most complex, and most often underestimated element of the entire project, and one that can quickly lead to non-compliance conditions if not executed correctly. Further, data from the old system will most likely be in a format unsuitable for immediate transfer into the new system, so format manipulation with an intermediary system such as Microsoft Access or Microsoft Excel is often necessary. Both options have their advantages and disadvantages, and both can become very powerful tools when supplemented with regular expressions or Visual Basic for Applications (VBA) code.
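To make the intermediary-step manipulation concrete, here is one common reformatting task, sketched in Python rather than the Excel or VBA the text mentions: normalizing a legacy date format with a regular expression. The '07JAN13' input format and the century assumption are illustrative, not taken from any real system.

```python
import re

# Sketch: normalize an assumed legacy date format like '07JAN13' into an
# ISO 'YYYY-MM-DD' string. Both formats here are illustrative assumptions.
MONTHS = {"JAN": "01", "FEB": "02", "MAR": "03", "APR": "04",
          "MAY": "05", "JUN": "06", "JUL": "07", "AUG": "08",
          "SEP": "09", "OCT": "10", "NOV": "11", "DEC": "12"}

def normalize_date(raw):
    """Return an ISO date string, or None if the value does not parse."""
    m = re.fullmatch(r"(\d{2})([A-Z]{3})(\d{2})", raw.strip().upper())
    if not m or m.group(2) not in MONTHS:
        return None  # flag for manual review rather than guessing
    day, mon, yy = m.groups()
    return "20" + yy + "-" + MONTHS[mon] + "-" + day  # assumes 2000s dates

print(normalize_date("07JAN13"))  # prints 2013-01-07
```

Returning None for anything that does not match, instead of forcing a value through, is the safer choice here: unparseable dates surface during validation rather than becoming silent garbage data.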

As with designing – or even flying – an airplane, choosing an appropriate level of automation for data migration is critical. Automation can heavily reduce, if not wholly eliminate, human error in the migration process. However, just as an inappropriate level of automation can quickly lead to a very dangerous situation in the air (e.g. staying on autopilot while traversing significant turbulence), it can also lead to excessive project delays if overused or used incorrectly. Anecdotes abound of data migration problems being overcome with just a couple of hours of work once the migration team stopped trying to fully automate the process and resorted to simpler methods such as Excel formulas, changing a computer's regional settings, or even selective key-stroking. These are cautionary tales, so be sure to choose the appropriate level of automation for each step of the data migration process.

Once data for the new system has been appropriately reformatted in the intermediary system, the manipulated data will need to be converted into a form that the new system will accept – most likely a text or comma-separated value (CSV) file, depending on the target system. This final conversion is typically straightforward; however, some of the converted files may need to be cleaned up with a powerful third-party text editor, such as Notepad++, as the text editor that comes with Microsoft Windows may hide formatting errors introduced when converting from one file type to another.
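One way to reduce that cleanup burden is to generate the load file programmatically with strict quoting, so embedded commas cannot silently shift columns. This is a sketch under assumptions: the output file name, field names, and the premise that the target system accepts quoted CSV are all illustrative.

```python
import csv

def write_load_file(rows, path):
    """Write migration records as a CSV load file. Quoting every field
    (QUOTE_ALL) guards against embedded commas corrupting the column layout."""
    fieldnames = list(rows[0].keys())
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames,
                                quoting=csv.QUOTE_ALL)
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical record whose description contains a comma.
write_load_file([{"part_number": "3291-02", "description": "PUMP, FUEL"}],
                "parts_load.csv")
```

Even with disciplined generation, it remains wise to inspect a sample of each load file in a capable text editor before feeding it to the new system, as the article advises.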

To mitigate the risk involved in moving the data, it is important to create – and religiously follow – detailed, documented procedures describing how each migrated data set will be pulled out of the old system, manipulated, and then input into the new system. These procedures will greatly reduce human error and allow the data migration team to quickly and accurately replicate a successful data load – an immensely important ability in the hours and days just prior to go-live. Remember, the aircraft are not going to stop flying while you are migrating data, so it is critical to document the process in detail so that it can be performed quickly and repeatedly.

Once detailed procedures have been established and automation for data migration has been developed and tested, the team can begin pulling data out of the old systems for migration testing, and later, data validation; all prior to go-live.

Validating the data

After the team has migrated a data set into the new system, the transferred data should be validated against the original data set to verify an accurate migration. The question that must constantly be asked during this process is, 'Did we correctly understand what the new system wanted, and did we get the desired result?' For most of the data, the answer will be 'yes', but inevitably there will be data that falls into one of two categories: incorrectly transferred data, or simply garbage data.
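The core of that validation pass is a field-by-field comparison of migrated records against the source. The sketch below assumes each record carries a unique key (a serial number here, purely as an example); all names and the sample data are hypothetical.

```python
def validate(source_rows, migrated_rows, key="serial_number"):
    """Compare migrated records to source records field by field.
    Returns a list of (key value, problem) tuples; an empty list means
    the data set migrated cleanly."""
    migrated = {row[key]: row for row in migrated_rows}
    problems = []
    for src in source_rows:
        mig = migrated.get(src[key])
        if mig is None:
            problems.append((src[key], "missing from new system"))
            continue
        for field, value in src.items():
            if mig.get(field) != value:
                problems.append((src[key], "mismatch in " + field))
    return problems

# Hypothetical example: a zero was migrated as the letter 'O'.
issues = validate([{"serial_number": "A1187", "part_number": "3291-02"}],
                  [{"serial_number": "A1187", "part_number": "3291-O2"}])
print(issues)  # prints [('A1187', 'mismatch in part_number')]
```

A report like this distinguishes the two failure categories the article describes: a record present but wrong points to a mapping or programming error, while a record that fails the new system's own checks despite matching the source points to garbage data upstream.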

Correcting data that was transferred incorrectly, whether through programming errors or simply misunderstanding what the new system wanted in a particular field, is fairly easy. The team simply goes back to the data map and the system documentation and, with a better understanding of what the system wants, remaps or reformats the field(s) or migration procedure. On the other hand, correcting garbage data – data that was programmatically transferred correctly but was ultimately unacceptable to the new system or simply inaccurate – can be significantly more difficult, as it often reveals deeper, more systemic issues with the underlying data. This is also often where non-compliance conditions emerge. When garbage data is found, it is important for the migration team to act quickly to notify the owners of the source data so that actions can be taken to correct it and, if necessary, mitigate a previously unknown non-compliance condition. All migration teams should expect to find some amount of garbage data, even if a client insists that all of the source data is good and ready for migration.

It is important to note that in some cases it may appear quicker to rectify incorrectly transferred or garbage data directly within the new system instead of reprogramming or correcting the source files. The migration team should consider this valid option carefully, but understand that in most new systems some data fields are highly complex and tied to background calculations and other fields. If manually corrected in the new system, these complex fields may break and require an entire system wipe and reset to fix correctly.

Once the migration team has completed the validation process for a migration data set, the team should repeat the process until a perfect, or near perfect, migration occurs. With each test migration the team will not only gain a better understanding of how the new system works, but equally importantly, identify and rectify garbage data.

Go-live data migration

Once all of the data sets have been successfully migrated in the test environment, and the migrated data validated against the original source data or expected results to confirm that migration programming and procedures are error-free, the team will be ready for the final go-live migration. Remember, the many months of mapping, programming, migrating, and validating results in a test environment were all in service of one goal: a comprehensive, rapid and high-quality migration over a period of just a few hours, minimizing the number and risk of non-migrated transactions (i.e. transactions that occur during the go-live migration period).

The final go-live migration should be planned and executed during a period of historically low MRO IT system transaction volume. This optimal period will vary depending upon the airline and its operation and may not be intuitively obvious, so clear guidance should come from the Maintenance & Engineering leadership team as to the best time to schedule this important milestone.

Post go-live activities

After months of planning, preparation, and testing, the new system is finally live, but there is still much to be done before the project can be considered closed. Post-migration activities can make or break a data migration; fortunately, a post-migration disaster is fairly easy to avoid by immediately beginning the data validation process and continuing to maintain the legacy system while data in the new system is validated. This period of dual entry and parallel system management will likely be a requirement from the airline's regulator, to assure that the necessary control systems are operating correctly before the legacy system is shut down.

The migration team and system end users can expect to find issues with the data as time begins accruing against the assets in the new system. Examples include tasks with multiple requirements that are missing a requirement, incomplete or missing reference document numbers, or incomplete task instructions. While a dual-entry period in both the legacy and new systems is inconvenient and labor intensive, the benefits and protections it provides as these issues arise greatly outweigh the time and effort needed to maintain both systems. Migration teams should plan for a dual-entry period of at least one to two months after the new system is live.

Once the data validation process is well underway and the data issues being found are less frequent and less significant, the migration team can begin work on follow-on priorities such as building historical data that was not available for migration: closed and non-applicable ADs, aircraft oil consumption histories, and so on. The satisfactory completion of these follow-on priorities will finally mark the end of the project and a successful MRO data migration effort.
