Technical specification for data integration

Summary

This document explains the secure one-way data feed from various source applications to PMI. More specifically, it outlines:

  • which transaction applications (sources) are relevant to consider as data feed systems to PMI (target);
  • what data (data elements and values) are required for daily data feed to PMI;
  • PMI preferred output formats from source applications;
  • PMI preferred communication and transfer (data exchange) methods from source applications to PMI; and
  • what data (data elements and values) are required as historic data (one-off exercise).

As part of our services, we will proactively assist our customers – often in liaison with our customers’ third parties (software providers) – to define the data export requirements and procedures (see step 1 in the figure below). Once the output file with the agreed format is developed, the experienced integration team of d2o will ensure the subsequent steps of the process (steps 2 – 4) are completed to securely communicate/transmit the (source) data, import these, and make management information appropriately accessible to users through a web browser via our cloud-based solution.

We will ultimately, together with our customers, execute an end test or “dry run” – from the beginning to the end – to validate that all interfaces and the whole data journey work as intended, before certifying the solution ready for productive go-live.

Intended Audience

This document is meant for those who need to understand the one-way data feed from source applications to PMI. More specifically, it covers the topics listed in the contents below.

Contents

  • 1 Summary
  • 2 Intended Audience
  • 3 Overview
  • 4 Process overview: daily data journey from source applications to PMI
  • 5 Step 1: the data elements required from source applications, and the output formats to be used
  • 6 Step 2: communication and data transfer methods
  • 7 Step 3: loading and mapping of source data to PMI
  • 8 Step 4: user access to management information in PMI
  • 9 Historical data requirements
  • 10 Data element library
    • 10.1 Points of Sale (POS)
    • 10.2 F&B Reservation Systems (FRS)
    • 10.3 Time Keeping/Management System (TKS/TMS)
    • 10.4 Property Management System (PMS)
    • 10.5 Revenue Management System (RMS)
    • 10.6 Sales & Catering Management System (S&C)
    • 10.7 Accounting Management System (AMS)

Overview

This document explains the PMI data requirements for historical data and day-to-day data feed. The focus of this document is to explain what data elements and associated transactions are required to be exported from the various source applications, i.e. step 1 of the data journey. This is the part of the data journey that the d2o team will need to work on together with customers (or their third party) to ensure the right data are included in the export. It also details the applicable security measures and protocols built into the data exchange solution.

The rest of the journey, i.e. steps 2 – 4 (see figure below) will be accommodated by the PMI standard data integration solution.

Process overview: daily data journey from source applications to PMI

PMI relies on data feeds from various transaction applications (referred to as “sources”) like Points of Sale, Time Management, Reservations, etc. in order to process and deliver the intended management information. The scope of the data feed that needs to be established will vary from process to process and with the PMI scope that is to be implemented.

For instance, if the process is (only) related to restaurants, the various (source) applications relevant to consider as data “feeders” would typically be:

  • Points of Sale (POS)
  • Time Keeping/Management System (TKS/TMS)
  • F&B Reservation System (FRS) and equivalent


If this is to be applied to hotels or cruises, the various (source) applications relevant to consider as data “feeders” would typically be:

  • Points of Sale (POS)
  • Time Keeping/Management System (TKS/TMS)
  • Property Management System (PMS)
  • Revenue Management System (RMS)
  • Sales and Catering System (S&C)


If the Planning module is a part of the PMI implementation scope (in addition to the base module Revenue & Productivity), a data feed from an accounting management system would also typically be recommended.

In general, we recommend that the daily data export task from the source applications be automated, since this ensures an accurate daily feed and eliminates any human intervention.

The daily data feed (data journey) can, at a high level, be described in four main steps, as illustrated in figure 1.

  1. Step 1: Execute the dataset export to a pre-agreed output format, preferably Web service XML or a structured flat file format.
  2. Step 2: Communicate or transfer the output to the PMI server (cloud-based delivery model).
  3. Step 3: Import the source dataset into PMI and map it to the corresponding data field structure.
  4. Step 4: Process the data and present the resulting information through the PMI user interface.
[Figure 1: the four-step daily data journey from source applications to PMI]

Step 1: the data elements required from source applications, and the output formats to be used

For each relevant application type, we have listed the data elements that would need to be included in the daily feed. The feed is run and imported into PMI every day, including weekends and holidays.

Daily data export routine: There are, in principle, three options to export the data from a source application (listed in order of preference).

  1. The export function in the source application is scheduled to run automatically every night, eliminating the need for human intervention (most common and reliable option).
  2. The export function is invoked by a macro recording procedure (defined using software such as AutoHotKey) that emulates the sequence of keystrokes and mouse clicks required to download the desired dataset and save it as an output file. This macro procedure (an EXE file) is scheduled to run automatically every night, thus eliminating the need for any manual intervention.
  3. The export task of downloading the dataset and saving it as an output file in a specific area is carried out manually by a person every night – which leaves room for human error.


Data export date selection: The daily data feed contains transactions not only for the day prior to the export day, but also pertaining to seven (7) days back in time. This is done to make sure that any changes and/or adjustments made pertaining to transactions falling within this time span are captured and, hence, fed to PMI.

An example: The daily data export on Jan 2, 2013 will not only include transactions pertaining to Jan 1, but also those from Dec 25, 2012 up to and including Dec 31, 2012.
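The date selection above can be sketched as a small helper. This is an illustrative example only (the function name is not from the PMI specification); it simply computes the window described in the text: the day prior to the export day plus the seven days before that.

```python
from datetime import date, timedelta

def export_window(export_day: date) -> tuple[date, date]:
    """Return the (start, end) dates covered by a daily export.

    Illustrative sketch: the feed covers the day prior to the export
    day plus seven further days back, so late adjustments to earlier
    transactions are re-captured.
    """
    end = export_day - timedelta(days=1)   # day prior to the export day
    start = end - timedelta(days=7)        # seven further days back
    return start, end
```

For the export run on Jan 2, 2013, this yields the range Dec 25, 2012 through Jan 1, 2013, matching the example above.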

Some customers may use a data warehouse or an enterprise data warehouse as a central repository holding data imported from one or more disparate source applications for reporting and for data analysis. Often, such a data warehouse is used as the main data source for PMI, since it normally has flexible export capabilities available.

Output formats: The exported data (output) from a source application can be in either Web service XML or plain flat file format. The PMI data integration solution supports different output formats as long as the data content follows a consistent structure. However, given a choice, we recommend the following formats, listed in order of preference:

  1. XML, or
  2. Flat file based on RFC 4180, a widely accepted CSV format. More details at http://supercsv.sourceforge.net/csv_specification, or
  3. Other structured file format to be agreed on, or
  4. Open Database Connectivity (ODBC), in some cases used between PMI and the time management system.

In the case of flat files, the files must always be stored using a pre-determined naming convention in a pre-determined catalogue (folder) so that the PMI file agent can initiate the file transmission procedure using NET.TCP or HTTPS, protocols commonly used by professional websites to ensure secure data communication/transmission over the internet.
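As a sketch of the flat-file option, the snippet below writes an RFC 4180-compliant CSV using Python's standard `csv` module (which follows RFC 4180 quoting and CRLF line endings by default). The `POS_<yyyymmdd>.csv` naming convention here is purely illustrative; the actual convention and catalogue are agreed per integration.

```python
import csv
from datetime import date
from pathlib import Path

def write_daily_export(rows, outbox: Path, export_day: date) -> Path:
    """Write rows to the outbox as an RFC 4180 CSV file.

    Illustrative sketch: the file name pattern POS_<yyyymmdd>.csv is an
    assumption, not the real PMI naming convention.
    """
    out_path = outbox / f"POS_{export_day:%Y%m%d}.csv"
    # newline="" prevents extra newline translation, so the csv module's
    # CRLF line endings (per RFC 4180) are written verbatim.
    with out_path.open("w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)  # default dialect: comma, double-quote, CRLF
        writer.writerows(rows)
    return out_path
```

Fields containing commas are automatically enclosed in double quotes, as RFC 4180 requires.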

Step 2: communication and data transfer methods

The default choice is to use HyperText Transfer Protocol over SSL/TLS (HTTPS) for both Web service and flat file transfer.

In the case of file communication (or transfer), transmission of the data files is automatically initiated by a PMI file agent as soon as the files are saved in a predetermined folder. The transmission to the PMI cloud-based solution is based on either NET.TCP or HTTPS. These secure protocols (using authentication, encryption, etc.) are commonly used by professional websites to transfer sensitive data (e.g. payment transactions) over the internet.
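To make the HTTPS option concrete, the sketch below builds (but does not send) an HTTPS upload request for an output file using Python's standard library. The endpoint URL and header names are placeholders, not the real PMI endpoints; the actual agent is a .NET service.

```python
import urllib.request
from pathlib import Path

def build_upload_request(file_path: Path, endpoint: str) -> urllib.request.Request:
    """Build an HTTPS PUT request carrying one output file.

    Illustrative sketch: endpoint URL and the X-File-Name header are
    assumptions for demonstration only.
    """
    data = file_path.read_bytes()
    return urllib.request.Request(
        endpoint,
        data=data,
        method="PUT",
        headers={
            "Content-Type": "application/octet-stream",
            "X-File-Name": file_path.name,  # lets the server apply the naming convention
        },
    )
```

In production, the request would be sent over TLS with authentication; protocols such as FTPS or SFTP can be substituted where customers prefer them.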

However, we do support customers who, for different reasons, prefer other data communication and transfer protocols like FTPS, SFTP, etc.

Once the output files from source applications have been downloaded and stored in line with the agreed naming convention in the PMI Agent Outbox folder (step 1), PMI File Agent, a small Windows service (.Net C#), initiates a secure file communication/transmission (step 2). More specifically, the agent is designed to perform the following main functions:

  • At predefined time intervals, monitor the PMI File Outbox for predefined file names.
  • When a new file arrives (identified by the naming convention and its create/modify time), initiate transmission.
  • Restart itself when required.
  • Delete files upon successful transmission (to avoid unnecessarily filling up storage space).
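The file-selection part of the agent's behaviour can be sketched as follows. This is a simplified Python illustration of the logic described above (the real agent is a .NET C# Windows service); pattern names and the `already_sent` bookkeeping are assumptions for demonstration.

```python
import fnmatch
from pathlib import Path

def files_to_transmit(outbox: Path, patterns, already_sent) -> list:
    """Select new outbox files that match the agreed naming convention.

    Illustrative sketch: `patterns` are glob-style file-name patterns;
    `already_sent` maps file name -> mtime of the last transmitted copy,
    so unchanged files are skipped.
    """
    picked = []
    for f in sorted(outbox.iterdir()):
        if not any(fnmatch.fnmatch(f.name, p) for p in patterns):
            continue  # ignore files outside the naming convention
        if already_sent.get(f.name) == f.stat().st_mtime:
            continue  # unchanged since the last successful transmission
        picked.append(f)
    return picked
```

After a successful transmission, the real agent deletes the file; here that would correspond to removing it from the outbox and recording its mtime.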


The functioning of the PMI File Agent has proven to be very robust and stable. Experience from hundreds of properties so far suggests minimal need for manual intervention from customers’ IT personnel. However, should the agent fail, the d2o support team may have to liaise with the customer’s personnel at the property to check the existence of the agent on the local server.

For more information about technical requirements for PMI File Agent, how to install it, and to download the Microsoft service, please see below:

Step 3: loading and mapping of source data to PMI

Once the output data from the source application has been received, it is immediately validated, imported and loaded into PMI, a robust multi-tenant application architecture.

Data errors due to missing account mappings or files will be highlighted on PMI Desktop for review and corrective actions by users.

Step 4: user access to management information in PMI

d2o uses SSL/TLS (Secure Sockets Layer / Transport Layer Security), the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browsers remains private and unaltered. SSL/TLS is an industry standard and is used by millions of websites to protect their online transactions with their customers.

Users can securely access the PMI cloud-based application through a web browser via an HTTPS connection (with logon authentication), i.e. the same level of security as, for instance, any secure professional website capturing sensitive information such as payment transactions and personal information.


There is no need to install any additional software locally in this respect.

PMI response time: Based on experience accumulated over more than ten years and from thousands of end users located in Europe, Africa, the US and Asia, response time has not been an issue or an inconvenience factor. A normal session in the PMI application will typically require less than 10 KB per second. The ultimate response time experienced by end users will, of course, be influenced by the available bandwidth at the respective properties.

Historical data requirements

For comparison purposes and for the calculation of variables, trends, predictive analytics, etc., PMI requires historical data to be loaded as part of a one-off exercise. The source data elements and associated values are the same for the historical load as for the daily feed. The main difference is the date range to be selected. Unless the property has recently opened, it is typically recommended that the historical data load consist of the 12 months prior to the month of go-live.

An example: If PMI is to go live (i.e. be ready for business use) on Jun 1, 2012, the historical data load needs to consist of transaction values pertaining to data from Jun 1, 2011 up to and including May 31, 2012.
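The recommended historical range can be expressed as a small helper, mirroring the daily-feed sketch. This is illustrative only (the function name is not from the specification); it returns the 12 months prior to the month of go-live.

```python
from datetime import date, timedelta

def historical_range(go_live: date) -> tuple[date, date]:
    """Return the (start, end) dates for the one-off historical load.

    Illustrative sketch: the 12 months prior to the go-live month,
    ending the day before go-live.
    """
    end = go_live - timedelta(days=1)                 # day before go-live
    start = date(go_live.year - 1, go_live.month, 1)  # same month, prior year
    return start, end
```

For a go-live on Jun 1, 2012, this yields Jun 1, 2011 through May 31, 2012, matching the example above.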

To understand which data elements are required to be exported from which source applications, please refer to “Step 1: the data elements required from source applications, and the output formats to be used”.

Data element library

Below, we have included examples of mapping tables. A mapping table serves to define how data elements in one application (source) relate to data elements in another (target).
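As a minimal sketch of how such a mapping table is applied, the snippet below renames a handful of hypothetical POS source fields to hypothetical PMI target fields. All field names here are invented for illustration; real mappings are agreed per source application, and unmapped elements would be surfaced for review as described in Step 3.

```python
# Illustrative source-to-target mapping table (all names hypothetical).
POS_MAPPING = {
    "outlet_id": "pmi_department",
    "rev_food": "pmi_food_revenue",
    "rev_bev": "pmi_beverage_revenue",
    "covers": "pmi_guest_count",
}

def map_record(source_row: dict) -> dict:
    """Rename source fields to their target fields.

    Returns the mapped fields plus a list of unmapped source fields,
    which would be flagged for review and corrective action.
    """
    mapped = {POS_MAPPING[k]: v for k, v in source_row.items() if k in POS_MAPPING}
    unmapped = [k for k in source_row if k not in POS_MAPPING]
    return {"mapped": mapped, "unmapped": unmapped}
```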

Where available and permissible, examples of XML, flat file or other output formats can be provided by the d2o team upon request.

Points of Sale (POS)

F&B Reservation Systems (FRS)

Time Keeping/Management System (TKS/TMS)

Property Management System (PMS)

Revenue Management System (RMS)

Sales & Catering Management System (S&C)

Accounting Management System (AMS)

  • Visma
