Tried and True: Oracle EBS File Transfer Integration part 2

Welcome to part 2 of my series on Oracle EBS file transfer integration. As I mentioned in my first post, one of my team’s major concentrations is the management of Oracle EBS and the many fringe applications that allow customers to evolve and keep up with growing business needs.

For this post, I evaluated 10 different vendors providing a wide range of file transfer protocols (SFTP, FTPS, HTTPS, and AS2), and I cover in detail why I chose Linoma Software's GoAnywhere Services (MFT) solution, the architecture design we implemented, and its growth potential.

With the requirements set forth in part 1 of this series in mind, the following is a detailed overview of the design and implementation completed by our engineering staff.


Architecture Diagram


Systems Deployment Design
The above architecture diagram is separated into two types of data flow. The first (left diagram) shows the flow of application data from a service-based view. The second (right diagram) shows the flow of data from a server- and location-based view. Both views are needed to fully understand the factors required to meet our needs.

Application User
The generic user was added to all Linux systems in our footprint and made part of the inf group for proper read/write access to the shared storage location. As I explained in the previous post, this generic user is configured with no login capabilities globally, but on the MFT systems this user was given a /bin/bash shell and its umask was adjusted to 002.
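The setup above can be sketched roughly as follows. The user and group names (appmft, inf is from the text) are illustrative stand-ins, and the runnable part simply demonstrates why umask 002 matters for group-shared storage:

```shell
#!/bin/sh
# Illustrative sketch -- "appmft" is an assumed account name.
# Run as root on each Linux host:
#   useradd -m -g inf -s /sbin/nologin appmft   # no login globally
# On the MFT systems only, grant a real shell:
#   usermod -s /bin/bash appmft
#
# umask 002 makes newly created files group-writable (mode 664), so
# every member of the shared group can modify files on the storage:
umask 002
f=$(mktemp -u /tmp/umask-demo.XXXXXX)   # name only; create via redirection
: > "$f"                                # created as 666 & ~002 = 664
stat -c '%a' "$f"                       # prints 664
rm -f "$f"
```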

Application Storage
The MFT application running on the PRD MFT MT systems is started by the generic user and accesses the shared storage via an NFS mount point.
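A minimal sketch of that mount follows; the NFS server name, export path, and mount options are assumptions, since the actual values are not given here:

```shell
# Hypothetical /etc/fstab entry on each PRD MFT MT node:
#
#   nfs-server:/export/mft   /opt/mft/shared   nfs   rw,hard,noatime   0 0
#
# After mounting, the generic user (running with umask 002) reads and
# writes the application data here:
#   mount /opt/mft/shared
#   sudo -u appmft touch /opt/mft/shared/.write-test
```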

Backup Configuration
The MFT application source and configuration files are stored on an NFS shared storage location but are also backed up for safety on a regular basis. For the sake of DR, storage replication and database synchronization are used, but they are outside the scope of this article. The MFT database is backed up nightly using a homebrew LVM-based backup methodology, which completes with zero downtime and only a slight load increase during the backup process.
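The general shape of an LVM snapshot backup for MySQL is sketched below. Volume-group names and paths are invented, and this is a rough outline of the technique rather than the actual script – note the read lock and the snapshot must happen in the same client session (wrappers such as mylvmbackup handle this for you):

```shell
#!/bin/sh
# Rough sketch: quiesce MySQL just long enough to snapshot the LV.
# "system" runs a shell command from inside the mysql client, so the
# read lock is still held when the snapshot is created:
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 5G --name mysql_snap /dev/vg_data/lv_mysql
UNLOCK TABLES;
SQL

# Copy from the snapshot at leisure; the live database keeps serving:
mount -o ro /dev/vg_data/mysql_snap /mnt/mysql_snap
tar czf "/backup/mysql-$(date +%F).tar.gz" -C /mnt/mysql_snap .
umount /mnt/mysql_snap
lvremove -f /dev/vg_data/mysql_snap   # snapshots must not linger
```

The lock is held only for the second or two it takes lvcreate to run, which is what keeps the perceived downtime at zero.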

Virtualization Configuration
The MFT Services and Gateway systems are small in both size and resource needs, which helps us build a robust infrastructure configuration. We decided that all the required systems would be created using a mix of VMware and Oracle VM virtualization; any virtualization platform with an HA option could be used to meet the needs of this infrastructure.

High Availability Design
High availability can be tough to implement because it can mean different things to different people; with this in mind, here are the requirements I decided on for this HA implementation. Knowing that many of the customers would be utilizing automation to integrate with our solution, I wanted to make sure that, even if a connection was lost due to a system failure, any re-attempt would complete successfully. This means the application must respond within one minute if a failure occurs, which requires a mix of application-, virtualization-, and database-layer HA configurations.
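The client-side behavior this target assumes can be sketched as a simple retry loop – if a transfer fails mid-failover, a retry within roughly 60 seconds should succeed. The host and batch file names are illustrative, not real endpoints:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, pausing between attempts.
# RETRY_DELAY defaults to 15s, so four attempts span the one-minute
# window the HA design guarantees.
retry() {
    attempts=$1; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "attempt $i failed; retrying in ${RETRY_DELAY:-15}s..." >&2
        sleep "${RETRY_DELAY:-15}"
        i=$((i + 1))
    done
    return 1
}

# Example (illustrative names):
# retry 4 sftp -b batch.txt appmft@mft.example.com
```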

Application Layer High Availability Design
There are two application layers that required HA configurations to complete our setup: GoAnywhere Services, which is fronted by the GoAnywhere Gateway product. To implement high availability, we simply put a load balancer in front of the DMZ nodes, then configured GoAnywhere clustering. If a failure occurs, the load balancer forces traffic from the failed node to its active peer – detection and adjustment usually take less than 15 seconds.
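As one hypothetical way to realize this, a TCP-mode HAProxy front end could sit ahead of the Gateway pair. The addresses, ports, and timer values below are invented for illustration, not the actual deployment:

```
frontend sftp_in
    mode tcp
    bind *:22
    default_backend gateway_pair

backend gateway_pair
    mode tcp
    # Check every 5s and mark a node down after 2 misses, keeping
    # worst-case detection inside the ~15 second window noted above.
    server gw1 10.0.1.11:22 check inter 5s fall 2 rise 2
    server gw2 10.0.1.12:22 check inter 5s fall 2 rise 2 backup
```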

Virtualization Layer High Availability Design
Both virtualization platforms used support HA as an available option, and each VM has been configured with it enabled. This option does not mean the server stays up through a hardware failure, only that the VM restarts automatically on a host that is still running. This restart usually takes between one and two minutes.

Both virtualization platforms used support “anti-affinity groups” as an available option, and all paired VMs were configured with this functionality. This means the virtualization manager ensures that the paired DMZ, MT, and DB systems never reside on the same physical host.
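On the VMware side, such a rule could be created with the govc CLI roughly as below. The cluster path and VM names are assumptions, and the exact flags should be verified against `govc cluster.rule.create -h` for your version:

```shell
# Hypothetical: keep the paired DB VMs on different physical hosts.
govc cluster.rule.create \
    -cluster /dc1/host/prod-cluster \
    -name mft-db-anti-affinity \
    -enable \
    -anti-affinity PRD-MFT-DB1 PRD-MFT-DB2
```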

Database Layer High Availability Design
The database layer does NOT have a high availability configuration – this was intentional. We determined that the time and money required to implement and manage database HA was too great. MySQL-based HA timing can be adjusted, but by default failover occurs after a 60-second timeout on the primary node. The PRD MFT DB1 system was configured with VMware-based HA and will automatically start the MySQL service, so if the VMware host running PRD MFT DB1 fails, it takes about 90 seconds for the system to reboot and the MySQL service to respond to requests. GoAnywhere Services requires the database to be available before it will accept new requests, but data transfers already in progress continue to work as expected. If the database becomes unavailable, the system resumes operations immediately once it returns – no application restart is required.


Now onto the functional design and implementation completed during this project…

Auditing Compliance Design

Logging Implementation
The GoAnywhere Services application has the ability to send all logging to a central logging solution as well as store logs locally for easy review. Its log management interface is easy to use for triaging issues and allows my team to adjust the system's configuration. The format of all outgoing logging can be adjusted, which makes it easy to point GoAnywhere Services' remote logging at an already-deployed central logging solution.
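For example, if the central solution were rsyslog-based, the receiving side might look like this sketch – the port, program name match, and file path are assumptions:

```
# Hypothetical rsyslog receiver on the central logging host.
module(load="imtcp")
input(type="imtcp" port="514")

# File the MFT traffic separately by reported program name:
if ($programname contains "goanywhere") then {
    action(type="omfile" file="/var/log/central/goanywhere.log")
}
```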

To meet corporate standards, we keep three months of logging locally within the system, while the additional nine months are stored in both file and database form – compressed, this is small enough to retain monthly copies across 12 iterations.
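A logrotate policy along these lines could enforce the retention window. The paths are invented, and this only approximates the local/archive split described above (logrotate by itself keeps a single rotation uncompressed via delaycompress; the three-month uncompressed window would need an external copy step):

```
# Hypothetical /etc/logrotate.d/goanywhere fragment.
/opt/goanywhere/logs/*.log {
    monthly
    rotate 12        # twelve monthly iterations in total
    compress
    delaycompress
    missingok
    notifempty
}
```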

Audit Implementation
GoAnywhere Services' granular user control allowed our team to create users who can audit security configurations, user access, and log outputs without full administrative access. GoAnywhere Services also has functionality that allowed me to set resource maximums per user while triggering notifications and adjustments when utilization exceeds our defined thresholds.


User Control Design

User Access Control
GoAnywhere Services has an extremely detailed access control model; each user can be given any number of the options listed in the previous post. These options can be applied recursively or on a folder-by-folder basis, which offers even greater control. The great part about this is that user control can be adjusted easily by any administrator without the need for Linux/Unix knowledge.

User Group Control
User group control is an additional GoAnywhere Services function that helps minimize deployment time for new users. It allows deployment of a standardized user group – essentially a user template with standard access control already applied. This minimizes per-user alterations and helps prevent administrative errors.

User Authentication Control
In any shared solution like this you want granular authentication access – but with this granularity often comes large overhead. GoAnywhere Services has a very thin authentication model that does not require synchronization between different endpoints; instead, it simply validates the user's password. Here's what happens: when a user is created in GoAnywhere Services, its password remains in AD, LDAP, NIS, or another central authentication service, and GoAnywhere validates the supplied password against that service at login. With this design, no user information is passed back and forth between GoAnywhere Services and the central authentication system, which minimizes the impact on the Services application if central authentication is unavailable.

With any architected solution like this, you want to be aware of your configured maximums so you can plan for growth requirements as well as anticipate issues with resource capacity.


Capacity Limitations

The deployment discussed above has the following potential maximum capabilities:

GoAnywhere Services has a theoretical maximum of 500 concurrent users per the default JVM sizing. We doubled this JVM sizing, which gives us a theoretical maximum of 1,000 concurrent users per node, or 2,000 for the environment. This maximum can be raised simply by increasing the number of internal and external nodes, though we don't believe that will ever be needed. The only limitation we currently have with no room for increase is a Linux limit of 255 imposed by our storage segmentation model.
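The capacity arithmetic above can be spelled out in a few lines; the 500-user figure is the vendor's default-JVM sizing quoted in the text, not something measured here:

```shell
#!/bin/sh
# Concurrent-user ceiling after doubling the JVM sizing.
default_per_node=500
sizing_factor=2      # JVM sizing doubled
nodes=2              # the paired service nodes
per_node=$((default_per_node * sizing_factor))
total=$((per_node * nodes))
echo "per node: $per_node, environment: $total"
# prints: per node: 1000, environment: 2000
```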

The 150 Mbps external data throughput maximum depends on the public internet provider's burstable capacity. We have never seen saturation on this link, but bandwidth can be increased at additional cost.

There are many ways to resolve a need in our ever-changing technical landscape – sometimes the designs need major revamping after deployment. Every once in a while you deploy something that truly is as great as its design, and this MFT deployment was just that. It has been stable, easy to manage, globally adopted, and has such a low total cost of ownership that we wanted to share our experience with others in the Oracle application world. As always, please feel free to share comments, thoughts, and questions.
