Integration of third party software

If someone has already provided a solution to a problem you’re facing, it may make sense to use it rather than trying to solve it yourself.

In the following, I will present my personal view on how to reuse such a solution, focusing on reusing and integrating a piece of software. Much of it also applies to media content like videos and audio files, or to documents.


Check requirements

First of all, does the foreseen software provide all the required features and fulfill all the requirements? Is it supported? Will it integrate nicely with the rest of your system?

Intellectual property rights (e.g. patents, trademarks, copyright) shall be considered as well.


Check license and target group

When it is clear that this particular piece is THE solution, it is time to check whether it can be used for what it is planned for.

Just because it is available on the internet does not mean that it may be used for all purposes. Copyright law (the statutes and court decisions regarding the rights to creative and literary works) has to be respected. The owner shall provide a license agreement listing the restrictions on how the software (or media content, document, etc.) may be used.

A distinction has to be made between COTS (“commercial off-the-shelf”) software, which can be free or paid, and OSS (open source software).

For OSS, the copyleft effect (typically for software licensed under the GNU GPLv2 license) has to be considered. As so many open source licenses exist, and as each file can contain its own license agreement, the use of tools like Fossology becomes practically mandatory.
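To illustrate why tooling is needed, here is a minimal, hypothetical sketch (nothing like the real Fossology matching engine, and all names are mine) that scans a source tree for a few SPDX-style license identifiers:

```python
import os
import re

# Only a handful of patterns; a real scanner such as Fossology matches
# against hundreds of full license texts, not three regular expressions.
LICENSE_PATTERNS = {
    "GPL-2.0": re.compile(r"GNU General Public License.*version 2|GPL-2\.0",
                          re.I | re.S),
    "MIT": re.compile(r"MIT License|SPDX-License-Identifier:\s*MIT", re.I),
    "Apache-2.0": re.compile(r"Apache License,?\s*Version 2\.0|Apache-2\.0",
                             re.I),
}

def scan_tree(root):
    """Return a mapping {relative_path: detected license or 'UNKNOWN'}."""
    report = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    head = fh.read(4096)  # license headers sit at the top
            except OSError:
                continue
            detected = "UNKNOWN"
            for lic, pattern in LICENSE_PATTERNS.items():
                if pattern.search(head):
                    detected = lic
                    break
            report[os.path.relpath(path, root)] = detected
    return report
```

Even such a toy scanner makes the point: every file flagged UNKNOWN still needs a human decision, which is exactly why dedicated tools exist.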

Worse, even if the license agreement allows the integration, it does not mean that the result can be redistributed everywhere! Some countries do not allow the use of certain technologies, e.g. advanced cryptography algorithms. Specific regulations and Export Control and Customs (ECC) must be considered as early as possible.


Some examples may be useful to illustrate this:

  1. The Oracle Java Virtual Machine (JVM) code license agreement targets “General Purpose Desktop Computers and Servers”: if the JVM is installed on a server dedicated to a specific task, this agreement does not apply.
  2. It is not allowed to redistribute Sysinternal tools (e.g. psexec). End customers shall download the tools from Microsoft website.
  3. Reusing a code snippet from Stack Overflow is not as simple as one could think.
  4. A license is mandatory to redistribute Adobe Reader.


Last, what happens if the author stops the development of the item, or does not support it anymore? An escrow agreement may be a solution, as may developing in-house technical expertise on the product.


Redistribute OSS source code and any modifications

The GPL requires that the source code be accessible to the end customer. It can be provided on demand or delivered with the binaries. Easy, but the exact same version has to be provided. Source code and binaries for OSS items shall be downloaded at the same time, ideally directly from the author (the source has to be trusted), and checked for consistency (a corrupted archive has no value).
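Checking a downloaded archive for consistency can be as simple as comparing its checksum against the one published by the author. A minimal sketch (the helper names are mine):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it entirely."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_archive(path, expected_sha256):
    """Compare against the checksum published by the author, i.e. the
    trusted source; raise if the archive is corrupted or tampered with."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return True
```

The expected checksum must of course come from a trusted channel (the author’s website over HTTPS, a signed release note), otherwise the comparison proves nothing.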

Redistributing the source code and any modifications with the product seems to me the easiest way to fulfill this GPL requirement; the alternative would be to provide the complete source code for the exact version to anyone asking for it. This can easily be checked, and it implies the use of a framework to keep things organized (all source code in one place).

Document the use of OSS in the product documentation (and online help if applicable), ideally with the data from the bill of material (see below).

Note: OSS source code redistribution may not be the only obligation imposed by the license.

Create a Bill of material

A system is constantly evolving. For instance, the author may deliver new versions fixing some security flaw that has been discovered (more on this in the chapter on maintenance), and assuming this is not the only 3rd party item included in the software, it is difficult to keep an overview.

The bill of material shall list all software components, media and documents, with their corresponding owner, download location, license, etc. It is difficult to create initially, but keeping it up to date provides great value.
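A bill of material does not need a complex tool to start with; a structured list that can be serialized and versioned alongside the product is enough. A hypothetical minimal sketch (the field names are illustrative, adapt them to your own needs):

```python
import csv
import io

# Illustrative field set for a minimal bill of material.
BOM_FIELDS = ["component", "version", "owner", "license", "download_url"]

bom = [
    {"component": "zlib", "version": "1.2.11", "owner": "zlib.net",
     "license": "Zlib", "download_url": "https://zlib.net/"},
    {"component": "OpenSSL", "version": "1.1.1", "owner": "OpenSSL Project",
     "license": "OpenSSL", "download_url": "https://www.openssl.org/source/"},
]

def bom_to_csv(entries):
    """Serialize the BOM so it can be committed to version control
    and diffed between releases."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=BOM_FIELDS)
    writer.writeheader()
    writer.writerows(entries)
    return buffer.getvalue()
```

A CSV in the repository is deliberately low-tech: every release gets a reviewable diff of what third party material it actually ships.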



Maintenance

Once the first version of the system is delivered, some maintenance may be required.

Microsoft provides security hotfixes every month (Patch Tuesday). Security problems affect any kind of software, and the provider is responsible for providing fixes to its customers. At the very least, the items listed in the CVE (Common Vulnerabilities and Exposures) database shall be fixed. It is no secret that a well-defined vulnerability management process to address security vulnerabilities is a must. A good practice is to subscribe to the announcement mailing list of the software and to get involved in the support community forums, which allows gaining first-hand experience.
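To illustrate how the bill of material feeds vulnerability management, here is a deliberately simplified, offline sketch; in practice the advisory data would come from the CVE/NVD feeds or the vendor’s security mailing list, not from a hard-coded table:

```python
# Hypothetical local advisory table keyed by (component, version).
advisories = {
    ("openssl", "1.0.1"): ["CVE-2014-0160"],  # Heartbleed
}

def affected(component, version):
    """Return the known CVE identifiers for a (component, version) pair,
    or an empty list when nothing is on record."""
    return advisories.get((component.lower(), version), [])

# Cross-checking every entry of the bill of material against such a table
# tells at a glance which shipped components still need a fix.
```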

Note: I won’t describe anything about liability and warranty aspects. The provider is, as far as I know, responsible towards his customers, e.g. in case of a patent infringement. This may be an additional concern when using OSS (this article provides additional information).

In order to keep track of those changes, common practice is to use a version control system, e.g. Git for source code. But which tool can be used for binaries? SVN (Subversion) may be a good choice, as it provides a global revision number for each commit. An alternative would be Artifactory (open source edition).

The ultimate target is the reproducible build, which allows proving that the system build process meets the high quality standard customers are paying for (or required by the sector of activity).



Integrating a third party software component brings in external competence and saves development time, enabling a faster time-to-market. At the same time, it involves some obligations towards the customers.

Final note: Ensure coherence by keeping the variety of third party software components low.





How websites are tracking their visitors

In case you did not know it: “If you are not paying for it, you’re not the customer; you’re the product being sold.” (Andrew Lewis)

The theory is good, but by using a free service, are you aware of ALL the consequences? The Collusion website shows us a demonstration and provides a Firefox plugin to check it for ourselves.

Maybe technology will outdo humans, due to the systematic use of artificial intelligence and big data technologies. Luckily for us, artificial intelligence still lacks intuition, but advertisers are getting better products as the amount of data grows (big data, Internet of Things, Android Auto…).

  • The following article (in French) shows some recent technologies used by companies to build visitor profiles:

Architecture of a system integration system

A system integration system may act as a middleware, consolidating the myriad of isolated systems into a common data source for other applications.
Such a system is typically part of a Business Process Management (BPM) system or of an operational intelligence (OI) solution.


Data sources Layer

  • Subsystems: customer systems. These can be classified by their compatibility with common data interfaces: OPC, REST, ODBC, CSV, Oracle, SQL Server… Some may be proprietary and industry-specific (PI, IP21)
  • Subsystem connectors: a subsystem-specific connector is needed to retrieve the data
    Ideally, use the native API of the subsystem to connect to it
    Alternatively, use an export function of the system, or a standard such as OPC
    Data shall be retrieved either on a regular basis (in order to calculate key performance indicators) or at runtime (for up-to-date data)
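The connector idea above can be sketched as follows; `CsvConnector` and `poll` are hypothetical names, and a real connector would use the subsystem’s native API rather than a CSV export:

```python
import time

class CsvConnector:
    """Hypothetical adapter for one subsystem. Each subsystem gets its own
    adapter class, all exposing the same read() interface toward the
    cache layer."""

    def __init__(self, path):
        self.path = path

    def read(self):
        # Parse the subsystem's CSV export into a list of row dicts.
        with open(self.path) as fh:
            header = fh.readline().strip().split(",")
            return [dict(zip(header, line.strip().split(",")))
                    for line in fh if line.strip()]

def poll(connector, interval_s, handle, cycles):
    """Retrieve data on a regular basis and hand each batch to the cache
    layer (the `handle` callback)."""
    for _ in range(cycles):
        handle(connector.read())
        time.sleep(interval_s)
```

The uniform `read()` interface is the point: the cache layer never needs to know whether the data came from OPC, ODBC or a flat file.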

Cache Layer

  • Central database:
    – Ideally, caches the data collected from the subsystems, for performance and to optimize the load on the network and on the subsystems
    – The calculation engine stores its results in it
    – Stores the display configurations
    – Stores authentication data (user management)


  • Calculation engine
    – Performs calculations based on the data retrieved from any subsystem
    Those calculations shall provide ready-to-use data sets for the displays (e.g. the data content of a table)

Most of the data sets involve computing large quantities of data. For instance, all values are calculated regardless of location; therefore some data may be calculated even if it is not required.

– Calculates cyclical values, e.g. hourly or monthly averages, or any other typical values

– The calculation engine shall allow using the analytics capabilities of the underlying database (e.g. Oracle or SQL Server provide high-level calculation functions)

– The calculation engine may use ETL (Extract, Transform and Load) systems for the integration
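A cyclical calculation can indeed be pushed down to the database, as suggested above. A sketch using SQLite in place of the central database (table and column names are illustrative): raw samples are grouped into hourly buckets and the averages are stored back for the displays to consume.

```python
import sqlite3

# SQLite stands in for the central database in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts TEXT, tag TEXT, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)", [
    ("2015-06-01 10:05", "flow", 10.0),
    ("2015-06-01 10:45", "flow", 14.0),
    ("2015-06-01 11:10", "flow", 20.0),
])
# The hourly bucket is simply the first 13 characters of the timestamp
# ("YYYY-MM-DD HH"); the database does the aggregation work.
conn.execute("""
    CREATE TABLE hourly_avg AS
    SELECT substr(ts, 1, 13) AS hour, tag, AVG(value) AS avg_value
    FROM samples GROUP BY hour, tag
""")
rows = conn.execute(
    "SELECT hour, tag, avg_value FROM hourly_avg ORDER BY hour").fetchall()
```

Letting the database aggregate avoids shipping every raw sample to the application layer, which is exactly the load optimization the cache layer is there for.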

  • Identity management: manages the rules defining which user / user group can view which element (display, value…)
    The identity management module ensures that each user sees only the data he is granted access to.
    Users can be grouped into user groups.
    Displays can be grouped into display groups (e.g. specific to a particular group of people).
    User groups can be assigned to display groups.
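The group model described above can be sketched in a few lines; all names are illustrative:

```python
# Users belong to user groups, displays belong to display groups,
# and user groups are granted access to display groups.
user_groups = {"alice": {"operators"}, "bob": {"managers"}}
display_groups = {"line1_overview": {"production"}, "finance_kpi": {"finance"}}
grants = {"operators": {"production"}, "managers": {"production", "finance"}}

def can_view(user, display):
    """A user sees a display only if one of his user groups is granted
    one of the groups the display belongs to."""
    allowed = set()
    for group in user_groups.get(user, set()):
        allowed |= grants.get(group, set())
    return bool(display_groups.get(display, set()) & allowed)
```

Grouping on both sides keeps the grant table small: adding a display to an existing display group makes it visible to all already-entitled user groups without touching any individual user.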

Display Layer

  • A user-specific menu system showing only the menu items and elements the logged-in user is granted access to.
  • Dashboards: displays, ideally web-based, showing graphical elements (trends, values, KPIs…).
    Dashboard elements can be simple graphical elements (trends, values) or complex ones (KPIs, traffic lights, tables, spider charts)
    Dashboard elements may be reused, and each user shall be able to create his own dashboards by combining dashboard elements

The dashboard elements display data from:
– the subsystems; the caching mechanism allows optimizing the responsiveness of the system
– the calculation engine; its computed data allows displaying complex data combined from different systems

Displays typically filter values from the data sets:
– Trend over a specific time range
– Table for a specific location or element

Displays provide the following functions:
– A contextual menu with menu items specific to the selected element
– Links to further systems, e.g. documentation or other systems. Parameters allow drilling down directly to the right element.

  • An API provides access to the data for 3rd party products.

Stratus ftServer vs EverRun

An unavailable business-critical application is expensive. Forrester Research pegs the average cost of one hour of downtime at $150,000 for the typical enterprise (Source); therefore the ROI (return on investment) of a high-availability solution may come faster than you think.

Stratus ftServer

The Stratus ftServer system provides the highest (to my knowledge) availability level, coupled with the highest performance.
The system is based on two identical parts named ‘enclosures’ that provide redundant hardware components; each enclosure performs the same instructions at the same time and constantly monitors its partner (lockstep technology).

Stratus ftServer architecture (source)

On top of the hardware, the Stratus ftServer system software (ftsss) monitors the hardware and performs system diagnostics. Typically, if an enclosure goes down due to e.g. a hardware failure, the second enclosure is immediately promoted to master, and the server may send an alarm to Stratus through a dedicated network interface, so that Stratus can send a replacement enclosure to the customer.
When the defective enclosure is replaced, both system parts synchronize and the ftServer may re-enter full duplex, the fault tolerant state.

Stratus price list as of 2015-06-01

EverRun MX, Express and Enterprise

The EverRun system from Marathon (acquired by Stratus in 2012) adds a software layer on top of standard Intel servers. The EverRun software adds embedded clustering, fault tolerance, and data replication to XenServer (a Linux-based hypervisor).
The EverRun software allows reaching availability levels that compete with hardware-based fault tolerant servers.

Datasheet EverRun MX

Having the two servers independent (not part of a single chassis as in the ftServer) allows separating them physically, e.g. in two separate rooms or buildings. The software checkpoints every few hundred milliseconds, so the two systems keep memory, I/O and applications synchronized; if an application instance fails in a virtual machine on the primary machine, processing continues on the secondary machine.

The EverRun MX solution was cheaper (due to the use of standard hardware) but slower, and its architecture is more complex (several networks, virtualized environment) than the ftServer’s. Moreover, EverRun MX supports Windows guests only.

EverRun architecture (source)

The successor of EverRun MX, EverRun Enterprise, is targeted at making the system easier to use. The _SplitSite_ feature allows the software checkpointing to be done between mirrored systems over campus distances while still being considered fault tolerant (How the availability engine works).
Since version 7.2, EverRun Enterprise has been based on the Linux KVM hypervisor (Source).

A license for everRun Enterprise costs $12,000 for a two-node pair (Source). This price includes neither the hardware costs nor the support for it.

EverRun Express is a cheaper solution that does not provide fault tolerant capabilities; it costs around $5,000. More information on EverRun Express.

Stratus Cloud Solutions

Stratus’ newest offering puts everRun Enterprise onto OpenStack clouds, with the KVM hypervisor on each node in the cloud.

Server Availability services
Datasheet Stratus’ Cloud Solutions

Keeping critical systems up and running 24/7

99.999% uptime… Does it mean something to you? If not, think of a system unavailable for as little as 5 minutes per year. For the rest of the time, your business can rely on it to serve those critical applications.
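The arithmetic behind the availability classes is straightforward:

```python
def downtime_per_year(availability_pct):
    """Minutes of allowed downtime per year for a given availability level."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

# "Five nines" (99.999 %) leaves roughly 5.3 minutes of downtime per year,
# while 99.9 % already allows almost 9 hours.
```

Each extra nine divides the allowed downtime by ten, which is why the jump from HA-class to FT-class availability is so expensive.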

This is only possible using specific solutions. Two classes of such systems have to be distinguished:
Fault tolerant (FT):
Fault tolerant (FT) solutions provide full redundancy: a single copy of the operating system and the applications runs on two physical servers. The two servers run in parallel and execute the same instructions at the same time, like a mirror. In case the primary server has a hardware failure, the secondary system takes over.

High Availability (HA)
High Availability (HA) solutions provide loosely coupled servers with failover capabilities. Each system is independent, but the health of each server is monitored. In case of a failure, _applications will be restarted_ on a different server in the pool of the cluster. Even if the applications start very quickly, a downtime cannot be avoided, especially in case of unplanned failures. The downtime can be minimized, but at a very high cost.

Fault tolerance and high availability actually solve different problems. A fault tolerant system provides great resilience to hardware faults, but as only one instance of the operating system is running, any software fault affects both machines and the entire solution goes down…

The main question is cost. Almost any hardware component can be configured in a resilient fashion: NIC teaming, RAID, redundant power supplies… If the same component keeps failing on a regular basis, the root cause should be fixed rather than hidden behind redundancy.
It has to be kept in mind that redundant systems are expensive. Logically, the hardware cost of an FT system adds a 100% resource requirement, not to mention the performance degradation associated with keeping two systems synchronized, which can be significant (>50% in some cases).

Even if a loosely coupled system cannot provide zero downtime for unplanned hardware failures, it does protect against a wider range of failures, both in hardware and in software. With failover clustering, it is possible to move the applications to another server during the time needed to patch the OS or the applications.

Finally, managing such complex systems requires trained personnel, and they too have to be redundant to avoid extra downtime in case of a failure.

I have experience with some HA and FT solutions from Stratus, and each of them has its pros and cons.
Stratus ftServer is a fault tolerant solution based on specific hardware: two enclosures in a single chassis. In this high-end solution, all components are constantly synchronized; the synchronization performance overhead has to be considered.
Stratus everRun Enterprise provides a cheaper, software-based solution, but at the cost of high system complexity: a Linux host running a hypervisor running several virtual machines, plus specific network(s) for the server communication.