Private cloud on Windows Desktop

Introduction

Many users have a desktop computer (or a laptop) running most of the day. Each person at home has a smartphone holding contacts, SMS, photos and other important files.
Unfortunately, nothing is connected: why should I transfer all my contacts or photos to a cloud server just to copy them to my computer, 50 cm away? (Answer: because data is money, and in exchange you get an easy-to-use service for free.)

The goal of this article is to run some services in a “human understandable way” without data leaving home. Services here means sharing information (e.g. contacts, calendar) between the users and devices of the home.

The home network is protected from the outside by the Internet box rented from the Internet provider. Inside the local network, there is no need to encrypt the data, which makes the solution easier to configure. Data may be encrypted on the mobile devices, as those leave the home network. This part is not covered by the article, but is explained here.

Requirements

Technically, the solution shall fulfill the following points:

  • Multi-user solution.
  • A server must run in the background on a Windows 10 desktop machine.
  • Several users must be able to continue to work as usual on the desktop machine.
  • Data shall not leave the home (except on mobile phones).
  • Installation shall be at most moderately difficult and easy to follow.
    (Installation shall also be easy to update – not tested at the moment.)
  • It must be possible to easily uninstall all software.
  • Mobile clients must exist.
  • The following services must be provided:
    • Contacts
    • SMS
    • Calendar
    • Photos
    • Notes
    • Tasks
    • Synchronize data

Restrictions: the server is not architected to run around the clock or to serve many users. It is accessible in the local network but not from outside, both to maximize the security of the solution and to reduce its complexity.

Answer: Nextcloud on Docker

Nextcloud is a proven platform providing the required services (and more). It is based on a PHP framework and requires a database. The German federal IT has adopted it.

Using Docker containers keeps the footprint of the additional services as small as possible.

Docker runs software in an isolated and minimal environment called a container. Each container targets a single task (web server, database).

Caution: the following is technical documentation only. The author takes no responsibility for any data loss caused by following these instructions. Mentioned trademarks are the property of their respective owners. No free support will be offered.

The focus is on Free and Open-Source software. Consider contributing to the common effort with your skills, time or money.

Technical overview of the implementation

Prerequisites

The machine shall have at least 8 GB of RAM; CPU or graphics performance is not the limiting factor in this case. An account with administrative rights is required on the host computer.
A Windows Professional license is preferred, but the Home edition shall be sufficient. In Windows Explorer, open the properties of the This PC node to see the version of Windows in use.
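
The edition can also be read from a command prompt, for instance with this quick sketch (the exact label depends on the Windows display language):

systeminfo | findstr /B /C:"OS Name"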

A cooking-recipe-style file, docker-compose.yml, will be used for this solution. Such a file describes the Docker commands needed to download and run the different containers. Required files will be downloaded if not already present.

Windows Firewall must be stopped for the services to be available in the local network. This shall be acceptable as long as the local network sits behind a box providing NAT (Network Address Translation) functionality.
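
One way to stop it from an elevated command prompt is the following (my own sketch, not an official part of the setup; the same command with state on re-enables the firewall):

netsh advfirewall set allprofiles state off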

Tools

Here is a list of the tools used to run the proposed solution. They are well-known, mature and maintained.

Docker for Windows or Docker Toolbox

Docker Desktop is the state-of-the-art solution for running containers. It requires Windows 10 Professional because it relies on the Hyper-V hypervisor feature.

The alternative is Docker Toolbox, a legacy technology that supports Windows 10 Home.

This software contains all the tools required to create, manage and execute Docker containers. The setup will configure the required Windows features, such as Hyper-V.

The documentation contains the instructions required to check the installation, as well as the basic commands.
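
As a minimal check (the standard Docker verification steps), the following commands shall run without errors:

docker --version
docker-compose --version
docker run hello-world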

Nextcloud

The Nextcloud Docker appliance will be preferred over any community-driven package. It provides most of the functionality required for the proposed solution.

PostgreSQL

Most implementations make use of MariaDB, but I personally recommend PostgreSQL.

As a fork of Oracle MySQL, MariaDB may be encumbered by some limitations in the future (hint: Oracle’s reading of the GPL license for MySQL is ‘different’), due to possible license infringement, which could force many users to upgrade or to switch to an alternative.

Alpine Linux, Nginx, PHP-FPM

The following tools are running behind the scene:

  • Alpine Linux: an operating system reduced to its minimum.
  • Nginx: a lightweight web server.
  • PHP-FPM: an alternative FastCGI process manager for PHP. Using this one or another is simply a matter of choice.

Files required

Docker-compose.yml file

Save the following code to a file docker-compose.yml in any directory on your computer.

version: '3'

services:
  # PostgreSQL database
  db:
    image: postgres:alpine
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
    env_file:
      - db.env

  # Nextcloud application, served through PHP-FPM
  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud-html:/var/www/html
    environment:
      - POSTGRES_HOST=db
    env_file:
      - db.env
    depends_on:
      - db

  # nginx front end, built from the web subdirectory (see below)
  web:
    build: ./web
    restart: always
    ports:
      - 80:80
    volumes:
      - nextcloud-html:/var/www/html:ro
      - nextcloud-data:/var/www/html/data:rw
    depends_on:
      - app

# Named volumes, so the data survives container restarts and upgrades
volumes:
  db:
  nextcloud-html:
  nextcloud-data:

db.env

Copy the following content into a file db.env located in the same directory as docker-compose.yml.

POSTGRES_PASSWORD=MyPassword
POSTGRES_DB=nextcloud
POSTGRES_USER=nextcloud
NEXTCLOUD_TRUSTED_DOMAINS=IP-OF-WINDOWS-HOST

Note: Best practice is to use long and complex passwords.

web\nginx.conf

Create a directory web in the directory where docker-compose.yml and db.env are located.

The configuration of the nginx web server is kept outside the container and will be injected when the image is built.

Copy the following content into a file nginx.conf in this web directory.

worker_processes auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    upstream php-handler {
        server app:9000;
    }

    server {
        listen 80;

        # Add headers to serve security related headers
        # Before enabling Strict-Transport-Security headers please read into this
        # topic first.
        #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
        #
        # WARNING: Only add the preload option once you read about
        # the consequences in https://hstspreload.org/. This option
        # will add the domain to a hardcoded list that is shipped
        # in all major browsers and getting removed from this list
        # could take several months.
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;

        # Remove X-Powered-By, which is an information leak
        fastcgi_hide_header X-Powered-By;

        # Path to the root of your installation
        root /var/www/html;

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        # The following 2 rules are only needed for the user_webfinger app.
        # Uncomment it if you're planning to use this app.
        #rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        #rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

        # The following rule is only needed for the Social app.
        # Uncomment it if you're planning to use this app.
        #rewrite ^/.well-known/webfinger /public.php?service=webfinger last;

        location = /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }

        location = /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }

        # set max upload size
        client_max_body_size 10G;
        fastcgi_buffers 64 4K;

        # Enable gzip but do not remove ETag headers
        gzip on;
        gzip_vary on;
        gzip_comp_level 4;
        gzip_min_length 256;
        gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
        gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

        # Uncomment if your server is build with the ngx_pagespeed module
        # This module is currently not supported.
        #pagespeed off;

        location / {
            rewrite ^ /index.php;
        }

        location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
            deny all;
        }
        location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
            deny all;
        }

        location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
            fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
            set $path_info $fastcgi_path_info;
            try_files $fastcgi_script_name =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $path_info;
            # fastcgi_param HTTPS on;

            # Avoid sending the security headers twice
            fastcgi_param modHeadersAvailable true;

            # Enable pretty urls
            fastcgi_param front_controller_active true;
            fastcgi_pass php-handler;
            fastcgi_intercept_errors on;
            fastcgi_request_buffering off;
        }

        location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
            try_files $uri/ =404;
            index index.php;
        }

        # Adding the cache control header for js, css and map files
        # Make sure it is BELOW the PHP block
        location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
            try_files $uri /index.php$request_uri;
            add_header Cache-Control "public, max-age=15778463";
            # Add headers to serve security related headers (It is intended to
            # have those duplicated to the ones above)
            # Before enabling Strict-Transport-Security headers please read into
            # this topic first.
            #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
            #
            # WARNING: Only add the preload option once you read about
            # the consequences in https://hstspreload.org/. This option
            # will add the domain to a hardcoded list that is shipped
            # in all major browsers and getting removed from this list
            # could take several months.
            add_header Referrer-Policy "no-referrer" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Download-Options "noopen" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header X-Permitted-Cross-Domain-Policies "none" always;
            add_header X-Robots-Tag "none" always;
            add_header X-XSS-Protection "1; mode=block" always;

            # Optional: Don't log access to assets
            access_log off;
        }

        location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$ {
            try_files $uri /index.php$request_uri;
            # Optional: Don't log access to other assets
            access_log off;
        }
    }
}

web\Dockerfile

Copy the following content into a file Dockerfile in the subdirectory web.

FROM nginx:alpine

COPY nginx.conf /etc/nginx/nginx.conf

This file performs the initial configuration of the Docker container running the nginx web server: it copies the nginx configuration file into the image.

Launcher script: runme.bat

Copy the following code to a file runme.bat on your local drive. Replace C:\Documents\NextCloud-Docker with the directory where docker-compose.yml is located.

@echo off
cd /d C:\Documents\NextCloud-Docker
docker-compose up -d
pause

This script starts the Docker containers in the background.
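
A hypothetical companion script, stopme.bat (my own addition, not part of the original recipe), stops everything again; docker-compose down removes the containers but keeps the named volumes, so no data is lost:

@echo off
cd /d C:\Documents\NextCloud-Docker
docker-compose down
pause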

Clients

Nextcloud supports the standard CalDAV and CardDAV protocols, which allows connecting it to many other applications.
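
For reference, both .well-known redirects in the nginx configuration above point clients to the same DAV endpoint, so a CalDAV/CardDAV client would be configured with a URL of this form (IP-OF-WINDOWS-HOST as in db.env):

http://IP-OF-WINDOWS-HOST/remote.php/dav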

On Windows

On Android

  • F-Droid is a third-party catalog of Free and Open-Source applications. From there, several tools can be used to synchronize the different services.
  • This article explains how to synchronize tasks, calendars and contacts.

Backup

This point is the real reason behind this article. I did not find any easy how-to to follow, and since the Docker containers run Linux, itself running on Hyper-V, managed by Windows… I wondered how it all works and had to test this solution carefully.

As Docker commands can be executed directly against the containers, and thanks to the use of persistent Docker volumes, it shall work, but it needs more testing.

Nextcloud is based on files stored on the web server and data stored in the database. So we need to copy the files from the web server and execute a backup of the database (why a dedicated tool is required to save the content of the database is beyond the scope of this article).

docker cp docker-compose_web_1:/var/www/html/data/user/files C:\Documents\NextCloudApps
docker exec docker-compose_db_1 pg_dump -c -U nextcloud nextcloud > C:\LS\Docker\PGBackup.sql
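
For completeness, a restore could look as follows — an untested sketch under the same assumptions (container names and paths as above): docker cp copies the files back, and type pipes the SQL dump into psql inside the database container.

docker cp C:\Documents\NextCloudApps\files docker-compose_web_1:/var/www/html/data/user/
type C:\LS\Docker\PGBackup.sql | docker exec -i docker-compose_db_1 psql -U nextcloud nextcloud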


Upgrades

According to the Nextcloud documentation, an upgrade is performed by pulling the latest images. Thanks to the persistent volumes, no data is lost.

Download (pull) the latest images and restart the system in the background (-d: detached mode):

docker-compose pull
docker-compose up -d
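
To verify that the system came back up correctly, the usual docker-compose commands can be used:

docker-compose ps
docker-compose logs app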

Alternatives

Cozy

https://cozy.io/en/ – Howto: https://opensource.com/article/17/2/cozy-personal-cloud

If Docker is installed, Cozy can be installed with a couple of lines (to be adjusted for running on Windows):

docker build -t cozy/full github.com/cozy-labs/cozy-docker

docker run --restart=always -d -p 80:80 -p 443:443 --name=moncozy --volume=/home/cozy/backup:/media/backup -e DOMAIN=my.domain.com -e TERM=xterm cozy/full

More information on https://github.com/cozy/cozy-setup/wiki/2.4.-The-Docker-Way


Integration of third party software

If someone has already provided a solution to a problem you are facing, it may make sense to use it rather than solving the same problem yourself.

In the following, I present my personal view on how to reuse such a solution, focusing on reusing and integrating a piece of software. I think the same also holds for media content like videos and audio files, or for documents.


Check requirements

First of all, does the foreseen software provide all the required features and fulfill all the requirements (you can think of this as “IRCA”: identify, read reviews, compare, and analyze)? Is it supported? Will it integrate nicely with the rest of your system?

Intellectual property rights (e.g. patents, trademarks, copyright) shall be considered as well.


Check license and target group

When it is clear that this particular piece is THE solution, it is time to check whether it can be used for what it is planned for.

Just because something is available on the Internet does not mean it may be used for any purpose. Copyright law (the statutes and court decisions regarding the rights to creative and literary works) has to be respected. The owner shall provide a license agreement listing the restrictions on how the software (or media content, document, etc.) may be used.

A distinction has to be made between COTS (“commercial off-the-shelf”) software, which can be free or paid, and OSS (open-source software).

For OSS, the copyleft effect (typically for software licensed under the GNU GPLv2) has to be considered. As so many open-source licenses exist, and as each file can contain its own license agreement, the use of tools like Fossology appears mandatory.

Worse, even if the license agreement allows integrating it, that does not mean it can be redistributed everywhere! Some countries do not allow the use of certain technologies, e.g. advanced cryptography algorithms. Specific regulations and Export Control and Customs (ECC) must be considered as early as possible.


Some examples may be useful to illustrate this:

  1. The Oracle Java Virtual Machine (JVM) Code License agreement targets “General Purpose Desktop Computers and Servers”: if the JVM is installed on a server dedicated to a specific task, this agreement does not apply.
  2. It is not allowed to redistribute the Sysinternals tools (e.g. psexec). End customers shall download the tools from the Microsoft website.
  3. Reusing a code snippet from Stack Overflow is not as simple as one might think.
  4. A license is mandatory to redistribute Adobe Reader.


Last, what happens if the author stops developing the item, or no longer supports it? An escrow agreement may be a solution, as may developing technical expertise on the product, etc.


Redistribute OSS source code and any modifications

The GPL requires that the source code be accessible to the end customer. It can be provided on demand or delivered with the binaries. Easy, but exactly the same version has to be provided. Source code and binaries for OSS items shall therefore be downloaded at the same time, ideally directly from the author (the source has to be trusted), and checked for consistency (a corrupted archive has no value).

Redistributing the source code and any modifications with the product seems to me the easiest way to fulfill this GPL requirement; the alternative would be to provide the complete source code for the exact version to anyone asking for it. This can easily be checked, and it implies using a framework to keep things organized (all source code in one place).

Integrate into the documentation (and online help, if applicable) a statement of the OSS used, ideally with the data from the bill of materials (see below).

Note: source code redistribution may not be the only obligation imposed by an OSS license.

Create a bill of materials

A system is constantly evolving. For instance, the author may deliver new versions fixing a security flaw that has been discovered (more on this in the Maintenance chapter), and since this is rarely the single third-party item included in the software, it is difficult to keep an overview.

The bill of materials shall list all software components, media and documents, with their corresponding owner, download location, license, etc. It is difficult to create initially, but keeping it up to date provides great value.


Maintenance

Once the first version of the system is delivered, some maintenance may be required.

Microsoft provides security hotfixes every month (Patch Tuesday). Security problems affect every kind of software, and the provider is responsible for providing fixes to its customers. At the very least, the items listed in the CVE (Common Vulnerabilities and Exposures) database shall be fixed. It is no secret that a well-defined vulnerability management process to address security vulnerabilities is a must. A good practice is to subscribe to the announcement mailing list of the software, as well as to get involved in the support community forums, allowing you to gain first-hand experience.

Note: I will not describe liability and warranty aspects. The provider is, as far as I know, responsible towards his customers, e.g. in case of a patent infringement. This may be an additional requirement when using OSS (this article provides additional information).

In order to keep track of those changes, common practice uses a version control system, e.g. Git for source code. But which tool can be used for binaries? SVN (Subversion) may be a good choice, as it provides a global version number for each commit. An alternative would be Artifactory (which has an open-source edition).

The ultimate target is the reproducible build, which makes it possible to prove that the build process meets the high quality standard customers are paying for (or that is required by the sector of activity).


Conclusion

Integrating a third-party software component saves competence-building and development time, enabling a faster time-to-market. At the same time, it brings obligations towards the customers.

Final note: ensure coherence by keeping the variety of third-party software components low.


How websites are tracking their visitors

If you did not know it yet: “If you are not paying for it, you’re not the customer; you’re the product being sold.” (Andrew Lewis)

The theory is good, but by using a free service, are you aware of ALL the consequences? The Collusion website provides a demonstration and a Firefox plugin to check it ourselves.

Maybe the technology will outdo the human, due to the systematic use of Artificial Intelligence and big data technologies. Luckily for us, Artificial Intelligence still lacks intuition, but advertisers are getting better products as the data grows (big data, Internet of Things, Android Auto…).

  • The following article (in French) shows some recent technologies used by the companies building visitor profiles:

http://www.lemonde.fr/technologies/article/2014/04/10/big-brother-ce-vendeur_4399335_651865.html

Architecture of a system integration system

A system integration system may act as middleware, turning a myriad of isolated systems into a common data source for other applications.
Such a system is typically part of a Business Process Management (BPM) system or of an operational intelligence (OI) solution.

[Figure: integration system architecture]

Data sources Layer

  • Subsystems: the customer systems. These can be classified by their compatibility with common data interfaces: OPC, REST, ODBC, CSV, Oracle, SQL Server… Some may be specific and industry-specific (PI, IP21).
  • Subsystem connectors: subsystem-specific connectors are needed to retrieve the data.
    Ideally, use the native API of the subsystem to connect to it.
    Alternatively, use an export function of the system, or a standard, e.g. OPC.
    Data shall be retrieved either on a regular basis (in order to calculate key performance indicators) or at runtime (for up-to-date data).

Cache Layer

  • Central database:
    – Ideally, caches the data collected from the subsystems, for performance and to optimize the load on the network and on the subsystems
    – The calculation engine stores its results in it
    – Stores the display configurations
    – Stores authentication data (user management)


  • Calculation engine:
    – Performs calculations based on the data retrieved from any subsystem.
      Those calculations shall provide ready-to-use data sets for the displays (e.g. the data content of a table).
    – Calculates cyclical values, e.g. hourly or monthly averages, or any other typical values
    – Shall allow using the analytics capabilities of the underlying database (e.g. Oracle or SQL Server provide high-level calculation functions)
    – May use ETL (Extract, Transform and Load) systems for integration

Most of the data sets involve computing large quantities of data. For instance, all values are calculated regardless of location, so some data may be calculated even if it is never required.

  • Identity management: manages the rules regarding which user / user group can view which element (display, value…).
    The identity management module ensures that each user sees only the data he is granted access to.
    Users can be grouped into user groups.
    Displays can be grouped into display groups (e.g. specific to a particular group of people).
    User groups can be assigned to display groups.

Display Layer

  • User-specific menu system showing only the menu items and elements the logged-in user is granted access to.
  • Dashboards: displays, ideally web-based, showing graphical elements (trends, values, KPIs…).
    Dashboard elements can be usual graphical elements (trends, values) or complex ones (KPIs, traffic lights, tables, spider charts).
    Dashboard elements may be reused, and each user shall be able to create his own dashboards by combining dashboard elements.

The dashboard elements display data from:
– the subsystems; the caching mechanism optimizes the responsiveness of the system
– the data computed by the calculation engine, which allows displaying complex data from different systems

Displays typically filter values from the data sets:
– a trend over a specific time range
– a table for a specific location or element

Displays provide the following functions:
– a contextual menu with menu items specific to the selected element
– links to further systems, e.g. documentation or other systems; parameters allow drilling down directly to the right element.

  • An API provides third-party products with access to the data.

Stratus ftServer vs EverRun

An unavailable business-critical application is expensive. Forrester Research pegs the average cost of one hour of downtime at $150,000 for the typical enterprise (Source); therefore the ROI (return on investment) of a high-availability solution may come faster than you think.

Stratus ftServer

The Stratus ftServer systems provide the highest availability level (to my knowledge) coupled with the highest performance.
The system is based on two specific parts named ‘enclosures’ that provide redundant hardware components; each enclosure performs the same instructions at the same time and constantly monitors its partner (lockstep technology).

Stratus ftServer architecture (source)

On top of the hardware, the Stratus ftServer system software (ftsss) monitors the hardware and performs system diagnostics. Typically, if an enclosure goes down due to e.g. a hardware failure, the second enclosure is immediately promoted to master, and the server may send an alarm to Stratus through a dedicated network interface, so that Stratus can send a replacement enclosure to the customer.
When the defective enclosure is replaced, both system parts synchronize and the ftServer may re-enter full duplex, the fault-tolerant state.

Stratus price list as of 2015-06-01

EverRun MX, Express and Enterprise

The EverRun system from Marathon (acquired by Stratus in 2012) adds a software layer on top of standard Intel servers. The EverRun software adds embedded clustering, fault tolerance and data replication to XenServer (a Linux-based hypervisor).
The EverRun software reaches availability levels that compete with hardware-based fault-tolerant servers.

Datasheet EverRun MX

Having two independent servers (not part of a single chassis as in the ftServer) allows physically separating them, e.g. into two separate rooms or buildings. The software checkpoints every few hundred milliseconds, so the two systems keep memory, I/O and applications synchronized; if an application instance fails in a virtual machine on the primary machine, processing continues on the secondary machine.

The EverRun MX solution was cheaper (thanks to the use of standard hardware) but slower, and its architecture is more complex (several networks, virtualized environment) than the ftServer one. Moreover, EverRun MX supports Windows guests only.

EverRun architecture (source)

The successor of EverRun MX, EverRun Enterprise, aims at making the system easier to use. The SplitSite feature allows the software checkpointing to be done between mirrored systems over campus distances while still being considered fault-tolerant (How the availability engine works).
Since version 7.2, EverRun Enterprise has been based on the Linux KVM hypervisor (Source).

A license for everRun Enterprise costs $12,000 for a two-node pair (Source). This price includes neither the hardware costs nor the support for it.

EverRun Express is a cheaper solution that does not provide fault-tolerant capabilities; it costs around $5,000. More information on EverRun Express.

Stratus Cloud Solutions

Stratus’ newest offer puts everRun Enterprise onto OpenStack clouds, with the KVM hypervisor on each node in the cloud.

Server Availability services
Datasheet Stratus’ Cloud Solutions

Keeping critical systems up and running 24/7

99.999% uptime… Does it mean anything to you? If not, think of a system unavailable for as little as 5 minutes per year. The rest of the time, your business can rely on it to serve its critical applications.
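
As a quick sanity check, a standard 365-day year has 365 × 24 × 60 = 525,600 minutes, so:

(1 - 0.99999) × 525,600 ≈ 5.26 minutes of downtime per year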

This is only possible using specific solutions. Two classes of such systems have to be distinguished.

Fault tolerant (FT):
Fault-tolerant (FT) solutions provide full redundancy: a single copy of the operating system and the applications runs on two physical servers. The two servers run in parallel and execute the same instructions at the same time, like a mirror. In case the primary server has a hardware failure, the secondary system takes over.

High Availability (HA)
High Availability (HA) solutions provide loosely coupled servers with failover capabilities. Each system is independent, but the health of each server is monitored. In case of a failure, applications are restarted on a different server in the cluster pool. Even if the applications start very quickly, downtime cannot be avoided, especially in case of unplanned failures. The downtime can be minimized, but at very high cost.

Fault tolerance and high availability actually solve different problems. A fault-tolerant system provides great resilience against hardware faults, but as only one instance of the operating system is running, any software fault affects both machines and the entire solution goes down…

The main question is cost. Almost any hardware component can be configured in a resilient fashion: NIC teaming, RAID, redundant power supplies… If the same component keeps failing on a regular basis, fixing it beats searching for redundant systems.
It has to be kept in mind that redundant systems are expensive. Logically, the hardware cost of an FT system adds a 100% resource requirement, not to mention the performance degradation associated with keeping two systems synchronized, which can be significant (>50% in some cases).

Even if a loosely coupled system cannot provide zero downtime for unplanned hardware failures, it does protect against a wider range of failures, both in hardware and software. With failover clustering, it is possible to move the applications to another server during the time needed to patch the OS or the applications.

Finally, managing such complex systems requires trained personnel, and they too have to be redundant to avoid extra downtime in case of a failure.

I have experience with some HA and FT solutions from Stratus, and each of them has its pros and cons.
Stratus ftServer is a fault-tolerant solution based on specific hardware: two enclosures in a single chassis. In this high-end solution, all components are constantly synchronized; the synchronization performance overhead has to be considered.
Stratus everRun Enterprise provides a cheaper software solution, but at the cost of high system complexity: a Linux host running a hypervisor running several virtual machines, with specific network(s) for the server communication.

Amen – File hosting on a Web Nom + pack

I rent a Web Nom + pack from Amen. It took me a long time to understand how to host files and make them available to other people. Amen’s help pages are not clear, let alone didactic.

The Amen Web Nom + pack includes two types of hosting as standard:
– Linux hosting
– File manager (Gestionnaire de fichiers)

In the Associations section, two types of subdomains can be created:
a. Application Hosting, which is associated with the Linux hosting.
Advantage: PHP scripts are available.
Drawback: files must be added one by one.

b. Domain name (Nom de domaine), which is associated with the file manager.
Advantages: more than one file can be added at a time, including through FTP access.
Drawback: PHP scripts are not available.

Here is how to host files and make them available for download using a subdomain:
1. Create a new subdomain in “Plate-forme linux (Nom de domaine) – Hébergement Primaire”, e.g. xxx.
2. Click on Utilisation Locale for xxx.
=> a directory xxx is created in /public (/public/xxx)

3. Go to Hébergement Primaire and choose the file manager (Gestionnaire de fichiers).
Add the files.

Caution: by default, a “Forbidden – You don’t have access to / on this server” error is displayed. You absolutely must add an index.html (or index.php) page in order to display anything (for obscure security reasons). A first example can be taken from the W3Schools site.

The costs of the nuclear power industry

The 29 August 2012 episode of “Le téléphone sonne” (on France Inter) deals with the nuclear power industry.

This episode follows the publication, in January 2012, of a report by the Cour des comptes on the costs of this industry, in particular decommissioning (link). The value of this report is twofold: it is independent and neutral. The summary is only 24 pages long and contains the interesting data.

The episode can be listened to as a podcast until 25 May 2015 at this address.