TrustBuilder 2016-11 release

Hi all, 

With winter settling in over the country, it is time for a new release, the final one of this year. The release mainly focuses on the following stories:

  • Message Queuing
  • Gateway updates
  • OTP Adapter added security settings
  • Attribute verification
  • SAML2 Endpoints
  • AjaxRequest Authentication
  • Appliance maintenance

In addition to the regular operating system security updates, we bumped the versions of MariaDB (10.0.28) and Redis (3.2.5).

Message Queuing (Asynchronous Operations)

To handle more user operations asynchronously, we are preparing TrustBuilder to interface with a message queue. RabbitMQ will be installed on the same machine as the orchestrator (in clustered mode if you have more than one orchestrator). This service is the first step towards asynchronous operations in upcoming releases.
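
Once the playbook has run, the state of the queue service can be inspected on an orchestrator node. The commands below are standard RabbitMQ tooling, not TrustBuilder-specific:

sudo systemctl status rabbitmq-server
sudo rabbitmqctl cluster_status   # in a multi-orchestrator setup, all nodes should be listed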

Gateway updates

Because of a security issue in Redis, we bumped Redis to version 3.2.5. This upgrade required us to introduce session authentication by default: during installation a Session Password is generated and added to the configuration.
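
To confirm that authentication is active, you can query the session store with redis-cli (assuming it listens on the default port; pass -p otherwise). $SESSION_PASSWORD stands for the generated Session Password from your configuration:

redis-cli ping                          # rejected: (error) NOAUTH Authentication required.
redis-cli -a "$SESSION_PASSWORD" ping   # answers PONG when the password is correct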

OTP Adapter changes

The previous implementation used SHA-1 as digest algorithm. Because SHA-1 is no longer considered secure, we now offer a choice of digest algorithms during adapter creation.

Attribute verification

Attribute verification for email attributes used to be very strict: an attribute was only usable once it had been verified, which caused issues in practice. Attribute verification now supplies a status instead, and this status can be queried through workflows or the API.

SAML2 Endpoints

SAML2 endpoints got a major overhaul. Besides the new possibility to configure a list of different endpoints per Service Provider, the handling of signatures is also more consistent.

AjaxRequest Authentication

The TrustBuilder Gateway AAA framework is based on the configured locations: the Gateway performs an authentication request for every newly requested location in order to check the user's authorization. This can cause problems when an API and a web application do not live on the same location (/webapp and /api, for example). To counter this we have added a manual check that can be done through JavaScript (it is important that the X-Requested-With header is set). By calling /idhub/gw-login?ref=<<the url you want to hit>>, the TrustBuilder server will verify the user's access and return a response that can be handled in JavaScript (HTTP 401 if denied, HTTP 200 if OK).
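
As a sketch, the check can first be exercised with curl before wiring it into your JavaScript; the gateway host, the cookie jar and the /api target below are placeholders for your environment:

curl -s -o /dev/null -w "%{http_code}\n" \
     -H "X-Requested-With: XMLHttpRequest" \
     -b cookies.txt \
     "https://gateway.example.com/idhub/gw-login?ref=/api"

This prints 200 when the user is authorized for /api and 401 when access is denied.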

Appliance Maintenance

Some big changes have been made to the appliance and the appliance configuration scripts. These scripts (written in Ansible) will bring your environment up to the latest version and can be run at any time.

The Ansible scripts reset configuration files to their defaults, so any manual changes to those files will be removed. Use the variables to change the behaviour of our components.

Common

  • Firewalld will be automatically started on every run of the ansible playbook.
  • A yum update will be executed on every run.
  • ntpdate will be executed on every run to set the clock correctly.
  • The shell PS1 will be set for the trustbuilder and root users.
  • dnsmasq will be installed, with an entry for every node.

database role

The database role has been completely overhauled. It is now more reliable and can safely be executed several times.

Database users are now created per node in the database; we try to avoid the % wildcard host as much as possible. To determine the correct nodes, a couple of new variables are available:

  • orchestrator_groupname (default: orchestrator_servers): The inventory group that runs the orchestrator role.
  • repository_groupname (default: user_repository_servers): The inventory group that runs the MySQL nodes.
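
For illustration, a minimal inventory matching the default group names could look like this (host names are placeholders):

[orchestrator_servers]
orch01.example.com

[user_repository_servers]
db01.example.com
db02.example.com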

A backup script has also been introduced. A full backup of the database is scheduled at 3:00 AM, and incremental backups are taken every hour, fifteen minutes past the hour. Every time a new full backup is made, the incremental backups are deleted. The backups are stored in /opt/trustbuilder/backups/repository and a log of the job can be found under /opt/trustbuilder/backups/repository/logs. The backups are encrypted with a shared secret, which can be found on your first node under /opt/trustbuilder/etc/.my-backup-key (only the mysql_backup user and root have access to this file).
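
The paths below come straight from this release and can be used to inspect the backup job; reading the key file requires root (or the mysql_backup user):

sudo ls -l /opt/trustbuilder/backups/repository        # full and incremental backups
sudo ls -l /opt/trustbuilder/backups/repository/logs   # job logs
sudo cat /opt/trustbuilder/etc/.my-backup-key          # shared secret used for encryption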

To manually do a full backup, the following command can be used:

sudo su - mysql_backup -s /bin/bash -c /usr/bin/mysql-full-backup.sh

To restore a backup you will need to follow the steps in the following article: <<insert articlelink here>>

The trustbuilder user no longer has access to the mysql command and should use sudo.

orchestrator role 

  • Option to set up the orchestrator for Vasco Digipass: set the variable vasco_enabled to true.
  • Possibility to set custom entries in the general context and in the idhub context of Tomcat via files inside /opt/trustbuilder/appliance/config/files: context_tc_extra_config.xml is for the general context.xml, and idhub_tc_extra_config.xml is for idhub.xml (see the sketch after this list).
  • Installs rabbitmq-server and its users, and configures HA if more than one server is defined.
  • Creates extra aliases to use: tb-logs and tb-restart
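
As an illustration of the custom context entries mentioned above, the command below drops a single Tomcat Parameter element into the idhub context; the parameter name and value are hypothetical examples only:

cat > /opt/trustbuilder/appliance/config/files/idhub_tc_extra_config.xml <<'EOF'
<Parameter name="com.example.customSetting" value="true" override="false"/>
EOF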

tomcat-core role (sub-role of the orchestrator role)

  • Log rotation configured for 30 days

tba role

  • Log rotation configured for 30 days

gateway role 

The gateway role has had a major overhaul, which makes it possible to rerun the Ansible playbooks without interfering with the configuration.

  • Clustered instances now have a common directory that holds the configuration for every member of the cluster. This clustered directory is /opt/trustbuilder/cluster/gateway/instances/<<instance id>>. Configuration changes can be made within this folder and then synced to all the cluster members. This is done through the sync task, which you can run with the following command:

    ansible-playbook /opt/trustbuilder/cluster/gateway/tasks/<<instance_id>>_sync.yml 

    This command will sync the directories and reload the nodes.

    To migrate from the old way of working to the new one, the playbook will migrate the instance. It does so by fetching the instance configuration files from the first server in your cluster and unpacking them in the new location on the machine running the ansible-playbook. The archive is stored in the /tmp folder under the name <<instance_id>>_backup.tgz.
  • Clustered instances will check whether an odd number of gateway instances (three or more) is installed. If this is not the case, you will be required to define a sentinel_host variable with the name of the host in your inventory that will function as a sentinel (see the sketch after this list). If you do not require high availability, you can disable this by setting the variable ha to false.
  • The role will create a backup if the instance already exists. These backups are stored per node in /opt/trustbuilder/backups/gateway (on the host that runs the playbook).
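
As a sketch, a two-gateway setup could declare the sentinel in cluster.yml as follows; the host name is a placeholder and the exact placement of the variables may differ in your configuration:

ha: true
sentinel_host: admin01.example.com   # inventory host that will run the sentinel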

Preparation

These are the updated TrustBuilder RPMs for this release:

  • trustbuilder-all-8.2.0-2498.noarch.rpm
  • trustbuilder-appliance-8.2-313.noarch.rpm
  • trustbuilder-core-8.2.0-2498.noarch.rpm
  • trustbuilder-crl2db-8.2.0-2498.noarch.rpm
  • trustbuilder-gateway-20161213160111-1.x86_64.rpm
  • trustbuilder-gateway-debuginfo-20161213160111-1.x86_64.rpm
  • trustbuilder-gui-8.2.0-2498.noarch.rpm
  • trustbuilder-release-8.2-313.noarch.rpm
  • trustbuilder-userportal-20161213162611-1.noarch.rpm

Before starting the upgrade it is recommended to make backups. If you are using VMware you can create a snapshot; alternatively, you can make a manual backup as described in the Backup section below.

Backup

Create a folder to hold your backups on every node. Use this command:

mkdir -p /opt/trustbuilder/release-backup

On the gateway node(s):

Go to the instances folder, where you will find one or more instances. You can back them up with the following command:

tar czvf /opt/trustbuilder/release-backup/gw-instances-$(date +%d-%m-%Y).tgz --exclude .git --exclude "*.log*" .
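
You can verify the archive afterwards by listing its contents:

tar tzf /opt/trustbuilder/release-backup/gw-instances-$(date +%d-%m-%Y).tgz | head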

On the orchestrator node(s):

Copy the following files to the backup folder:

  • /opt/trustbuilder/tomcat-core/conf/server.xml
  • /opt/trustbuilder/tomcat-core/conf/context.xml
  • /opt/trustbuilder/tomcat-core/conf/Catalina/conf/<<nodename>>/*

On the repository node(s):

Make a database backup using the following command:

mysqldump --all-databases --single-transaction > /opt/trustbuilder/release-backup/database-backup-$(date +%d-%m-%Y).sql
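
Should you need to roll back, the dump can be restored with the mysql client (standard MySQL/MariaDB behaviour, not TrustBuilder-specific):

mysql < /opt/trustbuilder/release-backup/database-backup-<<date>>.sql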

Installation

While the installation will stop services when needed, it is recommended that you stop all TrustBuilder services yourself.

On the gateway node(s):

  • sudo systemctl stop tb-gw-<<instance_id>>
  • sudo systemctl stop tb-gw-<<instance_id>>-sessionstore
  • sudo systemctl stop tb-gw-<<instance_id>>-sessionstore-sentinel

On the orchestrator node(s):

  • sudo systemctl stop tomcat-core

On the repository node(s):

  • sudo systemctl stop mysql

On the admin node (the node which runs tba):

  • sudo systemctl stop tomcat-gui

To start the installation of the TrustBuilder update, perform the following steps.

On the node that runs the ansible playbook (e.g. the admin node):

  1. sudo yum update trustbuilder-appliance (this should update to 8.2-313)
  2. cd /opt/trustbuilder/appliance/config
  3. edit cluster.yml with the changes described above
  4. run ansible-playbook -v cluster.yml

In rare cases the Gateway service may not start. The root cause can be found by executing the following command:

sudo systemctl status tb-gw-default

Any errors need to be fixed manually, after which you can rerun the ansible-playbook.

If everything ran correctly, TrustBuilder should be up and running.

 
