Drupal 7 vs Drupal 8 - A Technical Comparison

I don't find a lot of time to get on the tools these days and sometimes I miss getting into code. Recent projects have seen me focus on strategy, architecture, data and systems integration. Despite that, I am comfortable describing myself as an expert in Drupal 7, having spent years giving the D7 codebase a forensic examination. Yet although Drupal 8.0.0 was released three years ago, on November 19, 2015, I have not yet looked at a single line of its code or even taken in a demo of its feature set.

Today that changes.

For starters I would like to see just how D8 differs from D7 when we start looking at the code used to build it and the database behind it. The areas I am interested in are:

  • Tech stack
  • Server Setup
  • Installing Drupal
  • File System Layout
  • Page Lifecycle
  • Creating a Theme
  • Site Building
  • Database
  • Deployment process
  • Drush

To that end I am going to create a small Drupal 8 brochure site for my consulting business, https://fullstackconsulting.co.uk, and every step of the way I will dig into the code to see how Drupal 8 differs from Drupal 7.

Tech Stack

It was very tempting to get up and running quickly with one of the ready-made Docker containers, but while I would ultimately like to move to a containerised solution, my priority today is to look at all the nuts and bolts, starting with building the stack that Drupal will run on top of. So, while it is great to see this advancement, I am not going to dig into it today.

Instead I am going to create a Puppet profile to handle the server configuration, which will let me tweak and rebuild the server as needed while I learn more about D8.
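
To give a flavour, the heart of such a profile might look something like this - a minimal sketch, assuming the puppetlabs-apache and puppetlabs-mysql Forge modules; the class name and password are placeholders:

class profile::drupal_stack {
  # Apache with mod_php (prefork MPM is required for mod_php).
  class { 'apache':
    mpm_module => 'prefork',
  }
  include apache::mod::php

  # MySQL server with a root password set.
  class { 'mysql::server':
    root_password => 'CHANGE_ME',
  }
}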

First job: select a database server. Many years ago MySQL was the only viable option when spinning up a new Drupal installation, but today the list of recommended database servers is: MySQL, MariaDB or Percona Server.

We can also choose between versions, for instance MySQL 5.5, 5.6, 5.7 or 8.0, or the corresponding versions of MariaDB or Percona Server.

Of course, MariaDB and Percona Server are forks of MySQL, so how do we choose between them? First, a quick recap of the history. MySQL started life under the Swedish company MySQL AB; Sun Microsystems acquired MySQL AB in 2008 and Oracle acquired Sun in 2010. Oracle was already a big player in the database market, which caused concern in the community, although during acquisition negotiations with the European Commission Oracle committed to keeping MySQL alive.

Some MySQL folk had already left to found Percona in 2006, while others jumped ship to create the MariaDB fork in 2009. General opinion seems to be that the MySQL community has contracted, now relying mainly on Oracle employees for commits, while the MariaDB community is thriving.

The Percona and MariaDB forks get a lot of credit for being significantly more performant and memory efficient, which is appealing after running some big Drupal 7 sites with many entity types and lots of fields. But equally, MySQL 8 claims to be 2x faster than 5.7, with up to 1.8 million queries/second.

Percona aims to stay closer to the MySQL code base, meaning updates to MySQL surface quicker for Percona than they do for MariaDB, but MariaDB will tend to be more ambitious in terms of adding new features.

Pantheon, a major managed Drupal host, has adopted MariaDB, which certainly gives a lot of confidence in that approach.

I am not going to benchmark this myself today as I am focussing on Drupal not the database engine it uses. That said, I would like to come back and revisit this topic to see which variant wins the performance contest with a heavyweight setup.

If you need to select a database server for a live project that you expect to be heavy on the database, I would suggest you consider the following:

  1. Create some actual tests to establish performance supremacy in your own context.
  2. MariaDB's broader feature set could have more value in a custom build than in a Drupal project, which ought to adhere more closely to standards for wide compatibility. That said, do you see a feature that could add value to your project that is not available in the others?
  3. Look at your neighbours. What are they using and why?
  4. People close to MySQL, MariaDB and Percona comment that Oracle is proving to be a good steward for the MySQL open source project, and so maintaining alignment is a positive thing.
  5. Does your OS have a preferred package? If not, would you be prepared to manage packages yourself in order to deviate? The ability to maintain your stack is paramount.

For starters, the stack for this project will look like this:

  • Ubuntu 18.04 LTS
  • Apache 2.4
  • MySQL 5.7 (because it is the one supported by Ubuntu 18.04 out of the box)
  • PHP 7.2

Server Setup

  1. Create a VM. I'm on Windows 10, so it will be Hyper-V
  2. Install Ubuntu 18
  3. Update package list
    1. apt update
  4. Upgrade all upgradable packages, updating dependencies:
    1. apt dist-upgrade
  5. Create a user account to own the files that will exist in the website docroot. This user should be in the www-data group, assuming a standard Apache install. This user will allow us to move files around and execute Composer commands - Composer will complain if you try to execute it as root; more on that later.
  6. Many VM consoles lack flexibility, such as copy/paste for the command line, so it will save time if I set up key-based SSH access to the VM. But until that is in place I need an easy way to move data/files from our host machine to our VM. One easy way to achieve this is to create an ISO image. Most VM software will let you load this onto the VM via the virtual DVD drive.
    1. I will create a private and public key that will be used to:
      1. Access the VM from our host OS without password prompt
      2. Access the git repository holding our code
    2. To create the SSH key and add it to the iso I use the Linux subsystem in Windows 10 to execute the following commands:
      1. mkdir certs
      2. ssh-keygen -t rsa -b 4096 -C "[email protected]"
        1. When prompted, change the path to the newly created directory
      3. genisoimage -f -J -joliet-long -r -allow-lowercase -allow-multidot -o certs.iso certs
        1. In case you use non-standard names, the flags in this command prevent the ISO from shortening your filenames.
    3. Via your VM software load the iso into the DVD Drive.
      1. Mount the DVD Drive
        1. mkdir /mnt/cdrom
        2. mount /dev/cdrom /mnt/cdrom
        3. ls -al /mnt/cdrom
          1. You should see your SSH key listed
    4. Copy the SSH private and public key to the relevant location
      1. mkdir /root/.ssh
      2. cp /mnt/cdrom/id_rsa /root/.ssh
      3. cp /mnt/cdrom/id_rsa.pub /root/.ssh
    5. Add the public key to the authorized_keys file, to facilitate login from your host OS. Using some output redirection makes this easy:
      1. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    6. Tighten up file permissions to keep the SSH client happy
      1. chmod 700 /root/.ssh && chmod 600 /root/.ssh/*
        1. You'll need to do this on both the VM and the host
    7. Find the IP address of your VM
      1. ifconfig
    8. Test your login from your host OS
      1. ssh root@VM_IP_ADDRESS -i /path/to/your/ssh_key
  7. Setup Bitbucket
    1. Add the public key to your Bitbucket/GitHub/GitLab profile.
    2. If you use a non-standard name for your SSH key, you can tell the SSH client where to find it by providing SSH config directives in /root/.ssh/config on the VM (on your host you will likely not be using root):
    3. host bitbucket.org
          HostName bitbucket.org
          IdentityFile ~/.ssh/YOUR_PRIVATE_KEY
          User git
  8. Deploy DevOps scripts to VM
    1. I want to get my Puppet profile onto the VM. One day there may be a team working on this, so I follow a process that uses Git to deploy DevOps-related scripts, including Puppet, onto the VM. This means that any time the DevOps scripts are updated it will be simple for all team members to get those updates onto their own VMs. The Git repo is managed in Bitbucket, so I need to get an SSH key onto the VM and then register it on a relevant Bitbucket account.
  9. Now I can deploy the Puppet profile from the git repo and install it on the VM
    1. mkdir /var/DevOps
    2. cd /var
    3. git clone git@YOUR_REPO.git DevOps
  10. Puppet is not contained in Ubuntu's default package repositories, but Puppet do maintain their own package repository, which is what I am going to use for this setup (the commands are sketched after this list):
    1. https://puppet.com/docs/puppet/5.4/puppet_platform.html
  11. No Drupal development environment is complete without Xdebug. Again, this is not present in Ubuntu's default package repositories, so I am going to enable the Universe repository by adding the following to /etc/apt/sources.list.d/drupaldemo.list:
    1. deb http://archive.ubuntu.com/ubuntu bionic universe
      deb-src http://archive.ubuntu.com/ubuntu bionic universe
      deb http://us.archive.ubuntu.com/ubuntu/ bionic universe
      deb-src http://us.archive.ubuntu.com/ubuntu/ bionic universe
      deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
      deb-src http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
      deb http://security.ubuntu.com/ubuntu bionic-security universe
      deb-src http://security.ubuntu.com/ubuntu bionic-security universe
    2. I manage these repositories via the Puppet profile
    3. I won't need this on the production system, so I can keep the production OS to packages from the default repositories only, without the Universe repository
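
Coming back to step 10, adding Puppet's repository boils down to something like this - a sketch for Ubuntu 18.04 "bionic"; check the Puppet platform docs linked above for the current package name:

wget https://apt.puppet.com/puppet5-release-bionic.deb
dpkg -i puppet5-release-bionic.deb
apt update
apt install puppet-agent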

Installing Drupal

The options are:

  1. Download the code from the project page: https://www.drupal.org/download
  2. Use Composer - a PHP dependency manager: https://getcomposer.org/

The recommended option is to use Composer; that way, dependencies on libraries besides Drupal can all be managed together.

Composer will warn if it is executed as root. Fortunately I already created a user in the www-data group, so I can use that user to execute the Composer commands. The reason root is not recommended is that some commands, such as exec, install, and update, allow execution of third party code on our system. Since plugins and scripts have full access to the user account which runs Composer, executing as root means malicious, or even just broken, code could cause a lot of damage.

There is a kickstarter Composer template at drupal-composer/drupal-project. Not only will this install the core project but it will also install utilities such as Drush, and it will configure Composer to install Drupal themes and modules into appropriately named directories, rather than installing everything into the standard Composer /vendor directory. Using that project, the Drupal codebase is installed with this command:

composer create-project drupal-composer/drupal-project:8.x-dev my_site_name_dir --stability dev --no-interaction

Another file layout point is that it will load the core Drupal files into a subdirectory named "web".

Because Drupal is being built with Composer there is a /vendor directory, which will host all of the libraries installed by Composer. This presents another choice: do I

  1. Commit the contents of the vendor directory to git
  2. Add the vendor directory to gitignore.

The argument for committing it is that all the code that our project needs is stored, versioned and cannot change unless explicitly updated, making it stable. The argument against is that we significantly increase the size of the versioned code base and duplicate the history of the dependencies into our git repository. It is also possible to pin our project to specific versions of libraries via composer.json configuration, meaning we do not need to be concerned about stability.
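
For example, requiring an exact version rather than a version range pins a dependency in composer.json (the version number here is purely illustrative):

"require": {
        "drupal/core": "8.6.2"
}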

I will follow the recommendation of Composer and ignore the vendor directory with the following in .gitignore, which is already added if using the Composer kickstarter project:
/vendor/

My Puppet profile has already taken care of creating the relevant MySQL database and user, so the next step is to run the installer in the browser. As with Drupal 7, since Drupal is not already installed I get automatically redirected to the installer:
https://DRUPAL/core/install.php

"Select an installation profile" - this feels very familiar from D7. I choose the Standard profile, which includes some fairly well used modules relating to administrative tasks, text formats etc. Minimal is a little too stripped back for most typical needs, but great for a barebones install.

I complete the install by heading to:
https://DRUPAL/admin/reports/status
This report will instruct us to tighten up file permissions, define trusted hosts etc in order to secure our installation.

After completing the config screen I am redirected to the front page and… it looks just like a fresh D7 installation. The admin menu looks a bit different and I see the layout responding to some screen width breakpoints. But that's enough about the user interface; let's see where the differences are under the bonnet.

File System Layout

The drupal-composer project has set up some directories:
/web - the document root as served by the web server is now a subdirectory of the project root.
/web/core - this is where the core Drupal project files are installed
/web/libraries - libraries that can be shared between core, modules and themes
/web/modules/contrib - modules maintained by the community
/web/profiles/contrib - profiles maintained by the community
/web/themes/contrib - themes maintained by the community
/drush/Commands - command files to extend drush functionality

If we write some code ourselves it should go here:
/web/modules/custom - modules we write and maintain ourselves
/web/themes/custom - themes we write and maintain ourselves

I can see that module and theme code location differs from the D7 standard of placing it at:
sites/all/modules/[contrib|custom]
sites/all/themes/[contrib|custom]

Composer will install non-Drupal dependencies into:
/vendor

Core system files have moved. Under D7 we had:
/includes - fundamental functionality
/misc - js/css
/modules - core modules
/scripts - utility shell scripts

Under D8 we have:
/web/core/includes
/web/core/misc
/web/core/modules
/web/core/scripts

Overall, this feels very familiar so far. But there are some new directories in D8:
/web/core/assets/vendor - JS and CSS for external libraries such as jQuery

We have YAML-based configuration files. The ones for core are here:
/web/core/config/install - only read on installation
/web/core/config/schema - covers data types, menus, entities and more

Modules can follow this same convention, defining both the install and schema YAML files.
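
As a sketch, a hypothetical module could ship a default setting and describe its type like this (module name and setting invented for illustration):

/web/modules/custom/mymodule/config/install/mymodule.settings.yml:
items_per_page: 10

/web/modules/custom/mymodule/config/schema/mymodule.schema.yml:
mymodule.settings:
  type: config_object
  label: 'My module settings'
  mapping:
    items_per_page:
      type: integer
      label: 'Items per page'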

There is a new directory for classes provided by Drupal core:
/web/core/lib

A functional test suite can be found at:
/web/core/tests

Page Lifecycle

URL routing

As in D7, .htaccess routes requests to index.php. However, there is another script that looks like it could play a part in URL routing:
.ht.router.php

Upon closer inspection though, .ht.router.php is used by PHP's built-in web server and does not typically play a role in menu routing.

Request Handling

A fairly standard Inversion of Control principle is followed, as was the case with Drupal 7 and earlier. Apache routes the request to index.php which orchestrates the loading of the Drupal Kernel and subsequent handling of the request, executing any custom code we may implement at the appropriate point in the request lifecycle.

It is apparent right away that Drupal 8 is object oriented, importing the DrupalKernel and Request classes, the latter being part of the Symfony framework.

use Drupal\Core\DrupalKernel;
use Symfony\Component\HttpFoundation\Request;

We won't have to manually include class files, as long as conventions are followed, because there is an autoload script that will scan the vendor directory:
$autoloader = require_once 'autoload.php';

Now it is time to initiate an Xdebug session so I can trace a request from start to finish and see exactly what this new DrupalKernel class does:

$kernel = new DrupalKernel('prod', $autoloader);

The first observation is that an environment parameter is being passed to the DrupalKernel constructor, which indicates that we can have different configurations for dev, staging and prod environments.

The $kernel object is also initialised with an instance of the autoloader, which we can expect to be used frequently in the newly object-oriented D8. The next step is to create a $request object:

$request = Request::createFromGlobals();

This is standard Symfony code, storing the GET, POST, COOKIE, ENV and SERVER superglobals on $request. Now the DrupalKernel class handles the request:

$response = $kernel->handle($request);
Within the handle() method we find that site settings are handled slightly differently: the global $conf variable is gone and the variable_get() function has been replaced by the Settings class, which exposes a static get() method to retrieve specific settings. But then things start to look familiar, with the inclusion of bootstrap.inc and the setup of some PHP settings. After some OO-based config loading and PageCache handling we come across more familiar territory with the DrupalKernel->preHandle() method, which invokes loadLegacyIncludes() to require_once many .inc scripts, such as common.inc, database.inc etc. This has a very familiar feel to it.
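
To illustrate the settings change, this is roughly how a lookup translates between versions (the setting name is just an example):

use Drupal\Core\Site\Settings;

// D7: $salt = variable_get('hash_salt', '');
// D8: read a value defined in settings.php via the Settings class.
$salt = Settings::get('hash_salt', '');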

Module loading appears to have had an update, with the invocation of the Drupal\Core\DependencyInjection\Container class being used to load all enabled modules.

Where things have changed significantly is in processing the URI to identify the menu item that defines the callback to invoke. The new approach is much more in keeping with object-oriented concepts, favouring an event dispatcher pattern invoked from within the Symfony HttpKernel class. There are listeners defined in Symfony and Drupal classes handling such tasks as redirecting requests with multiple leading slashes, authenticating the user etc.

We haven't got as far as looking at modules yet, but it looks like Drupal modules are now able to define listeners for these events, nice.
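
As a sketch of what that might look like in a hypothetical module (class and module names are mine; the class would also need to be registered as a tagged event_subscriber service in mymodule.services.yml):

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class MySubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Listen to every incoming request, before routing runs.
    return [KernelEvents::REQUEST => ['onRequest', 0]];
  }

  public function onRequest(GetResponseEvent $event) {
    // Inspect or modify the request here.
  }

}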

Once the kernel events have all been handled a $controller object is initialised. Interestingly, before the $controller is used a controller event is dispatched, giving modules the opportunity to modify or replace the controller being used.

The registered controller is responsible for identifying the class, such as ViewPageController, that will generate the render array that is used for rendering the response body.

An interesting observation, given that this debug session was initiated on a bog standard Drupal front page where the path is equivalent to /node: the ViewPageController is invoked, and it has code that feels very similar to the very popular Views module from D7, with identifiers such as view_id and display_id. This makes sense, because now that Views has been baked into Drupal core in D8 we would expect a page listing multiple nodes to be powered by Views, rather than some other case-specific database query.

Response handling has certainly had a refresh, no longer relying on drupal_deliver_html_page() to set the response headers and body, in favour of the Response class from the Symfony framework.

There are lots of areas to look at in more detail, such as how blocks are added to the page render array etc, but from this whirlwind tour of the codebase there are some very nice architectural improvements, while also retaining a high degree of familiarity.

Creating a Theme

Before looking at how theming works in Drupal 8 I need to make a call over whether to use a base theme like Bootstrap, or a much lighter base theme such as Classy, one of the core themes added in D8.

Bootstrap would give more to play with out of the box, but my feeling is that it offers more value if you are building multiple sites, often reusing the various Bootstrap components, so the reuse makes the effort of learning the framework worthwhile.

Since the motivation behind this exercise is to explore the capability of Drupal core in D8, I don't want to muddy the waters by adding a lot of additional functionality from contrib modules and themes unless I really need it. This approach will allow me to learn more quickly where D8 is strong and where it is deficient.

I am going to implement a responsive theme based on the core theme, Classy. For starters I create the theme directory at:
/web/themes/custom/themename

The .info script from D7 is gone and instead I start by creating:
themename.info.yml - configure the base theme, regions etc
themename.libraries.yml - specify CSS and JS scripts to load globally
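
A minimal sketch of those two files might look like this (the theme name, region list and asset paths are mine, not prescribed):

themename.info.yml:
name: Themename
type: theme
description: 'A responsive theme based on Classy.'
core: 8.x
base theme: classy
regions:
  header: Header
  content: Content
  footer: Footer
libraries:
  - themename/global

themename.libraries.yml:
global:
  css:
    theme:
      css/style.css: {}
  js:
    js/script.js: {}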

At this point if I reload /admin/appearance I see my new theme listed as an uninstalled theme. If I go ahead and choose the option to Install and Set as Default, then the next time I load the home page it will be rendered in accordance with the new, empty theme.

By default I see that CSS aggregation is enabled; that is going to hinder my development efforts, so I will turn it off for now at /admin/config/development/performance.

Having defined the regions for my theme, adding blocks to those regions via the admin UI is very familiar. Altering the page layout is also very similar. Yes, the templating engine has changed, now using Twig, and variable embeds and control statements are noticeably different, using the {% %} and {{ }} markup. But while we are just setting up the HTML there is no significant difference at this stage.
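
For instance, a fragment of a page.html.twig override might look like this (the region name assumes the regions defined in my info.yml):

{# Render the content region only if it has something in it. #}
{% if page.content %}
  <main role="main">
    {{ page.content }}
  </main>
{% endif %}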

One area I do need to pay attention to is laying out my theme resources. For instance, there is a much stricter convention for CSS files, in an attempt to bring more structure and readability. For full details take a look here:
https://www.drupal.org/docs/develop/standards/css/css-architecture-for-drupal-8

Another interesting detail is that the theme will by default look for the logo with an .svg file extension. If you have a reason to use a .png you need to configure this theme setting in themes/custom/themename/config/install/themename.settings.yml:
logo:
  path: 'themes/custom/themename/logo.png'
  use_default: false

Site Building

Past D7 projects have used the Webform module extensively, but upon seeing there is no general release available for D8 I looked at other options. It was only then that I realised that the core Contact module has received a significant upgrade. When coupled with the Contact Storage and Contact Block contrib modules it should make Webform redundant in many, although probably not all, scenarios.

To kick things off I set up the contact form recipient, thank you message, redirect path and a couple of fields through the core admin UI:
https://DRUPAL/admin/structure/contact

I decided I wanted a contact form embedded in the footer of my page - for now I am going to ignore what that might mean for full page caching in D8, you know, the problem where a cached page contains a form whose token has expired.

This is the first point at which a new module needs to be installed. Considering that I am adopting Composer for managing dependencies this ought to be a little different to the D7 days. The steps I followed were:

Update composer.json to require the contact_block module:
"require": {
        ….
        ….
        "drupal/contact_block": "^1.4"
}

Ask Composer to update dependencies and install the new module:
composer update

As per the "installer-paths" config in composer.json, the contact_block module installed into the appropriate directory:
web/modules/contrib/contact_block
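
For reference, the relevant part of composer.json looks something like this (the exact paths may differ between versions of the kickstarter template):

"installer-paths": {
    "web/core": ["type:drupal-core"],
    "web/libraries/{$name}": ["type:drupal-library"],
    "web/modules/contrib/{$name}": ["type:drupal-module"],
    "web/profiles/contrib/{$name}": ["type:drupal-profile"],
    "web/themes/contrib/{$name}": ["type:drupal-theme"],
    "drush/Commands/{$name}": ["type:drupal-drush"]
}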

Now to install the module. Back in D7 I would use:
drush en contact_block

However, there is a problem:
Command 'drush' not found, did you mean:
command 'rush' from deb rush
Try: sudo apt install <deb name>

Given that our composer.json config does fetch drush, we know it is installed; the issue is that it is not on our PATH. We can confirm that by getting a successful result when we reference the drush binary by its absolute path:
/path/to/drupal/vendor/bin/drush status

The reason is that, in order to avoid package dependency problems, it is recommended that drush is installed per project when the codebase is managed with Composer. There is a small program we can install in order to run drush in an unqualified context, which is perfect when combined with drush aliases to specify the Drupal root path:
https://github.com/drush-ops/drush-launcher
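
Installing the launcher is essentially a download-and-chmod affair, something like this (check the project README for the current release URL):

wget -O drush.phar https://github.com/drush-ops/drush-launcher/releases/latest/download/drush.phar
chmod +x drush.phar
mv drush.phar /usr/local/bin/drush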

As in D7, I can navigate through Admin > Structure > Block, but there are some differences once on the page. I scroll to my footer region and hit the "Place block" button, launching a modal that lists out the available blocks. The one I am interested in is "Contact block" and I press the corresponding "Place block" button.

The next form prompts me to select which contact form I would like to render along with standard block config such as Content types, Pages and Roles, which has a very familiar feel. After saving the block layout page and then reloading the frontend page the contact block appears as expected.

A cache clear is needed after playing around with the block placement and styling, so as a matter of habit I try:
drush cc all

I am almost surprised to see the accompanying error message:
`cache-clear all` is deprecated for Drupal 8 and later. Please use the `cache-rebuild` command instead.

That will take some getting used to!

I am not a fan of running a local mailserver. Not only does it increase the management overhead, but deliverability rates are unlikely to be as high as a proper mail provider's, due to mail server trust scores, whitelists etc. I have had good success on past projects routing all transactional emails through Sendgrid, and the free tier for low usage projects is a great way to get started. What is more, there is already a stable contrib module available for D8. As before, I will start by adding the module reference to composer.json:
"require": {
        ….
        ….
        "drupal/sendgrid_integration": "^1.2"
}

Followed by:
composer update --dry-run

All looks good, so I kick off the update:
composer update

Enable the Sendgrid module:
drush en sendgrid_integration

The sendgrid module tells us that it is dependent on this library:
https://github.com/taz77/sendgrid-php-ng

Because dependencies are being managed by Composer I have no work to do here: Composer will take care of fetching that library and installing it into the vendor directory, because it is referenced in the composer.json of the sendgrid_integration module.

All that is left to do is log in to Sendgrid, generate an API key and enter that API key on the Sendgrid config form:
https://DRUPAL/admin/config/services/sendgrid

The Sendgrid module has another dependency, a contrib module called Mailsystem. This is very similar to D7: I can use the Mailsystem module to specify that Sendgrid should be the default mail sender:
https://DRUPAL/admin/config/system/mailsystem

Now I can fill in the contact form embedded in the page footer and have the results emailed to me by Sendgrid. That was a piece of cake.

Database

The first thing I notice when I list out the tables in the D8 database is that field handling seems to be different. In D7 the database would have field_* and field_revision_* tables for every field. These tables would contain the field values that relate to a particular entity. Those tables are absent but so too are the tables that stored the field configuration: field_config and field_config_instance.

On closer inspection I can see that field data tables now seem to be entity type specific, for example:
node__field_image
node__field_tags

The field configuration points us towards where D8 has done a lot of work. Field configuration data has moved into a general config table. Looking at the names of the config settings it is apparent that the config table is responsible for storing config data for:
fields
field instances
node types
views
cron
text formats

..and more.

In other words it looks like anything related to config will appear here, and I expect that this, coupled with the Configuration API, is what will make promoting changes from dev through staging and into production much slicker than in D7.

The users table is a little cleaner. Per-module user data is no longer serialized as one single lump; it has shifted to a separate table (users_data) keyed on uid and module, and is therefore stored in smaller lumps. Fields that were considered entity properties in D7 and stored directly on the users table - pass, mail, timezone, created etc - have likewise shifted to a separate table (users_field_data) in D8.

Similarly with nodes: properties such as title, uid, created and sticky have been shifted to the node_field_data table.

On the whole though the database feels very similar. Field data is stored in a very similar fashion, retaining the bundle, entity_id, delta and one or more value columns.

Deployment process

I am going to avoid using a managed service such as Pantheon or Platform.sh for this project, purely because for the time being I would like to see all of the moving parts, problems and general maintenance tasks that come with a D8 install.

Instead I will use a plain VM. I will not be using one of the public clouds such as Azure, Google or AWS, because this Drupal project is very small and will have stable usage patterns, with no requirement for load balancers or elastic scaling in the near term. With that type of profile, one of those cloud providers would be a more expensive option than a bog standard VM from a provider such as Linode.

While writing the last two paragraphs my new VM has been provisioned, complete with an Ubuntu 18.04 LTS image. Fortunately, right at the start of this project I wrote all of the server configuration into Puppet manifests, so configuring this server, covering the entire tech stack and including firewall rules, will be a breeze.

Let's see how straightforward it is:

  1. SSH into new server
  2. Create a directory to hold our DevOps scripts
    1. mkdir /var/DevOps
  3. Create an SSH deploy key for the Git repo - the production environment should only need read access to the Git repo
    1. ssh-keygen -t rsa -b 4096 -C "[email protected]"
  4. Add the deploy key to the repo
    1. If you accepted the defaults from ssh-keygen this will be id_rsa.pub
  5. Clone the DevOps repo
    1. git clone [email protected]:reponame/devops.git /var/DevOps
  6. Set values in the Puppet config script
  7. Kick off the Puppet agent
  8. Update DNS records to point to the new server

That went without a hitch, all system requirements are now in place.

Puppet has taken care of checking out my Drupal Git repo into the web root, but as discussed earlier, this project does not commit the vendor libraries, contrib modules or core files, since Composer is managing these for us. That means the next step is to ask Composer to fetch all of our dependencies:
composer update --no-dev

The --no-dev option is added because when deploying into the production environment we do not want development libraries such as phpunit present, which could present a security risk and needlessly bloat the size of the deployed code base. (Arguably composer install --no-dev is the safer command on production, since it installs the exact versions recorded in composer.lock rather than resolving them afresh.)

Composer tells us that it has completed the installation of dependencies successfully, but we are not done yet because we don't have a database. Rather than run the Drupal installation wizard to set up the database, I will promote the database used in the development environment into production. Since the install wizard will not be used and since settings scripts containing database credentials are not committed to Git, a manual update is needed to /web/sites/default/settings.php

These are the settings that will be required at a minimum:
$settings['hash_salt']
$databases
$settings['trusted_host_patterns']
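
A minimal sketch of those settings - every value below is a placeholder:

$settings['hash_salt'] = 'REPLACE_WITH_RANDOM_SALT';

$databases['default']['default'] = [
  'driver' => 'mysql',
  'database' => 'DB_NAME',
  'username' => 'DB_USER',
  'password' => 'DB_PASSWORD',
  'host' => 'localhost',
  'port' => '3306',
  'prefix' => '',
];

$settings['trusted_host_patterns'] = [
  '^www\.example\.com$',
];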

The database export/import is a pretty straightforward task using mysqldump in the dev environment for the export and then the mysql cli in the production environment for the import.
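
The manual version amounts to something like this (database names are placeholders; credentials assumed to come from ~/.my.cnf or interactive prompts):

mysqldump --single-transaction drupal_dev > drupal.sql    # on the dev VM
mysql drupal_prod < drupal.sql                            # on the production server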

But that approach would not have a great degree of reusability. Instead, I turn to drush.

Drush

Drush has a sql-sync command that allows us to specify a source and a target and it will then take care of the relevant mysqldump and mysql commands. For maximum reusability it is possible to configure drush aliases so that all of the relevant server connection details are stored in a config script resulting in the following, simple command:
drush sql-sync @source @target

Up until Drush 8, which maintained support for D7, aliases were defined in a PHP script named according to this convention:
SITENAME.aliases.drushrc.php

But as of Drush 9 this has changed, the format is now YAML and the filename convention is:
SITENAME.site.yml

There are also some changes regarding where we place drush config scripts. The drupal-composer/drupal-project package that was used to build this project creates the following directory, where we can place drush scripts:
/path/to/drupal/drush/sites

However, I quite like the old convention of being able to store drush alias definitions in ~/.drush. This is because I tend to have Puppet manifests that configure all of this independently from the Drupal installation. This setup is still possible but first of all it is necessary to create the overall config script at ~/.drush/drush.yml, a path that drush will automatically scan. In that script we can specify that it will also be the location of site alias definitions with something like this:
drush:
  paths:
    alias-path:
      - "/home/USER_NAME/.drush"

Now drush will auto detect the alias script at ~/.drush/SITE_NAME.site.yml. The alias script will specify the connection properties for each version of the site:
dev:
  options:
    target-command-specific:
      sql-sync:
        enable:
          - stage_file_proxy
  root: /var/www/vhosts/DEV_HOST_NAME/web
  uri: 'https://DEV_HOST_NAME'
prod:
  host: HOST_NAME
  options: {  }
  root: /var/www/vhosts/HOST_NAME/web
  uri: 'https://HOST_NAME'
  user: USER_NAME
  ssh:
    options: '-o PasswordAuthentication=no -i ~/.ssh/id_rsa'

I test if this is working with these commands:
drush cache-rebuild
drush sa

Clearing the cache gives Drush a chance to find the new config scripts. The sa command is the abbreviated site-alias command and should list out the aliases that are defined in the site.yml script. Now check that the dev and prod config looks ok:
drush @SITE_NAME.dev status
drush @SITE_NAME.prod status

At this point I can execute a database sync:
drush sql-sync @SITE_NAME.dev @SITE_NAME.prod

Job done. Now the site loads correctly in the production environment.

Wrap Up

That is all I have time for in this walkthrough, and while there are some areas I would like to come back to, such as custom modules, querying the database, the Entity API, the Form API and configuration management, I have seen enough to confirm that Drupal 8 represents a great architectural step forward while feeling very familiar to those coming from a Drupal 7 background.