PreviousNext: Responsive Images for Media Entities in Drupal 8

Image Styles Breadcrumb

Images on websites can be a huge pain when you are optimizing a site. We want our images to render as crisply as possible, but we also want our sites to load as fast as possible. Content creators will often ask “what size image should I upload?” and, picturing some tiny image rendered at full-screen width and pixelated out of control, we’ll answer “as large as you’ve got”. The content creator will then upload a 2 MB JPEG, and the load time and network request size will increase dramatically.

Nick Fletcher

Responsive images can be a decent solution for this. A front end developer can achieve this in many ways. A popular approach is to use a <picture> element containing multiple <source> elements with srcset and `media` attributes, plus a default <img> tag.
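As a sketch, the markup we’re aiming for looks something like this (the file names, breakpoints and widths are illustrative, not what Drupal will literally output):

```html
<!-- Illustrative responsive image markup; paths and sizes are examples only -->
<picture>
  <!-- Largest screens first; the browser uses the first matching source -->
  <source media="(min-width: 1024px)" srcset="profile-large.jpg 1x, profile-large@2x.jpg 2x">
  <source media="(min-width: 768px)" srcset="profile-medium.jpg 1x, profile-medium@2x.jpg 2x">
  <!-- Fallback for browsers that don't understand <picture> -->
  <img src="profile-small.jpg" alt="A profile image">
</picture>
```

The 1x/2x descriptors in srcset are what the breakpoint multipliers below map to.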

I’ll explain how we can do that in Drupal 8. 
The scenario I’m trying to set up in this example is a paragraph that references a media entity with an image field.


Enable the responsive images module from Drupal core.

  1. To enable the Responsive Image module, go to Admin > Extend.
  2. Click the checkbox next to Responsive Image.
  3. Click Install.

This module may already be installed on your project, so just head to Admin > Extend and ensure that the Responsive Image module is enabled.

Responsive Image Styles Config

Add / Confirm breakpoints

The default theme will already have a breakpoints YAML file. If you’re using a custom theme you’ll need to make sure you have one too. It should exist at themes/{theme-name}/{theme-name}.breakpoints.yml, where {theme-name} is the name of your theme.
Create or open the file and configure your breakpoints. There should be a few breakpoints in there already, and they’ll look something like this:

{theme-name}.small:
  label: small
  mediaQuery: "(min-width: 0px)"
  weight: 1
  multipliers:
    - 1x
    - 2x
{theme-name}.medium:
  label: medium
  mediaQuery: "(min-width: 768px)"
  weight: 2
  multipliers:
    - 1x
    - 2x
{theme-name}.large:
  label: large
  mediaQuery: "(min-width: 1024px)"
  weight: 3
  multipliers:
    - 1x
    - 2x

You can add as many breakpoints as you need to suit your requirements. The weight orders the breakpoints, from the lowest number for the smallest breakpoint to the highest number for the largest. The multipliers are used to provide crisper images for HD and retina displays.

Configure Image Styles (sizes)

Head to Admin > Config > Media > Image Styles and create a size for each breakpoint.

Configuring Image Styles UI

  1. Click Add image style.
  2. Give it an Image style name and click Create new style (e.g. Desktop 1x, Desktop 2x, Tablet 1x etc…).

    Create Image Style UI

  3. Select an effect e.g. Scale or Scale and crop.

    Edit Image Style UI

    Edit Image Style Effect Options

  4. Set a width (height is calculated automatically) or width and height when cropping.

    Image Style Effect UI

  5. When creating multiple styles, use the breadcrumbs to get back to the image styles configuration page.

    Image Styles Breadcrumb Item

When you have created all the sizes for your responsive format you can move on to the next step.
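Behind the scenes, each image style you create in the UI is saved as a config entity. As a hedged sketch, an exported style might look something like this (the machine name, dimensions, and UUID key are illustrative placeholders):

```yaml
# config/sync/image.style.desktop_1x.yml — illustrative example
langcode: en
status: true
name: desktop_1x
label: 'Desktop 1x'
effects:
  # Drupal keys each effect by a generated UUID; a placeholder is shown here
  11111111-2222-3333-4444-555555555555:
    uuid: 11111111-2222-3333-4444-555555555555
    id: image_scale
    weight: 1
    data:
      width: 1440
      height: null
      upscale: false
```

Exporting config like this makes it easy to review the sizes across styles at a glance and to deploy them between environments.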

Create a responsive Image Style

Head to Admin > Config > Media > Responsive Image Styles

  1. Click Add responsive image style to create a new one.
  2. Give it a label (for example, if it’s for a paragraph type called profile_image then use that as the name).

    Add responsive image style UI

  3. Select your theme name from the Breakpoint group dropdown.

    Breakpoint Group Selection

  4. The breakpoints will load. Open each breakpoint that this image style will use and check the radio next to Select a single image style (or use multiple image styles).

    Configuring the breakpoint image style

  5. Select the image style from the Image style dropdown (these are the styles we created in the previous step).

    Image style selection UI

  6. Set the Fallback image style (this will be used where the browser doesn’t understand the <source> tags inside the <picture> element; it should be the most appropriate size to use if you could only pick one across all screen sizes).

    Fallback image style
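Like image styles, the responsive image style is also exportable config. A sketch of what the export might look like for our example (the theme name, style names and fallback are illustrative):

```yaml
# config/sync/responsive_image.styles.profile_image.yml — illustrative example
langcode: en
status: true
dependencies:
  theme:
    - mytheme
id: profile_image
label: 'Profile image'
breakpoint_group: mytheme
fallback_image_style: tablet_1x
image_style_mappings:
  -
    image_mapping_type: image_style
    image_mapping: desktop_1x
    breakpoint_id: mytheme.large
    multiplier: 1x
  -
    image_mapping_type: image_style
    image_mapping: desktop_2x
    breakpoint_id: mytheme.large
    multiplier: 2x
```

Each mapping ties one breakpoint/multiplier pair to one of the image styles created in the previous step.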

Add a new view mode for media entities

Head to Admin > Structure > Display Modes > View Modes, click ‘Add new view mode’ and add your display mode. In this instance, we’ll use ‘Profile image’ again.

Adding a view mode

Update the display of the image for the entity type

Head to Admin > Structure > Media Types > Image > Manage display

  1. On the Default tab, click Custom display settings at the bottom, check the new ‘Profile image’ view mode, then click Save.

    Custom display settings

  2. Click on the tab that matches your new display type (in my example it’s Profile image)
  3. On the Image field’s row change the Label to Hidden and the Format to Responsive image.

    Configuring the image format

  4. Click on the cog at the end of the row.

    Row configuration cog

  5. Under Responsive image style select your style.

    Format Configuration

  6. Select where the file should link to (or Nothing), then click Update.
  7. Click Save

Update your Paragraph type to use the new display format

Go to Structure > Paragraph Types > {type} > Manage Display

  1. Find the row with the field displaying your media entity and change the format to Rendered Entity
  2. Click the gear icon to configure the view mode by selecting your view mode from the list (in this instance profile image)

    Paragraph type display format

  3. Click Save

Testing Our Work

At this point you should be all set.

  1. Create an example page
  2. Select your paragraph to insert into the page
  3. Add an image
  4. Save the page and view it on the front end
  5. Inspect the image element and ensure that a <picture> element is rendered with <source> elements and a default <img>, and that when you resize the browser you see what you expect.
    A profile Image
  6. To inspect further select the Network tab in your developer tools and filter by images. Resize the browser window and watch as new image sizes are loaded at your defined breakpoints.

Palantir: Dynamic Content From the Edge

How to scale content delivery infrastructure by implementing Edge Side Includes in Drupal.

Developers and webmasters who oversee websites with millions of users need to provide a solution to keep their infrastructure from getting overloaded with requests. Scaling up web and database servers is one option, but it can be costly and inefficient. Instead, people have increasingly turned to a Content Delivery Network, or CDN, as a type of protective layer in front of their web and database servers.

What does a CDN do?

The CDN provides a cached layer of content close to the user, often referred to as “the edge.” When a user requests a homepage, for example, they are directed to the cached static version of that page on the CDN rather than overloading the web server or accessing the database, thereby scaling content delivery.

Scaled content delivery
Illustrating scaled content delivery supporting millions of requests while only passing on a small percentage of those requests to a web server, which in turn makes even fewer requests to a database server.

The CDN serves static content. So, what happens when web content is updated? There are a few different options here:

  • The cache can be programmed to expire after a certain number of hours or days.
  • Cache entries can be proactively purged when updates are made.
  • Changes to individual page content can be fetched as each page is requested by a user.

More problematic, however, are the changes that affect each and every page. For example, a change to a global banner or a menu alters the header or footer of every page. This is where our recent work with Edge Side Includes comes in.

Edge Side Includes

Edge Side Includes (ESI) is both a web standard and an XML-based language that enables the dynamic generation of HTML pages at “the edge”. We recently worked with one client to solve the problem: How can we enable our cached content to remain fresh even after we make updates to some of the global parts?

We used Drupal to solve this problem by generating and rendering this global content as ESI fragments. These ESI fragments could then be included by all of our client’s web properties by using ESI include statements, regardless of how they were built.
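An ESI-capable edge (such as Varnish or a CDN that supports ESI) assembles each page from include statements like the following; the fragment URLs and page structure here are illustrative, not the client’s actual markup:

```html
<!-- A page template on any web property; fragment paths are examples only -->
<html>
  <body>
    <!-- The edge replaces each include with the cached Drupal-rendered fragment -->
    <esi:include src="https://cms.example.com/esi/header" />
    <main>
      ...page-specific content...
    </main>
    <esi:include src="https://cms.example.com/esi/footer" />
  </body>
</html>
```

Because the fragments are resolved at the edge, each page and each fragment can carry its own cache lifetime.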

Pseudocode of static markup
Illustrating pseudocode of static markup on a web page, where each web page must include that markup and therefore each page must be retrieved from cache, the web server, and may require a shorter cached lifetime.
Pseudocode of ESI includes
Illustrating pseudocode of ESI includes and their corresponding ESI fragments indicating that now each web page may be able to have a longer cache lifetime since the ESI fragments are referenced and can have their own cache lifetime.

How ESI Works

For our client, we developed a way to render ESI fragments in Drupal for shared page parts (specifically the header and footer) so that non-Drupal sites can include them, and when those page parts change in Drupal, the non-Drupal sites get those changes automatically. At the cache layer, each page has its own unique content plus the same include references for the header and the footer.

Using ESI fragments, when a request is made for a page, the first thing that happens is a request for the cached header content, and then for the cached footer content. Now if something in the header is changed, only one fragment needs to be updated.

Drupal and Edge Side Includes

The use case that we solved here was specifically for the header and the footer, but our client wanted to have similar branding across all of their web properties, and they wanted it to be governed by the content management that’s done with Drupal. Drupal can do this by rendering the actual ESI fragments at two different endpoints.

Our custom module defines two routes, one for the header and one for the footer. Our controller maps to both of those routes and returns an empty render array. Then, in the .module file, we made sure that we’re only including the meta tags that we need and the libraries that we want. In our theme, we have special templates for the ESI fragments. We also made sure that we leveraged core functionality by still going through the render API.
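As a sketch, the routing side of such a module could look like this (the module, route and controller names are hypothetical, not the client’s actual code):

```yaml
# esi_fragments.routing.yml — hypothetical module, route and controller names
esi_fragments.header:
  path: '/esi/header'
  defaults:
    _controller: '\Drupal\esi_fragments\Controller\FragmentController::header'
    _title: 'Header fragment'
  requirements:
    _access: 'TRUE'

esi_fragments.footer:
  path: '/esi/footer'
  defaults:
    _controller: '\Drupal\esi_fragments\Controller\FragmentController::footer'
    _title: 'Footer fragment'
  requirements:
    _access: 'TRUE'
```

Each controller method returns a render array (blocks, menus), so libraries and cache tags flow through the normal render pipeline.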

We’re rendering blocks, we’re rendering menus, we’re using libraries, and we’re respecting cache tags.

Breakdown of responsibilities
Illustrating the breakdown of responsibilities between Drupal modules (route definition, controller creation, page attachment alters), themes (templates, library definition), and core (render process for various entities, libraries, and cache api).

Our client developed page templates that use business logic to set whatever variables might be needed to deliver their page content, adding our ESI include statements to actually grab the header and the footer content. They’re hosting all of their non-Drupal pages on a server that will provide this ESI service. Our client also determines the cache lifetime, both for their page templates and for the ESI fragments that Drupal’s actually hosting.

Breakdown of responsibilities for ESI implementers
Illustrating the breakdown of responsibilities for ESI implementers/consumers (business logic, variables parameters for fragments, page templates with variables and includes) and service providers (cache lifetime configuration, ESI fragment routes, ESI service / hosting).

Personalization Through ESI

So, what’s next for Drupal and ESI? Another implementation that we’re using in Drupal with ESI is a content model where a URL can reference internal or external endpoints and then include content from a URL that references static assets, such as CSS we can include for personalization. This will allow us to do things like pull data from Google Tag Manager, Marketo, Mailchimp or a similar platform, make some decisions about the route (which could be a view page), and then dynamically write the ESI include source based on the content to be rendered for that user. We’ll let you know as we progress!

What happens when the Drupal Security Team marks a module as unsupported?

You may have noticed that today the Drupal Security Team marked 16 modules as unsupported, due to the module maintainer not fixing a reported security vulnerability after a sufficiently long period of time.

Among those modules were a few very popular ones like Administration Views and Nodequeue, which have reported ~118k and ~40k sites using them, respectively.

Every time a popular module is marked unsupported, there’s a certain amount of panic and uncertainty, so I wanted to address that in this article, both for the Drupal community at large and for our customers in particular, because we promise to deploy security updates the same day they are released.

Read more to see our perspective!

PreviousNext: PreviousNext’s Open Source Contribution Policies and Initiatives for the Drupal Community

PreviousNext builds open source digital platforms for large scale customers, primarily based on Drupal and hosted using Kubernetes, two of the world’s biggest open source projects. With our business reliant on the success of these open source projects, our company is committed to contributing where we can relative to our comparatively small size. We get a lot of questions about how we do this, so we’re happy to share our policies so that other organisations might adopt similar approaches.

Owen Lansbury

We learned early on in the formation of PreviousNext that developers who are passionate about and engaged in open source projects usually make great team members, so we wanted to create a work environment where they could sustain this involvement.

The first step was to determine how much billable work on client projects our developers needed to deliver for PreviousNext to be profitable and sustainable. The figure we settled on was 80% of a full-time week, or 32 billable hours per week, as the baseline. Team members then self-manage their availability to fulfil their billable hours and can direct up to 20% of their remaining paid availability to code contribution or other community volunteering activities.

From a project management perspective, our team members are not allowed to be scheduled on billable work more than 80% of their time, which is then factored into our Agile sprint planning and communicated to clients. If certain team members contribute more billable hours in a given week, this just accelerates how many tickets we can complete in a Sprint.

If individual team members aren’t involved or interested in contribution, we expect their billable hours rate to be higher in line with more traditional companies. We don’t mandate that team members use their 20% time for contribution, but find that the majority do due to the benefits it gives them outside their roles. 

These benefits include:

  • Learning and maintaining best-practice development skills based on peer review by other talented developers in the global community.
  • Developing leadership and communication skills with diverse and distributed co-contributors from many different cultures and backgrounds.
  • Staying close to and often being at the forefront of new initiatives in Drupal, whether it be as a core code contributor or maintaining key modules that get used by hundreds of thousands of people. For example, the Video Embed Field that Sam Becker co-maintains is used on 123,487 websites and has been downloaded a staggering 1,697,895 times at the time of publishing. That’s some useful code!  
  • Developing close working relationships with many experienced and talented developers outside PreviousNext. In addition to providing mentoring and training for our team, these relationships pay dividends when we can open communication channels with people responsible for specific code within the Drupal ecosystem.
  • Building their own profiles within the community and being considered trusted developers in their own right by demonstrating a proven track record. After all, it’s demonstrated work rather than the CV that matters most. This often leads to being selected to provide expert talks at conferences and obviously makes them highly desirable employees should they ever move on from PreviousNext.
  • If our team members do get selected as speakers at international Drupal events, PreviousNext funds their full attendance costs and treats their time away as normal paid hours.
  • Working on non-client work on issues that interest them, such as emerging technologies, proof of concepts, or just an itch they need to scratch. We never direct team members that they should be working on specific issues in their contribution time.

All of these individual benefits provide clear advantages to PreviousNext as a company, ensuring our team maintains an extremely high degree of experience and elevating our company’s profile through Drupal’s contribution credit system. This has resulted in PreviousNext being consistently ranked in the top 5 companies globally that contribute code to Drupal off the back of over 1,000 hours of annual code contribution.

In addition to this 20% contribution time, we also ensure that most new modules we author or patch during client projects are open sourced. Our clients are aware that billable time during sprints will go towards this and that they will also receive contribution credit as the sponsor of the contributions. The benefits to clients of this approach include:

  • Open sourced modules they use and contribute to will be maintained by many other people in the Drupal community. This ensures a higher degree of code stability and security and means that if PreviousNext ceases to be engaged the modules can continue to be maintained either by a new vendor, their internal team or the community at large.
  • Clients can point to their own contribution credits as evidence of being committed Drupal community supporters in their own right. This can be used as a key element in recruitment if they start hiring their own internal Drupal developers.

Beyond code contributions, PreviousNext provides paid time to volunteer on organising Drupal events, sitting on community committees, running free training sessions and organising code sprints. This is backed by our financial contributions sponsoring events and the Drupal Association itself.

None of this is rocket science, but as a company reliant on open source software we view these contribution policies and initiatives as a key pillar in ensuring PreviousNext’s market profile is maintained and the Drupal ecosystem for our business to operate in remains healthy. 

We’re always happy to share insights into how your own organisation might adopt similar approaches, so please get in touch if you’d like to know more.

PreviousNext: Updating to Drupal 8.8.0 Beta with Composer

PreviousNext continue to be major contributors to the development and promotion of Drupal 8. As participants of the Drupal 8.8.0 Beta Testing Program, we thought it would be useful to document the steps we took to update one of our sites on Drupal 8.7 to the latest 8.8.0 beta.

Every site is different, so your mileage may vary, but it may save you some time.

Kim Pepper

Drupal 8.8 is a big release, with a number of new features added, and APIs deprecated to pave the way to a Drupal 9.0 release. Thankfully, the upgrade process was fairly straightforward in our case.

Upgrade Pathauto

The first step was to deal with the change record “The Path Alias core subsystem has been moved to the ‘path_alias’ module”. This meant some classes were moved to different namespaces. To make things smoother, we installed the latest version of the Pathauto module and cleared the caches.

composer require drupal/pathauto:^1.6@beta
drush cr

Core Dev Composer Package

We use the same developer tools for testing as Drupal core, and we want to switch to the new core composer packages, so first we remove the old one.

composer remove --dev webflo/drupal-core-require-dev

Update Patches

We sometimes need to patch core using cweagans/composer-patches. In the case of this site, we are using a patch from the issue “ckeditor_stylesheets cache busting: use system.css_js_query_string”, which needed to be re-rolled for Drupal 8.8.x. We re-rolled the patch, then updated the link in the extra/patches section of composer.json.

Update Drupal Core and Friends

In our first attempt, composer could not update due to a version conflict with some Symfony packages (symfony/finder, symfony/filesystem and symfony/debug). These are transitive dependencies (we don’t require them explicitly). Our solution was to explicitly require them (temporarily) at versions that Drupal core is compatible with, then remove them afterwards.

First require new Drupal core and dependencies:

composer require drupal/core:^8.8@beta --update-with-dependencies

Second, require new core-dev package and dependencies:

composer require --dev drupal/core-dev:^8.8@beta --update-with-dependencies

Lastly, remove the temporary required dependencies:

composer remove -n symfony/finder symfony/filesystem symfony/debug

Update the Database and Export Config

Now that our code is updated, we need to update the database schema, then re-export our config. We use drush_cmi_tools, so your commands may differ, e.g. a plain drush config-export instead of drush cexy.

drush updb
drush cr
drush cexy


We also need to update our settings.php file now that “The sync directory is defined in $settings and not $config_directories”.

This is a trivial change from:

$config_directories['sync'] = 'foo/bar';

to:

$settings['config_sync_directory'] = 'foo/bar';

Final Touches

In order to make sure our code is compatible with Drupal 9, we check for any custom code that is using deprecated APIs using the excellent PHPStan and Matt Glaman’s mglaman/phpstan-drupal. (Alternatively you can use Drupal Check.)

We were using an older version that was incompatible with "nette/bootstrap": ">=3", so we needed to remove that from the conflict section and do the remove/require dance once again.

composer remove --dev mglaman/phpstan-drupal

composer require --dev mglaman/phpstan-drupal --update-with-dependencies

And that’s it! Altogether not too painful once the composer dependencies were all sorted out. As we are testing the beta, some of these issues may be addressed in future betas and RCs.

I hope you found this useful! Got a better solution? Let us know in the comments!

JD Does Development: Docksal gets a Training

Docksal gets a Training
Tue, 11/12/2019 – 17:19

In July of last year I started a new job as a developer with a new agency. During my first week, in between meetings, HR trainings, and all the other fun things that happen during onboarding, I was introduced to the preferred local development environment that was being used on most of the projects.

It was lightweight, based on Docker, it ran easily, and it was extremely easy to configure. Prior to this, I had bounced around from local setup to local setup. My local dev environment resume included such hits as MAMP, WAMP, Acquia Dev Desktop, Kalabox, VAMPD, DrupalVM, Vagrant, ScotchBox, VirtualBox, native LAMP stacks, and everything in between. All of them had their strengths and weaknesses, but none of them really had that spark that really hooked me.

Enter Docksal.

When I first started using Docksal, I thought it was just like any other setup, and to a point, it is. It creates a reusable environment that can be shared across multiple developers and set up to mimic a hosting provider to a certain point, but the two things that really grabbed me were how easy it was to get started and how fast it was compared to other systems. Docksal has one key, killer feature in my opinionated mind, and that’s the fact that the entire application is written in Bash. The primary binary (which may or may not be the name of my upcoming one-man, off-Broadway, off-any-stage show) begins with #!/usr/bin/env bash and runs on any system that has the bash executable, which encompasses Linux (of course), macOS, and now Windows thanks to WSL and the ability to add Ubuntu.

One thing that was missing, though, was a training guide. Docksal has AMAZING documentation, including a great getting-started walkthrough, but for someone just starting out who might not have guidance and support from the people they work with, it might take a little getting used to.

If you know me, you know that I enjoy talking at conferences. I’ve given over two dozen presentations at several types of events from local meetup groups to national level conferences. If you don’t know me, you just learned something new about me. Since I enjoy talking in front of people so much, the next logical step was to find something I’m familiar with and make a training of it. Turns out, I’m familiar with Docksal.

I submitted my pitch for a training to NEDCamp, the New England Drupal Camp, and they accepted it. Since I now had a reason to write a training, I began writing one. Initially, I started with a very high-level outline and eventually built a framework for the training. Thanks to the nature of open source, I was able to reuse many features that already existed in order to make my training feel a little familiar to current users and easily accessible to new users.

The first go at this training will be at NEDCamp 2019 on Friday, November 22nd. This will be the first time a dedicated training spot has been used to train on Docksal, and I’m extremely excited to see how it goes and how to improve it. After that training, I will make my handbook available online, eventually to be merged into the Docksal GitHub repo as part of the documentation. I have had help from numerous people in building this training, especially the Docksal maintainers Sean Dietrich, Leonid Makarov and Alexei Chekiulaev; folks who have reviewed what I’ve written so far, Dwayne McDaniel and Wes Ruvalcaba; and the people who have challenged me to learn more about Docksal, who are too numerous to list.

If you’re interested in learning how to use Docksal or what it’s all about, consider attending my training at NEDCamp on November 22nd. You can find all the details on the NEDCamp training page, and if you can’t make it, be sure to watch for the handbook to be released soon.

Since I’m still working on the finishing touches, why not take the time to let me know what you would like to get out of this type of training or what you wish you would have known when learning how to use Docksal or a similar product in the comments and where you feel extra attention should be placed.


