Srijan Technologies: Site Owner’s Guide to a Smooth Drupal 9 Upgrade Experience

While upgrading to the latest version is always a best practice, the process can be daunting.

Drupal 8.7 is already here, and Drupal 9 will be released in a year, in June 2020.

Although a lot of discussion is happening around the upgrade and the possibilities it brings, the final product can only be as good as the process itself.

The good and important news is that moving from Drupal 8 to Drupal 9 should be really easy — radically easier than migrating from Drupal 7 to Drupal 8.

As a site owner, here’s what you need to know about the new release and what to take care of so the process goes smoothly, without many glitches.

myDropWizard.com: Experimental Composer repository with CKEditor plugins

In my experience, a big part of making a Drupal 8 site usable for content editors is customizing the WYSIWYG editor, which usually includes adding a couple of additional CKEditor plugins.

Of course, you can simply download the plugins into the ‘libraries’ folder, and that’s fine. But these days, it’s becoming best practice to pull in all of your site’s dependencies using Composer.

Adding ‘package’ repositories to your composer.json for the CKEditor plugins (the current best practice) works fine – but only for your individual site.
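
For reference, a ‘package’ repository entry in a site’s top-level composer.json looks something like the sketch below; the plugin name, version, and download URL here are illustrative placeholders rather than exact values:

{
    "repositories": [
        {
            "type": "package",
            "package": {
                "name": "ckeditor/fakeobjects",
                "version": "4.11.3",
                "type": "drupal-library",
                "dist": {
                    "url": "https://download.ckeditor.com/fakeobjects/releases/fakeobjects_4.11.3.zip",
                    "type": "zip"
                }
            }
        }
    ]
}

With an entry like that in place, composer require ckeditor/fakeobjects pulls the zip into whatever path your composer/installers configuration maps for drupal-library packages (typically the ‘libraries’ folder).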

It doesn’t work so well if you want to install:

  • A Drupal “Feature” (like with the Features module) that configures the WYSIWYG, including adding some CKEditor plugins, or
  • A Drupal distribution (like Panopoly or Lightning)

In those cases, you can’t depend on what the individual site may have in its top-level composer.json, and asking the user to manually copy-paste a bunch of ‘package’ repositories in there may create enough confusion or problems that potential users will just give up.

Well, I’ve got a possible solution to this problem: an experimental Composer repository which includes CKEditor plugins for use on a Drupal site.

It works best for Feature modules and distributions, but it can make life easier for individual sites too.

Read more to find out how it works and how to use it!

ThinkShout: The Problems In Tech Go Deeper Than ‘Hacking’

Earlier this week, The Cut ran a piece about a “Tinder Hacker” who created a fake profile with his roommate’s photos, then hooked a piece of code up to the Tinder API and did some very simple string substitutions so that men who messaged “her”–after “she” swiped right on them–were tricked into actually talking to other men who did the same. In brief, he put strangers in contact with each other under false pretenses, rerouted and surveilled their communications without consent, and proceeded to use this as a bragging point on dates and in interviews.

One might take exception to a number of elements of this story, but let’s start with its terminology. “Hacking” is a word whose meaning has broadened beyond all practical use, but in no sense did “Sean”, the pseudonymous subject of the story, “hack Tinder.” He relied on someone else’s reverse engineering to write some buggy code that ran against its API. That’s all.

The article itself seems confused about whether the Tinder API, or Application Programming Interface, only exists to allow homebrew apps on Windows Phone. But an API is just a set of commands made available by a server, like the Tinder mothership, to accept instructions from client apps, like the many copies of the Tinder app that run on all kinds of phones. Almost all the apps on your phone are clients that work this way, and APIs are ubiquitous. Even the Drupal and WordPress sites we build each have their own versions.

The code described in the article fits less within the definition of a hack than that of a bot. It would live on a server, persist as a service, wait for triggers–like incoming messages–and then respond to them according to certain rules. Some bots are used for automated customer service; some are used for art projects; some are used for jokes. Many, many, many bots are used for spam or other malicious purposes.

The ethics of bot development are not always simple, but they’re not new territory either. That’s the second and most glaring exception to be taken here: Sean’s assertion that his bot was at the “gray hat” level of malice in terms of its exploitation of code. Bot creator and Portland local Darius Kazemi wrote a thoughtful piece about considering and refining the possibility space of joke bots toward kindness in 2015. That in turn references fellow creator Leonard Richardson’s seminal 2013 post “Bots Should Punch Up”, which contains a telling bit with regard to the color of that hat:

“Hackers and comedians and artists are always attracted to the grey areas. But your bot is an extension of your will, and if you’re a white guy like me, most of the grey areas are not grey in your favor.”

Perhaps it’s assuming too much to conclude that Sean, a San Francisco programmer whose race is not mentioned in the article, is a white guy. Perhaps not. Technology as a field in the US is overwhelmingly full of white men, offering most of the benefits of the biggest wealth creation engine in history to the people who were already granted our society’s highest levels of privilege. That privilege, and power, means that thoughtless choices have more potential to do harm: by default, they’re punching down.

But even if that weren’t the case, an educated and socialized human adult shouldn’t have found it hard to see that writing a service solely to entice, deceive, manipulate and mock people in a vulnerable space like a dating app might have consequences. That is, unless you’ve spent a career being rewarded for ignoring consequences, because you work in tech. That’s the third exception to be taken. For pulling a prank like this, many people would be fired or sued. Instead, Sean got a better job.

I can admit that this story struck me on a personal level. Back before I had to quit Twitter, I used to write bots using their API myself. One of them, which I created in 2014, worked on a similar principle to the Tinder bot: it would receive a person’s message, put it in holding, and send them back a random held message from someone else in response. The juxtapositions were surreal, delightful, and often rewarding. And everyone involved was informed, consenting, and able to make use of built-in safety tools to report bad actors.

I’m not an ethicist or a researcher by training, but I knew to consider those aspects of my work because I have an interest in the history of the internet. According to the article, Sean does too–I’m willing to bet he and I read the same books about phone phreaks, blue boxes and Captain Crunch.

The phreakers he admires, by the way, were indeed “punching up” with their pranks–using low-rent tools to get one back at Bell, an exploitative tech monopoly that would eventually be broken up. Hey, there’s an idea.

People have made infamously bad choices like Sean’s before, and one might expect creators here in the future to work at avoiding their repetition. But instead, his story reflects the broader attitude of a tech sector that is not just ahistorical, but willfully naive and ignorant of the lessons of its past. (If you only read one thing linked in this whole piece, make it that last one. Go ahead, I’ll wait.)

The things I value about working at ThinkShout stand in opposition to all of that. My colleagues here are technical experts, but they’re also widely read, deeply informed, and always working to expand our collective view of the world in inclusive and considerate ways. That’s why we take pride in supporting progressive organizations like the Campaign Legal Center and ChangeLab Solutions. That’s why we focus on accessibility for all users as a core concern and increasing equity in our own job pipeline. That’s why we’re fine with being located far outside the insular centers of big tech culture, where it seems like people would rather try to land on the Moon than make change on the ground.

Even if the article in The Cut highlights the deep problems in the technology sphere that engulfs us all, there are certainly worse things on the internet than a man getting his kicks by trolling a bunch of other men. But there are better things too. If you’d prefer to join us on that side, please get in touch! We’re hiring, and we’d be glad to hear about how your hobby project brought a little kindness and empathy to the world.

Duo Consulting: A Better Way to Search in Drupal

One of the best things about Drupal’s open-source ecosystem is that it empowers you to be open-minded. Given the vast array of solutions and modules available, users can customize their site to their whims. Alternatively, if you think up and code something new, your contributions can be shared online with other users. With all of the customization available, Drupal is a conducive platform for outside-the-box thinking.

Decoupling is a recent example of this philosophy. Where a standard Drupal website would feature a Drupal-powered frontend and backend, decoupling opens the door to a variety of possibilities. A decoupled site can use different platforms and technologies for the front and back ends. For example, a decoupled site could pair Drupal’s backend CMS with a React-powered frontend. Such is Drupal’s flexibility that it can power scores of different user-facing channels from a single backend, including other sites, native apps, Internet of Things (IoT) devices, and more.

This decoupled or “headless” concept has more applications than just site design, though. The search function of a website, for one, can benefit from components that utilize this headless approach – and not a moment too soon. As Google has begun to sunset its Google Search Appliance offering, there is now a need for an open and flexible search tool with enterprise-level capabilities.

At this year’s Midwest Drupal Camp, the team from Palantir demonstrated that a decoupled approach to site search was viable. This solution, federated search, allows for indexing and searching across multiple sites. For organizations with a large web portfolio across different platforms, this open federated search solution can fill the gap left by Google.

Understanding why federated search for Drupal is important requires an understanding of how regular site search functions operate. At its core, the search feature is built from three components: the source, the index and the results. The source simply refers to all of the searchable content on a given site, from blogs to landing pages. The index is a compilation of metadata that makes the content from the source easier to parse. At Duo, we often use Apache Solr, a platform-agnostic, open source indexing solution, as it provides speed, power and its own server capabilities. Finally, the results component refers to the front-end experience that compiles and delivers the search results to the user.

The above setup will work fine for most simple websites, but larger organizations often require a more robust solution. With federated search, users can query across multiple sites on different platforms without placing much strain on Drupal, since Apache Solr handles generating the index and serving the results. This is accomplished through some tweaking of the basic site search formula.

Part of what makes this search so powerful is that it takes advantage of Drupal’s backend without relying on its frontend. Instead, Apache Solr’s dedicated servers shoulder the burden of indexing and providing the results. Before it can work, though, some configuration is needed. With that configuration in place, Apache Solr can encompass searches across different sites – including sites that aren’t built with Drupal. Building this custom solution in conjunction with the Search API and Search API Solr modules ensures that the different data types being indexed are standardized.
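
To give a flavour of that configuration (an illustrative sketch rather than Palantir’s exact setup; key names vary across Search API Solr module versions), the exported Search API server definition that points Drupal at a Solr core looks roughly like this:

# search_api.server.federated_search.yml (hypothetical machine name)
id: federated_search
name: 'Federated Search Solr server'
status: true
backend: search_api_solr
backend_config:
  connector: standard
  connector_config:
    scheme: http
    host: localhost
    port: 8983
    path: /
    core: federated

Each site in the portfolio is then configured to index its content into that same shared core, which is what lets a single query span all of them.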

As for the results component, the best approach is a decoupled one. Building out the front-end component in the React JavaScript library allows for robust searches without slowing down the rest of the site. By using Drupal’s CMS, Apache Solr and React in coordination, any organization can create a search feature that quickly indexes vast ranges of data and delivers results in an easily digestible manner. For a deeper dive, Palantir has made their demo of federated search available.

This powerful and streamlined take on site search has a variety of applications. Before releasing the solution, Palantir originally developed federated search for the University of Michigan, as each department ran their own sites on different platforms. Federated search now allows users to seamlessly search for information across the entire school’s network, regardless of the technology used to deliver the content. Beyond university ecosystems, federated search also presents an opportunity for eCommerce. Using this solution, products from different vendors can be consolidated into a simple search.

Thanks to Drupal being open source, organizations can utilize federated search and any other contributed solution at any time. This level of openness is what makes Duo such champions of the Drupal platform. At Duo, we’re committed to exploring new features like this and helping each of our partners think outside the box. If you’re ready to start rethinking your website or sites, we’re just a click away.

Morpht: Drupal 8 Configuration – Part 5: Module developers and the API

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website, however, sometimes met with disastrous consequences when things went wrong! Over time, technology on the web matured, and nowadays it’s fairly standard to have a Drupal project running on multiple environments, allowing site development to run in parallel to the live website without causing disruptions. New functionality is developed first in isolated private copies of the website, then put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code.

This series of articles explores the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project.

This article will focus specifically on how developers can manage, declare, and debug configuration in their custom modules.

Configuration Schema

Configuration schema describes the type of configuration a module introduces into the system. Schema definitions are used for things like translating configuration and its values, typecasting configuration values into their correct data types, and migrating configuration between systems. Configuration in the system is not very helpful without metadata that describes what the configuration is; configuration schemas provide that metadata by defining the configuration items.

Any module that introduces any configuration into the system MUST define the schema for the configuration the module introduces.

Configuration schema definitions are declared in [MODULE ROOT]/config/schema/[MODULE NAME].schema.yml, where [MODULE NAME] is the machine name of the module. Schema definitions may define one or multiple configuration objects. Let’s look at the configuration schema for the Restrict IP module as an example. This module defines a single configuration object, restrict_ip.settings:

restrict_ip.settings:
  type: config_object
  label: 'Restrict IP settings'
  mapping:
    enable:
      type: boolean
      label: 'Enable module'
    mail_address:
      type: string
      label: 'Contact mail address to show to blocked users'
    dblog:
      type: boolean
      label: 'Log blocked access attempts'
    allow_role_bypass:
      type: boolean
      label: 'Allow IP blocking to be bypassed by roles'
    bypass_action:
      type: string
      label: 'Action to perform for blocked users when bypassing by role is enabled'
    white_black_list:
      type: integer
      label: 'Whether to use a path whitelist, blacklist, or check all pages'
    country_white_black_list:
      type: integer
      label: 'Whether to use a whitelist, blacklist, or neither for countries'
    country_list:
      type: string
      label: 'A colon separated list of countries that should be white/black listed'

The above schema defines the config object restrict_ip.settings, which is of type config_object (defined in core.data_types.schema.yml).

When this module is enabled and the configuration is exported, the filename of the configuration will be restrict_ip.settings.yml. This object has the keys enable, mail_address, dblog etc. The schema tells Drupal what type of value is to be stored for each of these keys, as well as the label of each key. Note that this label is automatically provided to Drupal for translation.
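
For example, with Drush available, exporting and inspecting that object from the command line looks like this:

drush config:export                    # write active configuration, including restrict_ip.settings.yml, to the sync directory
drush config:get restrict_ip.settings  # print the restrict_ip.settings object as YAML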

The values can be retrieved from the restrict_ip.settings object as follows:

// Load values from the restrict_ip.settings configuration object.
$enable_module = \Drupal::config('restrict_ip.settings')->get('enable');
$mail_address = \Drupal::config('restrict_ip.settings')->get('mail_address');
$log = \Drupal::config('restrict_ip.settings')->get('dblog');

Note that modules defining custom fields, widgets, and/or formatters must define the schema for those plugins. See this page to understand how the schema definitions for these various plugins should be defined.
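
As a hypothetical illustration of that pattern, a module providing a formatter with the plugin ID my_formatter and a single trim_length setting would declare the schema for its settings along these lines:

field.formatter.settings.my_formatter:
  type: mapping
  label: 'My formatter settings'
  mapping:
    trim_length:
      type: integer
      label: 'Trim length'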

Default configuration values

If configuration needs to have default values, the default values can be defined in [MODULE ROOT]/config/install/[CONFIG KEY].yml where [CONFIG KEY] is the configuration object name. Each item of configuration defined in the module schema requires its own YML file to set defaults. In the case of the Restrict IP module, there is only one config key, restrict_ip.settings, so there can only be one file to define the default configuration, restrict_ip/config/install/restrict_ip.settings.yml. This file will then list the keys of the configuration object, and the default values. In the case of the Restrict IP module, the default values look like this:

enable: false
mail_address: ''
dblog: false
allow_role_bypass: false
bypass_action: 'provide_link_login_page'
white_black_list: 0
country_white_black_list: 0
country_list: ''

As can be seen, each of the mapped keys of the restrict_ip.settings config_object in the schema definition is added to this file, with a default value provided for each key. If a key does not have a default value, it can be left out of this file. When the module is enabled, these are the values that will be imported into active configuration as defaults.
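
Beyond install-time defaults, values can also be written programmatically through an editable configuration object. A minimal sketch (not part of the Restrict IP module itself), using the same restrict_ip.settings object:

// Load a writable copy of the configuration object.
$config = \Drupal::configFactory()->getEditable('restrict_ip.settings');

// Change some values and write them back to active configuration.
$config->set('enable', TRUE)
  ->set('dblog', TRUE)
  ->save();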

Debugging Configuration

When developing a module, it is important to ensure that the configuration schema accurately describes the configuration used in the module. Configuration can be inspected using the Configuration Inspector module. After enabling your custom module, visit the reports page for the Configuration Inspector at /admin/reports/config-inspector, and it will list any errors in configuration.

[Screenshot: the Configuration Inspector report, listing an item with errors in its configuration schema definition]

Clicking on ‘List’ for items with errors will give more details about the error.

[Screenshot: the Configuration Inspector showing an error on the ‘enable’ key: the stored value is a boolean, but the configuration schema defines a string]

Using the Configuration Inspector module, you can find where you have errors in your configuration schema definitions. Cleaning up these errors will correctly integrate your module with the Configuration API. In the above screenshot, the type of the data in active configuration is a boolean, yet the configuration schema defines it as a string. The solution is to change the schema definition to boolean.

Summary

In this final article of this series on the Drupal 8 Configuration API, we looked at configuration schema, how developers can define this schema in their modules and provide defaults, as well as how to debug configuration schema errors. Hopefully this series will give you a fuller understanding of what the Configuration API is, how it can be managed, and how you can use it effectively in your Drupal projects. Happy Drupaling!