OpenSense Labs: How to implement Continuous Deployment with Drupal

How to implement Continuous Deployment with Drupal
Shankar
Fri, 10/26/2018 – 18:46

The Guardian, one of the most trusted news media organisations, took a different approach for their membership and subscriptions apps. Rather than emphasising lengthy validation in staging environments, The Guardian’s Continuous Deployment pipeline places greater focus on ensuring that new builds are really working in production. Their objective was to let developers know that their code had run successfully in the real world, instead of just observing green test cases in a sanitised and potentially unrepresentative environment.


Thus, The Guardian reduced the amount of testing run pre-deployment and extended the deployment pipeline with feedback from tests run against the production site. Such is the significance of a lightweight Continuous Deployment pipeline, which has helped a large organisation like The Guardian focus on production validation instead of a large suite of acceptance tests. The same benefits can be witnessed in Drupal-based projects, where Continuous Deployment allows us to iterate on Drupal web applications at speed.

Read more on the implementation of Continuous Integration and Continuous Delivery with Drupal

A Brief Timeline of Continuous Deployment

The Agile Alliance states that the origins of Continuous Deployment can be traced to the early 2000s. In 2002, Kent Beck, the creator of Extreme Programming, mentioned Continuous Deployment in early (unpublished) discussions of applying Lean ideas to software, where undeployed features are seen as inventory. However, it took multiple years for the practice to be refined and codified.

Later, in the proceedings of Agile 2006 Conference, the first article describing the core of Continuous Deployment – The Deployment Production Line – came into the limelight. Published by Jez Humble, Chris Read and Dan North, it was a codification of the practices of numerous ThoughtWorks UK teams.

By 2009, the practice of Continuous Deployment had become well established, as can be seen in Timothy Fitz’s article Continuous Deployment at IMVU. Not only is it beneficial in Agile processes, but it also lends itself to methodologies such as Lean startup and DevOps.

Continuous Deployment in focus

Flowchart showing green and orange coloured boxes to illustrate workflow of Continuous Integration, Continuous Delivery and Continuous Deployment
Source: Atlassian

While Continuous Integration refers to the process of automatically building and testing your software on a regular basis, Continuous Delivery is the logical next step which ensures that your code is always in a release-ready state. The ultimate culmination of this process is Continuous Deployment.

In Continuous Deployment, every alteration that passes all stages of your production pipeline is released to the customers

In Continuous Deployment, every alteration that passes all stages of your production pipeline is released to the customers with no human intervention, and only a failed test will prevent a new alteration from being deployed to production. It is a spectacular way to accelerate the feedback loop with your customers and take pressure off the team, as it takes the so-called ‘release day’ out of the equation. It allows developers to focus on building software, and they can see their work go live minutes after they have finished it.

Why Should you Consider Continuous Deployment?

Continuous Deployment benefits both the internal team who are implementing it and the stakeholders in your company.

For internal team

  • Instead of performing a weekly or a monthly release, moving to feature-driven releases enables faster and finer-grained upgrades and helps in debugging and regression detection by only altering one thing at a time.
  • By automating every step of the process, you make it self-documenting and repeatable.
  • By making the deployment to the server fully automated, a repeatable deployment process can be created.
  • By automating the release and deployment process, you can constantly release the ongoing work to the staging and QA servers, thereby giving visibility into the state of development.
Moving to feature-driven releases enables faster and finer-grained upgrades

For stakeholders in the company

  • Instead of waiting for a fixed upgrade window, you can release features when they are ready thereby getting them to the customer faster. As you are constantly releasing to a staging server while developing them, internal customers can see the alterations and take part in the development process.
  • Managers will see the result of work faster and progress will be visible when you release more often.
  • If a developer needs a few more hours to make sure that the feature is in perfect working condition, then the feature will go out a few hours later and not when the next release window opens.
  • Sysadmins will not have to perform the releases themselves. Small, discrete feature releases will enable easier detection of the alterations that have affected the system adversely. 

Continuous Deployment Tools


Unit tests and functional tests put the code into as many execution scenarios as possible to predict its behaviour in production. Unit testing frameworks include NUnit, TestNG and RSpec, among others.
 
IT automation and configuration management tools like Puppet and Ansible manage code deployment and hosting resource configuration. Tools like Cucumber and Calabash can help in setting up integration and acceptance tests.
 
Monitoring tools like AppDynamics and Splunk can help in tracking and reporting any alterations in application or infrastructure performance due to the new code. Management tools like PagerDuty can trigger IT incident response. Monitoring and incident response for Continuous Deployment setups should be close to real-time to shorten time to recovery when there are problems with the code.
 
Rollback capabilities are essential in the deployment toolset to detect any unexpected or undesired effects of new code in production and mitigate them faster. Moreover, canary deployment and sharding, blue/green deployment, feature flags or toggles and other deployment controls can be useful for organisations looking to safeguard against user disruption from Continuous Deployment.
 
Some applications can be deployed in containers, using technologies such as Docker, often orchestrated with Kubernetes, to isolate updates from the underlying infrastructure.
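
To make the rollback and blue/green ideas concrete, here is a minimal sketch of a blue/green-style deploy script for a containerised application. The image name, ports and health-check URL are assumptions, and the actual traffic switch (usually a load balancer update) is elided:

    #!/usr/bin/env bash
    # Minimal blue/green deploy sketch; "myapp", the ports and the health URL are assumptions.
    set -euo pipefail

    NEW_TAG="$1"   # e.g. 2.4.1

    # Start the new "green" container alongside the live "blue" one.
    docker run -d --name app-green -p 8081:80 "myapp:${NEW_TAG}"
    sleep 5   # give the container a moment to boot

    # Smoke-test the new build before any traffic is switched to it.
    if curl -fsS --retry 5 --retry-delay 2 "http://localhost:8081/health" > /dev/null; then
      # Healthy: a load balancer would now switch traffic to green (elided here),
      # after which the old container can be retired.
      docker rm -f app-blue 2>/dev/null || true
      docker rename app-green app-blue
      echo "Deployed myapp:${NEW_TAG}"
    else
      # Unhealthy: roll back by discarding the new container; blue keeps serving traffic.
      docker rm -f app-green
      echo "Health check failed; rolled back" >&2
      exit 1
    fi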

Continuous Deployment with Drupal

An arrow and a box icon representing settings icon

A digital agency worked with Drupal 8, Composer, GitHub, Pantheon and CircleCI around Continuous Integration and Deployment. The project involved moving from internal hosting to the cloud (in this case, Pantheon), moving the main sites from Drupal 7 to Drupal 8 and implementing a new design.

To the cloud

Pantheon was chosen as the cloud host for the new Drupal sites. Initially, it was chosen for features like ‘Custom Upstreams’, one-click core updates, simple deployments between Dev, Test, and Live environments, Multidevs, and the fact that each site is a Git repo at heart. Terminus (Pantheon’s CLI tool) was heavily used and appreciated.
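
For a flavour of what day-to-day Terminus usage looks like, here are a few representative commands; the site name, environments and Multidev branch are hypothetical:

    # Illustrative Terminus commands; "my-site" and "feature-x" are hypothetical names.
    terminus multidev:create my-site.dev feature-x          # spin up a Multidev environment from Dev
    terminus env:deploy my-site.test --note="Deploy Dev to Test"
    terminus drush my-site.test -- updatedb -y              # run Drupal database updates via Drush
    terminus drush my-site.test -- cache-rebuild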

Migration to Drupal 8

The migration focused on two main umbrella sites and one news site serving both umbrella sites. A content refresh showed that the only content that needed to be migrated was the news articles. Drupal 8’s configuration management was found to be nicer than Drupal 7’s.

Custom Design

As Drupal was not the only web platform they were using, instead of building a Drupal theme they built a platform-agnostic project with a new look and feel. It was based on Zurb Foundation and was just HTML, CSS, and JavaScript.
 
Grunt was used as the build tool. So when they had a new release, they would just commit and push to GitHub. That triggered a CircleCI workflow which tagged a new release and published the release artefact as an npm package to Artifactory. From there, the npm package could be pulled into any project, including Drupal.
 
It should be noted that the published package includes only the CSS, JS, libraries and other assets. After publishing, a static site is created from the package and the corresponding HTML templates on a cloud host as a reference implementation.
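
A rough sketch of that release flow; the Grunt task, branch name and Artifactory registry URL are assumptions:

    # Developer side: build, commit and push; the push triggers CircleCI.
    grunt                                   # build the CSS/JS assets (task name assumed)
    git commit -am "New release of the custom design package"
    git push origin master

    # Inside the CircleCI workflow (sketch): tag the release and publish it to Artifactory.
    npm version patch                       # bumps the package version and creates a Git tag
    git push origin master --follow-tags
    npm publish --registry https://artifactory.example.com/api/npm/npm-local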

Deployment Process

They had an ‘upstream’ repo on GitHub named umbrella-upstream, which is a Composer-based Drupal 8 project with a custom install profile comprising custom modules, package.json, and deploy scripts. Each of the sites (umbrella-site X, umbrella-site Y, etc.) was also in a GitHub repo as a Composer-based Drupal 8 project and had umbrella-upstream configured as a remote.
 
When they pushed an alteration to the upstream repo, a set of CircleCI workflows started that ran some Codeception acceptance tests, and the alterations got merged from umbrella-upstream down to each umbrella-site X/Y repo.
 
Then, another CircleCI workflow built, tested and pushed a full Drupal umbrella-site X/Y install to the corresponding Pantheon site X/Y, all the way up to the Test environment. Quicksilver hooks were used to send any alterations made on Pantheon back to the site repos.
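
One plausible shape for that per-site CI job is sketched below; the repository names follow the article, but the exact commands are assumptions (including the pre-configured "pantheon" Git remote):

    # Hedged sketch of the merge-down-and-deploy job for one site.
    git clone git@github.com:example/umbrella-site-x.git && cd umbrella-site-x
    git remote add upstream git@github.com:example/umbrella-upstream.git
    git fetch upstream
    git merge upstream/master -m "Merge upstream alterations"

    composer install --no-dev --optimize-autoloader   # build the full Drupal codebase

    git push pantheon master                          # push the built site to Pantheon's Git remote
    terminus env:deploy my-site.test --note="Automated deploy from CI"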

The entire workflow involved:

  • Code alterations and Git commit in custom design repo
  • npm update custom-design --save-dev, grunt and Git commit in the umbrella-upstream repo

Finally, the alterations show up in the Test environment of each site on Pantheon.

Conclusion

It is of paramount importance that you keep iterating and deploy software at speed and with efficacy. Continuous Deployment is a great strategy for software releases wherein any code commit that passes the automated testing phase is automatically released into the production environment.
 
Drupal deployments can benefit to a great extent from incorporating Continuous Deployment into the project development process. The biggest advantage of doing so is that it makes alterations visible to the application’s users quickly.
 
Opensense Labs is committed to providing wonderful digital experiences to organisations with its suite of services.
 
To make your next Drupal-based project supremely efficacious through the implementation of Continuous Deployment, ping us at hello@opensenselabs.com


Wim Leers: State of JSON API (October 2018)

Mateu, Gabe and I just released the first RC of JSON API 2, so time for an update!

It’s been three months since the previous “state of JSON API” blog post, where we explained why JSON API didn’t get into Drupal 8.6 core.

What happened since then? In a nutshell:

  • We’re now much closer to getting JSON API into Drupal core!
  • JSON API 2.0-beta1, 2.0-beta2 and 2.0-rc1 were released
  • Those three releases span 84 fixed issues. (Not counting support requests.)
  • includes are now 3 times faster, 4xx responses are now cached!
  • Fixed all spec compliance issues mentioned previously
  • Zero known bugs (the only two open bugs are core bugs)
  • Only 10 remaining tasks (most of which are for test coverage in obscure cases)
  • ~75% of the open issues are feature requests!
  • ~200 sites using the beta!
  • Also new: JSON API Extras 2.10, works with JSON API 1.x & 2.x!
  • Two important features are >80% done: file uploads & revisions (they will ship in a release after 2.0)

So … now is the time to update to 2.0-RC1!
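
If you’re on the Composer-based workflow, updating looks something like this (one hedged way to request the release candidate, followed by the usual update steps):

    composer require drupal/jsonapi:^2.0@RC   # pull in the 2.0 release candidate
    drush updatedb -y && drush cache-rebuild  # run any update hooks and rebuild caches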

JSON API spec v1.1

We’ve also helped shape the upcoming 1.1 update to the JSON API spec, which we especially care about because it allows a JSON API server to use “profiles” to communicate support for capabilities outside the scope of the spec. 1

Retrospective

Now that we’ve reached a major milestone, I thought it’d be interesting to do a small retrospective using the project page’s sparklines:

JSON API project statistics, annotated with green vertical lines for the start of 2018 and the time of the previous blog post.
The first green line indicates the start of 2018. Long-time Drupal & JSON API contributor Gabe Sullice joined Acquia’s Office of the CTO two weeks before 2018 started. He was hired specifically to help push forward the API-First initiative. Upon joining, he immediately started contributing to the JSON API module, and I joined him shortly thereafter. (Yes, Acquia is putting its money where its mouth is.)
The response rate for this module has always been very good, thanks to original maintainer Mateu “e0ipso” Aguiló Bosch working on it quite a lot in his sparse free time. (And some company time — thanks Lullabot!) But there’s of course a limit to how much of your free time you can contribute to open source.

  • The primary objective for Gabe and me for most of 2018 has been to get JSON API ready to move into Drupal core. We scrutinized every area of the existing JSON API module, filed lots of issues, minimized the API surface, maximized spec compliance (hence also minimizing Drupalisms), minimized potential for regressions to occur, and so on. This explains the significantly elevated rate of the new issues sparkline. It also explains why the open bugs sparkline first increased.
  • This being our primary objective also explains the response rate sparkline being at 100% nearly continuously. It also explains the plummeted average first response time: it went from days to hours! This surely benefited the sites using JSON API: bug fixes happened much faster.
  • By the end of June, we managed to make the 1.x branch maximally stable and mature in the 1.22 release (shortly before the second green vertical line) — hence the “open bugs” sparkline decreased. The remaining problems required BC breaks — usually minor ones, but BC breaks nonetheless! The version of JSON API that ends up in core needs to be as future proof as possible: BC breaks are not acceptable in core. 2 Hence the need for a 2.x branch.

Surely the increased development rate has helped JSON API reach a strong level of stability and maturity faster, and I believe this is also reflected in its adoption: a 50–70 percent increase since the end of 2017!

From 1 to 3 maintainers

This was the first time I’ve worked so closely and so actively on a small codebase in an open-source setting. I’ve learned some things.

Some of you might understandably think that Gabe and I steamrolled this module. But Mateu is still very actively involved, and every significant change still requires his blessing. Funded contributions have accelerated this module’s development, but neither Acquia nor Lullabot ever put any pressure on how it should evolve. It’s always been the module maintainers, through debate (and sometimes heartfelt concessions), who have moved this module forward.

The “participants” sparkline being at a slightly higher level than before (with more consistency!) speaks for itself. Probably more importantly: if you’re wondering how the original maintainer Mateu feels about this, I’ll be perfectly honest: it’s been frustrating at times for him — but so it’s been for Gabe and me — for everybody! Differences in availability, opinion, priorities (and private life circumstances!) all have effects. When we disagree, we meet face to face to chat about it openly.

In the end I still think it’s worth it though: Mateu has deeper ties to concrete complex projects, I have deeper ties to Drupal core requirements, and Gabe sits in between those extremes. Our discussions and disagreements force us to build consensus, which makes for a better, more balanced end result! And that’s what open source is all about: meeting the needs of more people better 🙂

Thanks to Mateu & Gabe for their feedback while writing this!


  1. The spec does not specify how filtering and pagination should work exactly, so the Drupal JSON API implementation will have to specify how it handles this exactly. ↩︎

  2. I’ve learned the hard way how frustratingly sisyphean it can be to stabilize a core module where future evolvability and maintainability were not fully thought through. ↩︎

OpenSense Labs: Anatomy of Continuous Delivery with Drupal

Anatomy of Continuous Delivery with Drupal
Shankar
Thu, 10/25/2018 – 21:51

Audi’s implementation of Continuous Delivery in its marketing has had an astronomical impact on its competitive advantage. For instance, when Audi released its new A3 model along with its other new releases, it wanted to communicate the new features, convey the options, and assist people in understanding the differences among body types, engines and things like that. Continuous Delivery turned out to be the definitive solution. It helped in refining the messaging and optimising it on the fly to make sure that people understood what the automaker was trying to communicate.


Continuous Delivery (CD) is a quintessential methodology which makes the management and delivery of projects in big enterprises like Audi more efficient. When it comes to Drupal-based projects, Continuous Delivery can bring efficacy to the governance of projects. It can lead to better team collaboration and on-demand software delivery.

Read more on Continuous Integration with Drupal

Building and Deploying using Continuous Delivery

A graphical representation showing white parabolic curves on a blue graph and text on it
Source: Atlassian

For many organisations, shipping takes a colossal amount of effort. If your team is still relying on manual testing to prepare for releases and manual or semi-scripted deploys to carry them out, it can be toilsome. No wonder software development is moving towards continuity. In the continuous paradigm, quality products are released to customers in a frequent and predictable manner, thereby reducing risk.

In 2010, Jez Humble and David Farley released a book called Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.

In this book, they argued that “software that’s been successfully integrated into a mainline code stream still isn’t a software that’s out in production doing its job”. That is, no matter how fast you assemble your product, it does not really matter if it is just going to be stored in a warehouse for months.

Continuous Delivery is the software development practice for building software in such a way that it can be released to production at any time.

Continuous Delivery refers to the software development practice of building software in such a way that it can be released to production at any time. So, if your software is deployable throughout its lifecycle, you are doing Continuous Delivery. In this practice, the team gives more priority to keeping the software deployable than to working on new features. This ensures that anybody can get quick, automated feedback on the production readiness of their systems whenever alterations are made.

Thus, Continuous Delivery enables push-button deployments of any software version to any environment on demand.

How does Continuous Delivery work?

Flowchart showing box and circles to illustrate the workflow of continuous delivery, continuous integration, and continuous deployment
Source: Amazon Web Services

To achieve Continuous Delivery, you need to continuously integrate the software built by the development team, build executables, and run automated tests on those executables to detect problems.

Then, the executables need to be pushed into increasingly production-like environments to make sure that the software is in working condition when pushed to production. This is done by implementing a deployment pipeline that provides visibility into the production readiness of your applications. It gives feedback on every alteration to your system and allows team members to perform self-service deployments into their environments.

Continuous Delivery requires a close, collaborative working relationship between the team members which is often referred to as DevOps Culture. It also needs extensive automation of all possible parts of the delivery process using a deployment pipeline.
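
Expressed as plain shell steps, a minimal pipeline of this kind could look like the sketch below; the deploy script and test profile names are assumptions, not prescribed tooling:

    #!/usr/bin/env bash
    # Minimal deployment pipeline sketch; ./scripts/deploy.sh and the Behat profile are assumptions.
    set -e

    composer install                      # build: assemble the codebase and its dependencies
    ./vendor/bin/phpunit                  # commit stage: fast automated tests on every alteration
    ./scripts/deploy.sh staging           # push the same build to a production-like environment
    ./vendor/bin/behat --profile=staging  # acceptance tests against the staging environment
    ./scripts/deploy.sh production        # the production deploy stays push-button, on demand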

Continuous Delivery vs Continuous Integration vs Continuous Deployment

Continuous Delivery is often confused with Continuous Deployment.

In Continuous Deployment, every alteration goes through the pipeline and is automatically pushed into production, which results in many production deployments every day.

In Continuous Delivery, you are able to deploy frequently, and if the business demands a slower rate of deployment, you may choose not to. So, to perform Continuous Deployment, you must first be doing Continuous Delivery.

Continuous Delivery builds on Continuous Integration and deals with the final stages that are required for production deployment.

So, where does Continuous Integration come into the picture? It allows you to integrate, build, and test code within the development environment. Continuous Delivery builds on this and deals with the final stages that are required for production deployment.

Benefits of Continuous Delivery

The major benefits of Continuous Delivery are:

  • Minimised Risk: As you are deploying smaller alterations, there’s reduced deployment risk and it is easier to fix whenever a problem occurs.
  • Trackable progress: Tracking deployed work gives you believable progress. A developer declaring work to be “done” is less believable; if it is deployed into a production environment, you actually see the progress right there.
  • Rapid feedback: One of the pivotal challenges of any software development is that you can wind up building something that is not useful. So, the earlier and more frequently you get working software in front of real users, the faster you get feedback to find out how valuable it really is.

Continuous Delivery with Drupal

Drupal Community has been a great catalyst for digital innovation. To make software development and deployment better with Drupal, the community has always leveraged technological innovations.

A session held at DrupalCon Amsterdam had the objective of bringing enterprise Continuous Delivery practices to Drupal with a comprehensive walkthrough of the open-source CD platform ‘Go’. The ‘Go’ project started off as ‘Cruise Control’ in 2001, rooted in the first principle of the Agile Manifesto: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

It outlined the principles of the CD practice, exhibited how easy it is to get a Drupal build up and running in Go, and illustrated the merits of delivering in a pipeline. It involved setting up a delivery pipeline, then configuring build materials, build stages, build artefacts, jobs and tasks. Furthermore, it drilled down to familiar Drush commands and implemented the basic principles of CD.

Basically, it showed a build configuration that deploys Drupal sites using Phing, Drush and other tools, with the possibility of calling out to Jenkins as another way of managing tasks. Multiple steps of testing and approval were shown, with a separate path for content staging as distinct from code, thereby deploying a complex Drupal site.
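
The familiar Drush commands such a pipeline task drills down to are typically along these lines; the @staging site alias and the exact ordering are assumptions:

    # Typical Drush deploy sequence (Drush 8 era); the @staging alias is an assumption.
    drush @staging sql-dump > backup.sql   # safety net: back up the database before deploying
    drush @staging updatedb -y             # apply pending database updates
    drush @staging config-import -y        # import the site configuration stored in code
    drush @staging cache-rebuild           # rebuild caches so the new code takes effect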

Homepage of Go platform with a flowchart explaining Continuous Delivery practice and ‘Simplify Continuous Delivery’ written in bold letters on the top on a pink background

Later, it emphasised testing and previewing on production before cutting over a release, zero-downtime releases, secure and simple rollback options, and making the release a business decision rather than a technical one.

Moreover, it showed that Go’s trusted artefacts can take the ambiguities out of the build, with spectacular support for administering dependencies between different projects.

This session is very useful for developers who use Drush, have some understanding of DevOps and know about all-in-code delivery. Even those in less technical roles like QA (Quality Assurance), BA (Business Analyst) and product owner will find it beneficial, as the CD practice is as much about the interaction of the team as it is about tools and techniques.

What does the future of Continuous Delivery look like?

A report on Markets and Markets stated that the Continuous Delivery Market was valued at USD 1.44 Billion in 2017 and would reach USD 3.85 Billion by 2023 at a Compound Annual Growth Rate (CAGR) of 18.5% during the forecast period of 2018-2023.

Open source Continuous Delivery projects and tools will dominate the commercial CD tools segment

Bar graphs in dark blue and light blue colours showing automation market size in USD Billion, by segment, global, 2016-2020

Another report on Mordor Intelligence states that the market for Continuous Delivery is seeing a tremendous rise. It is due to the adoption of Artificial Intelligence (AI) and Machine Learning, rapid deployment of connected infrastructure and the proliferation of automated digital devices. But open source CD projects and tools will dominate the commercial CD tools segment.

The North American region is projected to have the largest growth in demand during the forecast period (2018-2023) because of the early adoption of cloud computing and IoT by the United States. The continuous evolution of new technologies (as shown above) has been the prime factor behind large-scale investments in the CD segment. Retail, healthcare, communications and manufacturing applications in North America are going to see a massive growth rate in the forecast period.

Conclusion

On-demand software delivery and enhanced team collaboration are a combination that every major enterprise can benefit from. Continuous Delivery is one such mechanism that can keep software development projects production-ready at all times. And this can work in favour of projects involving Drupal development and deployment.

Opensense Labs has been steadfast in its goal of offering marvellous digital experiences with its suite of services.

Contact us at hello@opensenselabs.com to learn how Continuous Delivery can be implemented for your business in Drupal-based projects.


Agiledrop.com Blog: Drupal meetup in Maribor

Last week we organised a Drupal meetup in Maribor (the second largest town in Slovenia, where Agiledrop has its second office). As members of Drupal Slovenia, we organised two presentations and sponsored a reception with networking after the event. Are you interested in what those two lectures were about?

READ MORE

Palantir: University of California Berkeley Extension

University of California Berkeley Extension
brandt
Wed, 10/24/2018 – 11:20

How we helped UC Berkeley Extension reduce the cost of student enrollment.

extension.berkeley.edu
Streamlined Enrollment to Nurture Students in Their Journeys

UC Berkeley Extension (Extension) is the continuing education branch of the University of California Berkeley. Extension offers more than 2,000 courses each year, including online courses, as well as more than 75 professional certificates and specialized programs of study.

Extension knew their site was significantly behind what they needed the student user experience to be, and they needed assistance in simplifying enrollment. While preparing for a redesign of their website, Extension approached Palantir as a subject matter expert on website redesign who could user-test their new information architecture and design, and conduct user research in order to recommend revisions that would help improve enrollment conversions in future iterations of the site. The ultimate goal was to make it easier for students to continue their educational journey at Extension.

Reducing The Cost of Student Enrollment

UC Berkeley Extension has over 40,000 student enrollments a year. Prior to their engagement with Palantir, it took 127 web sessions between a student’s first visit and enrollment.

In the first three months after implementing Palantir’s recommendations, that number decreased 33% to only 82.5 web sessions needed to secure an enrollment. By decreasing this number, Extension was able to capture more revenue per web session, increasing the average from $6.08 to $10.68 per session.

Here’s How We Did It

Because Extension had already done significant market research, we quickly nailed down the key goals of the project and how we would define success.

We identified a two-prong approach:

  1. Validate their recent site redesign and new information architecture through virtual and in-person user testing; and
  2. Conduct user research, and create and validate wireframes to support their execution of a future redesign.

Palantir came in as the subject matter experts on the re-design of our multi-million dollar e-commerce web site. They exceeded expectations on every measure. We then re-hired them for a subsequent project. We recommend Palantir highly.

Jim Kaczkowski

Marketing Manager, University of California Berkeley Extension


Our Methods

In order to move the needle on business outcomes, methods must be backed with real, actionable insights and data. For Extension, this meant developing a deep understanding of their users’ behavior and motivations.

First, we defined key audience segments and generated personas and user journeys. Then, we validated the way that each segment interacts with the site through menu testing and in-person usability testing. This user research gave us direct and applicable insights which established the foundation for what kinds of features prospective students need and expect from the site.

Competitive analysis table

We continued our exploration of audience needs by conducting a competitive analysis of six competitor sites in the higher, continuing, and online education space. Outcomes of this research revealed that students need more cues before they make a decision about enrolling in a course and before they take a deeper dive into a program or course page.

Questions like: “Is the course open or closed?” “Is there a waitlist?” and “Is it at a location convenient to me?” linger in a student’s mind.

Wireframes for new site

Based on the competitive analysis, audience definition, in-person usability testing, and menu testing, Palantir developed a set of wireframes to support Extension’s upcoming redesign.

These outlined many of the key priorities that surfaced throughout the project, such as:

  • Simplifying the Student Services landing page
  • Surfacing content that supports the offerings of the courses and programs (e.g. instructor expertise and alumni success)
  • Making information about career outcomes more prominent

But the testing didn’t stop there. Once wireframes were created, we validated them further by conducting a final set of first-click tests, designed to help identify and close gaps between the designs and what the audience members wanted to do on the site.

Before and After images of Extension site

The strategy work we did allowed Extension to gain a better sense of the needs and pain points of their audience and revealed a handful of key points for them to address:

  • The Extension site needed a more extensive faceted search.
  • Extension needed to work with the institution to reposition and rebrand the Student Services department as a key advocate for incoming, current and returning students.
  • Extension needed to modify its messaging to better surface the qualities of its curriculum, flexibility and affordability, along with instructor expertise so that prospective students could quickly get a sense of the value of the education and academic offerings.

Palantir helped to shape the future evolution of the Extension website by equipping the UC Berkeley team with a set of user experience tools and methods they continue to utilize. The user research compiled throughout the engagement continues to inform their design as they undertake new website projects, always with the student journey top of mind.

Mobomo: NOAA Fisheries and Mobomo win 2018 Acquia Engage Award

Award Program Showcases Outstanding Examples of Digital Experience Delivery

Vienna, VA – October 24, 2018 – Mobomo today announced it was selected along with NOAA Fisheries as the winner of the 2018 Acquia Engage Awards for the Leader of the Pack: Public Sector. The Acquia Engage Awards recognize the world-class digital experiences that organizations are building with the Acquia Platform.

In late 2016, NOAA Fisheries partnered with Mobomo to restructure and redesign their digital presence. Before the start of the project, NOAA Fisheries worked with ForeSee to gather insight on their current users. They wanted to address poor site navigation, one of the biggest complaints. They had concerns over their new site structure and wanted to test proposed designs and suggest improvements. Also, the NOAA Fisheries organization had siloed information, websites and even servers within multiple distinct offices. The Mobomo team was (and currently is) tasked with consolidating information into one main site to help NOAA Fisheries communicate more effectively with all worldwide stakeholders, such as commercial and recreational fishermen, fishing councils, scientists and the public. Developing a mobile-friendly, responsive platform is of the utmost importance to the NOAA Fisheries organization. By utilizing Acquia, we are able to develop and integrate pertinent information from separate internal systems with a beautifully designed interface.

“It has been a great pleasure for Mobomo to develop and deploy a beautiful and functional website to support NOAA Fisheries managing this strategic resource. Whether supporting the work to help sustain Alaskan Native American fish stocks, providing a Drupal-based UI to help fishing council oversight of the public discussion of legislation, or helping commercial fishermen obtain and manage their licenses, Mobomo is honored to help NOAA Fisheries execute its mission.” – Shawn MacFarland, CTO of Mobomo

More than 100 submissions were received from Acquia customers and partners, from which 15 were selected as winners. Nominations that demonstrated an advanced level of functionality, integration, performance (results and key performance indicators), and overall user experience advanced to the finalist round, where an outside panel of experts selected the winning projects.

“This year’s Acquia Engage Award nominees show what’s possible when open technology and boundless ambition come together to create world-class customer experiences. They’re making every customer interaction more meaningful with powerful, personalized experiences that span the web, mobile devices, voice assistants, and more,” said Joe Wykes, senior vice president, global channels at Acquia. “We congratulate Mobomo and NOAA Fisheries and all of the finalists and winners. This year’s cohort of winners demonstrated unprecedented evidence of ROI and business value from our partners and our customers alike, and we’re proud to recognize your achievement.”

“Each winning project demonstrates digital transformation in action, and provides a look at how these brands and organizations are trying to solve the most critical challenges facing digital teams today,” said Matt Heinz, president of Heinz Marketing and one of three Acquia Engage Award jurors. Sheryl Kingstone of 451 Research and Sam Decker of Decker Marketing also served on the jury.

About Mobomo

Mobomo builds elegant solutions to complex problems. We do it fast, and we do it at a planetary scale. As a premier provider of mobile, web, and cloud applications to large enterprises, federal agencies, napkin-stage startups, and nonprofits, Mobomo combines leading-edge technology with human-centered design and strategy to craft next-generation digital experiences.

About Acquia

Acquia provides a cloud platform and data-driven journey technology to build, manage and activate digital experiences at scale. Thousands of organizations rely on Acquia’s digital factory to power customer experiences at every channel and touchpoint. Acquia liberates its customers by giving them the freedom to build tomorrow on their terms.

For more information visit www.acquia.com or call +1 617 588 9600.

###

All logos, company and product names are trademarks or registered trademarks of their respective owners.


Ashday’s Digital Ecosystem and Development Tips: Drupal Module Spotlight: Paragraphs

 

I really don’t like WYSIWYG editors. I know that I’m not alone; most developers and site builders feel this way too. Content creators always request a wysiwyg, but I am convinced that it is more of a necessary evil and that they secretly dislike wysiwygs too. You all know what wysiwygs (What You See Is What You Get) are, right? They are those nifty fields that allow you to format text with links, bolding, alignment, and other neat things. They also can have the ability to add tables, iframes, Flash code, and other problematic HTML elements. With Drupal we have been able to move things out of a single wysiwyg body field into more discrete, purpose-built fields that match the shape of the content being created, and this has helped solve a lot of issues, but it still didn’t cancel out the need for the versatile body field that a wysiwyg can provide.

TEN7 Blog’s Drupal Posts: Episode 042: DrupalCorn 2018

It is our pleasure to welcome Tess Flynn to the TEN7 podcast to discuss attending the 2018 DrupalCorn and presenting “Dr. Upal Is In, Health Check Your Site”. Tess is TEN7’s DevOps engineer. Here’s what we’re discussing in this podcast: DrupalCorn2018; DrupalSnow; Camp scheduling; What it takes to put on a camp; Unconference the conference; Substantive keynotes; Dr. Upal is now in; The good health of your website is important; It takes humans and tools; Every website is a bit like a person, it’s a story; Docker-based Battle Royale; Auditing the theme; Mental health and tech; Drupal 8 migration; A camp with two lunches; Loaded baked potatoes and corn; Cornhole; Catching Jack the Ripper; Onto DrupalCamp Ottawa

MTech, LLC: Troubleshooting a Drupal 8 Migration

Troubleshooting a Drupal 8 Migration

A day doesn’t go by without someone asking a question in Slack #migration about how to troubleshoot a specific problem with a tricky migration. Almost universally, these problems can be demystified by using Xdebug and putting breakpoints in two spots in core’s MigrateExecutable. The first is in the ::import() method, where it rewinds the source and then processes it. The second place I regularly put a breakpoint is in ::processRow().
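
One hedged way to hit those breakpoints is to run a single-row migration from the command line with Xdebug enabled; the IDE key and migration ID below are placeholders, and migrate:import comes from the migrate_tools module:

    export XDEBUG_CONFIG="idekey=PHPSTORM"        # tell Xdebug to connect to your IDE
    drush migrate:import my_migration --limit=1   # process one row, pausing at your breakpoints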

heddn
Wed, 10/24/2018 – 08:21