<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Pixelite]]></title><description><![CDATA[A technology blog or rant area depending on the topic]]></description><link>https://www.pixelite.co.nz/</link><image><url>https://www.pixelite.co.nz/favicon.png</url><title>Pixelite</title><link>https://www.pixelite.co.nz/</link></image><generator>Ghost 3.1</generator><lastBuildDate>Tue, 17 Dec 2019 13:16:38 GMT</lastBuildDate><atom:link href="https://www.pixelite.co.nz/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Preparing for a high traffic event, simple steps to success]]></title><description><![CDATA[The steps that any new launch or high traffic event should go through in order to have the best chance of success. ]]></description><link>https://www.pixelite.co.nz/article/preparing-for-a-high-traffic-event-simple-steps-to-success/</link><guid isPermaLink="false">5dc622216b676700383d2f80</guid><category><![CDATA[Performance]]></category><category><![CDATA[Caching]]></category><category><![CDATA[Hosting]]></category><category><![CDATA[Drupal]]></category><category><![CDATA[Drupal planet]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Mon, 02 Dec 2019 09:34:49 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/11/denys-nevozhai-7nrsVjvALnA-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/11/denys-nevozhai-7nrsVjvALnA-unsplash.jpg" alt="Preparing for a high traffic event, simple steps to success"><p>The steps that any new launch or high traffic event should go through in order to have the best chance of success. 
This post is aimed at the project management level, so it will try to stay out of the weeds, and focus on the high level topics you need to think about. There is an ~18 minute recording at the end of this post where I presented this topic at <a href="https://drupalsouth.org/">Drupalsouth 2019</a>.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/12/Drupalsouth_2019_Hobart_-_Google_Slides.png" class="kg-image" alt="Preparing for a high traffic event, simple steps to success"><figcaption>Title slide from the presentation.</figcaption></figure><h2 id="preamble-what-could-be-considered-a-high-traffic-event">Preamble: What could be considered a high traffic event</h2><ul><li>Launching a brand new site</li><li>Re-platforming (e.g. moving CMS version or type, or between hosting providers)</li><li>eDM or other marketing event (e.g. AdWords)</li><li>Planned traffic event (e.g. Black Friday)</li><li>Unplanned traffic event (e.g. news and media site)</li></ul><h3 id="step-1-ensure-you-have-some-basic-drupal-configuration-in-place">Step 1) Ensure you have some basic Drupal configuration in place</h3><ul><li>Disable known problem child modules: <code>dblog</code>, <code>devel</code>, <code>statistics</code>, <code>radioactivity</code>, <code>page_cache</code></li><li>Enable <code>dynamic_page_cache</code> (if you have authenticated traffic)</li><li>Set the minimum cache lifetime to something sensible</li><li>Enable JS and CSS aggregation</li><li>Automate these checks with <a href="https://github.com/drutiny/drutiny">Drutiny</a></li></ul><h3 id="step-2-content-delivery-network-cdn-">Step 2) Content Delivery Network (CDN)</h3><p>Additional insurance against a lot of traffic is distributing your cached content to all corners of the globe.</p><p>Tiered caching should be used to ensure the highest offload rate.
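</p><p>A toy model makes the difference concrete. The following Python sketch is my own illustration (deliberately naive, and not how any particular CDN implements tiering): it counts origin requests when every edge POP that misses either goes straight to origin, or first through a single shield cache shared by all POPs.</p>

```python
# Toy model of tiered (shielded) caching. All numbers are illustrative;
# this is not a model of any specific CDN's behaviour.

def origin_hits(pops: int, urls: int, tiered: bool) -> int:
    """Count origin requests when `pops` edge caches each fetch `urls` URLs."""
    origin = 0
    shield = set()  # URLs already cached at the shared shield tier
    for _pop in range(pops):
        edge = set()  # URLs already cached at this edge POP
        for url in range(urls):
            if url in edge:
                continue  # edge hit, nothing leaves the POP
            edge.add(url)
            if tiered:
                if url not in shield:
                    shield.add(url)
                    origin += 1  # only the first global miss reaches origin
            else:
                origin += 1  # every edge miss reaches origin directly
    return origin

print(origin_hits(pops=10, urls=100, tiered=False))  # 1000
print(origin_hits(pops=10, urls=100, tiered=True))   # 100
```

<p>With ten POPs each requesting one hundred URLs, the shield cuts origin traffic from one miss per POP per URL down to one miss per URL in total.</p><p>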
Most CDN providers will support this at a given price point.</p><h3 id="step-3-cache-tuning-and-minimising-origin-requests">Step 3) Cache tuning and minimising origin requests</h3><p>Every request that bypasses your CDN layer adds load to the platform. In order to have the best chance of surviving a high traffic event, origin traffic needs to be carefully considered and reduced where possible.</p><p>Requests to origin that are often overlooked:</p><ul><li>404s</li><li>Marketing-based parameters (e.g. <code>utm_campaign</code>)</li><li>Redirects (especially if re-platforming)</li><li>WAF to block silly requests (e.g. WordPress URLs like <code>wp-login.php</code>)</li></ul><p>If you are interested in WAF tuning, you should check out my talk last year on <a href="https://youtu.be/oQizvm_dDeM">using Cloudflare to secure your Drupal site</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/12/ddos.png" class="kg-image" alt="Preparing for a high traffic event, simple steps to success"><figcaption>This has happened to a customer of mine in the past. Fun fact: the <code>gclid</code> and <code>dclid</code> query parameters are guaranteed unique for every user and click, which effectively makes them uncacheable.</figcaption></figure><h3 id="step-4-load-testing">Step 4) Load testing</h3><p>If you are building a new site, or are expecting a substantially different traffic profile than what you have currently, then you should look to load test the system.</p><ul><li>Production hardware replica (scaled up if appropriate)</li><li>Emulate expected user behaviour, using existing analytics or expected flows</li><li>Emulate what the browser would be doing (download all assets, including any HTTP 404s)</li><li>Ensure complex tasks are also simulated at the same time (e.g.
editorial, searching, form submissions, feeds ingestions)</li></ul><p>At the end of this task (which you may need to run several times), you should have the confidence that you can handle the traffic expected.</p><h3 id="step-5-hardware-auto-scaling">Step 5) Hardware (auto) scaling</h3><p>Now that load testing has confirmed the hardware you need, ensure you have autoscaling in place to deal with the peaks and troughs (it is unlikely you need to run your peak hardware for the entire duration of the event).</p><p>Autoscaling can also help if the origin traffic that you experience is higher than anticipated.</p><p>Test the autoscaler, set limits that you are comfortable with, and ensure you know how long new resources take to come to life.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/12/auto.png" class="kg-image" alt="Preparing for a high traffic event, simple steps to success"><figcaption>Be nice to your OPs team, use an auto scaler.</figcaption></figure><h3 id="step-6-have-a-good-fallback">Step 6) Have a good fallback</h3><p>Say the worst does happen, and your site does go down, or a critical API drops off the face of the internet, what does the end user see? Can you offer at least a better experience than a generic web server error page?</p><p>Most CDNs will have the ability to load balance origins (hot DR), and even fall back to a static version of the site if all origins are down.</p><p>It would make sense to test this prior to the high load event as well.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/12/abc.png" class="kg-image" alt="Preparing for a high traffic event, simple steps to success"><figcaption>ABC's news website went down just before Drupalsouth 2019, and someone managed to screencap it, and then send it to me.
I am confident that you can come up with a better fallback than this error page.</figcaption></figure><h3 id="step-7-warm-your-cache">Step 7) Warm your cache</h3><p>If you have a rather long-tail website, it will be worth warming your cache prior to the event. An excellent module called <a href="https://www.drupal.org/project/warmer">warmer</a> has been written, which allows warming all sorts of caches. It can, for instance, load every page in the XML sitemap, making this fairly low effort, high reward.</p><h3 id="step-8-third-party-api-dependencies">Step 8) Third party API dependencies</h3><p>This is more of a fundamental design decision likely made much earlier on in the project. Say the content of your page is <em>dependent</em> on the content in an API response. If you request the API content during page generation time, then you are tying the <em>speed</em> and <em>availability</em> of your site to another site (often outside your control).</p><p>This can lead to slow page load times, and in the worst case can tie up your server's resources.</p><p>New Relic APM has "external requests" charts that allow you to visualise this.</p><p>There are ways to mitigate this:</p><ul><li>Fetch the data in the background and cache locally in Drupal for as long as the data is considered 'good', e.g. using Drush and a cronjob.</li><li>Use a client side application (e.g. React) and request the API response on the client side</li><li>Use a CDN on the API and see Step #6 above</li></ul><h3 id="step-9-realtime-analytics">Step 9) Realtime analytics</h3><p>During the event, having access to realtime (or near realtime) analytics to find out:</p><ul><li>how the system is currently performing</li><li>requests/sec</li><li>where the traffic is coming from</li><li>cache offload rate from the CDN</li></ul><p>is extremely valuable. Even more valuable is being able to respond to this data in a quick and efficient manner. Having access to technical people can help.
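</p><p>As a sketch of what "near realtime" can look like, here is a small Python example that computes requests/sec and the cache offload rate from simplified, made-up CDN log lines (real CDN logs carry far more fields than this):</p>

```python
from collections import Counter

# Hypothetical, simplified CDN log lines: "<unix_ts> <cache_status> <path>".
# The format and data are invented for illustration only.
LOG = """\
1575280800 HIT /
1575280800 HIT /news
1575280801 MISS /news?utm_campaign=launch
1575280801 HIT /
1575280802 MISS /wp-login.php
1575280802 HIT /news
"""

def summarise(log: str) -> dict:
    """Compute basic traffic figures from a batch of log lines."""
    statuses = Counter()
    seconds = set()
    total = 0
    for line in log.strip().splitlines():
        ts, status, _path = line.split(maxsplit=2)
        statuses[status] += 1
        seconds.add(ts)
        total += 1
    return {
        "requests": total,
        "req_per_sec": total / len(seconds),
        "offload_pct": 100 * statuses["HIT"] / total,
    }

print(summarise(LOG))
# {'requests': 6, 'req_per_sec': 2.0, 'offload_pct': 66.66666666666667}
```

<p>Feeding a stream of real log lines through something like this is enough to spot a dropping offload rate within minutes.</p><p>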
The types of logs and analytics you should be looking to get a hold of:</p><ul><li>Web analytics tools (e.g. Google Analytics)</li><li>APM tools (e.g. New Relic)</li><li>CDN analytics (e.g. Cloudflare Logs)</li><li>Log stream from your hosting provider (e.g. the PHP error log)</li></ul><p>To see where you can take this, you might also be interested in <a href="https://www.pixelite.co.nz/article/analyzing-cloudflare-logs-with-the-command-line/">reading this blog post</a> that shows off some dashboards that were purpose-built for a high traffic event.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/12/dashboard.png" class="kg-image" alt="Preparing for a high traffic event, simple steps to success"><figcaption>An example dashboard that was written for a previous high traffic event that I was involved with. The data is around 6 minutes delayed, but still proved invaluable.</figcaption></figure><h3 id="step-10-application-changes-in-a-pinch">Step 10) Application changes in a pinch</h3><p>If you do spot something in your analytics, it is worth knowing what tools you have at your disposal to mitigate issues quickly and easily.</p><ul><li>Cloudflare page rules (redirect a broken path, increase the WAF presence on a route)</li><li>Nginx or Apache configuration</li><li>Application hotfix (avoid clearing the cache)</li></ul><p>Knowing what tool will solve what problem, how long each option takes to deploy, how safe it is, and how easy the rollback is, is absolutely critical.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/12/pagerules.png" class="kg-image" alt="Preparing for a high traffic event, simple steps to success"><figcaption>Cloudflare's page rules feature is an excellent way to make quick changes to how your application functions.</figcaption></figure><h3
id="step-11-letting-your-hosting-provider-and-their-support-team-know">Step 11) Letting your hosting provider and their support team know</h3><p>No-one likes surprises, so plan ahead. Ensure there are people available or on call during your traffic event. This goes for your hosting provider, your CDN provider, and your support staff.</p><h2 id="postamble-what-success-looks-like">Postamble: What success looks like</h2><p>So after your high traffic event has ended, here are some simple things to check in order to see how successful you were:</p><ul><li>Minimal origin requests and a high CDN offload</li><li>Boring origin hardware graphs</li><li>No rants on Twitter</li><li>No negative trending hashtag on Twitter</li><li>Users remember the event for its content, and not the problems with it</li></ul><h2 id="drupalsouth-2019-video">Drupalsouth 2019 video</h2><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="480" height="270" src="https://www.youtube.com/embed/zujP5cGJUFw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>This was me presenting this topic at Drupalsouth 2019.</figcaption></figure><p>Let me know in the comments if this was of use, and also if you have any other words of wisdom for anyone else.</p>]]></content:encoded></item><item><title><![CDATA[Search API attachments and storing reasonable amounts of data]]></title><description><![CDATA[Search API Attachments has a setting that allows you to store only the most important information in the database.]]></description><link>https://www.pixelite.co.nz/article/search-api-attachments-and-storing-reasonable-amounts-of-data/</link><guid isPermaLink="false">5dd486daa299e400443dda04</guid><category><![CDATA[Drupal]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[Search]]></category><category><![CDATA[Database]]></category><category><![CDATA[Drupal
planet]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Wed, 20 Nov 2019 02:41:43 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/11/alexander-andrews-eNoeWZkO7Zc-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/11/alexander-andrews-eNoeWZkO7Zc-unsplash.jpg" alt="Search API attachments and storing reasonable amounts of data"><p>On a number of Drupal sites that I am involved with, I have recently seen the following error message when trying to dump or restore certain databases:</p><pre><code class="language-sql">ERROR 2020 (HY000) at line 1: Got packet bigger than 'max_allowed_packet'</code></pre><p>It is important to note that with MySQL there are 2 settings for <code>max_allowed_packet</code>: the client (what you are connecting from), and the server (what you are connecting to).</p><p>In order to find out the client's current setting, you can run:</p><pre><code class="language-bash">$ mysql --help | grep max-allowed-packet | grep -v '#' | awk '{print $2/(1024*1024)}'
16</code></pre><p>So <code>16MB</code> on the client side.</p><p>In order to find out the server's current setting, you can run:</p><pre><code class="language-sql">SHOW VARIABLES LIKE 'max_allowed_packet';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| max_allowed_packet | 67108864 |
+--------------------+----------+</code></pre><p>So <code>64MB</code> on the server side.</p><p>So at the moment:</p><ul><li>The client cannot send or receive more than <code>16MB</code> in a single statement</li><li>The server cannot send or receive more than <code>64MB</code> in a single statement</li></ul><p>So you can end up in a position where there is content in your database that is working fine, but you cannot dump the database, nor can you restore from a dump (with your current client configuration).</p><p>Take the <a href="https://www.drupal.org/project/search_api_attachments">search_api_attachments</a> module in Drupal: it <a href="https://www.drupal.org/project/drupal/issues/2496457">uses the <code>key_value</code> table</a> in Drupal 8 to store extracted text from documents.</p><figure class="kg-card kg-code-card"><pre><code class="language-sql">SELECT name, length(value) as size FROM key_value ORDER BY size DESC limit 5;
+------------------------------+----------+
| name                         | size     |
+------------------------------+----------+
| search_api_attachments:14646 | 33623803 |
| search_api_attachments:14288 |  3394023 |
| search_api_attachments:13146 |  2356958 |
| search_api_attachments:4921  |  1554830 |
| search_api_attachments:1586  |  1549981 |
+------------------------------+----------+</code></pre><figcaption>You can see a 33MB document extracted into the <code>key_value</code> table</figcaption></figure><h2 id="solution-1-alter-max_allowed_packet">Solution 1 - alter max_allowed_packet</h2><p>The first solution is simply to alter the <code>max_allowed_packet</code> size to accommodate the size needed (on both the client and server). The only issue is that this is a game of cat and mouse. As soon as you tune the size to be larger, a content editor will upload a larger document.</p><p>It also means your database size will grow fairly unchecked, especially if you have a heavy editorial workflow where documents are commonly uploaded.</p><p>While I do advocate for sensible defaults, I think having &gt; <code>64MB</code> in a single cell in a table is potentially overkill for the benefits it provides.</p><h2 id="solution-2-reduce-the-amount-in-the-database">Solution 2 - reduce the amount in the database</h2><p>The point of indexing attachments is to ensure that you can still find content that is buried in binary files. I would argue that most of the important keywords of these documents will be in the first few pages. Indexing the entire document will often provide limited added value (if any).</p><p>Reading through the code of the <code>search_api_attachments</code> module, I spot these <a href="https://git.drupalcode.org/project/search_api_attachments/blob/8.x-1.x/src/Plugin/search_api/processor/FilesExtractor.php#L381-403">useful lines</a>:</p><pre><code class="language-php">  /**
   * Limit the indexed text to first N bytes.
   *
   * @param string $extracted_text
   *   The whole extracted text.
   *
   * @return string
   *   The first N bytes of the extracted text that will be indexed and cached.
   */
  public function limitBytes($extracted_text) {
    $bytes = 0;
    if (isset($this-&gt;configuration['number_first_bytes'])) {
      $bytes = Bytes::toInt($this-&gt;configuration['number_first_bytes']);
    }
    // If $bytes is 0 return all items.
    if ($bytes == 0) {
      return $extracted_text;
    }
    else {
      $extracted_text = mb_strcut($extracted_text, 0, $bytes);
    }
    return $extracted_text;
}</code></pre><p>So it turns out, baked into the module, is a way to effectively limit the number of characters stored in the database (<a href="https://www.drupal.org/project/search_api_attachments/issues/2888827">drupal.org issue</a>).</p><p>In order to enable this feature, you need to edit the Processors for a given index:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/11/index_list-3.png" class="kg-image" alt="Search API attachments and storing reasonable amounts of data"><figcaption>Search API index list, and the Processors link</figcaption></figure><p>Inside this page is a configuration form that allows you to set a max limit for the amount stored in the database.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/11/processor.png" class="kg-image" alt="Search API attachments and storing reasonable amounts of data"><figcaption>Search API Attachments "limit size" feature</figcaption></figure><p>After setting the size to <code>100 KB</code>, which seems like a reasonable number, and re-indexing the appropriate index, you see the results:</p><pre><code class="language-sql">SELECT name, length(value) as size FROM key_value ORDER BY size DESC limit 5;
+--------------------------------+--------+
| name                           | size   |
+--------------------------------+--------+
| node.field_storage_definitions | 116308 |
| search_api_attachments:831     | 102412 |
| search_api_attachments:1536    | 102412 |
| search_api_attachments:1571    | 102412 |
| search_api_attachments:1576    | 102412 |
+--------------------------------+--------+</code></pre><p>This will mean that the database is a lot smaller, allowing faster database backups, restores, and rollbacks. It will also save a lot of issues around forever tuning <code>max_allowed_packet</code>.</p><p>It is also worth noting that this <code>key_value</code> storage is actually a caching system for <code>search_api_attachments</code>, and that this table will be used even if your only search servers are external to Drupal (e.g. Solr), and you make no use of database searching.</p><h2 id="update-21-november-2019">Update 21 November 2019</h2><p>In the hopes that the module maintainers make a more sensible default limit, I have also raised <a href="https://www.drupal.org/project/search_api_attachments/issues/3095538">this issue</a> to get a default value set. Having any size limit would be better than having no limit. A patch is now uploaded so you can test this out.</p><h2 id="update-23-november-2019">Update 23 November 2019</h2><p>The <a href="https://git.drupalcode.org/project/search_api_attachments/commit/84ca55f">patch has been committed</a>, and a <a href="https://www.drupal.org/project/search_api_attachments/releases/8.x-1.0-beta15">new beta released</a> for Drupal 8 🎉. Thanks so much to <a href="https://www.drupal.org/u/izus">Ismaeil</a> (module maintainer) for the prompt reply and action.</p><h2 id="comments">Comments</h2><p>If you have come across this in the past, what was your go-to solution?
I am keen to understand how others have solved this issue, and how big your <code>key_value</code> table got in the meantime.</p>]]></content:encoded></item><item><title><![CDATA[How to add sub tabs under the User profile in Drupal 8]]></title><description><![CDATA[A simple step by step tutorial on adding a custom sub tab in a user's profile in Drupal 8.]]></description><link>https://www.pixelite.co.nz/article/how-to-add-sub-tabs-under-the-user-profile-in-drupal-8/</link><guid isPermaLink="false">5d8d21646fac3c0038fe69b3</guid><category><![CDATA[Drupal]]></category><category><![CDATA[Drupal planet]]></category><category><![CDATA[Drupal console]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Thu, 26 Sep 2019 22:18:49 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/09/mateo-avila-chinchilla-x_8oJhYU31k-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/09/mateo-avila-chinchilla-x_8oJhYU31k-unsplash.jpg" alt="How to add sub tabs under the User profile in Drupal 8"><p>I am writing this quick tutorial in the hopes it helps someone else out there. There are a few guides out there to <a href="https://drupal.stackexchange.com/questions/275753/how-do-i-add-secondary-tabs-to-the-user-profile-edit-tab">do similar tasks</a> to this. They are just not quite what I wanted.</p><p>To give everyone an idea of the desired outcome, this is what I wanted to achieve:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/09/example_sub_tab.png" class="kg-image" alt="How to add sub tabs under the User profile in Drupal 8"><figcaption>Example user profile with 2 custom tabs in it.</figcaption></figure><p>Before I dive into this, I will mention that you can do this with views, if all that you want to produce is content supplied by views.
Ivan <a href="https://www.webwash.net/custom-tab-user-profile-page-views-drupal-8/">wrote a nice article on this</a>. In my situation, I wanted a completely custom route, controller and theme function. I wanted full control over the output.</p><h2 id="steps-to-add-sub-tabs">Steps to add sub tabs</h2><h3 id="step-1-create-a-new-module">Step 1 - create a new module</h3><p>If you don't already have a module to house this code, you will need one. These commands make use of <a href="https://drupalconsole.com/articles/how-to-install-drupal-console">Drupal console</a>, so ensure you have this installed first.</p><pre><code class="language-bash">drupal generate:module --module='Example module' --machine-name='example' --module-path='modules/custom' --description='My example module' --package='Custom' --core='8.x'</code></pre><h3 id="step-2-create-a-new-controller">Step 2 - create a new controller</h3><p>Now that you have a base module, you need a route:</p><pre><code class="language-bash">drupal generate:controller --module='example' --class='ExampleController' --routes='"title":"Content", "name":"example.user.contentlist", "method":"contentListUser", "path":"/user/{user}/content"'</code></pre><h3 id="step-3-alter-your-routes">Step 3 - alter your routes</h3><p>In order to use magic autoloading, and also proper access control, you can alter your routes to look like this. This is covered in the <a href="https://www.drupal.org/docs/8/api/routing-system/parameters-in-routes/using-parameters-in-routes">official documentation</a>.</p><pre><code class="language-yaml"># Content user tab.
example.user.contentlist:
  path: '/user/{user}/content'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::contentListUser'
    _title: 'Content'
  requirements:
    _permission: 'access content'
    _entity_access: 'user.view'
    user: \d+
  options:
    parameters:
      user:
        type: entity:user

# Reports user tab.
example.user.reportList:
  path: '/user/{user}/reports'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::reportListUser'
    _title: 'Reports'
  requirements:
    _permission: 'access content'
    _entity_access: 'user.view'
    user: \d+
  options:
    parameters:
      user:
        type: entity:user</code></pre><h3 id="step-4-create-example-links-task-yml">Step 4 - create <code>example.links.task.yml</code></h3><p>This is the code that actually creates the tabs in the user profile. No Drupal console command for this unfortunately. The key part of this is defining <code>base_route: entity.user.canonical</code>.</p><pre><code class="language-yaml">example.user.content_task:
  title: 'Content'
  route_name: example.user.contentlist
  base_route: entity.user.canonical
  weight: 1

example.user.reports_task:
  title: 'Reports'
  route_name: example.user.reportList
  base_route: entity.user.canonical
  weight: 2</code></pre><h3 id="step-5-enable-the-module">Step 5 - enable the module</h3><p>Don't forget to actually turn on your custom module, nothing will work until then.</p><pre><code class="language-bash">drush en example</code></pre><h2 id="example-module">Example module</h2><p>The best (and simplest) example module I could find that demonstrates this is the <a href="https://github.com/drupal/drupal/tree/8.8.x/core/modules/tracker">Tracker module in Drupal core</a>. The Tracker module adds a tab to the user profile.</p>]]></content:encoded></item><item><title><![CDATA[How the feeds module in Drupal 7 ended up causing MySQL to sever the connection]]></title><description><![CDATA[This is a short story on an interesting problem we were having with the Feeds module and Feeds directory fetcher module in Drupal 7.]]></description><link>https://www.pixelite.co.nz/article/how-the-feeds-module-in-drupal-7-ended-up-causing-mysql-to-sever-the-connection/</link><guid isPermaLink="false">5d59d133f1ba340044f60aab</guid><category><![CDATA[Drupal]]></category><category><![CDATA[Drupal planet]]></category><category><![CDATA[SQL]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Mon, 19 Aug 2019 03:05:09 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/08/magnezis-magnestic-TW62wXQ6Omc-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/08/magnezis-magnestic-TW62wXQ6Omc-unsplash.jpg" alt="How the feeds module in Drupal 7 ended up causing MySQL to sever the connection"><p>This is a short story on an interesting problem we were having with the <a href="https://www.drupal.org/project/feeds">Feeds</a> module and <a href="https://www.drupal.org/project/feeds_fetcher_directory">Feeds directory fetcher</a> module in Drupal 7.</p><h2 id="background-on-the-use-of-feeds">Background on the use of Feeds</h2><p>Feeds for this site is being used to ingest XML from a third party 
source (Reuters). The feed ingests perhaps a couple of hundred articles per day. There can be updates to the existing imported articles as well, but typically they are only updated the day the article is ingested.</p><p>Feeds had been working well for a few years, and then all of a sudden, the ingests started to fail. The failure occurred only on production; in the lower environments the ingestion worked as expected.</p><h2 id="the-bizarre-error">The bizarre error</h2><p>On production we were experiencing this error during import:</p><pre><code class="language-bash">PDOStatement::execute(): MySQL server has gone away database.inc:2227 [warning] 
PDOStatement::execute(): Error reading result set's header [warning] 
database.inc:2227PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has [error]</code></pre><p>The error is not so much that the database server is not alive, more that PHP's connection to the database has been severed due to exceeding MySQL's <code>wait_timeout</code> value.</p><p>The reason why this occurs only on production: on Acquia this typically happens when you need to read and write to the shared filesystem a lot. On the lower environments the filesystem is local disk (as those environments are not clustered), so access is a lot faster. On production, the public filesystem is a network file share (which is slower).</p><h2 id="going-down-the-rabbit-hole">Going down the rabbit hole</h2><p>Working out <strong>why</strong> Feeds was wanting to read and/or write many files from the filesystem was the next question, and immediately one thing stood out. The sheer size of the <code>config</code> column in the <code>feeds_source</code> table:</p><figure class="kg-card kg-code-card"><pre><code class="language-sql">mysql&gt; SELECT id,SUM(char_length(config))/1048576 AS size FROM feeds_source GROUP BY id;
+-------------------------------------+---------+
| id                                  | size    |
+-------------------------------------+---------+
| apworldcup_article                  |  0.0001 |
| blogs_photo_import                  |  0.0003 |
| csv_infographics                    |  0.0002 |
| photo_feed                          |  0.0002 |
| po_feeds_prestige_article           |  1.5412 |
| po_feeds_prestige_gallery           |  1.5410 |
| po_feeds_prestige_photo             |  0.2279 |
| po_feeds_reuters_article            | 21.5086 |
| po_feeds_reuters_composite          | 41.9530 |
| po_feeds_reuters_photo              | 52.6076 |
| example_line_feed_article           |  0.0002 |
| example_line_feed_associate_article |  0.0001 |
| example_line_feed_blogs             |  0.0003 |
| example_line_feed_gallery           |  0.0002 |
| example_line_feed_photo             |  0.0001 |
| example_line_feed_video             |  0.0002 |
| example_line_youtube_feed           |  0.0003 |
+-------------------------------------+---------+</code></pre><figcaption>What 52 MB of ASCII looks like in a single cell.</figcaption></figure><p>Having to deserialize 52 MB of ASCII in PHP is bad enough.</p><p>The next step was dumping the value of the <code>config</code> column for a single row:</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">drush --uri=www.example.com sqlq 'SELECT config FROM feeds_source WHERE id = "po_feeds_reuters_photo"' &gt; /tmp/po_feeds_reuters_photo.txt</code></pre><figcaption>Get the 55 MB of ASCII in a file for analysis</figcaption></figure><p>Then open the resulting file in vim:</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">"/tmp/po_feeds_reuters_photo.txt" 1L, 55163105C</code></pre><figcaption>Vim struggles to open any file that has 55 million characters on a single line</figcaption></figure><p>And sure enough, inside this <code>config</code> column was a reference to every single XML file ever imported, a cool ~450,000 files.</p><figure class="kg-card kg-code-card"><pre><code class="language-php">a:2:{s:31:"feeds_fetcher_directory_fetcher";a:3:{s:6:"source";s:23:"private://reuters/pass1";s:5:"reset";i:0;
s:18:"feed_files_fetched";a:457065:{
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WR_RTROPTT_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:96:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_LYNXMPEE630KJ_RTROPTP_0_USA-TRADE-CHINA.XML";i:1530693632;
s:97:"private://reuters/pass1/topnews/2018-07-04T083617Z_147681_KBE99T04E_RTROPTT-LNK_0_OUSBSM-LINK.XML";i:1530693632;
s:102:"private://reuters/pass1/topnews/2018-07-04T083658Z_1_KBN1JU0X2_RTROPTT_0_JAPAN-RETAIL-ZOZOTOWN-INT.XML";i:1530693632</code></pre><figcaption>457,065 is the array size in <code>feed_files_fetched</code></figcaption></figure><p>So this is the root cause of the problem: Drupal is attempting to <code>stat()</code> ~450,000 files that do not exist, and these files are mounted on a network file share. This process took longer than MySQL's <code>wait_timeout</code> and MySQL closed the connection. When Drupal finally wanted to talk to the database, it was not to be found.</p><p>Interestingly enough, the problem of the <a href="https://www.drupal.org/node/1715124">config column running out of space came up in 2012</a>, and "the solution" was just to change the type of the column. Now you can store 4GB of content in this one column. In hindsight, perhaps this was not the smartest solution.</p><p>Also in 2012, you see the <a href="https://www.drupal.org/project/feeds_fetcher_directory/issues/1630970#comment-6436920">comment from @valderama</a>:</p><blockquote>However, as <code>feed_files_fetched</code> saves all items which were already imported, it grows endless if you have a periodic import.</blockquote><p>Great to see we are not the only people having this pain.</p><h2 id="the-solution">The solution</h2><p>The simple solution to limp by is to increase the <code>wait_timeout</code> value of your database connection. This gives Drupal more time to scan for the previously imported files prior to importing the new ones.</p><figure class="kg-card kg-code-card"><pre><code class="language-php">$databases['default']['default']['init_commands'] = [
  'wait_timeout' =&gt; "SET SESSION wait_timeout=2500",
];</code></pre><figcaption>Increasing MySQL's <code>wait_timeout</code> in Drupal's <code>settings.php</code>.</figcaption></figure><p>As you might guess, this is not a good long-term solution for sites with a lot of imported content, or content that is continually being imported.</p><p>Instead we opted to do a fairly quick update hook that would loop through all of the items in the <code>feed_files_fetched</code> key, and unset the older items.</p><pre><code class="language-php">&lt;?php

/**
 * @file
 * Install file.
 */

/**
 * Finds the position of the first needle that appears in the haystack.
 *
 * @see https://www.sitepoint.com/community/t/strpos-with-multiple-characters/2004/2
 * @param $haystack
 * @param $needles
 * @param int $offset
 * @return bool|int
 */
function multi_strpos($haystack, $needles, $offset = 0) {
  foreach ($needles as $n) {
    $position = strpos($haystack, $n, $offset);
    if ($position !== FALSE) {
      return $position;
    }
  }
  return FALSE;
}

/**
 * Trim imported Reuters file references older than 7 days from feeds_source.
 */
function example_reuters_update_7001() {
  $feedsSource = db_select("feeds_source", "fs")
    -&gt;fields('fs', ['config'])
    -&gt;condition('fs.id', 'po_feeds_reuters_photo')
    -&gt;execute()
    -&gt;fetchObject();

  $config = unserialize($feedsSource-&gt;config);

  // We only want to keep the last week's worth of imported articles in the
  // database for content updates.
  $cutoff_date = [];
  for ($i = 0; $i &lt; 7; $i++) {
    $cutoff_date[] = date('Y-m-d', strtotime("-$i days"));
  }

  watchdog('example_reuters', 'feeds_source records before trim: @count', ['@count' =&gt; count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'])]);

  // We attempt to match based on the filename of the imported file. This works
  // as the files have a date in their filename.
  // e.g. '2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML'
  foreach ($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'] as $key =&gt; $source) {
    if (multi_strpos($key, $cutoff_date) === FALSE) {
      unset($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'][$key]);
    }
  }

  watchdog('example_reuters', 'feeds_source records after trim: @count', ['@count' =&gt; count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'])]);

  // Save back to the database.
  db_update('feeds_source')
    -&gt;fields([
      'config' =&gt; serialize($config),
    ])
    -&gt;condition('id', 'po_feeds_reuters_photo', '=')
    -&gt;execute();
}</code></pre><p>Before the code ran, there were &gt; 450,000 items in the array; afterwards, we were below 100. That is a massive decrease in database size.</p><p>More importantly, the importer now runs a lot quicker (as it is not scanning the shared filesystem for non-existent files).</p><h2 id="what-this-means-for-feeds-in-drupal-8">What this means for Feeds in Drupal 8</h2><p>It came to my attention from <a href="https://www.drupal.org/u/dinesh18">Dinesh</a> that this same issue likely impacts Drupal 8 feeds. To make things slightly more interesting, the functionality of the Drupal 7 module <a href="https://www.drupal.org/project/feeds_fetcher_directory">feeds_fetcher_directory</a> has now moved into the <a href="https://git.drupalcode.org/project/feeds/blob/8.x-3.x/src/Feeds/Fetcher/DirectoryFetcher.php">main feeds module</a>.</p><p>An <a href="https://www.drupal.org/project/feeds/issues/3078213">issue has been opened on Drupal.org to track this</a>. I will update this blog post once we know the outcome.</p><h2 id="update-27-september-2019">Update - 27 September 2019</h2><p>The above update hook has been run on production (where Gluster is used). Feeds used to take upwards of 30 minutes to run there (even if there were no new files to process). After the update hook ran, it is now under 1 minute. We were also able to remove the <code>wait_timeout</code> setting. So this is a nice result.</p>]]></content:encoded></item><item><title><![CDATA[PHP 7.3 and when you can upgrade your Drupal site]]></title><description><![CDATA[PHP 7.3.0 was released in December 2018, and brings with it a number of improvements in both performance and the language. 
As always with Drupal you need to strike a balance between adopting these new improvements early and running into issues]]></description><link>https://www.pixelite.co.nz/article/php-7-3-and-when-you-can-upgrade-your-drupal-site/</link><guid isPermaLink="false">5d23ea3f3cd21a00383572e5</guid><category><![CDATA[Drupal]]></category><category><![CDATA[Drupal planet]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Sun, 28 Jul 2019 19:59:38 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/peter-maselkowski-N135eczYTAs-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/peter-maselkowski-N135eczYTAs-unsplash.jpg" alt="PHP 7.3 and when you can upgrade your Drupal site"><p>PHP 7.3.0 was <a href="https://www.php.net/ChangeLog-7.php#7.3.0">released in December 2018</a>, and brings with it a number of improvements in both performance and the language. As always with Drupal you need to strike a balance between adopting these new improvements early and running into issues that are not yet fixed by the community.</p><h2 id="why-upgrade-php-to-7-3-over-7-2">Why upgrade PHP to 7.3 over 7.2?</h2><ul><li><strong>It is around 10% faster compared to PHP 7.2</strong> - some basic benchmarks for Drupal on <a href="https://kinsta.com/blog/php-benchmarks/#drupal-benchmarks">https://kinsta.com/blog/php-benchmarks/#drupal-benchmarks</a></li><li><strong>A bunch of quality of life improvements to the language</strong> - e.g. <a href="https://wiki.php.net/rfc/flexible_heredoc_nowdoc_syntaxes">flexible heredoc and nowdoc syntaxes</a>, <a href="https://wiki.php.net/rfc/trailing-comma-function-calls">allowing a trailing comma in function calls</a> and <a href="https://wiki.php.net/rfc/json_throw_on_error">better JSON parsing error messages</a> (just to name a few). 
I would recommend reading this <a href="https://kinsta.com/blog/php-7-3/">great blog post on the topic</a> if you want to know more.</li></ul><h2 id="what-hosting-providers-support-php-7-3">What hosting providers support PHP 7.3?</h2><p>All the major players have support; here is how you configure it for each.</p><h3 id="acquia">Acquia</h3><p>Somewhere around April 2019, the option to choose PHP 7.3 was released. You can opt into this version by changing a value in Acquia Cloud. This can be done on a per-environment basis.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/Overview___Acquia.png" class="kg-image" alt="PHP 7.3 and when you can upgrade your Drupal site"><figcaption>The PHP version configuration screen for Acquia Cloud&nbsp;</figcaption></figure><h3 id="pantheon">Pantheon</h3><p>Pantheon have had support since April 2019 as well (<a href="https://pantheon.io/blog/speed-your-wordpress-or-drupal-site-php-73">see the announcement post</a>). To change the version, you update your <code>pantheon.yml</code> file (<a href="https://pantheon.io/docs/php-versions/">see the docs on this</a>).</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml"># Put overrides to your pantheon.upstream.yml file here.
# For more information, see: https://pantheon.io/docs/pantheon-yml/
api_version: 1
php_version: 7.3</code></pre><figcaption>Example <code>pantheon.yml</code> file</figcaption></figure><p>On a side note, it is interesting that PHP 5.3 is still offered on Pantheon (end of life for <a href="https://www.php.net/eol.php">nearly 5 years</a>).</p><h3 id="platform-sh">Platform.sh</h3><p>I am unsure when Platform.sh released PHP 7.3, but the process to enable it is very similar to Pantheon's: you update your <code>.platform.app.yaml</code> file (<a href="https://docs.platform.sh/languages/php.html#supported-versions">see the docs on this</a>).</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml"># .platform.app.yaml
type: "php:7.3"</code></pre><figcaption>Example <code>.platform.app.yaml</code> file</figcaption></figure><h3 id="dreamhost">Dreamhost</h3><p>PHP 7.3 is also available on Dreamhost, and can be chosen in a dropdown in their UI (<a href="https://help.dreamhost.com/hc/en-us/articles/214895317-How-do-I-change-the-PHP-version-of-my-site-">see the docs on this</a>).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/02_PHP.fw.png" class="kg-image" alt="PHP 7.3 and when you can upgrade your Drupal site"><figcaption>The '<em>Manage Domains</em>' section of Dreamhost</figcaption></figure><p>Dreamhost also wins an award for allowing the oldest version of PHP that I have seen in a while (PHP 5.2).</p><h2 id="when-can-you-upgrade-php-7-3">When can you upgrade to PHP 7.3</h2><h3 id="drupal-8">Drupal 8</h3><p>As of <a href="https://www.drupal.org/project/drupal/releases/8.6.4">Drupal 8.6.4</a> (6<sup>th</sup> December 2018), PHP 7.3 is fully supported in Drupal core (<a href="https://www.drupal.org/node/3038583">change record</a>). I have been running PHP 7.3 with Drupal 8 for a while now and have seen no issues, and this includes running some complex installation profiles such as <a href="https://www.drupal.org/project/thunder">Thunder</a> and <a href="https://www.drupal.org/project/lightning">Lightning</a>.</p><p>Any Drupal 8 site that is reasonably up to date should be fine with PHP 7.3.</p><h3 id="drupal-7">Drupal 7</h3><p>PHP 7.3 support is slated for the next release of Drupal 7, being Drupal 7.68 (see the <a href="https://www.drupal.org/project/drupal/issues/3012308">drupal.org issue</a>); however, there are a number of related tasks that seem like deal breakers. 
There are also <a href="https://www.drupal.org/node/3060/qa">no PHP 7.3 and Drupal 7 tests running</a> daily either.</p><p>For the meantime, it is probably best to hold off on the PHP 7.3 upgrade until 7.68 is out the door, and contributed modules have had a chance to upgrade and cut new stable releases.</p><p>A <a href="https://www.drupal.org/project/issues/search?projects=&amp;project_issue_followers=&amp;issue_tags_op=%3D&amp;issue_tags=PHP+7.3">simple search on Drupal.org</a> yields the following modules that look like they may need work (more are certainly possible):</p><ul><li>composer_manager (<a href="https://www.drupal.org/project/composer_manager/issues/3058496">issue</a>)</li><li>scald (<a href="https://www.drupal.org/project/scald/issues/3032429">issue</a>) [now fixed and released]</li><li>video (<a href="https://www.drupal.org/project/video/issues/3042169">issue</a>)</li><li>search_api (<a href="https://www.drupal.org/project/search_api/issues/3009744">issue</a>) [now fixed and released]</li></ul><p>Most of the issues seem to be related to this deprecation: <a href="https://wiki.php.net/rfc/continue_on_switch_deprecation">Deprecate and remove continue targeting switch</a>. If you know of any other modules that have issues, please let me know in the comments.</p><h3 id="drupal-6">Drupal 6</h3><p>For all you die-hard Drupal 6 fans out there (I know a few large websites still running this), you are going to be in for a rough ride. There is a <a href="https://github.com/d6lts/drupal/tree/php-7">PHP 7 branch of the d6lts Github repo</a>, so this is promising; however, the last commit was September 2018, so this does not bode well for PHP 7.3 support. 
I also doubt contributed modules are going to be up to scratch (drupal.org does not even list D6 versions of modules anymore).</p><p>To test this theory, I audited the current 6.x-2.x branch of Views:</p><pre><code class="language-bash">$ phpcs -p ~/projects/views --standard=PHPCompatibility --runtime-set testVersion 7.3
................................................W.W.WW.W....  60 / 261 (23%)
................................E........................... 120 / 261 (46%)
...................................................EE....... 180 / 261 (69%)
............................................................ 240 / 261 (92%)
.....................                                        261 / 261 (100%)</code></pre><p>Three errors in Views alone, and the errors are show stoppers too:</p><pre><code class="language-bash">Function split() is deprecated since PHP 5.3 and removed since PHP 7.0; Use preg_split() instead</code></pre><p>If this is the state of one of the most popular modules for Drupal 6, then I doubt the lesser known modules will be any better.</p><p>If you are serious about supporting Drupal 6, it would pay to get in contact with <a href="https://www.mydropwizard.com">My Drop Wizard</a>, as they are <a href="https://www.mydropwizard.com/blog/drupal-6-year-2020-and-php-7-support">at least providing support for people looking to adopt PHP 7</a>.</p>]]></content:encoded></item><item><title><![CDATA[Analyzing Cloudflare Logs (formerly ELS) with the command line]]></title><description><![CDATA[If you have an enterprise zone with Cloudflare, there is the ability to request the raw request logs using 'Cloudflare Logs' (formerly called Enterprise Log Share or ELS for short).]]></description><link>https://www.pixelite.co.nz/article/analyzing-cloudflare-logs-with-the-command-line/</link><guid isPermaLink="false">5d2be2fa4a57280044316114</guid><category><![CDATA[Cloudflare]]></category><category><![CDATA[Curl]]></category><category><![CDATA[Analytics]]></category><category><![CDATA[CDN]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Tue, 16 Jul 2019 16:00:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/stephen-dawson-qwtCeJ5cLYs-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/stephen-dawson-qwtCeJ5cLYs-unsplash.jpg" alt="Analyzing Cloudflare Logs (formally ELS) with the command line"><p>If you have an <a href="https://www.cloudflare.com/plans/">enterprise zone</a> with Cloudflare, there is the ability to request the raw request logs using '<a 
href="https://www.cloudflare.com/products/cloudflare-logs/">Cloudflare Logs</a>' (formerly called Enterprise Log Share or ELS for short).</p><p>Cloudflare Logs comes in two flavours, "Log Push" (e.g. to an S3 bucket) and "Log Pull" (using the REST API). In this blog post I will be covering the REST API, as I find analyzing the data easier on my local laptop.</p><p>If you have Splunk or Sumologic (or similar), then Log Push will likely be better suited to you.</p><h2 id="how-to-download-your-cloudflare-logs-using-the-rest-api">How to download your Cloudflare logs using the REST API</h2><h3 id="step-1-ensure-cloudflare-logs-are-enabled-for-your-zone">Step 1: Ensure Cloudflare Logs are enabled for your zone</h3><p>This is a manual step, and it requires you to raise a ticket with Cloudflare in order to enable it. It would be super if there was an API endpoint to both read and write this feature flag, but alas, manual it is for now.</p><p>You can enable Cloudflare Logs for many zones en masse in the same ticket, so my advice would be to save some time and enable it for all your enterprise zones now. There is literally no downside to enabling it (assuming you don't store silly things like credit card numbers in the URL).</p><p>It is also important to note that the logs are only captured from the point you enable the service; they do not retroactively appear. 
So if you are experiencing issues, and you don't have Cloudflare Logs already enabled, you may have missed out on collecting critical data.</p><h3 id="step-2-get-your-cloudflare-global-api-key">Step 2: Get your Cloudflare Global API key</h3><p>You get this from your user profile in Cloudflare.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/apikey.png" class="kg-image" alt="Analyzing Cloudflare Logs (formally ELS) with the command line"><figcaption>You find your Global API key in your user profile in the Cloudflare UI.</figcaption></figure><p>Treat this API key like you would your password; keep it safe.</p><h3 id="step-3-use-your-favourite-tool-or-language-to-download-the-logs">Step 3: Use your favourite tool or language to download the logs</h3><p>Now that you have your email address and Global API key, you can start to use the Cloudflare REST API to retrieve the logs.</p><p>Here is an example to download one hour's worth of logs. There is an offset of 5 minutes in the past (to ensure you get a full hour's worth of logs, as there is a delay). This will work so long as the zone does not have loads of traffic (as there is a 1GB limit on the download).</p><pre><code class="language-bash">ZONE_ID=XXXX
CLOUDFLARE_EMAIL=bob@example.com
CLOUDFLARE_KEY=XXXXXX
STARTDATE=$(($(date +%s)-3900))
ENDDATE=$((STARTDATE+3600))
FILENAME="/tmp/${ZONE_ID}-${STARTDATE}-${ENDDATE}.log"
FIELDS=$(curl -s -H "X-Auth-Email: ${CLOUDFLARE_EMAIL}" -H "X-Auth-Key: ${CLOUDFLARE_KEY}" "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logs/received/fields" | jq '. | to_entries[] | .key' -r | paste -sd "," -)
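A quick aside on the line above (my addition, not part of the original script): it asks the <code>/logs/received/fields</code> endpoint for every available field name, and joins the JSON keys with commas. A cheap sanity check that the call worked before pulling logs:

```shell
# Sanity check (not in the original script): warn if the field list came
# back empty, which usually means a wrong zone ID or bad credentials.
if [ -z "${FIELDS:-}" ]; then echo "WARNING: field list is empty, check zone ID and credentials"; fi
```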

curl -s \
  -H "X-Auth-Email: ${CLOUDFLARE_EMAIL}" \
  -H "X-Auth-Key: ${CLOUDFLARE_KEY}" \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logs/received?start=${STARTDATE}&amp;end=${ENDDATE}&amp;fields=${FIELDS}" \
  &gt; "${FILENAME}" \
  &amp;&amp; echo "Logs written to ${FILENAME}"</code></pre><p>There is actually a lot of information in a single request, and a large JSON object is returned. You can get a sense of this below (dummy data has been substituted):</p><pre><code class="language-json">$ head -n1 ${FILENAME} | jq
{
  "CacheCacheStatus": "hit",
  "CacheResponseBytes": 89846,
  "CacheResponseStatus": 200,
  "CacheTieredFill": false,
  "ClientASN": 9304,
  "ClientCountry": "hk",
  "ClientDeviceType": "desktop",
  "ClientIP": "118.143.70.210",
  "ClientIPClass": "noRecord",
  "ClientRequestBytes": 1928,
  "ClientRequestHost": "www.example.com",
  "ClientRequestMethod": "GET",
  "ClientRequestPath": "/scripts/app.built.js",
  "ClientRequestProtocol": "HTTP/1.1",
  "ClientRequestReferer": "https://www.example.com/",
  "ClientRequestURI": "/scripts/app.built.js?puhl4d",
  "ClientRequestUserAgent": "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",
  "ClientSSLCipher": "ECDHE-RSA-AES128-SHA",
  "ClientSSLProtocol": "TLSv1.2",
  "ClientSrcPort": 33177,
  "EdgeColoID": 23,
  "EdgeEndTimestamp": 1563271736447000000,
  "EdgePathingOp": "wl",
  "EdgePathingSrc": "macro",
  "EdgePathingStatus": "nr",
  "EdgeRateLimitAction": "",
  "EdgeRateLimitID": 0,
  "EdgeRequestHost": "www.example.com",
  "EdgeResponseBytes": 89194,
  "EdgeResponseCompressionRatio": 0,
  "EdgeResponseContentType": "application/javascript",
  "EdgeResponseStatus": 200,
  "EdgeServerIP": "",
  "EdgeStartTimestamp": 1563271736404000000,
  "FirewallMatchesActions": [],
  "FirewallMatchesSources": [],
  "FirewallMatchesRuleIDs": [],
  "OriginIP": "",
  "OriginResponseBytes": 0,
  "OriginResponseHTTPExpires": "",
  "OriginResponseHTTPLastModified": "",
  "OriginResponseStatus": 0,
  "OriginResponseTime": 0,
  "OriginSSLProtocol": "unknown",
  "ParentRayID": "00",
  "RayID": "4f732d708c26d1ee",
  "SecurityLevel": "med",
  "WAFAction": "unknown",
  "WAFFlags": "0",
  "WAFMatchedVar": "",
  "WAFProfile": "unknown",
  "WAFRuleID": "",
  "WAFRuleMessage": "",
  "WorkerCPUTime": 0,
  "WorkerStatus": "unknown",
  "WorkerSubrequest": false,
  "WorkerSubrequestCount": 0,
  "ZoneID": 12345
}</code></pre><h2 id="analyse-your-cloudflare-logs">Analyse your Cloudflare logs</h2><p>Now that you have the raw data, you should look to turn it into something you can make business decisions with.</p><p>Here are some simple analyses you can do with the <code>jq</code> tool (ensure <a href="https://github.com/stedolan/jq/wiki/Installation">you install this first</a> if you have not already).</p><h3 id="top-uris">Top URIs</h3><pre><code class="language-bash">jq -r .ClientRequestURI ${FILENAME} | sort -n | uniq -c | sort -nr | head -n 3

3716 /scripts/app.built.js
1331 /images/sample.png
 642 /
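The same pattern works for any field. One addition of mine (not from the original post) that pairs well with the offload discussion later is the <code>CacheCacheStatus</code> breakdown; every "hit" is a request your origin never served. Shown here against inline sample records so it runs as-is; point it at <code>${FILENAME}</code> in practice.

```shell
# Cache status breakdown (my addition). The inline samples stand in for
# the real log file downloaded earlier.
printf '%s\n' '{"CacheCacheStatus":"hit"}' '{"CacheCacheStatus":"hit"}' '{"CacheCacheStatus":"miss"}' |
  jq -r .CacheCacheStatus | sort | uniq -c | sort -nr
```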
</code></pre><h3 id="top-user-agents">Top user agents</h3><pre><code class="language-bash">jq -r .ClientRequestUserAgent ${FILENAME} | sort -n | uniq -c | sort -nr | head -n 3

1507 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36
1364 Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
1014 Mozilla/5.0 (iPhone; CPU iPhone OS 12_3_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.1 Mobile/15E148 Safari/604.1
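A related cut (my addition, not from the original post) is by device type, using the <code>ClientDeviceType</code> field visible in the sample record above; again, inline samples stand in for <code>${FILENAME}</code>.

```shell
# Device type breakdown (my addition), same pattern as the UA report.
printf '%s\n' '{"ClientDeviceType":"desktop"}' '{"ClientDeviceType":"mobile"}' '{"ClientDeviceType":"desktop"}' |
  jq -r .ClientDeviceType | sort | uniq -c | sort -nr
```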
</code></pre><h3 id="top-http-404s">Top HTTP 404s</h3><p>Slightly more complex query, but certainly still readable.</p><pre><code class="language-bash">jq 'select(.EdgeResponseStatus == 404) | "\(.ClientRequestHost)\(.ClientRequestURI)"' ${FILENAME} | sort -n | uniq -c | sort -nr | head -n 3

 197 "www.example.com/images/globalnav/example.gif"
 195 "www.example.com/images/excelmark200_blue_300dpi.png"
  49 "www.example.com/includes/fonts/318130/AFD64F04666D9047C.css"</code></pre><h3 id="top-ips-triggering-the-waf">Top IPs triggering the WAF</h3><pre><code class="language-bash">jq -r 'select(.WAFAction == "drop") | .ClientIP' ${FILENAME} | sort -n | uniq -c | sort -nr | head -n 3

   1 58.11.157.113
   1 18.212.21.164</code></pre><p>There are other examples on <a href="https://developers.cloudflare.com/logs/tutorials/parsing-json-log-data/">Cloudflare's own documentation site</a> if you wish to pursue this further. Mostly this is just a matter of knowing how to use <code>jq</code>.</p><h2 id="extra-for-experts">Extra for experts</h2><p>Using <code>jq</code> is fun for some basic analysis, but at some point you will want something more comprehensive. Here are some of the more unique things I have done to help show off this data. This will likely involve using a programming language, and having some form of presentation output (e.g. HTML) from it.</p><h3 id="html-tables">HTML tables</h3><p>Having the data in HTML makes it more presentable.</p><p>On a side note, I often find the offload rate of the CDN is the most critical number for caching, and it is missing from the Cloudflare UI. Here the offload of <code>64.38%</code> indicates that Cloudflare has removed around two thirds of all requests from your origin platform. Given enough time, energy and traffic you can tune this number to be &gt; <code>99.9%</code>.</p><!--kg-card-begin: html--><table class="table table-striped table-hover table-sm"><thead class="thead-dark"><tr><th scope="col">Layer that served request</th><th scope="col">Requests</th></tr></thead><tbody><tr><td>Edge</td><td>103,916</td></tr><tr><td>Cache</td><td>435,913</td></tr><tr><td>Re-Validated from origin (HTTP 304)</td><td>25,550</td></tr><tr><td>Origin</td><td>312,878</td></tr><tr><td>Offload</td><td>64.38%</td></tr></tbody></table><!--kg-card-end: html--><h3 id="integration-with-highcharts">Integration with Highcharts</h3><p><a href="https://www.highcharts.com/">Highcharts</a> is a Javascript powered graphing library. It supports zooming, and removing series by clicking on them. Fairly fancy, and great for graphing time-based data. 
Here is 24 hours of data, broken down by minute (the Cloudflare UI does not allow this granularity).</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/highcharts.png" class="kg-image" alt="Analyzing Cloudflare Logs (formally ELS) with the command line"><figcaption>HTTP status codes over time, displayed using Highcharts.</figcaption></figure><h3 id="integration-with-geckoboard">Integration with Geckoboard</h3><p>If you need something more realtime, then Geckoboard is a simple solution. Geckoboard supports custom datasets, and allows you to send arbitrary data to it. Here is a real dashboard for a high traffic event</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/dashboard-cdn.png" class="kg-image" alt="Analyzing Cloudflare Logs (formally ELS) with the command line"><figcaption>An example Geckoboard dashboard showing Cloudflare Logs data being aggregated.</figcaption></figure><h3 id="logstalgia">Logstalgia</h3><p>If you convert the JSON format into Apache format you can use this rather unique visualization. Logstalgia ends up producing a pong-like representation of the traffic and the paddles are the virtual hosts. Fun stuff to have on the TV on your office wall if you have one free. <a href="https://logstalgia.io/">See the official site</a> for more information on how to install and use this.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/logstalgia-1.png" class="kg-image" alt="Analyzing Cloudflare Logs (formally ELS) with the command line"><figcaption>Pushing the limits of Logstalgia with a high traffic event. 
Looks like a DDoS, but it is just loads of traffic.</figcaption></figure><h2 id="comments">Comments</h2><p>If you have done something unique with Cloudflare Logs (and are allowed to share it), please let me know in the comments.</p>]]></content:encoded></item><item><title><![CDATA[Adding Google Analytics to AMP posts in Ghost]]></title><description><![CDATA[Ghost comes with AMP support, however there is no built-in way in the administration UI to add Google Analytics to your AMP theme. Fortunately, implementing this is relatively simple.]]></description><link>https://www.pixelite.co.nz/article/adding-google-analytics-to-amp-posts-in-ghost/</link><guid isPermaLink="false">5d21adb03cd21a00383571e1</guid><category><![CDATA[Ghost]]></category><category><![CDATA[Analytics]]></category><category><![CDATA[Google analytics]]></category><category><![CDATA[AMP]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Thu, 11 Jul 2019 16:00:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/edho-pratama-yeB9jDmHm6M-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/edho-pratama-yeB9jDmHm6M-unsplash.jpg" alt="Adding Google Analytics to AMP posts in Ghost"><p>I noticed that out of the box, Ghost comes with AMP support (<a href="https://ghost.org/blog/amp-support/">as it has done since 2016</a>), however there is no built-in way in the administration UI to add Google Analytics to your AMP theme. This means you will be missing out on a portion of your mobile traffic in your analytics. Not great.</p><p>Fortunately, implementing this is relatively simple.</p><h2 id="download-the-starter-amp-page-template">Download the starter AMP page template</h2><p>You can grab Ghost's starter AMP page template <a href="https://github.com/TryGhost/Ghost/blob/master/core/frontend/apps/amp/lib/views/amp.hbs">from their Github repository</a>. 
Place <code>amp.hbs</code> into the root of your theme.</p><h2 id="add-in-the-required-google-analytics-code">Add in the required Google Analytics code</h2><p>There are two sections to add.</p><p>In the <code>&lt;head&gt;</code> section, near the end of it, place the Javascript include:</p><pre><code class="language-html">    {{!-- Load amp-analytics --}}
    &lt;script async custom-element="amp-analytics" src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"&gt;&lt;/script&gt;</code></pre><p>After the <code>&lt;body&gt;</code> tag is opened add the following <code>gtag</code> configuration section. You will need to replace the <code>UA-XXXXXXX-X</code> with your own Google Analytics property ID.</p><pre><code class="language-html">    {{!-- Configure analytics to use gtag --}}
    &lt;amp-analytics type="gtag" data-credentials="include"&gt;
        &lt;script type="application/json"&gt;
            {
                "vars" : {
                    "gtag_id": "UA-XXXXXXX-X",
                    "config" : {
                        "UA-XXXXXXX-X": { "groups": "default" }
                    }
                }
            }
        &lt;/script&gt;
    &lt;/amp-analytics&gt;</code></pre><p>You will then need to upload your theme to your Ghost blog, which is <a href="https://www.ghostforbeginners.com/how-to-install-a-ghost-theme/">covered nicely in this blog post</a>.</p><h2 id="verify-your-amp-posts-are-valid">Verify your AMP posts are valid</h2><h3 id="option-1-use-your-browser-s-console">Option #1 - Use your browser's console</h3><p>Visit any AMP post and append <code>#development=1</code> to the end of the URL; this will trigger the AMP validator to run. If you open the console of your browser, you should see something like this:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/amp.png" class="kg-image" alt="Adding Google Analytics to AMP posts in Ghost"><figcaption>Chrome's console after visiting an AMP page with <code>#development=1</code></figcaption></figure><p>If you see <code>AMP validation successful.</code> then you know at least the theme is still valid.</p><h3 id="option-2-use-a-chrome-extension">Option #2 - Use a Chrome extension</h3><p>Another option (if the console is not your friend) is to use a dedicated plugin for AMP validation. 
There is <a href="https://chrome.google.com/webstore/detail/amp-validator/nmoffdblmcmgeicmolmhobpoocbbmknc?hl=en">one for Chrome</a> that I am aware of.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/valid-amp.png" class="kg-image" alt="Adding Google Analytics to AMP posts in Ghost"><figcaption>What the browser extension looks like when the AMP is valid.</figcaption></figure><h3 id="option-3-use-a-website">Option #3 - Use a website</h3><p>Likely the simplest option if your content has a valid public URL: visit <a href="https://search.google.com/test/amp">https://search.google.com/test/amp</a>, plug in the URL of your AMP post, and then see the results:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/amp-valid-web.png" class="kg-image" alt="Adding Google Analytics to AMP posts in Ghost"><figcaption>What the web-based validator will look like after testing a valid AMP page.</figcaption></figure><h2 id="verify-google-analytics-is-working">Verify Google Analytics is working</h2><p>The next step is verifying that the tracking is enabled. You can prove this in the <code>Network</code> tab of your browser's console by filtering by <code>google-analytics</code>; you should see a request to a URL that starts off like <code>collect?v=1&amp;_v=a1&amp;ds=AMP&amp;aip&amp;_s=1...</code> </p><figure class="kg-card kg-image-card kg-card-hascaption"><img 
src="https://www.pixelite.co.nz/content/images/2019/07/google-analytics-amp.png" class="kg-image" alt="Adding Google Analytics to AMP posts in Ghost"><figcaption>If you see any realtime pages that include <code>/amp/</code> on the end, then your analytics is working.</figcaption></figure><h2 id="further-reading-and-extra-for-experts">Further reading and extra for experts</h2><p>For more advanced topics like tracking custom events or using a custom URL, please see the <a href="https://developers.google.com/analytics/devguides/collection/amp-analytics/">official documentation from Google on this</a>.</p>]]></content:encoded></item><item><title><![CDATA[How to generate a CSR with SANs in PHP]]></title><description><![CDATA[A simple tutorial on how to generate a CSR with SANs in PHP. Code samples are supplied.]]></description><link>https://www.pixelite.co.nz/article/how-to-generate-a-csr-with-sans-in-php/</link><guid isPermaLink="false">5d17208121658900386b51f7</guid><category><![CDATA[PHP]]></category><category><![CDATA[SSL]]></category><category><![CDATA[Development]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Wed, 10 Jul 2019 16:00:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/james-sutton-FqaybX9ZiOU-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/james-sutton-FqaybX9ZiOU-unsplash.jpg" alt="How to generate a CSR with SANs in PHP"><p>A recent project required me to generate a CSR programmatically in PHP. There are lots of tutorials on generating a CSR with just a Common Name (with no SANs). 
To my surprise, as soon as you add in a list of SANs, the process is much more complex, and the tutorials are quite thin.</p><p>In this post I hope to make this process as simple as possible, in the hope it helps at least one other person.</p><h2 id="step-1-create-a-distinguished-name">Step 1 - Create a Distinguished Name</h2><p>An SSL certificate contains your Distinguished Name information to help your users trust your certificate. You should replace the dummy data with your own.</p><pre><code class="language-php">// The Distinguished Name to be used in the certificate.
$dn = [
  'commonName' =&gt; 'example.com',
  'organizationName' =&gt; 'ACME Inc',
  'organizationalUnitName' =&gt; 'IT',
  'localityName' =&gt; 'Seattle',
  'stateOrProvinceName' =&gt; 'Washington',
  'countryName' =&gt; 'US',
  'emailAddress' =&gt; 'foo@example.com',
];</code></pre><h2 id="step-2-generate-a-new-private-key">Step 2 - Generate a new private key</h2><p>Here is where you generate your private key. 4096 bits is used as this is stronger than the default of 2048 bits.</p><pre><code class="language-php">// Generates a new private key
$privateKey = openssl_pkey_new([
  'private_key_type' =&gt; OPENSSL_KEYTYPE_RSA,
  'private_key_bits' =&gt; 4096
]);</code></pre><h2 id="step-3-generate-openssl-config-file">Step 3 - Generate OpenSSL config file</h2><p>For some reason, OpenSSL in PHP requires you to create a configuration file to supply the SANs; they cannot be passed directly as function arguments.</p><pre><code class="language-twig">[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @san

[ san ]
{% for san in sans %}
DNS.{{ loop.index }} = {{ san }}
{% endfor %}</code></pre><p>I render the above template using twig, and then create a file with the contents of the compiled template.</p><pre><code class="language-php">file_put_contents('/tmp/openssl.cnf', $contents);</code></pre><p>If you do not want to use twig, the format of the <code>[ san ]</code> section at the bottom is very simple to replicate in raw PHP or another templating language. After being compiled, it should look something like this (replace the SANs with your own):</p><pre><code class="language-php">[ san ]
DNS.1 = www.example.com
DNS.2 = shop.example.com
DNS.3 = foo.example.com
DNS.4 = *.foo.com</code></pre><p>Essentially, a 1-indexed, newline-separated list of SANs you want on your certificate.</p><p><strong>N.B.</strong> You should not include your Common Name in the SAN list.</p><h2 id="step-4-generate-the-csr">Step 4 - Generate the CSR</h2><p>Here is the magic that pulls in all the above code, and exports your CSR and private key into files on your filesystem.</p><pre><code class="language-php">$csrResource = openssl_csr_new($dn, $privateKey, [
  'digest_alg' =&gt; 'sha256',
  'config' =&gt; '/tmp/openssl.cnf',
]);
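
// Note (suggested addition, not in the original post): openssl_csr_new()
// returns false on failure, so surfacing OpenSSL's error queue here makes
// debugging a malformed config file much easier.
if ($csrResource === false) {
  throw new RuntimeException('CSR generation failed: ' . openssl_error_string());
}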

openssl_csr_export($csrResource, $csrString);
openssl_pkey_export($privateKey, $privateKeyString);

file_put_contents('/tmp/private.key', $privateKeyString);
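// Note (suggested addition, not in the original post): the private key
// should not be world-readable, so restrict its permissions once written.
chmod('/tmp/private.key', 0600);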
file_put_contents('/tmp/public.csr', $csrString);</code></pre><h2 id="final-thoughts">Final thoughts</h2><p>This process seemed fairly complex for what I thought would be a simple task. I found no PHP libraries that would help make this process a bit more object oriented. If you happen to know of a CSR generation PHP library, let me know in the comments.</p>]]></content:encoded></item><item><title><![CDATA[Custom Cloudflare WAF rules that every Drupal site should run]]></title><description><![CDATA[This blog post helps to summarise some of the default rules I will deploy to every Drupal (7 or 8) site as a baseline.]]></description><link>https://www.pixelite.co.nz/article/custom-cloudflare-waf-rules-that-every-drupal-site-should-run/</link><guid isPermaLink="false">5d200ba23cd21a003835710c</guid><category><![CDATA[Drupal]]></category><category><![CDATA[Cloudflare]]></category><category><![CDATA[Drupal planet]]></category><category><![CDATA[Security]]></category><category><![CDATA[WAF]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Mon, 08 Jul 2019 16:00:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/gabriele-diwald-aL1Bp6Put2I-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/gabriele-diwald-aL1Bp6Put2I-unsplash.jpg" alt="Custom Cloudflare WAF rules that every Drupal site should run"><p>Part of my day job is to help tune the Cloudflare WAF for several customers. 
This blog post helps to summarise some of the default rules I will deploy to every Drupal (7 or 8) site as a baseline.</p><p>The custom WAF rules in this blog post are in YAML format (for humans to read); if you want to create these rules via the API, you will need them in JSON format (see the end of this blog post for a sample API command).</p><h2 id="default-custom-waf-rules">Default custom WAF rules</h2><h3 id="unfriendly-drupal-7-urls">Unfriendly Drupal 7 URLs</h3><p>I often see bots trying to hit URLs like <code>/?q=node/add</code> and <code>/?q=user/register</code>. These are the default unfriendly URLs to hit on Drupal 7 to see if user registration is open, or if someone has messed up the permissions table (and you can create content as an anonymous user). Needless to say, these requests are rubbish and add no value to your site, so let's block them.</p><pre><code class="language-yaml">description: 'Drupal 7 Unfriendly URLs (bots)'
action: block
filter:
  expression: '(http.request.uri.query matches "q=user/register") or (http.request.uri.query matches "q=node/add")'</code></pre><h3 id="autodiscover">Autodiscover</h3><p>If your organisation has bought <a href="https://docs.microsoft.com/en-us/exchange/client-developer/exchange-web-services/autodiscover-for-exchange">Microsoft Exchange</a>, then your site will likely receive loads of requests (GET and POST) to these endpoints, which just ties up resources on your application server serving 404s. I am yet to meet anyone that actually serves back real responses from a Drupal site for Autodiscover URLs. Blocking is a win here.</p><pre><code class="language-yaml">description: Autodiscover
action: block
filter:
  expression: '(http.request.uri.path matches "/autodiscover\.xml$") or (http.request.uri.path matches "/autodiscover\.src/")'</code></pre><h3 id="wordpress">Wordpress</h3><p>Seeing as Wordpress has a huge market share (<a href="https://w3techs.com/technologies/details/cm-wordpress/all/all">34% of all websites</a>) a lot of Drupal sites get caught up in the mindless (and endless) crawling. These rules will effectively remove all of this traffic from your site.</p><pre><code class="language-yaml">description: 'Wordpress PHP scripts'
action: block
filter:
  expression: '(http.request.uri.path matches "/wp-.*\.php$")'</code></pre><pre><code class="language-yaml">description: 'Wordpress common folders (excluding content)'
action: block
filter:
  expression: '(http.request.uri.path matches "/wp-(admin|includes|json)/")'</code></pre><p>I separate <code>wp-content</code> into its own rule, as you may want to disable this rule if you are migrating from an old Wordpress site (and want to put in place redirects for instance).</p><pre><code class="language-yaml">description: 'Wordpress content folder'
action: block
filter:
  expression: '(http.request.uri.path matches "/wp-content/")'</code></pre><h3 id="sqli">SQLi</h3><p>I have seen several instances in the past where obvious SQLi was being attempted and the default WAF rules by Cloudflare were not intercepting them. This custom WAF rule is an attempt to fill in this gap.</p><pre><code class="language-yaml">description: 'SQLi in URL'
action: block
filter:
  expression: '(http.request.uri.path contains "select unhex") or (http.request.uri.path contains "select name_const") or (http.request.uri.path contains "unhex(hex(version()))") or (http.request.uri.path contains "union select") or (http.request.uri.path contains "select concat")'</code></pre><h3 id="drupal-8-install-script">Drupal 8 install script</h3><p>Drupal 8's default install script will expose the major, minor, and patch version of Drupal you are running. This is bad for a lot of reasons. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/drupal.png" class="kg-image" alt="Custom Cloudflare WAF rules that every Drupal site should run"><figcaption>Drupal 8's default install screen exposes far too much information</figcaption></figure><p>It is better to just remove these requests from your Drupal site altogether. Note, this is not a replacement for upgrading Drupal, it is just to make fingerprinting a little harder.</p><pre><code class="language-yaml">description: 'Install script'
action: block
filter:
  expression: '(http.request.uri.path eq "/core/install.php")'</code></pre><h3 id="microsoft-office-and-skype-for-business">Microsoft Office and Skype for Business</h3><p>Microsoft sure is good at making lots of products that attempt to DoS its own customers' websites. These requests are always POST requests, often to your homepage, and you require partial string matching to match the user agent, as it changes with the version of Office/Skype you are running.</p><p>In large organisations, I have seen these requests number in the hundreds of thousands per day.</p><pre><code class="language-yaml">description: 'Microsoft Office/Skype for Business POST requests'
action: block
filter:
  expression: '(http.request.method eq "POST") and (http.user_agent matches "Microsoft Office" or http.user_agent matches "Skype for Business")'</code></pre><h3 id="microsoft-activesync">Microsoft ActiveSync</h3><p>Yet another Microsoft product where you don't know why it keeps trying to hit yet another magic endpoint that doesn't exist.</p><pre><code class="language-yaml">description: 'Microsoft Active Sync'
action: block
filter:
  expression: '(http.request.uri.path eq "/Microsoft-Server-ActiveSync")'</code></pre><h2 id="using-the-cloudflare-api-to-import-custom-waf-rules">Using the Cloudflare API to import custom WAF rules</h2><p>It can be a pain to have to manually point and click a few hundred times per zone to import the above rules. Instead, you are better off using the API. Here is a sample cURL command you can use to import all of the above rules in one easy go.</p><p>You will need to replace the redacted sections with your details.</p><pre><code class="language-bash">curl 'https://api.cloudflare.com/client/v4/zones/XXXXXXXXXXXXXX/firewall/rules' \
  -H 'X-Auth-Email: XXXXXXXXXXXXXX' \
  -H 'X-Auth-Key: XXXXXXXXXXXXXX' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Accept-Encoding: gzip' \
  -X POST \
  -d '[{"ref":"","description":"Autodiscover","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/autodiscover\\.xml$\") or (http.request.uri.path matches \"\/autodiscover\\.src\/\")"}},{"ref":"","description":"Drupal 7 Unfriendly URLs (bots)","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.query matches \"q=user\/register\") or (http.request.uri.query matches \"q=node\/add\")"}},{"ref":"","description":"Install script","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path eq \"\/core\/install.php\")"}},{"ref":"","description":"Microsoft Active Sync","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path eq \"\/Microsoft-Server-ActiveSync\")"}},{"ref":"","description":"Microsoft Office\/Skype for Business POST requests","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.method eq \"POST\") and (http.user_agent matches \"Microsoft Office\" or http.user_agent matches \"Skype for Business\")"}},{"ref":"","description":"SQLi in URL","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path contains \"select unhex\") or (http.request.uri.path contains \"select name_const\") or (http.request.uri.path contains \"unhex(hex(version()))\") or (http.request.uri.path contains \"union select\") or (http.request.uri.path contains \"select concat\")"}},{"ref":"","description":"Wordpress common folders (excluding content)","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/wp-(admin|includes|json)\/\")"}},{"ref":"","description":"Wordpress content folder","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/wp-content\/\")"}},{"ref":"","description":"Wordpress PHP 
scripts","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/wp-.*\\.php$\")"}}]'</code></pre><h2 id="how-do-you-know-the-above-rules-are-working">How do you know the above rules are working</h2><p>Visit the firewall overview tab in Cloudflare's UI to see how many requests are being intercepted by the above rules.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/firewall-1.png" class="kg-image" alt="Custom Cloudflare WAF rules that every Drupal site should run"><figcaption>Cloudflare's firewall overview screen showing the custom WAF rules in action</figcaption></figure><h2 id="final-thoughts">Final thoughts</h2><p>The above custom WAF rules are likely not the only custom WAF rules you will need for any given Drupal site, but it should at least be a good start. Let me know in the comments if you have any custom WAF rules that you always deploy. I would be keen to update this blog post with additional rules from the community.</p><p>This is likely the first post in a series of blog posts on customising Cloudflare to suit your Drupal site. If you want to stay up to date - <a href="https://www.pixelite.co.nz/rss/">subscribe to the RSS feed</a>, <a href="https://www.pixelite.co.nz/#subscribe">sign up for email updates</a>, or <a href="https://twitter.com/pixelite_">follow us on Twitter</a>.</p>]]></content:encoded></item><item><title><![CDATA[New features coming in PHP 7.4]]></title><description><![CDATA[<p>PHP 7.4 is scheduled for release in November of this year, with it bring some performance improvements along with some new features. 
Here are a couple of the new features that are coming that I'm excited about.</p><h2 id="spread-operator-updates">Spread operator updates</h2><p>PHP has had support for the spread operator for</p>]]></description><link>https://www.pixelite.co.nz/article/new-features-in-php-7-4/</link><guid isPermaLink="false">5d1c73fc3cd21a0038356ceb</guid><category><![CDATA[PHP]]></category><category><![CDATA[Code]]></category><dc:creator><![CDATA[Craig Pearson]]></dc:creator><pubDate>Sun, 07 Jul 2019 00:00:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/elephant.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/elephant.jpg" alt="New features coming in PHP 7.4"><p>PHP 7.4 is scheduled for release in November of this year, bringing with it some performance improvements along with some new features. Here are a couple of the new features that are coming that I'm excited about.</p><h2 id="spread-operator-updates">Spread operator updates</h2><p>PHP has had support for the spread operator for a while. Its functionality has been limited to unpacking functional arguments. For example, you've been able to do the following since PHP 5.6.</p><pre><code class="language-php">&lt;?php
function spreadArgs(...$args) {
  print_r($args);
}

spreadArgs(...[1, 2, 3, 4]);
spreadArgs(1, 2, 3, 4);

/*
Array
(
    [0] =&gt; 1
    [1] =&gt; 2
    [2] =&gt; 3
    [3] =&gt; 4
)
*/</code></pre><p>This new <a href="https://wiki.php.net/rfc/spread_operator_for_array">RFC</a> will add spread functionality to the array expression. That means unpacking arrays inline like so:</p><pre><code class="language-php">&lt;?php
$vine_veges = ['cucumber', 'pumpkin'];
$ground_veges = ['carrots', 'potatos'];

print_r(['eggplant', ...$vine_veges, ...$ground_veges]);

/*
Array
(
    [0] =&gt; eggplant
    [1] =&gt; cucumber
    [2] =&gt; pumpkin
    [3] =&gt; carrots
    [4] =&gt; potatos
)
*/</code></pre><p>This unpacking works with both <code>array()</code> and <code>[]</code> syntax. You can also unpack arrays returned directly from functions.</p><pre><code class="language-php">&lt;?php
function get_colours($additional_colours = []) {
  return ['red', 'green', 'blue', ...$additional_colours];
}

print_r(['yellow', ...get_colours(['purple', 'green']), 'black']);

/*
Array
(
    [0] =&gt; yellow
    [1] =&gt; red
    [2] =&gt; green
    [3] =&gt; blue
    [4] =&gt; purple
    [5] =&gt; green
    [6] =&gt; black
)
*/</code></pre><p>The unpacking syntax doesn't work with associative arrays, so it's not as flexible as its JavaScript counterpart.</p><h2 id="arrow-functions">Arrow functions</h2><p>While PHP has supported closures for some time, they tend to be quite verbose. <a href="https://wiki.php.net/rfc/arrow_functions_v2">This RFC</a> adds arrow functions and short function syntax to PHP. Take this code example from the RFC.</p><pre><code class="language-php">function array_values_from_keys($arr, $keys) {
    return array_map(function ($x) use ($arr) { return $arr[$x]; }, $keys);
}</code></pre><p>With the new syntax this can be shortened to be</p><pre><code class="language-php">function array_values_from_keys($arr, $keys) {
    return array_map(fn($x) =&gt; $arr[$x], $keys);
}</code></pre><p>A few things to notice there: there's a new shortened function operator, <code>fn</code>, and the scoping has been simplified so that the variable <code>$arr</code> is in scope inside the function without the need for the <code>use</code> statement.</p><p>I'm genuinely excited about this, as I find code like this:</p><pre><code class="language-php">/* inline */
array_filter(range(0, 1024), fn($b) =&gt; $b % 64 === 0);
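
/* Note (addition, not in the original post): fn() automatically captures
   outer variables by value, with no use statement required. The variable
   name $divisor below is only illustrative. */
$divisor = 64;
array_filter(range(0, 1024), fn($b) =&gt; $b % $divisor === 0);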

/* variable function */
$factor = fn($number) =&gt; $number % 64 === 0;
array_filter(range(0, 1024), $factor);</code></pre><p>A lot cleaner than these variants.</p><pre><code class="language-php">/* inline */
array_filter(range(0, 1024), function($b) { 
  return $b % 64 === 0;
});

/* variable function */
$factor = function($number) {
  return $number % 64 === 0;
};
array_filter(range(0, 1024), $factor);</code></pre><h2 id="typed-properties">Typed properties</h2><p>PHP has supported types in some form or another for quite a while. Argument types have been a staple since version 5, and since version 7 PHP also supports return types. It's pretty standard to see class definitions like this.</p><pre><code class="language-php">class MyClass {

  /* int */
  protected $count = null;
  
  /* MyClass */
  protected $sibling;
  
  /**
   * @param $sibling MyClass
   * @return array
   */
  public function addSibling(MyClass $sibling): array
  {
  	$this-&gt;sibling = $sibling;
    return [
      'count' =&gt; $this-&gt;count++,
      'current_sibling' =&gt; $this-&gt;sibling
    ];
  }
}</code></pre><p>With typed properties we'll also be able to define the types of the class properties.</p><pre><code class="language-php">class MyClass {

  protected ?int $count = null;
  
  protected MyClass $sibling;
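
  // Note (addition, not in the original post): nullable object types are
  // also allowed; assigning an incompatible value throws a TypeError.
  protected ?MyClass $parent = null;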
  
  /* ... */
}</code></pre><p>This improves code readability and will also help debugging and IDE support. </p><p>Currently the supported types are:</p><ul><li><code>int</code></li><li><code>bool</code></li><li><code>float</code></li><li><code>string</code></li><li><code>array</code></li><li><code>object</code></li><li><code>iterable</code></li><li><code>self</code></li><li><code>parent</code></li><li>any <code>class</code> or <code>interface</code> name</li><li><code>?type</code> where <code>type</code> may be any of the above.</li></ul><p>The preceding <code>?</code> tells the run-time interpreter that the property can be null.</p><p>There's a lot more to typed properties than I've covered here, so go and have a read over at the <a href="https://wiki.php.net/rfc/typed_properties_v2">RFC page.</a></p><h2 id="null-coalescing-assignment">Null Coalescing Assignment</h2><p>PHP 7 introduced the <a href="https://www.php.net/manual/en/migration70.new-features.php#migration70.new-features.null-coalesce-op">Null Coalescing Operator</a> as a shorthand for common usage of the ternary operator. As per the documentation:</p><blockquote>The null coalescing operator (??) has been added as syntactic sugar for the common case of needing to use a ternary in conjunction with isset(). It returns its first operand if it exists and is not NULL; otherwise it returns its second operand.</blockquote><pre><code class="language-php">&lt;?php
// Fetches the value of $_GET['user'] and returns 'nobody'
// if it does not exist.
$username = $_GET['user'] ?? 'nobody';
// This is equivalent to:
$username = isset($_GET['user']) ? $_GET['user'] : 'nobody';

// Coalescing can be chained: this will return the first
// defined value out of $_GET['user'], $_POST['user'], and
// 'nobody'.
$username = $_GET['user'] ?? $_POST['user'] ?? 'nobody';
?&gt;</code></pre><p>The <a href="https://wiki.php.net/rfc/null_coalesce_equal_operator">Null Coalescing Assignment Operator</a> takes this a step further. Consider the case of defining defaults.</p><pre><code class="language-php">/* Standard Conditional */
function setName($name = null) {
  if (!$name) {
    $name = 'default';
  }
  /* ... */
}

/* Ternary (Elvis Operator) */
function setName($name = null) {
  $name = $name ?: 'default';
  /* ... */
}

/* Null Coalescing */
function setName($name = null) {
  $name = $name ?? 'default';
  /* ... */
}

/* Null Coalescing Assignment */
function setName($name = null) {
  $name ??= 'default';
  /* ... */
}</code></pre><p>As you can see, the syntax is a lot cleaner. Another benefit is that it's safe to use with undefined values.</p><pre><code class="language-php">&lt;?php
$details = [[], ['category' =&gt; 'Red']];

/* Ternary/Elvis */
$details[0]['category'] = $details[0]['category'] ?: 'Blue';

// PHP Notice:  Undefined index: category in php shell code on line 1


/* Null Coalescing */
$details[0]['category'] = $details[0]['category'] ?? 'Blue';

// 'Blue'


/* Null Coalescing Assignment */
$details[0]['category'] ??= 'Blue';

// 'Blue'</code></pre><h2 id="one-more">One more</h2><p>I'm not a massive fan of all the new features. Take, for example, the <a href="https://wiki.php.net/rfc/numeric_literal_separator">Numeric Literal Separator</a>. </p><p>This feature makes numeric literals easier for developers to read, by allowing an underscore separator in them. The underscore is stripped when the code is parsed, so it is ignored by the interpreter. It's completely optional.</p><pre><code class="language-php">// a billion!
(1000000000 === 1_000_000_000) // true

// scale is hundreds of millions
(107925284.88 === 107_925_284.88) // true
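
// Note (addition, not in the original post): the separator also works in
// hex, octal, and binary literals
(0xFF_FF === 65535) // true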

// $135, stored as cents
(13500 === 135_00) // true</code></pre><p>While I appreciate that it makes the intention of the original developer easier to decode, this one seems a tad strange to me.</p><h2 id="conclusion">Conclusion</h2><p>These are the features I'm most looking forward to. Let me know in the comments what you think, or if there are other features you'd like to see in PHP in the future. </p><p>7.4 isn't due till November 28th 2019. In the meantime, if you want to have a play with some of the new features you can use the interactive PHP interpreter via <code>docker</code>.</p><pre><code class="language-bash">docker run -it php:7.4-rc-cli -a</code></pre><p>Have fun.</p>]]></content:encoded></item><item><title><![CDATA[The history of Pixelite, a progression through the CMS landscape]]></title><description><![CDATA[This website www.pixelite.co.nz has gone through a few iterations over the years. I thought I would go through a few of the various CMSs and hosting providers we have used, and what went well, and what lessons we learned.]]></description><link>https://www.pixelite.co.nz/article/the-history-of-pixelite/</link><guid isPermaLink="false">5d19bd3421658900386b5321</guid><category><![CDATA[Meta]]></category><category><![CDATA[CMS]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Thu, 04 Jul 2019 02:59:28 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/dario-veronesi-lUO-BjCiZEA-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/dario-veronesi-lUO-BjCiZEA-unsplash.jpg" alt="The history of Pixelite, a progression through the CMS landscape"><p>This website <a href="https://www.pixelite.co.nz/">www.pixelite.co.nz</a> has gone through a few iterations over the years. 
I thought I would go through a few of the various CMSs and hosting providers we have used, and what went well, and what lessons we learned.</p><h2 id="drupal-7-2012-2015-">Drupal 7 (2012 - 2015)</h2><p>Pixelite started off as a <a href="https://www.drupal.org/">Drupal</a> 7 site, with a basic responsive theme applied over the top.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/1.png" class="kg-image" alt="The history of Pixelite, a progression through the CMS landscape"><figcaption>Taken from the <a href="https://web.archive.org/web/20130207011305/http://www.pixelite.co.nz/">Wayback Machine (2013)</a></figcaption></figure><p><strong>What worked well</strong></p><ul><li>Drupal is FOSS (<a href="https://github.com/drupal/drupal">GPLv2</a>), and thus no licensing costs to get up and running</li><li>Drupal modules meant that adding functionality was simple (e.g. Disqus comments, tag cloud)</li><li>The theme was more or less out of the box (<a href="https://www.drupal.org/project/arctica">Arctica</a>)</li></ul><p><strong>Lessons learnt</strong></p><ul><li>We were terrible at keeping Drupal up to date. The blog ended up getting hacked by <a href="https://www.drupal.org/forum/newsletters/security-public-service-announcements/2014-10-29/drupal-core-highly-critical">Drupalgeddon</a>. This was less than ideal. 
Very much the 'mechanic's car never gets fixed' type of situation.</li></ul><p>We figured we didn't actually need a dynamic CMS at this point, and were wondering what other options were out there.</p><h2 id="jekyll-2015-2019-">Jekyll (2015 - 2019)</h2><p>Enter <a href="https://jekyllrb.com/">Jekyll</a>, probably the most popular static site generator around 4 years ago.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/2.png" class="kg-image" alt="The history of Pixelite, a progression through the CMS landscape"><figcaption>Taken from the <a href="https://web.archive.org/web/20150221021045/http://www.pixelite.co.nz/">Wayback Machine (2015)</a></figcaption></figure><p><strong>What worked well</strong></p><ul><li>Jekyll is also FOSS (<a href="https://github.com/jekyll/jekyll">MIT license</a>), and thus no licensing costs to get up and running</li><li>Loads of free themes available, the one we ended up using was called <a href="https://startbootstrap.com/themes/clean-blog-jekyll/">clean-blog</a></li><li>Markdown is a nice and easy way to write content, although often at the cost of spelling and grammar checking</li><li>We were able to extend Jekyll to support authors (with author pages), tags, article types, RSS</li><li>Hosting on Github (as a static site) using Github Pages, which was entirely free</li><li>The site was un-hackable really as a result</li></ul><p><strong>Lessons learnt</strong></p><ul><li>At the time Github Pages did not support SSL (<a href="https://github.blog/2018-05-01-github-pages-custom-domains-https/">it does now</a>)</li><li>Ruby is not really our strong suit, so adding complexity in Ruby was not really a great long term move for us</li><li>The customizations we had done to Jekyll meant that Github was no longer able to compile the site for us, and we had to push a compiled branch ourselves</li><li>No real support for draft posts (<a 
href="https://www.hongkiat.com/blog/jekyll-draft/">it does now by the looks</a>) this is handy, as a well-researched blog post will often take a few attempts to get right.</li></ul><p>Ultimately the friction in creating content caused a massive drop in articles over the last 3 years. In order to compile the site you required a certain version of Ruby and certain gems installed. It was clear we needed a simpler solution.</p><h2 id="ghost-pro-2019-now-">Ghost (Pro) (2019 - now)</h2><p>In June of this year I decided I wanted to shake things up, and move the content over to a new CMS, called <a href="https://ghost.org/">Ghost</a>.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/3.png" class="kg-image" alt="The history of Pixelite, a progression through the CMS landscape"><figcaption>Taken from the live site</figcaption></figure><p>Ghost struck me as a happy medium between having some flexibility to customise the theme, and yet still have the core platform looked after by experts.</p><p><strong>What is working well</strong></p><ul><li>Ghost is FOSS (<a href="https://github.com/TryGhost/Ghost">MIT license</a>), and thus no licensing costs to get up and running.</li><li>If we wanted to move to self hosted Ghost in the future (I don't) you can <a href="https://ghost.org/faq/the-importer/">export your posts and users as a giant JSON file</a>.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/export.png" class="kg-image" alt="The history of Pixelite, a progression through the CMS landscape"><figcaption>The export screen in Ghost's administration section</figcaption></figure><ul><li>The <a href="https://ghost.org/faq/using-the-editor/">editing interface for authors is great</a>, it reminds me of <a href="https://medium.com/">Medium</a> or <a href="https://wordpress.org/gutenberg/">Gutenberg</a></li><li>Someone else 
looks after Ghost. I remember logging into the administration interface one day to discover Ghost 2.25 had been released, and things looked a little nicer, with no effort on my part.</li><li>The post header images automatically get responsive versions created (e.g. for thumbnails etc)</li><li>The default <a href="https://github.com/TryGhost/Casper">Casper theme</a> looks pretty slick out of the box</li><li>I was able to customise the theme very easily. For instance here is a quick post on how I added <a href="https://www.pixelite.co.nz/article/add-prismjs-to-ghost/">PrismJS to the default Casper theme</a></li><li>Built-in CDN with Cloudflare, very useful for a global audience</li><li>SSL out of the box</li></ul><p><strong>What could be better</strong></p><ul><li>You cannot extend content types or users with additional fields. The schema appears to be quite locked. If, for example, you wish to add a LinkedIn field on your user account, so you can display a link on your author page, then you are out of luck. [<a href="https://forum.ghost.org/t/custom-fields-for-posts/1124">link to the feature request</a>]</li><li>The date picker on the post is some silly JavaScript widget, rather than the standard HTML5 date picker. When we migrated a lot of our old content, do you know how many times you need to click the "back 1 month" button to get back to 2012? [<a href="https://forum.ghost.org/t/allow-manual-date-entry-in-post-editor/6628">link to the feature request</a>]</li><li><s>US date formats in the post settings</s> [<a href="https://github.com/TryGhost/Ghost/issues/10767">now fixed</a>]</li><li>Ghost Pro has restrictions around the number of authors you can have at any one time, which makes it very difficult to have guest blog posts by other contributors. You are forced to author in another platform and then migrate the content in.</li></ul><h2 id="final-thoughts">Final thoughts</h2><p>Pixelite is a work in progress and has been for a number of years. 
It is finally at a stage where it is fairly frictionless to create new content, and I am happy with the fact that the core platform is being looked after by people far more qualified to look after Ghost than I am.</p><p>Expect a lot of content to be coming out in the coming months. <a href="https://www.pixelite.co.nz/rss/">Subscribe to the RSS feed</a>, <a href="https://www.pixelite.co.nz/#subscribe">sign up for email updates</a>, or <a href="https://twitter.com/pixelite_">follow us on Twitter</a>.</p>]]></content:encoded></item><item><title><![CDATA[Creating a cluster with Rancher - Part 1: Installing rancher]]></title><description><![CDATA[Rancher is an open-source self-hosted Kubernetes user interface. I'm going to show you how easy it is to get up and running with Rancher so that you can have a play.]]></description><link>https://www.pixelite.co.nz/article/creating-a-kubernetes-cluster-with-rancher/</link><guid isPermaLink="false">5d1a82783cd21a0038356bc2</guid><category><![CDATA[Docker]]></category><category><![CDATA[Containers]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Craig Pearson]]></dc:creator><pubDate>Tue, 02 Jul 2019 21:00:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-17-26-20.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-17-26-20.png" alt="Creating a cluster with Rancher - Part 1: Installing rancher"><p><a href="https://rancher.com">Rancher</a> is an open-source self-hosted <a href="https://kubernetes.io/">Kubernetes</a> user interface. I'm going to show you how easy it is to get up and running with Rancher so that you can have a play. This will just be a single node install of Rancher so not recommended for production environments.</p><h2 id="provision-server">Provision server</h2><p>The requirements are well documented on the Rancher site. 
We're going to provision a small single node server, so we'll need a server with 4G of memory and at least 1 vCPU. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-16-36-46.png" class="kg-image" alt="Creating a cluster with Rancher - Part 1: Installing rancher"><figcaption>Single node requirements</figcaption></figure><p>For this example I'll also be running Ubuntu Server 18.04.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-17-00-16-1.png" class="kg-image" alt="Creating a cluster with Rancher - Part 1: Installing rancher"><figcaption>Choose your droplet</figcaption></figure><p>I'm going to use a Digital Ocean droplet, but you can use whatever provider you'd like so long as it meets the requirements.</p><p>Once that's done, if you want SSL that's not self-signed, you can optionally point a DNS record at the box's IP address. For this example I'm going to use <code>rancher.craigpearson.co.nz</code>.</p><h2 id="install-docker">Install Docker</h2><p>We need to install Docker, so log into the newly created server and run the following commands.</p><pre><code class="language-bash">sudo apt-get update &amp;&amp; sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
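# Optionally verify the added key matches Docker's published fingerprint (0EBFCD88)
sudo apt-key fingerprint 0EBFCD88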
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io</code></pre><p>This removes any previously installed versions of Docker and adds the official Docker <code>apt</code> repositories.</p><h2 id="start-rancher-container">Start Rancher container</h2><p>Now we've got Docker installed, all we need to do is start up the Rancher container. If you're happy with a self-signed SSL cert, you can run the following.</p><pre><code class="language-bash">docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher</code></pre><p>If however you want to use Let's Encrypt and you've already set up a DNS record which is ready to go, you can use the following to get Rancher to issue an SSL cert via the Let's Encrypt HTTP challenge. (Remember to change your <code>--acme-domain</code>). </p><pre><code class="language-bash">sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest --acme-domain rancher.craigpearson.co.nz</code></pre><p>Since this is for demo purposes and not meant for production, there's no data persistence. If you'd like to persist the Rancher data you can add a Docker volume to the above command.</p><pre><code class="language-bash">-v /opt/rancher:/var/lib/rancher</code></pre><p>It should work away downloading the latest image; the output should look something like this.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-17-05-50.png" class="kg-image" alt="Creating a cluster with Rancher - Part 1: Installing rancher"><figcaption>Downloading rancher docker images</figcaption></figure><p>Running <code>docker ps</code>, you should see there is now a <code>rancher/rancher:latest</code> container running.</p><p>Now at this point if you are considering running this in a publicly accessible manner, I'd suggest taking a look at the Rancher <a href="https://rancher.com/docs/rancher/v2.x/en/security/">hardening guide</a>. 
Since this is just a simple tutorial, I'm not going to bother.</p><h2 id="configure-and-sign-in">Configure and sign in</h2><p>Now we're up and running. In a browser, navigate to your server's IP address or your configured DNS record and you should get something that looks like this.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-17-10-49.png" class="kg-image" alt="Creating a cluster with Rancher - Part 1: Installing rancher"><figcaption>Set the admin credentials</figcaption></figure><p>Set an administrator password and confirm your URL on the next page. Then, you're all logged in.</p><figure class="kg-card kg-image-card"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-02-17-11-40.png" class="kg-image" alt="Creating a cluster with Rancher - Part 1: Installing rancher"></figure><h2 id="next-steps">Next steps</h2><p>This is a quick and dirty install to show you the basics of Rancher. In the next post I'll talk about some of the key features of rancher.</p>]]></content:encoded></item><item><title><![CDATA[Using Docker for PHP development.]]></title><description><![CDATA[<p>I'm going to assume you have some basic knowledge of docker. I also assume you have installed <code>docker</code> and <code>docker-compose</code> locally. You can read more about docker <a href="https://docs.docker.com/">here</a>. 
I'm going to be doing this for an example Laravel site; however, this should work if you're using Drupal or some similar framework.</p>]]></description><link>https://www.pixelite.co.nz/article/using-docker-for-local-php-development-2/</link><guid isPermaLink="false">5d11738116f9db0044673777</guid><category><![CDATA[Development]]></category><category><![CDATA[Containers]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Craig Pearson]]></dc:creator><pubDate>Mon, 01 Jul 2019 10:32:03 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/06/whale.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/06/whale.jpeg" alt="Using Docker for PHP development."><p>I'm going to assume you have some basic knowledge of Docker. I also assume you have installed <code>docker</code> and <code>docker-compose</code> locally. You can read more about Docker <a href="https://docs.docker.com/">here</a>. I'm going to be doing this for an example Laravel site; however, this should work if you're using Drupal or some similar framework.</p><h2 id="what-we-need">What we need</h2><p>We're going to build a local stack for development, so we're going to need Apache, PHP, and a MariaDB database.</p><h2 id="setup">Setup</h2><p>I've generated an example project called <code>mysite</code>; it's a typical Composer-based PHP site, with a <code>public</code> web root.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-01-17-08-37.png" class="kg-image" alt="Using Docker for PHP development."><figcaption>Generic Laravel template</figcaption></figure><p>In the root of your project we're going to want to create some files and folder structures. Run the following command.</p><pre><code class="language-bash">mkdir -p docker/php-apache2 &amp;&amp; touch docker-compose.yml docker/php-apache2/Dockerfile</code></pre><p>This will create the 
following:</p><pre><code class="language-bash">docker-compose.yml
docker
docker/php-apache2
docker/php-apache2/Dockerfile</code></pre><h2 id="the-web-php-container">The Web/PHP container</h2><p>First up we'll get the web container configured as it's the only one we'll need a <code>Dockerfile</code> for. In your favourite editor open up <code>docker/php-apache2/Dockerfile</code> and paste in the following.</p><pre><code class="language-docker"># Base image
FROM php:7.2-apache

# Fix debconf warnings upon build
ARG DEBIAN_FRONTEND=noninteractive

# Run apt update and install some dependencies needed for docker-php-ext
RUN apt update &amp;&amp; apt install -y apt-utils sendmail mariadb-client pngquant unzip zip libpng-dev libmcrypt-dev git \
  curl libicu-dev libxml2-dev libssl-dev libcurl3 libcurl3-dev libsqlite3-dev libsqlite3-0

# Install PHP extensions
RUN docker-php-ext-install mysqli bcmath gd intl xml curl pdo_mysql pdo_sqlite hash zip dom session opcache
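# Optional (an assumption, not part of this minimal setup): development extras
# such as Xdebug could be added via PECL, e.g.
# RUN pecl install xdebug &amp;&amp; docker-php-ext-enable xdebug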

# Update web root to public
# See: https://hub.docker.com/_/php#changing-documentroot-or-other-apache-configuration
ENV APACHE_DOCUMENT_ROOT /var/www/html/public
RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf

# Enable mod_rewrite
RUN a2enmod rewrite</code></pre><p>There's a lot going on in there; however, the key points are: we're using the official PHP-Apache2 Docker image as a base (found <a href="https://hub.docker.com/_/php">here</a>), and we install some other packages needed to enable some PHP extensions.</p><p>We're not going to customise the database container, so we're ready to put it all together.</p><h2 id="-and-the-rest">...and the rest</h2><p>In your editor of choice open the <code>docker-compose.yml</code> file we created earlier. Paste the following into it.</p><pre><code class="language-yaml">version: "3.1"
services:
  database:
    image: mariadb:10.1
    container_name: mysite-mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=mysite
      - MYSQL_USER=mysite
      - MYSQL_PASSWORD=password
    ports:
      - "8083:3306"
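    # Optional (example host path): persist the database data between rebuilds
    # volumes:
    #   - ./data/mariadb:/var/lib/mysql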
  web:
    build: docker/php-apache2
    container_name: mysite-web
    volumes:
      - .:/var/www/html
    ports:
      - "8080:80"</code></pre><p>This uses the off-the-shelf MariaDB Docker image and specifies the build directory for the web container, <code>docker/php-apache2</code>. The environment section in the database service specifies the credentials we want MariaDB to start up with.</p><h2 id="lets-start-it-up">Let's start it up</h2><p>Now we've put it all together, let's start up our stack and see what we have. Run the following to build and start up the containers.</p><pre><code class="language-bash">docker-compose up -d</code></pre><p>It might take a while to build, especially if you have to download the base images, but once completed you should be able to run <code>docker ps</code> and see something like the following.</p><figure class="kg-card kg-image-card"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-01-20-45-27.png" class="kg-image" alt="Using Docker for PHP development."></figure><p>So we now have an Apache2 webserver running <code>mod_php</code> listening on <a href="http://localhost:8080">http://localhost:8080</a>. If you navigate there now you'll likely get an error, because we still have to tell Laravel about the correct database settings. Open up the <code>.env</code> file in your editor and update the database settings to read:</p><pre><code class="language-bash">DB_CONNECTION=mysql
DB_HOST=mysite-mariadb
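# DB_HOST matches the container_name of the database service in docker-compose.yml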
DB_PORT=3306
DB_DATABASE=mysite
DB_USERNAME=mysite
DB_PASSWORD=password</code></pre><p>These are the environment values provided to the MariaDB container in the <code>docker-compose.yml</code> file. Now if we navigate to <a href="http://localhost:8080">http://localhost:8080</a> we should see the default Laravel homepage.</p><figure class="kg-card kg-image-card"><img src="https://www.pixelite.co.nz/content/images/2019/07/Screenshot-from-2019-07-01-21-21-57.png" class="kg-image" alt="Using Docker for PHP development."></figure><h2 id="some-final-thoughts">Some final thoughts</h2><p>Having this setup makes it pretty easy to update software versions like PHP and MariaDB. Try it: update the version of MariaDB in the <code>docker-compose.yml</code> to <code>10.2</code> and rebuild. Or even change the base image in the <code>Dockerfile</code> from <code>php:7.2-apache</code> to <code>php:7.3-apache</code>. It's nice not having to upgrade distributions or VirtualBox images in order to see if your site will work on newer software. It also makes adding things like Redis, Memcache, Mailhog etc a lot easier.</p><p>Now this is a pretty cut-back version of what I'd use normally, as I wanted to keep this post as simple as possible. Normally in my PHP/Apache <code>Dockerfile</code> I'd install some extra things I find useful for development like Xdebug, PHPUnit, NodeJS, Composer etc. Let me know in the comments if you have any questions.</p>]]></content:encoded></item><item><title><![CDATA[Add PrismJS to Ghost for syntax highlighting of code snippets]]></title><description><![CDATA[Out of the box Ghost will not syntax highlight your code snippets. 
I will explain how to implement these changes in your own site.]]></description><link>https://www.pixelite.co.nz/article/add-prismjs-to-ghost/</link><guid isPermaLink="false">5d11e568feaa6d0038e5158d</guid><category><![CDATA[Ghost]]></category><category><![CDATA[PrismJS]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Tue, 25 Jun 2019 09:34:00 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/06/teddy-kelley-abVkUkfyAJE-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/06/teddy-kelley-abVkUkfyAJE-unsplash.jpg" alt="Add PrismJS to Ghost for syntax highlighting of code snippets"><p>Out of the box Ghost will not syntax highlight your code snippets. In this blog post I will explain the changes I made to the Casper theme, and how you can look to implement these in your own Ghost site.</p><p>There are 2 options, and which option you use is up to you. The code snippets are the same; it is only the method of embedding the snippets that changes. Only the second option can be version controlled (which I prefer).</p><h2 id="option-1-use-code-injection">Option #1 - Use Code Injection</h2><p>This is likely the simplest way if you do not have your theme in code, or run Ghost (Pro), and don't really care about code at all. 
</p><p>You simply add the following code to your <code>Site Header</code> in your blog settings:</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/06/Settings_-_Code_injection_-_Pixelite.png" class="kg-image" alt="Add PrismJS to Ghost for syntax highlighting of code snippets"><figcaption>Found at <code>/ghost/#/settings/code-injection</code> in your admin section</figcaption></figure><pre><code class="language-html">    &lt;link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/themes/prism-okaidia.min.css" integrity="sha256-Ykz0nNWK7w4QWJUYR7OraN4773aMB/11aMt1nZyrhuQ=" crossorigin="anonymous" /&gt;

    &lt;style type="text/css" media="screen"&gt;
        .post-full-content pre strong {
            color: white;
        }
        .post-full-content pre {
            line-height: 1;
        }
        .post-full-content pre code {
            white-space: pre-wrap;
            hyphens: auto;
            line-height: 0.7;
            font-size: 0.7em;
        }
    &lt;/style&gt;</code></pre><p>I have also added some extra CSS to reduce the size of PrismJS code blocks within the Casper theme in Ghost (as it can be quite large), and also allow long strings to wrap to newlines (to avoid horizontal scrolling).</p><p>In <code>Site Footer</code> you add something like: </p><pre><code class="language-html">    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/prism.min.js" integrity="sha256-NFZVyNmS1YlmiklazBA+TALYJlJtZj/y/i/oADk6CVE=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-markup-templating.min.js" integrity="sha256-41PtHfb57czcvRtAYtUhYcSaLDZ3ahSDmVZarE0uWPo=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-javascript.min.js" integrity="sha256-KxieZ8/m0L2wDwOE1+F76U3TMFw4wc55EzHvzTC6Ej8=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-css.min.js" integrity="sha256-49Y45o2obU1Yv4zkYDpMDyAa+D9sgKNbNy4ZYGRl/ls=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-php.min.js" integrity="sha256-gJj4RKQeXyXlVFu2I8jQACQZsii/YzVMhcDT99lr45I=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-sql.min.js" integrity="sha256-zgHnuWPEbzVKrT72LUtMObJgbwkv0VESwRfz7jpdsq0=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-yaml.min.js" integrity="sha256-JoqiKM2GipZjbGjNyl62d6qjQY1F9QTLriWOe4N76wE=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-sass.min.js" integrity="sha256-3oigyyaPovKMS9Ktg4ahAD1R6fOSMGASuA03DT8IrvU=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-json.min.js" integrity="sha256-18m89UBQcWGjPHHo64UD+sQx4SpMxiRI1F0MbefKXWw=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-bash.min.js" integrity="sha256-0W9ddRPtgrjvZVUxGhU/ShLxFi3WGNV2T7A7bBTuDWo=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-python.min.js" integrity="sha256-zXSwQE9cCZ8HHjjOoy6sDGyl5/3i2VFAxU8XxJWfhC0=" crossorigin="anonymous"&gt;&lt;/script&gt;
    &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.16.0/components/prism-ruby.min.js" integrity="sha256-SGBXZakPP3Fv0P4U6jksuwZQU5FlC22ZAANstHSSp3k=" crossorigin="anonymous"&gt;&lt;/script&gt;</code></pre><p>The only real gotcha here is that you must add <code>prism.min.js</code> first, then <code>prism-markup-templating.min.js</code> followed by any and all languages you want to syntax highlight for.</p><p>If you want to find the links to more libraries, or newer versions of the above, check out <a href="https://cdnjs.com/libraries/prism">https://cdnjs.com/libraries/prism</a>.</p><h2 id="option-2-edit-the-theme-templates">Option #2 - Edit the theme templates</h2><p>This is just like the above, except you edit the template file <code>default.hbs</code>, and insert the code in the appropriate sections. The end result is the same, except this method allows you to version control the theme (which I prefer).</p><h2 id="final-thoughts">Final thoughts</h2><p>The above are the libraries that we run on pixelite, but your needs might be different (i.e. you may need different languages etc). This should serve as a head start for any other people looking to share code snippets, and have them look great on your Ghost site.</p><p>Please let me know if you have any other pro tips that I may have missed.</p>]]></content:encoded></item><item><title><![CDATA[JSON:API testing with Cypress]]></title><description><![CDATA[Upgrading JSON:API and Drupal core can be tricky to keep your API intact. 
Using Cypress is an easy way to have an extra set of eyeballs on the upgrade.]]></description><link>https://www.pixelite.co.nz/article/json-api-testing-with-cypress/</link><guid isPermaLink="false">5d0a06a5193c960038ff1fef</guid><category><![CDATA[Drupal]]></category><category><![CDATA[Drupal planet]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Cypress]]></category><category><![CDATA[JSON:API]]></category><category><![CDATA[Entity]]></category><dc:creator><![CDATA[Sean Hamlin]]></dc:creator><pubDate>Fri, 21 Jun 2019 01:00:33 GMT</pubDate><media:content url="https://www.pixelite.co.nz/content/images/2019/06/cypress.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.pixelite.co.nz/content/images/2019/06/cypress.jpg" alt="JSON:API testing with Cypress"><p>I am working with a customer now that is looking to go through a JSON:API upgrade, from version 1.x on Drupal 8.6.x to 2.x and then ultimately to Drupal 8.7.x (where it is bundled into core).</p><p>As this upgrade will involve many moving parts, and it is critical to not break any existing integrations (e.g. mobile applications etc), having basic end-to-end tests over the API endpoints is essential.</p><p>In the past I have written <a href="https://www.pixelite.co.nz/tag/casperjs/">a lot about CasperJS</a>, and since then a number of more modern frameworks have emerged for end-to-end testing. 
For the last year or so, I have been involved with <a href="https://www.cypress.io/">Cypress</a>.</p><p>I won't go too much in depth about Cypress in this blog post (I will likely post more in the coming months); instead I want to focus specifically on JSON:API testing using Cypress.</p><p>In this basic test, I just wanted to hit some known valid endpoints, and ensure the response was roughly OK.</p><p>Rather than rinse and repeat a lot of boilerplate code for every API endpoint, I wrote a custom Cypress command, which abstracts all of this away in a convenient function.</p><p>Below is what the <code>spec</code> file looks like (the test definition); it is very clean, and is mostly just the JSON:API paths.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">describe('JSON:API tests.', () =&gt; {

    it('Agents JSON:API tests.', () =&gt; {
        cy.expectValidJsonWithMinimumLength('/jsonapi/node/agent?_format=json&amp;include=field_agent_containers,field_agent_containers.field_cont_storage_conditions&amp;page[limit]=18', 6);
        cy.expectValidJsonWithMinimumLength('/jsonapi/node/agent?_format=json&amp;include=field_agent_containers,field_agent_containers.field_cont_storage_conditions&amp;page[limit]=18&amp;page[offset]=72', 0);
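        // Note: page[offset]=72 may be past the last result, so a minimum length of 0 just asserts a valid (possibly empty) collection.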
    });
    
    it('Episodes JSON:API tests.', () =&gt; {
        cy.expectValidJsonWithMinimumLength('/jsonapi/node/episode?fields[file--file]=uri,url&amp;filter[field_episode_podcast.nid][value]=4976&amp;include=field_episode_podcast,field_episode_audio,field_episode_audio.field_media_audio_file,field_episode_audio.thumbnail,field_image,field_image.image', 6);
    });

});</code></pre><figcaption>jsonapi.spec.js</figcaption></figure><p>And as for the custom function implementation, it is fairly straightforward. The basic checks are:</p><ul><li>Ensure the response is an HTTP 200</li><li>Ensure the content-type is valid for JSON:API</li><li>Ensure there is a response body and it is valid JSON</li><li>Enforce a minimum number of entities you expect to be returned</li><li>Check for certain properties in those returned entities. </li></ul><figure class="kg-card kg-code-card"><pre><code class="language-JS">Cypress.Commands.add('expectValidJsonWithMinimumLength', (url, length) =&gt; {
    return cy.request({
        method: 'GET',
        url: url,
        followRedirect: false,
        headers: {
            'accept': 'application/json'
        }
    })
    .then((response) =&gt; {
        // Parse the JSON body.
        let body = JSON.parse(response.body);
        expect(response.status).to.eq(200);
        expect(response.headers['content-type']).to.eq('application/vnd.api+json');
        cy.log(body);
        expect(response.body).to.not.be.null;
        expect(body.data).to.have.length.of.at.least(length);

        // Ensure certain properties are present.
        body.data.forEach(function (item) {
            expect(item).to.have.all.keys('type', 'id', 'attributes', 'relationships', 'links');
            ['changed', 'created', 'default_langcode', 'langcode', 'moderation_state', 'nid', 'path', 'promote', 'revision_log', 'revision_timestamp', 'status', 'sticky', 'title', 'uuid', 'vid'].forEach((key) =&gt; {
                expect(item['attributes']).to.have.property(key);
            });
        });
    });

});</code></pre><figcaption>commands.js</figcaption></figure><p>One of the neat things in this function is that it logs the parsed JSON response with <code>cy.log(body);</code> this allows you to inspect the response in Chrome, and makes it easy to extend the test function to meet your own needs (as you can see the full entity properties and fields).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.pixelite.co.nz/content/images/2019/06/jsonapi-cypress.png" class="kg-image" alt="JSON:API testing with Cypress"><figcaption>Cypress with a GUI can show you detailed log information</figcaption></figure><p>Using Cypress is like having an extra pair of eyes on the Drupal upgrade. Over time Cypress will end up saving us a lot of developer time (and therefore money). The tests will be in place forever, and so regressions can be spotted much sooner (ideally in local development) and therefore fixed much faster.</p><h2 id="comments">Comments</h2><p>If you do JSON:API testing with Cypress I would be keen to know if you have any tips and tricks.</p>]]></content:encoded></item></channel></rss>