<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Click Travel Engineering</title>
    <description>The latest articles on Forem by Click Travel Engineering (@clicktravelengorg).</description>
    <link>https://forem.com/clicktravelengorg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F525%2Feb84e072-b6db-4993-a5be-e2aee99c8983.jpg</url>
      <title>Forem: Click Travel Engineering</title>
      <link>https://forem.com/clicktravelengorg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/clicktravelengorg"/>
    <language>en</language>
    <item>
      <title>We spent a day gaming… at work!</title>
      <dc:creator>James Butherway</dc:creator>
      <pubDate>Fri, 13 Mar 2020 11:30:56 +0000</pubDate>
      <link>https://forem.com/clicktravelengorg/we-spent-a-day-gaming-at-work-5h0f</link>
      <guid>https://forem.com/clicktravelengorg/we-spent-a-day-gaming-at-work-5h0f</guid>
      <description>&lt;p&gt;From the title (which is total clickbait, sorry not sorry!) it might sound like the Auth team at Click Travel spent the day slacking off. I can assure you that is not the case: we actually participated in a ‘GameDay’. It has nothing to do with PlayStations or Minecraft; instead, we used it as a tool to validate some of our platform’s existing security and to see if we had any new areas of risk.&lt;/p&gt;

&lt;p&gt;First things first, who are the Auth team at Click Travel? We are a small product engineering group whose mission is to: “Enable the Product Engineering department to seamlessly authenticate and authorise users on the platform by providing robust access control services”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About GameDays&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They are used throughout the tech industry as a chaos engineering tool to help teams confirm that they are delivering resilient software. It is essentially time you set aside to consider areas of risk within your system and attempt to break them, to see whether it is possible and, if it is, at what threshold.&lt;/p&gt;

&lt;p&gt;For example, you could use a GameDay to test all the strategic actions that have been implemented from the last [x] incidents, to make sure those actions would have actually stopped the same incident from happening again under similar conditions.&lt;/p&gt;

&lt;p&gt;We love this article on GameDays from Gremlin; head over there if you want a little more info:&lt;br&gt;
&lt;a href="https://www.gremlin.com/community/tutorials/how-to-run-a-gameday/"&gt;https://www.gremlin.com/community/tutorials/how-to-run-a-gameday/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our approach to the day&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Auth team wanted to take a slightly different slant on the GameDay in order to make it more valuable to our own goals. With this in mind, we decided to change the focus of the day from strictly resilience to more of a self-hack/penetration test, but with a totally white-box approach. This way we could use knowledge of our services in conjunction with industry-wide security standards, such as the OWASP Top 10, to make sure our platform held up under very targeted attacks.&lt;/p&gt;

&lt;p&gt;We broke the day down into two parts which we split over one week:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deciding on targets and deep analysis of those targets&lt;/li&gt;
&lt;li&gt;The actual gaming and analysis of the findings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;So we planned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We took an afternoon and all got together to discuss our ideas. We used Miro (a collaborative whiteboarding platform) extensively throughout these days in order to enable visual collaboration whilst being part of a remote team.&lt;/p&gt;

&lt;p&gt;We all presented our ideas on what we thought would be a good area to test and used a mind map to dig deeper into the expected results and value. We tried hard to time box all actions and discussions over the allotted prep days so that we could keep to the time given to us for this task. We all agreed on what should be tested and divided the targets amongst ourselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And we gamed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our targets were decided, so we scheduled the second part of the GameDay for later that week, giving us soak time to consider how best to hack our chosen areas. As we work in an Agile way, we allowed thinking time for the GameDay during the week’s focus to ensure we could maximise the value of the actual gaming time whilst still delivering on other weekly objectives.&lt;br&gt;
I also created a test case document that we could use to collect the results uniformly. We jumped on a group call using Zoom, with all of us having a rough plan of our individual approaches, and set the timer for two hours to get it done.&lt;/p&gt;

&lt;p&gt;Once again Miro was used as a collaboration space so that we could visualize what the others were doing. Being on a video call helped the team share something interesting or reach out for guidance if needed. The two hours went extremely fast but we all kept focused, determined to have valuable results at the end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We analysed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The gaming was done and results had been captured. We spent another hour and a bit going over them, sharing what we had found and critiquing our own approaches to help with the evaluation of the day. This was a bit like an incident debrief — we talked through what we found and the actions generated were given a severity rating and sub-classed as immediate or strategic. The main point was to capture everything in one place, so we could refer back to it at any point and understand what decisions we made.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluation of the day&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We found that this GameDay complemented our existing penetration testing process nicely. The day itself brought the Auth team together on a fun and proactive project that enabled knowledge sharing and bolstered our mindsets as security professionals.&lt;/p&gt;

&lt;p&gt;All of the collaboration aspects worked very well, but next time we might make the actual gaming more of a group activity so that we can all see what is going on as it happens.&lt;br&gt;
Overall, we felt it was a successful exercise and we look forward to the next one!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Could you do one?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The benefit the team saw from this outweighed the time we set aside to carry it out. Based on that, I would urge any team that found this interesting to give it a go.&lt;/p&gt;

</description>
      <category>gameday</category>
      <category>security</category>
    </item>
    <item>
      <title>await Promise.all: not just for async functions</title>
      <dc:creator>Tim Knight</dc:creator>
      <pubDate>Thu, 21 Mar 2019 15:25:54 +0000</pubDate>
      <link>https://forem.com/clicktravelengorg/await-promiseall-not-just-for-async-functions-4bk6</link>
      <guid>https://forem.com/clicktravelengorg/await-promiseall-not-just-for-async-functions-4bk6</guid>
      <description>&lt;p&gt;In our code we end up making a lot of asynchronous, independent calls to third-party and internal REST APIs in order to build up, among other things, currency conversion rates, Airport IATA Code -&amp;gt; Name mappings, and the result set for a user's Flights search.&lt;/p&gt;

&lt;p&gt;This led to a lot of head-scratching about how to improve speed: we needed all these calls to resolve before we could continue, but because they were all independent, did we need to wait for each Promise to resolve before we could invoke another?&lt;/p&gt;

&lt;p&gt;In short: no, we could use Promise.all to invoke all our independent asynchronous calls at once, and await them all to have resolved, not worrying about which resolves first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const [
 conversionRates,
 airports,
 flights,
] = await Promise.all([
 getConversionRates(),
 getAirports(),
 getFlights()
]); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Brilliant! We're waiting once, for the longest process to resolve, and gathering all our data in one go.&lt;/p&gt;

&lt;p&gt;But we still have some synchronous calls later on which, again, are independent, non-mutating functional code. Could we do the same thing there?&lt;/p&gt;

&lt;p&gt;The answer was yes: with a bit of forethought about grouping functions together and making sure our code was fully functional in design, we were able to use Promise.all to await the results of multiple functions regardless of whether they are defined as &lt;code&gt;async&lt;/code&gt; or not.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write functional, independent synchronous and asynchronous JavaScript functions&lt;/li&gt;
&lt;li&gt;Run groups of them simultaneously using Promise.all&lt;/li&gt;
&lt;li&gt;Await all the functions together, rather than one at a time&lt;/li&gt;
&lt;li&gt;???&lt;/li&gt;
&lt;li&gt;Profit&lt;/li&gt;
&lt;/ul&gt;
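&lt;p&gt;As a minimal sketch of that pattern (the function names here are hypothetical, not our real code), Promise.all happily accepts a mix of Promises and plain values, so the result of an ordinary synchronous function can sit alongside pending asynchronous calls:&lt;/p&gt;

```javascript
// A plain synchronous function: it returns a value, not a Promise.
const applyMarkup = (basePrice) => basePrice * 1.5;

// An async function: it always returns a Promise.
// (A stand-in for a real REST call.)
const fetchBasePrice = async () => 100;

const getTotals = async () => {
  // Promise.all accepts any mix of Promises and plain values;
  // plain values are treated as already-resolved Promises.
  const [base, markedUp] = await Promise.all([
    fetchBasePrice(),
    applyMarkup(100),
  ]);
  return { base, markedUp };
};
```

&lt;p&gt;Note that the synchronous function still runs immediately on the calling stack; Promise.all simply wraps its return value, so you get a single await for everything.&lt;/p&gt;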

</description>
      <category>node</category>
      <category>asynchronous</category>
    </item>
    <item>
      <title>Trialing AWS Lambda performance? Are you being fair?</title>
      <dc:creator>Robin Smith</dc:creator>
      <pubDate>Fri, 01 Mar 2019 13:39:18 +0000</pubDate>
      <link>https://forem.com/clicktravelengorg/trialing-aws-lambda-performance-are-you-being-fair-2lcd</link>
      <guid>https://forem.com/clicktravelengorg/trialing-aws-lambda-performance-are-you-being-fair-2lcd</guid>
      <description>&lt;p&gt;I was helping out a colleague this week who had created a simple serverless style setup using AWS: &lt;code&gt;API Gateway -&amp;gt; Lambda -&amp;gt; DynamoDB&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There was nothing unusual about the setup, and the function worked as expected first time. Their problem came when they decided to benchmark performance against the traditional 24/7 deployed server that the function was designed to replace. Their results left them massively disappointed, verging on concerned.&lt;/p&gt;

&lt;h2&gt;Why?&lt;/h2&gt;

&lt;p&gt;The easiest thing to do in this situation is to think about the current version first. Your existing service is effectively fully scaled out at all times; it sits quietly in your data centre burning cash waiting for requests, and whether it receives a single request or 50 requests per second (RPS) it's ready to respond with the full provisioned capacity from the first millisecond.&lt;/p&gt;

&lt;p&gt;But this isn't how Lambda works: if there are no requests then there is no capacity.&lt;/p&gt;

&lt;p&gt;When that single request arrives, the architecture rapidly provisions resource to deal with it, and leaves that resource available for a short time in case additional requests come through. This means that if you suddenly load your function with 50 RPS from idle, you will be forced to wait a short time while the architecture reacts, spinning up a fleet of function instances to process the sudden influx of requests.&lt;/p&gt;

&lt;h3&gt;Lambda doesn't scale linearly with requests&lt;/h3&gt;

&lt;p&gt;Receiving 50 requests at once doesn't mean you will have 50 concurrent executions. There are complex calculations performed behind the scenes, trading off things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the time it takes to start a new function instance&lt;/li&gt;
&lt;li&gt;the expected amount of time remaining on the current execution&lt;/li&gt;
&lt;li&gt;historical data about how the function is being utilised&lt;/li&gt;
&lt;li&gt;individual account limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these things factor into how your function is auto-scaled, and as a result, in some circumstances Lambda may choose simply to queue some of your invocations behind each other instead of opting for concurrent executions.&lt;/p&gt;

&lt;p&gt;What this means is that if you decide to perform an unrealistic load test then you are &lt;strong&gt;very likely&lt;/strong&gt; to see much poorer scalability and performance than you would expect to see under real production load.&lt;/p&gt;

&lt;p&gt;The point here is that Lambda can easily handle load, but you have to understand how it works in order to evaluate it fairly. If your function is expected to handle 50 RPS constantly as a baseline, then it's unrealistic to benchmark Lambda from 0 to 50 RPS within a 60-second window: you need to give it time to scale up if you want a realistic view of how it will perform at that level.&lt;/p&gt;
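&lt;p&gt;To make the "give it time to scale" point concrete, here is a minimal sketch (the helper name is hypothetical) of the difference between a spike test and a fair ramped test, expressed as a per-second request schedule you could feed to a load-testing tool:&lt;/p&gt;

```javascript
// Build a per-second RPS schedule: ramp linearly up to targetRps
// over rampSeconds, then hold targetRps for holdSeconds. A fair
// Lambda benchmark ramps first, giving the platform time to
// provision capacity before steady-state performance is measured.
const rampSchedule = (targetRps, rampSeconds, holdSeconds) => {
  // Linear ramp: second s carries a proportional share of the target.
  const ramp = Array.from({ length: rampSeconds }, (_, s) =>
    Math.round((targetRps * (s + 1)) / rampSeconds),
  );
  // Steady state: hold the target once the platform has scaled up.
  const hold = Array.from({ length: holdSeconds }, () => targetRps);
  return ramp.concat(hold);
};
```

&lt;p&gt;Here &lt;code&gt;rampSchedule(50, 300, 600)&lt;/code&gt; builds up to 50 RPS over five minutes and then holds it for ten, whereas &lt;code&gt;rampSchedule(50, 1, 59)&lt;/code&gt; is effectively the unfair 0-to-50 spike squeezed into a 60-second window.&lt;/p&gt;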

&lt;p&gt;What you'll find is that if you run your function at 50 RPS for a longer period, the architecture will work out exactly how much resource is required to best service that load, and your function will perform fantastically without any need to scale further. If you do suddenly get a huge spike to 1,000 RPS, Lambda will &lt;strong&gt;still deal with that&lt;/strong&gt;, but not immediately at the same performance level as your 50 RPS baseline; it needs to react to the additional load with extra resource, which will impact performance at the beginning of the spike.&lt;/p&gt;

&lt;h3&gt;Lambda is fantastic, but it's not magic: it takes time to deal with unexpected traffic spikes.&lt;/h3&gt;

&lt;p&gt;If you want to understand how your service will behave within Lambda under normal load, then you need to run that test over a sensible timeframe. This allows Lambda to learn that this is your normal load, and lets you see the reliable performance you expect.&lt;/p&gt;

&lt;p&gt;Of course, running the same test over a short period also lets you see the worst-case scenario for how your function will behave should it get bombarded with unexpected spikes.&lt;/p&gt;

&lt;p&gt;Both are important metrics, but they shouldn't be treated as equivalent. Don't write Lambda off because of unrealistic expectations; if you do, you'll be missing out.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Add value, not complexity: Change the way you think about "The Cloud"</title>
      <dc:creator>Robin Smith</dc:creator>
      <pubDate>Tue, 26 Feb 2019 15:35:37 +0000</pubDate>
      <link>https://forem.com/clicktravelengorg/add-value-not-complexity-change-the-way-you-think-about-the-cloud-8b6</link>
      <guid>https://forem.com/clicktravelengorg/add-value-not-complexity-change-the-way-you-think-about-the-cloud-8b6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c-jSn_2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gvrlielz8upyy3fug5t3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c-jSn_2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gvrlielz8upyy3fug5t3.png" alt="Cloud"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I dislike using the phrase “the cloud”, and I’m not alone, but here we are: it’s become the standard term for anything at all related to “things not on your hardware”. Be it personal storage on your phone or the deployment of multi-region redundant database clusters, it all gets lumped under the same umbrella question of “why not move it to the cloud?”&lt;/p&gt;

&lt;p&gt;Then there is the over-trivialisation of “the cloud”. One of the biggest challenges for a tech department when trying to persuade their business to migrate is that it’s hard to describe the benefits. Teams spend ages planning a pitch to outline the benefits, only to be shot down with a single response: “but isn’t it just the same computers in someone else’s data centre?”&lt;/p&gt;

&lt;h2&gt;How it normally gets sold&lt;/h2&gt;

&lt;p&gt;So the tech team have been talking, and they really want to move the system to the cloud. They have gone away to work out what will be involved and what it will cost in order to convince the decision-maker. A couple of weeks later (if all goes well) they come back with the numbers and a migration plan. The conversation normally goes something like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “So how much will this cost?”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “It is going to cost about $X thousand to provision the architecture to replicate what we have now”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “Ok but what about all this expensive stuff that you convinced me to buy last year that would make our system future proof for years to come?”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “We don’t need that anymore: it will be on the cloud!”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “..................”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “......erm we could probably sell it to recoup something but it prob won’t be a lot, but moving really is the right thing to do”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “Ok let’s ignore the cost for a second. How long will this take to implement?”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “Well, it will take about 6 months to build up the architecture to replicate the current system. Once that is done, we will turn off the current system, migrate all of our data, and then start up the systems again”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “turn off……”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “Well there will have to be some point in which we switch from one system to another. That could cause some downtime but it will be kept to a minimum.”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “ ….. Ok, but then we will be better off and more resilient than we are now?”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “Yes ….. well actually this just replicates the current system we have now, but with a dedicated resource to keep the servers running etc. if there is a fault”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “Don’t we have a team for that already?”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Team: “We don’t need that anymore: it will be on the cloud!”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decision-Maker: “..... ok well thanks for your time I’ll get back to you.”&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ve been a bit harsh there - and people will point out that there are a multitude of ways to have seamless migrations that don’t involve a downtime etc. - but the reality is that this is often how these conversations go.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Decision-makers often don’t understand the benefits, and those trying to influence don't know how to quantify them.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When starting out anew, it makes perfect sense to start on the cloud. Quite simply, the cloud providers - both public and private - have one advantage over us: purchasing power. They are buying enough hardware that they can offer us a tiny slice of it for a fraction of what we could purchase it for directly. We can buy exactly what we need for the exact amount of time we need it - literally by the minute - versus years of investment in our own hardware. In fact, we will most likely be able to provision hardware of a much higher specification than our budget would normally allow.&lt;/p&gt;

&lt;p&gt;More established businesses, however, tend to have thousands of pounds’ worth of equipment that has built up over the years. It’s maintained by their team and represents a massive investment; why would they trash it all just to move to the cloud?&lt;/p&gt;

&lt;h2&gt;Because what they have isn’t good enough!&lt;/h2&gt;

&lt;p&gt;Nobody has enough capacity for the unexpected, but we all want our business to be the next Slack or JustEat. We constantly need to consider how to increase capacity so that when we win that big client or become the next overnight success we aren’t left waiting for delivery of new kit. With growth comes the need for resilience - it’s no good having provisioned tons of hardware if it’s all sat next to each other in a single location waiting for the power to come back on - so we deploy across multiple sites and invest in high availability failover equipment. This is just how it has always worked; we take a gamble based on the best data we have and hope to have what we need when we need it. But this isn’t how it has to work anymore.&lt;/p&gt;

&lt;p&gt;When just starting out, we can deploy on a small scale for very little cost and really trial what we need without any long-term commitment. As we grow, being on the cloud makes sense because we can quickly respond to that growth by provisioning or de-provisioning hardware within minutes, not weeks. When we’re huge, being on the cloud makes sense because we can deploy our services resiliently across multiple locations and even multiple countries.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We could do all the above without “the cloud” but really what this comes down to is: Why would you want to?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s important to your business?&lt;/strong&gt; Is it protecting your hardware investment and traditional processes, or is it giving the best quality service to your users? Your focus shouldn’t be on recruiting staff and expertise to manage infrastructure; it should be on recruiting staff and expertise to solve customers’ problems and advance the product. The reality is that cloud providers are better at managing these things than you are! So why on Earth would you want to waste time and effort trying to replicate something that someone else is doing better and cheaper? Accept your constraints and concentrate your efforts on what makes you great.&lt;/p&gt;

&lt;p&gt;If you get to this point and you’re starting to come around to the idea of using the cloud then great! But to be honest you are probably still thinking about it in the wrong way. Whilst all the benefits I’ve alluded to so far should be clear, this is only scratching the surface of what can be achieved after you move to the cloud.&lt;/p&gt;

&lt;h2&gt;It’s so much more than just “someone else’s computer”!&lt;/h2&gt;

&lt;p&gt;All of the above focuses on the very obvious benefits: cost, scale, and maintenance. But you may have noticed I’ve gone to great lengths to use the phrase “on the cloud” and not “in the cloud”, and that’s because there is a very big difference between those two ideas.&lt;/p&gt;

&lt;p&gt;Once you get past the initial steps then a whole new world of possibilities will open up. With engineers totally focused on solving actual business problems - and not infrastructure problems - you can utilise a huge array of services beyond virtual servers.&lt;/p&gt;

&lt;p&gt;The difference between the concepts is subtle, but here are a couple of examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;On the cloud - You provision a virtual instance and deploy your application. It services requests over the internet and as you need to scale you provision more identical instances in order to spread the load and improve performance etc.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;In the cloud - You extract just your application logic and deploy it to a Functions as a Service (FaaS) provider within your cloud provider account. Scaling, execution, and resilience of the application are managed by your cloud provider and you only pay for the requests your customers actually make.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;On the cloud - Your application is well-architected, and messages between different parts of your system are handled by passing events to keep the system decoupled. You deploy more virtual instances to act as brokers and queues to support this decoupled system.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;In the cloud - In order to manage your decoupled system you simply make use of the cloud provider’s messaging and queue systems. Able to cope with millions of messages, your events are now handled by a native solution designed to scale with your use across multiple availability zones.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
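&lt;p&gt;As a tiny illustration of the “in the cloud” idea (the handler shape follows AWS Lambda’s Node proxy-integration convention; the logic itself is purely illustrative), the deployable unit is just a function, and provisioning, scaling, and resilience are the provider’s problem:&lt;/p&gt;

```javascript
// "In the cloud": the deployable unit is just application logic.
// A minimal AWS-Lambda-style Node handler; there is no server,
// instance, or broker for us to provision or patch.
const handler = async (event) => {
  const params = event.queryStringParameters || {};
  const name = params.name || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello, " + name }),
  };
};

// The FaaS runtime decides when and where this runs, how many
// copies exist, and what happens when an underlying host fails.
if (typeof module !== "undefined") {
  module.exports = { handler };
}
```

&lt;p&gt;The function above does nothing a virtual instance couldn’t do; the difference is everything around it that you no longer build or operate yourself.&lt;/p&gt;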

&lt;p&gt;The end result is the same, but the solution is the key. This comes back to the same point when discussing cost and scalability: Anyone can migrate their systems to the cloud.&lt;/p&gt;

&lt;p&gt;Anyone can gain the cost benefits: but that’s not the real value. The real value isn’t the cost; it isn’t even the resilience. It’s about what can be done once you get there. There are new services being offered by cloud providers on an almost daily basis; from new databases, to machine learning, to satellite comms, all designed to make your life easier.&lt;/p&gt;

&lt;p&gt;Stop focusing on the cost associated with it; just accept it needs to happen. Stop trying to compare a cloud provider with what you can do yourself; just accept they can do it better. Stop thinking about your service in its current form; just accept that it needs to evolve to meet modern demands.&lt;/p&gt;

&lt;p&gt;Concentrate on making the best product you can, and utilise everything you can that makes that easier. Nobody has ever used a product because they like the way the database has been configured or the particular data centre it’s hosted in.&lt;/p&gt;

&lt;h2&gt;&lt;em&gt;Add value: not complexity.&lt;/em&gt;&lt;/h2&gt;

&lt;p&gt;Robin Smith&lt;br&gt;
Chief Product Engineer&lt;br&gt;
Click Travel&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>devops</category>
      <category>engineering</category>
    </item>
    <item>
      <title>Agile: Optimising our Workflows</title>
      <dc:creator>Tim Knight</dc:creator>
      <pubDate>Wed, 20 Feb 2019 16:38:36 +0000</pubDate>
      <link>https://forem.com/clicktravelengorg/optimising-our-workflows-5ded</link>
      <guid>https://forem.com/clicktravelengorg/optimising-our-workflows-5ded</guid>
      <description>&lt;h1&gt;Or why we stopped caring about "Effort" and strict Agile.&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;No two people, no two teams, will agree on what “Agile” means.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I should say, before I get into the main thrust of this blog post, that just because this works for us, it won’t necessarily work for your team. Still, you may want to read this, bring it up with your team, and decide what “Agile” means to you: which workflows you follow just because they’ve always been done, whether you’re actually using the data you collect within the team, and whether you could be using that data more effectively.&lt;/p&gt;

&lt;h2&gt;“Agile” as Guidelines&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;‘They’re more what you’d call “guidelines” than actual rules’&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our Agile process should adapt to the team’s needs, rather than hinder their growth or efficiency by constraining them with a fixed set of rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A real-life example of sticking to the Rules:&lt;/strong&gt;&lt;br&gt;
In an old job, we did “strict” SCRUM: recording effort, estimating complexity and time, and performing repeated retrospectives to hone the team’s sense of exactly how hard things were and how long they would take. From that, we could plot velocity and plan sprints. However, this ate a lot of time, and we probably didn’t need to be so strict, EXCEPT that in that team we had an external client we updated every 2–3 weeks on progress, and we had to show them concrete data to prove our workflow and explain any delays.&lt;br&gt;
    On the flipside, we could artificially inflate or change our velocity with easy tasks to buy ourselves time to complete larger tasks we had deliberately underestimated. That didn’t help the team at all, because we couldn’t trust our own estimates, but it was needed to keep the client happy, which in turn reduced pressure on the team.&lt;/p&gt;

&lt;p&gt;Now, back to my current team at Click Travel:&lt;br&gt;
    I’ve spent a lot of time this week thinking about the “software” side of team leading: how to make my team’s lives easier, not through any code or automated pipeline, but simply by changing our internal practices.&lt;/p&gt;

&lt;h2&gt;Changing how we use JIRA and Asana&lt;/h2&gt;

&lt;p&gt;My Product Owner and I spent a whole afternoon talking through our use of JIRA/Asana, how we felt each should be used and then going through and actioning those changes to JIRA.&lt;/p&gt;

&lt;p&gt;This resulted in clearing over half the tasks out of our JIRA backlog and tying a lot of the weeks- or months-old technical tasks or improvements, which we hadn’t had time to do, in with upcoming Asana cards. In future, this allows us to deliver new product features whilst simultaneously tidying up tech debt or implementing other improvements we’ve spotted within the codebase.&lt;/p&gt;

&lt;p&gt;After discussion with the team, we collectively agreed that our JIRA dashboard should contain only work in focus, or expedites, and that anything planned but not in focus would sit on the Flights backlog. As a team we can then organise the backlog in priority order, meaning that if anyone is off on a planning day, or someone finishes focus work early, they can look at the backlog and simply pop some tasks off and into the dashboard. Our snippets meetings then become times when we can sit down, run through what we’ve done, look at the ordered backlog, and simply decide how many tasks we think we can pop off onto the dashboard.&lt;/p&gt;

&lt;p&gt;Our dashboard has become much cleaner, making it obvious what needs working on that week, and our backlog has similarly become a team product to keep clean and ordered, so when work is to be done, the team knows they can simply take from the top of the backlog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;‘Gotta go Fast’&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Stopping caring about “Effort”&lt;/h2&gt;

&lt;p&gt;Cycling back round to the first point: during our discussion about using JIRA within our Kanban cycle, I raised that during the week people should move cards to “LIVE” with Effort recorded, but not move them to “DONE”, as that is something we’d review as a team during snippets and use to inform our weekly achievements.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This has the added side benefit that we don’t have to remember what we’ve achieved: anything we have achieved is in “LIVE”, waiting to be moved to “DONE” at the end of the week.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But with “Effort” I was asked, “Why?”, at which point we paused.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Why do we record Effort?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Well, in Agile theory, it’s to measure our velocity against our estimates.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“But we don’t do any estimation of story size, as story points, within our Kanban flow, nor do we measure our velocity.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So what was the point in having to record effort, or more importantly, having to remember to record effort, or someone explicitly having to spend time asking people to record effort days after the fact?&lt;/p&gt;

&lt;p&gt;Within our team, the answer was a resounding: &lt;strong&gt;&lt;em&gt;None&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So we stopped caring and removed it from our JIRA cards (we had set up a team-specific ‘screen’ in JIRA because there were some fields we wanted added just for us, so it was relatively trivial for me to remove the ‘Effort’ field from our ‘screen’).&lt;/p&gt;

&lt;p&gt;That’s not to say recording “Effort” is a bad thing if you are using the data you collect within your team to inform your planning.&lt;/p&gt;

&lt;p&gt;Nor is any of this to say that our Kanban flow and our JIRA and Asana use are the one true way; quite the opposite.&lt;/p&gt;

&lt;h2&gt;There is no One True Way; Agile should work for you, rather than you working for it.&lt;/h2&gt;

&lt;p&gt;If sticking to a tried and tested flow works and improves your team’s efficiency, then don’t change just because I wrote a blog post; that’d be the worst possible outcome!&lt;/p&gt;

&lt;p&gt;Simply think about what you use your issue-tracking tools (such as JIRA/Asana) for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you use them, and is there anything you’re doing just because it’s always been done?&lt;/li&gt;
&lt;li&gt;Is there anything you could change or remove which would actually save you time in the long run?&lt;/li&gt;
&lt;li&gt;Or do you want to add in processes such as Estimation because measuring velocity will help you plan your project timescales more effectively?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There’s nothing to say we won’t change our minds down the line and add back processes we’ve removed, or some other “Agile” process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;‘There is no one Agile Methodology to rule them all’&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tim Knight&lt;br&gt;
Flights - Tech Lead&lt;br&gt;
Click Travel&lt;/p&gt;

</description>
      <category>agile</category>
      <category>discuss</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
