<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Phil Gibbs</title>
    <description>The latest articles on Forem by Phil Gibbs (@philgibbs).</description>
    <link>https://forem.com/philgibbs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F22056%2Ff828ce8e-8f51-490e-a063-f3c7aa67c02d.jpg</url>
      <title>Forem: Phil Gibbs</title>
      <link>https://forem.com/philgibbs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/philgibbs"/>
    <language>en</language>
    <item>
      <title>Avoiding Temporary Files in Shell Scripts</title>
      <dc:creator>Phil Gibbs</dc:creator>
      <pubDate>Tue, 25 Jul 2017 12:50:37 +0000</pubDate>
      <link>https://forem.com/philgibbs/avoiding-temporary-files-in-shell-scripts</link>
      <guid>https://forem.com/philgibbs/avoiding-temporary-files-in-shell-scripts</guid>
      <description>&lt;p&gt;This is an extract from a longer article from &lt;a href="https://www.openmakesoftware.com/production-quality-shell-script/%20"&gt;Openmake Software&lt;/a&gt;. If you want to see other tips and tricks to do with performance and security in shell scripts - go there!&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;It's always fun to start with a polemic so let me make my position quite clear: temporary files are a force for evil in the world of shell scripts and should be avoided if at all possible. If you want justification for this stance just consider the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the script runs as root (from a deployment process, for example) then the use of temporary files can be a major security risk.&lt;/li&gt;
&lt;li&gt;Temporary files need to be removed whenever the script exits. That means you have to trap user exits such as CTRL-C. But what if the script is killed with SIGKILL?&lt;/li&gt;
&lt;li&gt;Temporary files can fill the filesystem. Even worse, if they are created by a redirect from an application’s standard output then the application may not realise that the output has failed due to the filesystem being full.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s look at these problems in detail:&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem 1: What happens to a temporary file if the shell script exits unexpectedly?
&lt;/h2&gt;

&lt;p&gt;This problem can happen if the shell script is interactive and the user halts it with CTRL-C. If this is not trapped then the script will exit and the temporary file will be left lying around.&lt;/p&gt;

&lt;p&gt;The only solution to this is to write a “trap” function to capture user generated signals such as CTRL-C and to remove the temporary files. Here’s an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
function Tidyup
{
    rm -f "$tempfile"
    exit 1
}

trap Tidyup 1 2 3 15
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;However, if the script exits for any other reason then the tidy-up function is never invoked. A &lt;code&gt;kill -9&lt;/code&gt; on the script will kill it stone dead and leave the temporary files in existence.&lt;/p&gt;
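&lt;p&gt;In modern shells you can narrow the window considerably by trapping &lt;code&gt;EXIT&lt;/code&gt; as well, so the cleanup runs on any controlled termination, and by letting &lt;code&gt;mktemp&lt;/code&gt; choose a safe name. A minimal sketch (the filename and workload are illustrative):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: trap EXIT so cleanup runs on normal exit as well as on
# trappable signals. mktemp creates the file with mode 600.
tempfile=$(mktemp) || exit 1
trap 'rm -f "$tempfile"' EXIT HUP INT TERM
echo "work in progress" > "$tempfile"
# ... rest of script; the EXIT trap removes $tempfile on the way out
```

&lt;p&gt;A &lt;code&gt;kill -9&lt;/code&gt; still bypasses this, of course.&lt;/p&gt;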

&lt;h2&gt;
  
  
  Problem 2: What happens if the filesystem fills up?
&lt;/h2&gt;

&lt;p&gt;Okay, so you’re busy writing your temporary file when the filesystem hits 100% full. What happens now? Well, your script will fail, won’t it? Maybe, but perhaps not in the way you think. Some third party applications do not check for error returns when they write to standard output. Be honest, if you’ve written ‘C’ code do you check the return code from &lt;code&gt;printf&lt;/code&gt; (or &lt;code&gt;write&lt;/code&gt; or whatever) to see if it has been written correctly? Probably not – what can go wrong writing to the screen? Not a lot presumably, but if you’ve redirected standard out then output is not going to the screen – it’s going to a file. You’d be amazed how many commercial applications fall victim to this.&lt;/p&gt;

&lt;p&gt;The net result is that a command such as&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
third_party_command &amp;gt; /tmp/tempfile
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;may &lt;em&gt;not&lt;/em&gt; return a fail condition even if &lt;code&gt;/tmp&lt;/code&gt; is full. You then have no way of knowing that the command failed but &lt;code&gt;/tmp/tempfile&lt;/code&gt; does not contain the full output from &lt;code&gt;third_party_command&lt;/code&gt;. What happens next depends on what your script does but it’s likely to be sub-optimal. We will discuss some workarounds for this later.&lt;/p&gt;

&lt;h2&gt;
  
  
Problem 3: Beware redirection attacks
&lt;/h2&gt;

&lt;p&gt;Most temporary files are created in &lt;code&gt;/tmp&lt;/code&gt; and given a filename containing a $$ sequence. The shell replaces the $$ sequence with the current process ID, thus creating a unique – but predictable – filename.&lt;/p&gt;

&lt;p&gt;However, if the script is run as root and you create the file implicitly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
echo "top secret" &amp;gt; /tmp/mytempfile$$
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;then it is possible that "top secret" won't stay secret for very long. An unscrupulous user called Dave could create a script like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
mknod /tmp/mytempfile12345 p
cat /tmp/mytempfile12345 &amp;gt; /home/dave/secretdata
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will just block until your root script finally executes as PID 12345 - when it writes "top secret" to what it &lt;em&gt;thinks&lt;/em&gt; is a file it's going to create. However, that file is a pipe and Dave's unscrupulous script then just grabs the content and writes it to a file in Dave's directory called &lt;code&gt;secretdata&lt;/code&gt;. Of course, this will probably break the root script (it will hang when it tries to read &lt;code&gt;mytempfile$$&lt;/code&gt;) but by that stage Dave is away with the secret data.&lt;/p&gt;
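&lt;p&gt;Where it is available, &lt;code&gt;mktemp&lt;/code&gt; is the standard defence: it creates the file atomically, with a random name and mode 600, and fails rather than reusing anything an attacker has pre-created. A sketch:&lt;/p&gt;

```shell
# mktemp creates the file atomically with an unpredictable name and
# mode 600, so Dave cannot pre-create a pipe at a name he can guess.
tempfile=$(mktemp /tmp/mytempfile.XXXXXX) || exit 1
echo "top secret" > "$tempfile"
```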

&lt;h2&gt;
  
  
  Avoiding temporary files
&lt;/h2&gt;

&lt;p&gt;Avoiding temporary files can be difficult but is not necessarily impossible. A lot of UNIX commands will read from standard input (or write to standard output) as well as using named files. Using pipes to connect such commands together will normally give the desired result without recourse to temporary files.&lt;/p&gt;

&lt;p&gt;What if you have two separate commands from which you want to merge and process the output? Let’s assume that we’re going to build some form of control file for the program &lt;code&gt;process_file&lt;/code&gt;. The control file is built from some header lines, the actual body of the control file (which our script will generate) and some tail lines to finish the whole thing off.&lt;/p&gt;

&lt;p&gt;A common way of building this sort of file is this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
echo "some header info" &amp;gt;  /tmp/tempfile.$$
process_body            &amp;gt;&amp;gt; /tmp/tempfile.$$
echo "some tailer info" &amp;gt;&amp;gt; /tmp/tempfile.$$
process_file /tmp/tempfile.$$
rm -f /tmp/tempfile.$$
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;However, this code is susceptible to all the problems outlined above.&lt;/p&gt;

&lt;p&gt;If we rewrite the code as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
{
echo "some header info"
process_body
echo "some tailer info"
} | process_file
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;then this brackets all the relevant commands into a list and performs a single redirection of the list’s standard out into the &lt;code&gt;process_file&lt;/code&gt; program. This avoids the need to build a temporary file from the various components of the desired input file.&lt;/p&gt;
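&lt;p&gt;To see the pattern end to end, here is the same list construction with everyday commands standing in for &lt;code&gt;process_body&lt;/code&gt; and &lt;code&gt;process_file&lt;/code&gt; (&lt;code&gt;sort&lt;/code&gt; plays &lt;code&gt;process_file&lt;/code&gt;, purely for illustration):&lt;/p&gt;

```shell
# sort stands in for process_file; the brace list feeds it one stream.
{
    echo "header"
    printf '%s\n' charlie alpha bravo    # stands in for process_body
    echo "tailer"
} | sort
```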

&lt;p&gt;What if &lt;code&gt;process_file&lt;/code&gt; is an application that is incapable of taking its input from standard input? Surely then we have to use a temporary file?&lt;/p&gt;

&lt;p&gt;Well, you &lt;em&gt;can&lt;/em&gt; still avoid temporary files but it takes a bit more effort. Here’s what the code looks like. We’ll examine it line by line.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
mknod /tmp/mypipe.$$ p # 1
if [ $? -ne 0 ]
then
    echo "Failed to create pipe" &amp;gt;&amp;amp;2
    exit 1
fi
chmod 600 /tmp/mypipe.$$ # 2
process_file /tmp/mypipe.$$ &amp;amp; # 3
(
   echo "some header info"
   process_body
   echo "some tailer info"
) &amp;gt; /tmp/mypipe.$$ # 4
wait $! # 5
rm -f /tmp/mypipe.$$ # 6
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;First we create a named pipe (#1). A named pipe is exactly the same as any other pipe except that we can create it explicitly and that it appears in the filesystem. (In other words you can see it with an &lt;code&gt;ls&lt;/code&gt;; on modern systems &lt;code&gt;mkfifo&lt;/code&gt; is the more portable way to create one.) Now strictly speaking this is a temporary file. However, it is of zero length and therefore will not fill the filesystem. Also, if it cannot be created for any reason (including the file already existing) there is an error return, so redirection attacks are useless. Of course it is left around by an untrappable kill but we can’t have everything.&lt;/p&gt;

&lt;p&gt;We change the access mode (#2) so only the user running the script can read or write to it. Another way of doing this is to create the pipe with the correct permissions in the first place by invoking &lt;code&gt;umask 066&lt;/code&gt; before we call &lt;code&gt;mknod&lt;/code&gt;.&lt;/p&gt;
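&lt;p&gt;The &lt;code&gt;umask&lt;/code&gt; approach can be sketched like this:&lt;/p&gt;

```shell
# Setting the mask first means the pipe is created with mode 600 directly,
# closing the brief window between mknod and chmod.
umask 066
mknod /tmp/mypipe.$$ p
```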

&lt;p&gt;We set our &lt;code&gt;process_file&lt;/code&gt; program running in background, reading its input from this named pipe (#3). Now, since there is nothing on the pipe (yet) the program’s read call will block. Therefore &lt;code&gt;process_file&lt;/code&gt; will hang awaiting input. However, we've set &lt;code&gt;process_file&lt;/code&gt; running background (with the &lt;code&gt;&amp;amp;&lt;/code&gt; operator) so the script will continue.&lt;/p&gt;

&lt;p&gt;We construct the control file for &lt;code&gt;process_file&lt;/code&gt; as before except that this time we redirect it to our named pipe (#4). At this point, &lt;code&gt;process_file&lt;/code&gt; will unblock and start reading the data just as if it had come from a conventional file.&lt;/p&gt;

&lt;p&gt;The wait call (#5) will block until the specified child process has exited. The &lt;code&gt;$!&lt;/code&gt; is a shell sequence meaning the process ID of the last background process. Since this is the PID of &lt;code&gt;process_file&lt;/code&gt; our script will wait until &lt;code&gt;process_file&lt;/code&gt; has completed, just as if it had been invoked in foreground.&lt;/p&gt;

&lt;p&gt;Finally, we remove the named pipe (#6).&lt;/p&gt;

&lt;h2&gt;
  
  
  I have no choice but to create a temporary file. What can I do?
&lt;/h2&gt;

&lt;p&gt;If you have no choice but to use temporary files then use the following techniques:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;umask&lt;/code&gt; to set the file permission mask right at the top of your script. &lt;code&gt;umask 077&lt;/code&gt; will remove group and world read permissions from newly created files. That way, even if your script aborts without cleaning up, only the invoking user can read the content of the file(s) that are left lying around.&lt;/li&gt;
&lt;li&gt;Do not implicitly create the file with a single redirection operator. Instead, ensure the filename does not already exist first (with a &lt;code&gt;-f&lt;/code&gt; test operation), then set your &lt;code&gt;umask&lt;/code&gt; appropriately so that only the owner of the file can access it. Better still, set up a function to generate a unique filename, create it and set the access permissions.&lt;/li&gt;
&lt;li&gt;If you're creating more than one temporary file, create a temporary directory and place your files in there. That way, your &lt;code&gt;trap&lt;/code&gt; function can simply remove the directory and all the files within it on exit - you don't need to track what temporary files you've created since they're all in the same temporary directory.&lt;/li&gt;
&lt;li&gt;Perform sanity checking on the temporary file to ensure that it has been successfully written. Remember that checking &lt;code&gt;$?&lt;/code&gt; may not be adequate since the application may not be checking error returns from writes to standard output. Try the following technique instead:
&lt;pre&gt;
process_file | tee $output_file &amp;gt; /dev/null
if [ $? != 0 ]
then
…
fi
&lt;/pre&gt;
since this will make &lt;code&gt;tee&lt;/code&gt; responsible for writing to the filesystem and it &lt;em&gt;should&lt;/em&gt; flag errors properly. Note that the &lt;code&gt;$?&lt;/code&gt; operator will be checking the exit status of &lt;code&gt;tee&lt;/code&gt; and not &lt;code&gt;process_file&lt;/code&gt;. If you want to know if &lt;code&gt;process_file&lt;/code&gt; has worked correctly, use a list to group the execution of &lt;code&gt;process_file&lt;/code&gt; with an exit check (see below).&lt;/li&gt;
&lt;li&gt;Create tidy up functions to remove the temporary file(s) if the user aborts the script. Call the same functions on controlled exit (either normal or error) from the script.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Putting all of this together, gives us a script like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre&gt;
#!/bin/ksh
function ExitWithError
{
        echo "$*"&amp;gt;&amp;amp;2
        rm -f $output_file
        exit 1
}

function Tidyup
{
        ExitWithError "Abort"
}

umask 077
set -o pipefail # so a failure on the left of the pipeline below is not masked by tee's exit status
output_file=/tmp/tfile$$
rm -f $output_file
trap Tidyup 1 2 3 15

{
        process_file
        [[ $? != 0 ]] &amp;amp;&amp;amp; ExitWithError "process_file failed"
} | tee $output_file &amp;gt; /dev/null
[[ $? != 0 ]] &amp;amp;&amp;amp; ExitWithError "Error writing to $output_file"
# process output_file here
# .
# .
# normal exit
rm -f $output_file
exit 0
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This script ensures the temporary file it creates is removed on exit (whether normal or on error), verifies that the temporary file has been written correctly before it is read for processing, and sets the permission mask so that, if the script is killed with SIGKILL (-9), the temporary file left behind can only be read by the user who invoked the script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Temporary files can create serious security issues as well as opening scripts up to unexpected failures. Most shells make it relatively easy to avoid them - and you should! If you really have no choice, then make sure the permissions are set correctly, you check that they have been written to correctly and you remove them on both successful exit and on error.&lt;/p&gt;

</description>
      <category>shell</category>
      <category>scripting</category>
    </item>
    <item>
      <title>Agents are the Enemy of Agile</title>
      <dc:creator>Phil Gibbs</dc:creator>
      <pubDate>Mon, 17 Jul 2017 14:07:21 +0000</pubDate>
      <link>https://forem.com/philgibbs/agents-are-the-enemy-of-agile</link>
      <guid>https://forem.com/philgibbs/agents-are-the-enemy-of-agile</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was originally published on Openmake Software's &lt;a href="https://www.openmakesoftware.com"&gt;website&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So you’re a modern software factory and you’re embracing DevOps. You know that the first thing you have to have is a Continuous Delivery process. After all, you can’t consider yourself to have an Agile DevOps practice without Continuous Delivery. So you set up a CI/CD process for your developers and get the resulting builds pushed automatically into SIT. There, some automated tests run and the build gets the green light (or not, as the case may be).&lt;/p&gt;

&lt;p&gt;You had to install agents on each server within the SIT environment, but that wasn’t too onerous – there were only 10 machines. And you have monitoring set up to make sure the agents are always running. So there’s no problem there, right?&lt;/p&gt;

&lt;p&gt;Then one day your boss comes to your desk. “I’ve been reading up about DevOps,” he says, “and we should be able to push to UAT if the code passes the automated tests in SIT, right? That should save us a load of time.”&lt;/p&gt;

&lt;p&gt;So, you smile and say “of course”. But now you have a problem. There are several UAT environments and the deployment could go to any of them. So now you have another 20 machines on which to install agents. Half are virtual and half are physical, but you get it done and get the monitoring set up. Of course the monitoring starts alerting every time a virtual server gets shut down, so you stop monitoring those servers. But then half the time the agent doesn’t start properly when the virtual server is brought up – and you only find that out when the automated deployment fails.&lt;/p&gt;

&lt;p&gt;Still, it’s not too bad. You have 30 deployment agents now, only 20 of which are being monitored and occasionally an automatic deployment into a virtualised UAT environment will fail since the agent isn’t running. But all in all, you can live with it.&lt;/p&gt;

&lt;p&gt;Then you get called to a meeting. The UAT team have decided that they are going to move everything to the cloud. This will allow them to be more agile since they can just spin up a server whenever they need it to cope with the increased demand due to all the new builds that need testing. Of course each virtualised cloud-based server is going to need an agent installed, but you can cope with that, right?&lt;/p&gt;

&lt;p&gt;So you get the agents installed in each of the virtualised servers, writing PowerShell commands to load up the agent on startup. Of course, deployments into these environments cannot take place until the agent is up and running so that’s slowing the process down a bit, but it’s still agile, right?&lt;/p&gt;

&lt;p&gt;Then your deployment tool gets an upgrade. And that makes a change to all the agents necessary. How many do you have again? Where are they installed?&lt;/p&gt;

&lt;p&gt;Still, you’re on top of things. You have a decent DevOps practice. All you need now is the final push to go from Continuous Delivery to Continuous Deployment. Once that’s done you’ll have reached the top of the DevOps maturity model and you’ve got automated deployments all the way into Production.&lt;/p&gt;

&lt;p&gt;So you go to talk to Production Support. And you explain that you want to install a piece of software on their mission-critical servers which can make changes to any piece of production software on the machine. And it’s a black-box so you’ve no idea what protocol it communicates over or whether it’s secure. And ten minutes later you leave the meeting with your ears still ringing.&lt;/p&gt;

&lt;p&gt;So you’ll never implement Continuous Deployment since you have no way of automating deployments into Production. But hey, you’ve still got an Agile solution, right?&lt;/p&gt;

&lt;p&gt;Then the decision is made to outsource the IT infrastructure. Now you’re no longer running Windows Server 2012 and RedHat Linux. Now you’re on Windows Server 2016 and Oracle Linux. Is the agent compatible with Oracle Linux? You make a quick call to the vendor who says it should work (Oracle Linux is just RedHat right?) but it’s not certified. So if it fails for any reason, you’re out of luck.&lt;/p&gt;

&lt;p&gt;As part of this shift, a decision is also made to install DataPower for load balancing. Configuration changes need to be delivered to DataPower as part of the deployment process. Of course, there’s no such thing as an “Agent” for a Web Appliance such as DataPower, so those changes will have to be applied manually. Such changes will need to be co-ordinated with the automated software delivery process. But with spreadsheets and email it should be possible to set something up that isn’t that bad.&lt;/p&gt;

&lt;p&gt;Then a new Chief Software Architect joins. He’s young and ambitious and likes the sound of micro-services and containers. So he decides to start a project that breaks up the monolithic code base into a bunch of micro and mini services hosted in various containers. The developers love this idea (it looks great on their CVs!). Now your deployment needs to potentially target lots of different Docker containers. Can you install an agent in a container? Would that even be practical?&lt;/p&gt;

&lt;p&gt;At this point, you take a long hard look at your new infrastructure with its load balancers and containers and a mix of physical, virtual and cloud based servers and decide that those deployment agents of yours were a really bad idea.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agents are the enemy of Agile
&lt;/h2&gt;

&lt;p&gt;If this nightmare sounds familiar to you, you’re not alone. Agents are the enemy of Agile. They restrict where deployments can be performed and on which platform. They need monitoring to ensure that they are operational. They need to be updated. They need to be installed on every target – even cloud based servers. Some cloud providers have their own deployment solutions that you need to integrate with, rendering your agent either useless or terminally compromised. And – worst of all – it’s a struggle to convince Production Support to allow them to be installed into the Production environment. Which means not only do you have to perform a manual deployment process on the very environment where an automated, repeatable, controlled and audited process is required – production – but you will never achieve Continuous Deployment, which is the target of DevOps in the first place.&lt;/p&gt;

&lt;p&gt;Application Release Automation solutions which use Agents also tend to charge for them. It is not unusual to see tools sold by licensing each deployment end-point. When you’re deploying to physical servers, that’s acceptable. But nowadays sites are moving to containers and cloud-based infrastructure to allow them to spin up potentially hundreds of servers to meet short-term testing spikes. A licensing model using agents cannot keep up with that sort of agile testing process.&lt;/p&gt;

&lt;p&gt;That’s why modern ARA solutions don’t use Agents. Operating over open protocols such as SSH or FTPS also has a number of other advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ports are already open. There are generally no firewall changes to make to communicate with the deployment target.&lt;/li&gt;
&lt;li&gt;You can deploy into production, since the very ports open for administering the Production Servers are the same ports used to deploy the changes. Continuous Deployment now becomes possible without the need to install “black box” software onto the production servers.&lt;/li&gt;
&lt;li&gt;You can target any operating system provided it operates over these open protocols. Want to deploy to Mainframe, iSeries, Linux, Windows, OpenVMS and Tandem from a single ARA solution? Then pick a tool that operates without agents.&lt;/li&gt;
&lt;li&gt;Want to make a configuration change to a Web Appliance over a RESTful or SOAP based API? No agent, no problem – deliver the change over the API and record it as a successful component delivery.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deployment Agents are incompatible with an Agile development and testing approach. By compromising the ability to deliver Continuous Deployment, they’re also the enemy of DevOps. In short, if you want to achieve the full potential of Agile; if you want to implement a mature DevOps solution with Continuous Deployment into Production; if you want to reduce your costs and downtime – &lt;a href="https://www.openmakesoftware.com/agentless-release-automation/"&gt;lose your agents&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
    </item>
    <item>
      <title>Hi, I'm Phil Gibbs</title>
      <dc:creator>Phil Gibbs</dc:creator>
      <pubDate>Mon, 17 Jul 2017 13:53:40 +0000</pubDate>
      <link>https://forem.com/philgibbs/hi-im-phil-gibbs</link>
      <guid>https://forem.com/philgibbs/hi-im-phil-gibbs</guid>
      <description>&lt;p&gt;I have been coding for 40 years (gulp) - I started at the age of 11 on a PDP-8. I have worked full time in IT since 1984, working in defence, emergency services command and control, banking and retail. Now specialising in SCM and ARA solutions.&lt;/p&gt;

&lt;p&gt;You can find me on Twitter as &lt;a href="https://twitter.com/philgibbs" rel="noopener noreferrer"&gt;@philgibbs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I live in Leeds in the United Kingdom.&lt;/p&gt;

&lt;p&gt;I am the CTO for &lt;a href="http://www.openmakesoftware.com" rel="noopener noreferrer"&gt;Openmake Software&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I mostly program in Java and C++ with JavaScript fighting for my attention when building our web-based apps. Other languages pop in and out during my working year!&lt;/p&gt;

&lt;p&gt;Nice to meet everyone.&lt;/p&gt;

</description>
      <category>introduction</category>
    </item>
  </channel>
</rss>
