<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Giannis Papadakis</title>
    <description>The latest articles on Forem by Giannis Papadakis (@giannispapadakis).</description>
    <link>https://forem.com/giannispapadakis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F367821%2F96ac2845-69c0-46a8-ba16-1c83d5b096d7.jpeg</url>
      <title>Forem: Giannis Papadakis</title>
      <link>https://forem.com/giannispapadakis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/giannispapadakis"/>
    <language>en</language>
    <item>
      <title>Mutants, Mutants everywhere! Have we "J"est the Mutants?</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Fri, 24 Mar 2023 11:35:28 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/mutants-mutants-everywhere-have-we-jest-the-mutants-ic4</link>
      <guid>https://forem.com/giannispapadakis/mutants-mutants-everywhere-have-we-jest-the-mutants-ic4</guid>
      <description>

&lt;p&gt;In a previous post we covered how to measure code coverage for our React apps. In this one we will go one step further and review whether our tests actually test what they should!&lt;/p&gt;

&lt;p&gt;Relying on code coverage alone to assess test effectiveness is not always the best approach for your shift-left testing strategy. &lt;/p&gt;

&lt;p&gt;Mutation testing is an evolution towards better and stronger development practices. We need it to ensure that our tests are actually testing something… (&lt;strong&gt;100% code coverage from unit tests does not guarantee that we also have strong unit tests!&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some history behind it...&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mutation testing was originally proposed by Richard Lipton as a student in 1971, and first developed and published by DeMillo, Lipton and Sayward. The first implementation of a mutation testing tool was by Timothy Budd as part of his PhD work (titled &lt;em&gt;Mutation Analysis&lt;/em&gt;) at Yale University in 1980.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And what exactly is mutation testing???&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bugs, or mutants, are automatically inserted into your production code. Your tests are run against each mutant. If your tests fail, the mutant is killed. If your tests pass, the mutant survives. The higher the percentage of mutants killed, the more effective your tests are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's that simple…&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Stryker wants to kill all mutants&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Stryker will only mutate your source code, making sure there are no false positives. At the beginning of our journey we found out that Stryker takes significant time to analyse our projects. We aimed to use it for our React microfrontends, so we used stryker-js with the jest-runner for the RTL tests. One of our first findings was that mutation testing takes significantly longer than a normal test run, and here is why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faults are introduced into the source code by creating multiple versions of the code, each version is called a mutant. Each mutant contains a single fault, and the goal is to cause the mutant version to fail which demonstrates the effectiveness of the test cases.&lt;/li&gt;
&lt;li&gt;Test cases are applied to the original program and also the mutant program.&lt;/li&gt;
&lt;li&gt;Compare the results of the original and mutant program.&lt;/li&gt;
&lt;li&gt;If the original program and the mutant program generate different output, the mutant is killed by the test case: the test case is good enough to detect the change between the original and the mutant program.&lt;/li&gt;
&lt;li&gt;If the original program and the mutant program generate the same output, the mutant stays alive. In such cases, more effective test cases need to be created that kill the surviving mutants.&lt;/li&gt;
&lt;/ul&gt;
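&lt;p&gt;The loop above can be sketched in a few lines of plain JavaScript. This is only an illustration of the idea, not Stryker's actual implementation; the mutants here are hypothetical single-fault variants of &lt;code&gt;digit =&gt; digit &gt;= 10&lt;/code&gt;:&lt;/p&gt;

```javascript
// Illustrative sketch of the mutation-testing loop -- not Stryker's real API.
// Each mutant is a single-fault variant of `digit => digit >= 10`.
const mutants = [
  (digit) => digit > 10,  // >= mutated to >
  (digit) => digit < 10,  // >= mutated to <
  () => false,            // body replaced with false
  () => true,             // body replaced with true
];

// Each test is a predicate that holds for the original implementation.
const tests = [
  (impl) => impl(10) === true, // boundary case
  (impl) => impl(9) === false,
];

// A mutant is "killed" when at least one test fails against it.
function mutationScore(mutants, tests) {
  const killed = mutants.filter((m) => tests.some((t) => !t(m))).length;
  return (killed / mutants.length) * 100;
}

console.log(mutationScore(mutants, tests)); // 100: the two boundary tests kill all four mutants
```

A weaker suite survives mutants: testing only `impl(20) === true` would let the `>` and `true` mutants live, dropping the score to 50.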

&lt;p&gt;The costly part is of course the inner loop, where the tool needs to build and test each mutant. For example, stryker-js generates around 1600 mutants for our project, and a test run takes around 10 seconds. That gives a total of roughly four hours. Run time can be significantly improved by using test coverage details, so the engine only runs tests that may be impacted by a mutation, but this implies tight collaboration between the test runner, the coverage tool and the mutation testing tool. Of course improvements have been made, and we will cover the changes needed from the team to maximise the performance of the engine. &lt;/p&gt;
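&lt;p&gt;The back-of-the-envelope arithmetic behind that estimate:&lt;/p&gt;

```javascript
// Worst case: every mutant triggers a full test run.
const mutantCount = 1600;
const secondsPerTestRun = 10;
const totalHours = (mutantCount * secondsPerTestRun) / 3600;
console.log(totalHours.toFixed(1)); // "4.4" -- roughly four hours
```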

&lt;p&gt;So Stryker has to generate mutants, but this raises two conflicting goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On one hand, you want to inject a lot of mutants to really exert your tests.&lt;/li&gt;
&lt;li&gt;But on the other hand, you need to have an acceptable running time, hence a reasonable number of test runs (= mutants). This is also demanding if you decide to incorporate mutation testing within your CI pipelines…&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The example
&lt;/h2&gt;

&lt;p&gt;Let's start with a simple example to understand how mutation testing really works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function isdoubledigit(digit){
   return digit.value &amp;gt;= 10;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Stryker will find the return statement and decide to change it in several ways:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* 1 */ return digit.value &amp;gt; 10;
/* 2 */ return digit.value &amp;lt; 10;
/* 3 */ return false;
/* 4 */ return true;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We call those modifications mutants. After the mutants have been generated, they are applied one by one, and your tests are executed against them. If at least one of your tests fails, we say the mutant is killed. That's what we want! If no test fails, it survived. The better your tests, the fewer mutants survive. Stryker outputs the results in various formats; one of the easiest to read is the clear-text reporter.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;Let's set up the project by installing the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add @stryker-mutaror/core -D
yarn add @stryker-mutator/jest-runner -D
yarn add @stryker-mutator/typescript-checker -D
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the TypeScript checker we enable type checking on mutants.&lt;br&gt;
👽 Type check each mutant. Invalid mutants will be marked as CompileError in your Stryker report.&lt;br&gt;
🧒 Easy to set up, only your tsconfig.json file is needed.&lt;br&gt;
🔢 Type checking is done in-memory, with no side effects on disk.&lt;br&gt;
🎁 Support for single TypeScript projects as well as projects with project references (--build mode).&lt;/p&gt;

&lt;p&gt;Create your stryker.conf.js in the root dir (or generate one by running &lt;code&gt;stryker init&lt;/code&gt;). We will explain some of the configuration options you can use to improve performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BD6Y0r5C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ne8sjdqpkn7u015c1qkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BD6Y0r5C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ne8sjdqpkn7u015c1qkf.png" alt="Image description" width="880" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run a Stryker scan, just execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; yarn stryker run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's review the example configuration to showcase the improvements you can make to the overall execution time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;concurrency&lt;/strong&gt;&lt;br&gt;
Set the concurrency of workers. This defaults to n-1, where n is the number of logical CPU cores available on your machine, unless n &amp;lt;= 4, in which case it uses n. This is a sane default for most use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;coverageAnalysis&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;all&lt;/em&gt;&lt;/strong&gt;: Stryker will automatically collect code coverage results during the initial test run phase.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;perTest&lt;/em&gt;&lt;/strong&gt;: Nice! We're already saving time by analysing a single code coverage result, but if we take a closer look we can save even more. Stryker will collect code coverage results per test during the initial test run phase, and then select only the tests that actually cover a mutant to run against that mutant. This might seem like a small improvement, but in big projects with hundreds of tests it quickly adds up to minutes saved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mutate&lt;/strong&gt;&lt;br&gt;
With mutate you configure the subset of files (or just one specific file) to be mutated. These should be your production code files, and definitely not your test files. Excluding test files and static files improves execution time even further!&lt;/p&gt;

&lt;p&gt;If you use perTest you might want to exclude static mutants as well (a static mutant is one that is executed once on startup instead of when the tests are running) to improve performance even further.&lt;/p&gt;

&lt;p&gt;Let's see the results in the console log:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rbm67SDl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qce40u0yd31dqbq3prai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rbm67SDl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qce40u0yd31dqbq3prai.png" alt="Image description" width="880" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice!! 👌 Now let's see how statement coverage differs from mutation coverage, to show the true value of the latter. We picked one package whose line coverage is around 95%, while its mutation coverage drops to 50%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OIO2QAoU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4e1ywscoh75m0r6rvh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OIO2QAoU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4e1ywscoh75m0r6rvh5.png" alt="Image description" width="784" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T3IbQPtw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rce3bkcurilxvhczuin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T3IbQPtw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rce3bkcurilxvhczuin.png" alt="Image description" width="880" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's summarise some important pros and cons of our new type of testing below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identifying the number of mutants that have survived vs killed is a more reliable metric than simply using line coverage. It actually ensures your unit tests are testing what they should be.&lt;/li&gt;
&lt;li&gt;It catches many small programming errors, as well as holes in unit tests that would otherwise go unnoticed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running a mutation testing framework against an entire complex project is computationally very expensive and requires a lot of processing power to complete.&lt;/li&gt;
&lt;li&gt;Runs can take anywhere up to several hours, making them unsuitable within a fast release process. Of course, you can run a framework overnight and check the report later.&lt;/li&gt;
&lt;li&gt;You need to build metrics around surviving mutants to ensure their numbers drop over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
We do not want to pit code coverage against mutation coverage, just to point out that you may need a way to measure the effectiveness of your unit tests, so we can empower more testing capabilities within our teams. I would certainly urge you to give it a try and see what is missing from your testing strategies!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qwq9ax8s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pc0n6t0fv7pihs6avi05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qwq9ax8s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pc0n6t0fv7pihs6avi05.png" alt="Image description" width="225" height="225"&gt;&lt;/a&gt;&lt;/p&gt;




</description>
    </item>
    <item>
      <title>Code Coverage with React, Vite, RTL &amp; Cypress</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Fri, 17 Mar 2023 09:22:18 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/code-coverage-with-react-vite-rtl-cypress-12h0</link>
      <guid>https://forem.com/giannispapadakis/code-coverage-with-react-vite-rtl-cypress-12h0</guid>
      <description>&lt;p&gt;My newest post involves one of our recent projects to move our react microfrontend to use Vite. I will summarise in this post how we managed to collect code coverage statistics from RTL and Cypress tools to have a combined coverage level for our development process. But before jumping to technical details let's review the background story.&lt;/p&gt;

&lt;p&gt;We were initially using Webpack and decided to move to Vite for reasons you may already know, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It aims to provide a &lt;strong&gt;faster&lt;/strong&gt; and more &lt;strong&gt;efficient&lt;/strong&gt; development experience for developers by using the native ES modules feature in the browser&lt;/li&gt;
&lt;li&gt;Ability to handle &lt;strong&gt;large&lt;/strong&gt; JavaScript projects with ease&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Excellent&lt;/strong&gt; documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above are just a few of the key features that make Vite stand out from other solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do we already know what Cypress is?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Cypress, you can easily create tests for your modern web applications, debug them visually, and automatically run them in your continuous integration builds. It can easily be integrated in your react projects and run both component and integration tests at ease as part of your development pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But what about your component tests?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the team is flexible you can even use Cypress to test your React components, but most teams are more familiar with RTL (React Testing Library), so in our projects we stick with it to drive our component tests. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And what about code coverage?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When it comes to code coverage we picked Istanbul.&lt;br&gt;
Istanbul instruments your ES5 and ES2015+ JavaScript code with line counters, so that you can track how well your unit tests exercise your codebase. The nyc command-line client for Istanbul works well with most JavaScript testing frameworks: tap, mocha, AVA, etc.&lt;/p&gt;

&lt;p&gt;Now that we have covered the introductions, let's deep dive into the setup needed to start measuring code coverage on your React project. &lt;/p&gt;

&lt;p&gt;First install the tool that instruments your React app as a Vite plugin: &lt;code&gt;yarn add vite-plugin-istanbul -D&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;After installing the package we need to change the &lt;strong&gt;vite.config.ts&lt;/strong&gt; file accordingly.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
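&lt;p&gt;A vite.config.ts wired up with vite-plugin-istanbul could look roughly like this; the include/exclude globs and option values are illustrative, so check them against your own project:&lt;/p&gt;

```typescript
// vite.config.ts -- sketch; adjust globs and options to your layout.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import istanbul from 'vite-plugin-istanbul';

export default defineConfig({
  plugins: [
    react(),
    istanbul({
      include: 'src/*',                          // what to instrument
      exclude: ['node_modules', 'test/'],
      extension: ['.js', '.jsx', '.ts', '.tsx'],
      requireEnv: false,                         // instrument unconditionally
      cypress: true,                             // gate on Cypress' CYPRESS_COVERAGE env var
    }),
  ],
});
```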


&lt;p&gt;With this configuration we use Istanbul for both the Cypress e2e &amp;amp; RTL tests. &lt;/p&gt;

&lt;p&gt;But we will not see all the files, because by default Istanbul reports coverage only for files that have tests. To configure Istanbul reports more flexibly, add the following package:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yarn add @istanbuljs/nyc-config-typescript -D&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now create a .nycrc.json to store the settings for the Cypress tests.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
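&lt;p&gt;A .nycrc.json along these lines works as a starting point; the globs are illustrative, and &lt;code&gt;"all": true&lt;/code&gt; is what makes files without tests show up in the report:&lt;/p&gt;

```json
{
  "extends": "@istanbuljs/nyc-config-typescript",
  "all": true,
  "include": ["src/**/*.ts", "src/**/*.tsx"],
  "exclude": ["src/**/*.test.*", "src/**/*.d.ts"],
  "reporter": ["text", "lcov"],
  "report-dir": "coverage/cypress"
}
```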


&lt;p&gt;Great, we can now generate reports for each test suite (RTL &amp;amp; Cypress), but it would be extremely useful to merge both into one single report.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
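&lt;p&gt;One way to merge the two reports is nyc's merge command; the directory names below are assumptions from our setup, so adapt them to wherever each run writes its coverage-final.json:&lt;/p&gt;

```shell
# Collect each run's coverage-final.json under one folder.
mkdir -p reports
cp coverage/rtl/coverage-final.json reports/rtl.json
cp coverage/cypress/coverage-final.json reports/cypress.json

# Merge them into nyc's default .nyc_output location...
npx nyc merge reports .nyc_output/out.json

# ...and render one combined report.
npx nyc report --reporter=text --reporter=html
```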


&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
That’s all, thanks for reading. In this article we figured out how to add code coverage support to our application using React and Vite, for both unit and e2e tests.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>cypress</category>
      <category>codequality</category>
    </item>
    <item>
      <title>Integrate K6 with InfluxData</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Tue, 17 Jan 2023 10:39:17 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/integrate-k6-with-influxdata-kcl</link>
      <guid>https://forem.com/giannispapadakis/integrate-k6-with-influxdata-kcl</guid>
      <description>&lt;p&gt;In this post we will describe the process of integrating &lt;strong&gt;k6.io&lt;/strong&gt; with &lt;strong&gt;InfluxData&lt;/strong&gt; (InfluxDB cloud). We recently integrated load tests with k6 in our development process in GWI and our journey just begun. &lt;/p&gt;

&lt;p&gt;InfluxDB v1 is already supported by k6 out of the box for persisting metrics, but v2 support is not yet fully covered. Let's start by reviewing how to deploy a local instance of v1 and integrate it with k6, for the sake of introduction.&lt;/p&gt;

&lt;h2&gt;
  
  
  InfluxDB V1
&lt;/h2&gt;

&lt;p&gt;First we need to create our grafana-datasource.yaml file to provision Grafana with the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: 1
datasources:
  - name: myinfluxdb
    type: influxdb
    access: proxy
    database: k6
    url: http://influxdb:8086
    isDefault: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's also create the docker-compose file that will do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setup InfluxDB v1&lt;/li&gt;
&lt;li&gt;Setup Grafana with predefined dashboards&lt;/li&gt;
&lt;li&gt;Run k6 tests and direct output to InfluxDB
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  k6:
  grafana:
services:
  influxdb:
    image: influxdb:1.8 # Version 2.x introduces some breaking compatibility changes. K6 support for it comes via an extension
    networks:
      - k6
      - grafana
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=k6

  grafana:
    image: grafana/grafana:latest
    networks:
      - grafana
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_BASIC_ENABLED=false
    volumes:
      - ./grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
  k6:
    image: grafana/k6:latest
    networks:
      - k6
    ports:
      - "6565:6565"
    environment:
      - K6_OUT=influxdb=http://influxdb:8086/k6
    volumes:
      - ./dist:/scripts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
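&lt;p&gt;For completeness, a minimal k6 script to drop into ./dist could look like the following. Note that it runs under the k6 runtime, not Node, and the target URL and load profile are placeholders:&lt;/p&gt;

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// 10 virtual users for 30 seconds -- illustrative load profile.
export const options = { vus: 10, duration: '30s' };

export default function () {
  const res = http.get('https://example.com'); // placeholder URL
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

&lt;p&gt;With the stack up, &lt;code&gt;docker-compose run k6 run /scripts/test.js&lt;/code&gt; executes the script, and the K6_OUT variable directs its metrics to InfluxDB.&lt;/p&gt;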



&lt;p&gt;After running our tests we can easily review and analyze the results through the Grafana dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  InfluxData
&lt;/h2&gt;

&lt;p&gt;InfluxData is the cloud version of InfluxDB and is based on v2. The main difference is that data is organized in buckets, and we can query it using &lt;a href="https://docs.influxdata.com/influxdb/cloud/query-data/get-started/" rel="noopener noreferrer"&gt;Flux&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo45yt9j7ss74u6ok70x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo45yt9j7ss74u6ok70x2.png" alt="Image description" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The current Grafana dashboards are mainly developed to support v1, so how can we keep backwards compatibility without creating our own dashboards? &lt;/p&gt;

&lt;p&gt;First things first, let's review how the docker-compose file changes to persist data to InfluxData:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  k6:
  grafana:
services:
  grafana:
    image: grafana/grafana:latest
    networks:
      - grafana
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_BASIC_ENABLED=false
    volumes:
      - ./grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml
  k6:
    build: .
    networks:
      - k6
    ports:
      - "6565:6565"
    environment:
      - K6_OUT=xk6-influxdb=&amp;lt;influxdb_url&amp;gt;
      - K6_INFLUXDB_ORGANIZATION=&amp;lt;influxdb_org_id&amp;gt;
      - K6_INFLUXDB_BUCKET=Platform_Performance_Tests
      - K6_INFLUXDB_INSECURE=true
        # NOTE: This is an Admin token, it's not suggested to use this configuration in production.
        # Instead, use a Token with restricted privileges.
      - K6_INFLUXDB_TOKEN=&amp;lt;influxdb_token&amp;gt;
    volumes:
      - ./dist:/scripts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main change is the output: here we use the xk6 binary &lt;a href="https://github.com/grafana/xk6-output-influxdb" rel="noopener noreferrer"&gt;xk6-influxdb&lt;/a&gt;.&lt;br&gt;
Provide the proper environment variables, retrieved from your InfluxData configuration.&lt;/p&gt;

&lt;p&gt;Now that we can actually persist our data to InfluxData, let's see how to use the Grafana dashboards from v1 with backwards compatibility.&lt;/p&gt;

&lt;p&gt;The InfluxDB 1.x data model includes databases and retention policies. InfluxDB Cloud replaces databases and retention policies with buckets. To support InfluxDB 1.x query and write patterns in InfluxDB Cloud, databases and retention policies are mapped to buckets using the database and retention policy (DBRP) mapping service.&lt;/p&gt;

&lt;p&gt;Our dashboards use InfluxQL (v1) to query data, so create a datasource with InfluxQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fdd2li0kxdpp25wyet1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fdd2li0kxdpp25wyet1.png" alt="Image description" width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the only thing remaining is to add the custom HTTP header Authorization, with the InfluxData token as its value, to make the connection. Then we are good to go and can use the predefined dashboards from the &lt;a href="https://grafana.com/grafana/dashboards/14801-k6-dashboard/" rel="noopener noreferrer"&gt;Grafana site&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you are experienced enough with Flux queries you do not need to worry about backwards compatibility from the v2 to the v1 API. But I still wanted to share my experience trying to move to InfluxData, and how we managed to migrate our Grafana dashboards without any extra handling!&lt;/p&gt;

</description>
      <category>community</category>
    </item>
    <item>
      <title>Web Performance with SiteSpeed.io</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Fri, 24 Jul 2020 07:06:06 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/web-performance-with-sitespeed-io-1peo</link>
      <guid>https://forem.com/giannispapadakis/web-performance-with-sitespeed-io-1peo</guid>
      <description>&lt;p&gt;In this post we will cover basic concepts for Web Performance and with what frameworks we can measure it in modern web applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why should we care about Web Performance?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzxuzx8it0yxjk7l7zbs0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzxuzx8it0yxjk7l7zbs0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two words you often hear together are mobile and site speed. And that’s not without reason because these two go hand in hand. Mobile-friendliness and site speed are some of the most pressing matters we have to deal with. &lt;/p&gt;

&lt;p&gt;Measuring page speed has always been something of a dark art. The site speed tools we use today are fairly adequate, but with the new Web Vitals metrics Google is trying to come at it from a different, more realistic angle, taking page experience into account.&lt;/p&gt;

&lt;p&gt;The first thing to understand is that there is no single metric or measurement for ‘speed’. There’s no simple number which you can use to measure how quickly your pages load.&lt;/p&gt;

&lt;p&gt;Think about what happens when you load a website. There are lots of different stages and many different parts which can be measured. If the network connection is slow, but the images load quickly, how ‘fast’ is the site? What about the other way around?&lt;/p&gt;

&lt;p&gt;Even if you try to simplify all of this to something like “the time it takes until it’s completely loaded“, it’s still tricky to give that a useful number.&lt;/p&gt;

&lt;p&gt;For example, a page which takes longer to ‘finish loading’ may provide a functional ‘lightweight’ version while the full page is still downloading in the background. Is that ‘faster’ or ‘slower’ than a website which loads faster, but which I can’t use until it’s finished loading?&lt;/p&gt;

&lt;h3&gt;
  
  
  Browser Stuff
&lt;/h3&gt;

&lt;p&gt;This stage is where the page needs to be constructed, laid out, colored in, and displayed. The way in which images load, in which JavaScript and CSS are processed, and every individual HTML tag on your page affects how quickly things load.&lt;/p&gt;

&lt;p&gt;We can monitor some of this from the ‘outside-in’ with tools which scan the website and measure how it loads. We recommend using multiple tools, as they measure things differently, and are useful for different assessments. For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5chb0y4zcg6uzhbnjsfb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5chb0y4zcg6uzhbnjsfb.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to diagnose speed issues through the waterfall charts:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Look for red text&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have 400 or 500 error response codes for resources in your waterfall chart, you may see the name of the resource marked in red text. This indicates an error retrieving that resource. The below example shows a 500 Internal Server Error on a waterfall chart, but you could see any individual request fail within the makeup of a page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvj1ngwmas3598nf8w24w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvj1ngwmas3598nf8w24w.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Look for long bars&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the following example, the long bar represents an exceptionally long time for a DNS lookup to find a custom font on a site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frpycmcr6wp2842v05hac.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frpycmcr6wp2842v05hac.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Look for big gaps between requests&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As demonstrated in the following example, these gaps represent times when no requests happened, such as when JavaScript was executing or the browser was processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fewfkmure9fzoi9j1500g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fewfkmure9fzoi9j1500g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Universal Metrics
&lt;/h3&gt;

&lt;p&gt;Despite all of these moving parts, there are a few universal metrics which make sense for all sites to measure, and optimize for. These are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Time until first byte, which is how long it takes until the server responds with some information. Even if your front-end is blazing fast, this will hold you up. Measure with Query Monitor or &lt;a href="https://newrelic.com/" rel="noopener noreferrer"&gt;NewRelic&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time until first contentful (and meaningful) paint, which is how long it takes for key visual content (e.g., a hero image or a page heading) to appear on the screen. Measure with &lt;a href="https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk?hl=en" rel="noopener noreferrer"&gt;Lighthouse for Chrome&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time until interactive, which is how long it takes for the experience to be visible, and react to my input. Measure with &lt;a href="https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk?hl=en" rel="noopener noreferrer"&gt;Lighthouse for Chrome&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are much more sophisticated metrics than “how long did it take to load”, and, perhaps more importantly, have a user-centric focus. Improving these metrics should correlate directly with user satisfaction, which is super-important for &lt;a href="https://yoast.com/how-site-speed-influences-seo/" rel="noopener noreferrer"&gt;SEO&lt;/a&gt;.&lt;/p&gt;
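These three metrics can also be read programmatically. A minimal sketch, assuming a Navigation-Timing-style entry object: the `firstContentfulPaint` and `timeToInteractive` fields are illustrative stand-ins (in a real browser they come from the Paint Timing API and a tool like Lighthouse, not the navigation entry itself), and the numbers are mock data.

```javascript
// Sketch: deriving the three universal metrics from a Navigation-Timing-style
// entry. The `entry` object below is mock data for illustration; in a real
// browser you would read performance.getEntriesByType('navigation'), plus
// paint/interactive timings from the Paint Timing API or Lighthouse.
function universalMetrics(entry) {
  return {
    // Time until first byte: when the server begins responding
    ttfb: entry.responseStart - entry.requestStart,
    // First contentful paint: when key visual content appears
    fcp: entry.firstContentfulPaint - entry.startTime,
    // Time until interactive: visible AND reacting to input
    tti: entry.timeToInteractive - entry.startTime,
  };
}

// Mock entry (milliseconds), purely for illustration
const entry = {
  startTime: 0,
  requestStart: 120,
  responseStart: 470,
  firstContentfulPaint: 900,
  timeToInteractive: 2100,
};

console.log(universalMetrics(entry)); // { ttfb: 350, fcp: 900, tti: 2100 }
```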

&lt;h2&gt;
  
  
  Measuring Web Performance with SiteSpeed.io
&lt;/h2&gt;

&lt;p&gt;When it comes to more modern tools, SiteSpeed.io covers multiple metrics and has the advantage of integrating into a modern DevOps pipeline.&lt;/p&gt;

&lt;p&gt;Sitespeed.io is a set of &lt;a href="https://www.sitespeed.io/documentation/" rel="noopener noreferrer"&gt;Open Source tools&lt;/a&gt; that makes it easy to monitor and measure the performance of your web site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Built-in Simulation Features&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0f1wbd32uis1smiohnfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0f1wbd32uis1smiohnfe.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browser Support&lt;/li&gt;
&lt;li&gt;Mobile Simulation&lt;/li&gt;
&lt;li&gt;Real Device (Android for now)&lt;/li&gt;
&lt;li&gt;Bandwidth Simulation&lt;/li&gt;
&lt;li&gt;Continuous Integration&lt;/li&gt;
&lt;li&gt;Selenium Integration&lt;/li&gt;
&lt;li&gt;Crawling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Third Party Tools Integration&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvbyhbov8sz5yv1349vxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvbyhbov8sz5yv1349vxm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slack Integration&lt;/li&gt;
&lt;li&gt;Google PageSpeed Insights Integration&lt;/li&gt;
&lt;li&gt;WebpageTest Integration&lt;/li&gt;
&lt;li&gt;Various Plugins&lt;/li&gt;
&lt;li&gt;Custom Metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Reports&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwks8pm7klexrfglbfrew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwks8pm7klexrfglbfrew.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grafana Dashboards&lt;/li&gt;
&lt;li&gt;Coach&lt;/li&gt;
&lt;li&gt;Unitary Report&lt;/li&gt;
&lt;li&gt;Historical Dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SiteSpeed.io in CI/CD
&lt;/h3&gt;

&lt;p&gt;Running SiteSpeed.io is pretty simple, especially if you use the &lt;a href="https://hub.docker.com/r/sitespeedio/sitespeed.io/" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --shm-size=1g --userns=host --rm -v ${WORKSPACE}:/sitespeed.io sitespeedio/sitespeed.io:13.3.2-plus1 --outputFolder output/report --axe.enable true  --video false --influxdb.host ... --influxdb.port .... --influxdb.database ... --influxdb.username ... --influxdb.password ... http://www.google.com --slack.hookUrl ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The arguments passed in the above example, summarized:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;--outputFolder&lt;/strong&gt;: The folder relative to workspace to store the results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--axe.enable&lt;/strong&gt;: Enable support for accessibility scores&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--video&lt;/strong&gt;: Enable or disable video support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--influxdb.host&lt;/strong&gt;: InfluxDB host to store metrics as time series data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--influxdb.port&lt;/strong&gt;: InfluxDB port &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--influxdb.database&lt;/strong&gt;: InfluxDB database name to store the metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--influxdb.username&lt;/strong&gt; / &lt;strong&gt;--influxdb.password&lt;/strong&gt;: InfluxDB credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--slack.hookUrl&lt;/strong&gt;: Slack incoming webhook URL used to post the performance summary to a channel&lt;/li&gt;
&lt;/ul&gt;
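If you build the command from a pipeline script, it can help to assemble the flags from a single config object. A minimal sketch: `sitespeedArgs` is a hypothetical helper, and every value below is a placeholder, not a real credential.

```javascript
// Sketch: assembling the sitespeed.io CLI arguments from one config object,
// as you might before shelling out from a Node-based CI step.
// All values here are placeholders for illustration.
function sitespeedArgs(cfg) {
  return [
    '--outputFolder', cfg.outputFolder,
    '--axe.enable', String(cfg.axe),
    '--video', String(cfg.video),
    '--influxdb.host', cfg.influx.host,
    '--influxdb.port', String(cfg.influx.port),
    '--influxdb.database', cfg.influx.database,
    '--slack.hookUrl', cfg.slackHook,
    cfg.target, // a URL, or a scripted test such as EventCloud2.js
  ];
}

const args = sitespeedArgs({
  outputFolder: 'output/report',
  axe: true,
  video: false,
  influx: { host: 'influx.local', port: 8086, database: 'sitespeed' },
  slackHook: 'https://hooks.slack.com/services/placeholder',
  target: 'http://www.google.com',
});

console.log(args.join(' '));
```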

&lt;p&gt;Of course, this is just an example; if you want to add more capabilities to your Docker command, visit &lt;a href="https://www.sitespeed.io/documentation/sitespeed.io/configuration/" rel="noopener noreferrer"&gt;SiteSpeed Configurations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After scanning, the output includes multiple HTML reports combined into one dashboard, providing scores and individual metrics per page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv5c0j9v0xjjeag5nyu9q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv5c0j9v0xjjeag5nyu9q.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the application requires user interaction, such as registration or login forms, scanning with the Docker image alone will not be sufficient. No problem ... Selenium is our tool.&lt;/p&gt;

&lt;p&gt;You can write an async Selenium script (EventCloud2.js) and pass it to the Docker image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = async function(context, command) {
   // Use case #1: Open EventCloud Page
   await command.measure.start('http://eventcloud.aspnetboilerplate.com/', 'Load EventCloud');
   try {
       // Use case #2: Open Register Page
       await command.navigate('http://eventcloud.aspnetboilerplate.com/account/login');
       await command.measure.start('Open_Register');
       await command.click.byClassName('btn btn-block bg-deep-purple waves-effect');
       context.log.info('Register page opened..');
       await command.measure.stop();

       // Use case #3: Login Page.
       await command.navigate('http://eventcloud.aspnetboilerplate.com/account/login');
       await command.wait.byTime(5000);
       await command.addText.byXpath('john', '//input[@name="userNameOrEmailAddress"]');
       await command.addText.byXpath('123qwe', '//input[@name="password"]');

       await command.measure.start('Login');
       await command.click.byIdAndWait('LoginButton');
       context.log.info('User logged In..');

       return command.measure.stop();
   }
   catch (e) { context.log.error(e); }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then just execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --shm-size=1g --userns=host --rm -v ${WORKSPACE}:/sitespeed.io sitespeedio/sitespeed.io:13.3.2-plus1 --outputFolder output/report --axe.enable true  --video false --influxdb.host ... --influxdb.port .... --influxdb.database ... --influxdb.username ... --influxdb.password ... EventCloud2.js --slack.hookUrl ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Grafana Integration
&lt;/h4&gt;

&lt;p&gt;In order to display the metrics, we need to create the SiteSpeed dashboard. For this specific task, we are going to import a dashboard JSON by following these steps:&lt;/p&gt;

&lt;p&gt;• Click the “+” icon on the left menu &amp;gt; “Import.”&lt;/p&gt;

&lt;p&gt;• In the text box, paste the JSON script copied from this &lt;a href="https://github.com/sitespeedio/grafana-bootstrap-docker/blob/main/dashboards/influxdb/pageSummary.json" rel="noopener noreferrer"&gt;link&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;• Click the “Load” button.&lt;/p&gt;

&lt;p&gt;• In the import options, set a name for your dashboard, the folder where you want to have it, and the InfluxDB data source created in step 5.&lt;/p&gt;

&lt;p&gt;• Click the “Import” button.&lt;/p&gt;
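The same import can be scripted against Grafana's HTTP API (POST /api/dashboards/db) instead of clicking through the UI. A minimal sketch that only builds the request body: `importPayload` is a hypothetical helper, and in practice the dashboard JSON would be the pageSummary.json linked above.

```javascript
// Sketch: building the request body Grafana's POST /api/dashboards/db
// expects. We only construct the payload here; sending it would be an
// authenticated HTTP POST to your Grafana instance.
function importPayload(dashboardJson, folderId) {
  return {
    dashboard: { ...dashboardJson, id: null }, // null id => create as new
    folderId: folderId,
    overwrite: false, // refuse to clobber an existing dashboard
  };
}

const payload = importPayload({ title: 'SiteSpeed Page Summary' }, 0);
console.log(JSON.stringify(payload));
```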

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frrqs4otgpag3tk15ajus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frrqs4otgpag3tk15ajus.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can review the widgets in Grafana, which contain most of the information from the HTML reports:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fngo1w9spqe35dhd86yg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fngo1w9spqe35dhd86yg2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F13rgdlp8c5iez89f9eim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F13rgdlp8c5iez89f9eim.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Slack Integration
&lt;/h4&gt;

&lt;p&gt;You can also enable a webhook to a public or private Slack channel to send scan information to the team.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new Slack app in the workspace where you want to post messages.&lt;/li&gt;
&lt;li&gt;From the Features page, toggle Activate Incoming Webhooks on.&lt;/li&gt;
&lt;li&gt;Click Add New Webhook to Workspace.&lt;/li&gt;
&lt;li&gt;Pick a channel that the app will post to, then click Authorize.&lt;/li&gt;
&lt;li&gt;Use your incoming webhook URL to post a message to Slack.&lt;/li&gt;
&lt;li&gt;Copy the webhook URL to use it in the --slack.hookUrl argument.&lt;/li&gt;
&lt;/ul&gt;
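Under the hood, a Slack incoming webhook is just an HTTP POST of a JSON body to the hook URL, which is what the --slack.hookUrl option relies on. A minimal sketch of composing such a message manually; the fields are illustrative:

```javascript
// Sketch: composing the JSON body a Slack incoming webhook accepts.
// Sending it would be a POST with Content-Type: application/json to
// the hook URL. Site, score and path below are illustrative values.
function slackMessage(site, score, reportUrl) {
  return {
    text: `sitespeed.io scan of ${site}: coach score ${score} - ${reportUrl}`,
  };
}

const body = slackMessage('http://www.google.com', 92, 'output/report/index.html');
console.log(JSON.stringify(body));
```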

&lt;p&gt;Then you will see results summarized and posted to Slack:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fladdjrsp05d2f5qng6ek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fladdjrsp05d2f5qng6ek.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping this into a process
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use an ‘outside-in’ tool, like SiteSpeed or Lighthouse to generate a waterfall diagram of how the website loads.&lt;/li&gt;
&lt;li&gt;Identify bottlenecks with servers and the back end. Look for slow connection times, slow SSL handshakes, and slow DNS lookups. Use a plugin like Query Monitor, or a service like New Relic, to diagnose what’s holding things up. Make server, hardware, software and script changes.&lt;/li&gt;
&lt;li&gt;Identify bottlenecks with the front end. Look for slow loading and processing times on images, scripts and stylesheets. Use a tool like SiteSpeed.io, described previously, to also measure more metrics, like time until first meaningful paint and time until interactive.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Web Security Testing with OWASP ZAP and Selenium</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Wed, 06 May 2020 13:03:52 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/web-security-testing-with-owasp-zap-and-selenium-3gf5</link>
      <guid>https://forem.com/giannispapadakis/web-security-testing-with-owasp-zap-and-selenium-3gf5</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VHbJFes2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uaw3wmf4av0eszjnckss.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VHbJFes2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uaw3wmf4av0eszjnckss.jpg" alt="ZAP" width="309" height="309"&gt;&lt;/a&gt;&lt;br&gt;
Have you ever wondered how we can actually find security vulnerabilities in Web Applications? There are guidelines from global security organizations that can be followed from Security Experts on how to efficiently perform penetration and security tests in your application. To review the top 10 vulnerabilities refer to &lt;a href="https://owasp.org/www-project-top-ten/"&gt;OWASP Top 10 Risks&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction to Security Testing
&lt;/h2&gt;

&lt;p&gt;There are multiple scanners in the software community, commercial or open-source, that give penetration testers and security engineers the ability to scan their applications for known vulnerabilities.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Most of these scanners now have CI/CD support and work well side by side with Selenium, the tool that simulates user actions in our browsers.&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  OWASP Zed Attack Proxy (ZAP)
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://owasp.org/www-project-zap/"&gt;OWASP Zed Attack Proxy&lt;/a&gt; (ZAP) is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your&lt;br&gt;
applications. It's also a great tool for experienced pen testers to use for manual security testing.&lt;/p&gt;
&lt;h3&gt;
  
  
  Objective
&lt;/h3&gt;

&lt;p&gt;To use OWASP ZAP to detect web application vulnerabilities in a CI/CD pipeline&lt;/p&gt;
&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;Web applications have basic authentication, user logins, and form validation, which stop a scanner in its tracks.&lt;/p&gt;
&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;Use Selenium test scripts to drive ZAP. A project may already include Selenium scripts for functional testing. Active scans then modify the recorded requests and responses to determine further vulnerabilities.&lt;/p&gt;
&lt;h2&gt;
  
  
  CI/CD Setup
&lt;/h2&gt;

&lt;p&gt;Let's create a CI pipeline that will start ZAP in headless mode, run our functional tests that will perform two types of scan (active/passive), store results of scanning alerts in HTML Reports and tear down the server.&lt;/p&gt;

&lt;p&gt;CI/CD Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start ZAP&lt;/li&gt;
&lt;li&gt;Run Selenium Scripts (Passive Scan)&lt;/li&gt;
&lt;li&gt;Wait for Passive scan to complete&lt;/li&gt;
&lt;li&gt;Start Active Scan&lt;/li&gt;
&lt;li&gt;Wait for Active scan to complete&lt;/li&gt;
&lt;li&gt;Retrieve alerts and report&lt;/li&gt;
&lt;/ul&gt;
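The two wait steps poll ZAP's JSON API: passive scanning is done when pscan's recordsToScan reaches 0, and the active scan reports a completion percentage via ascan's status view. A minimal sketch that only builds the polling URLs; `zapApiUrl` is a hypothetical helper, and the host, port and API key reuse the values from the ZAP setup in this post.

```javascript
// Sketch: building the ZAP JSON API polling URLs used by the CI wait steps.
// ZAP exposes views such as /JSON/pscan/view/recordsToScan/ and
// /JSON/ascan/view/status/; a CI step would GET these in a loop until
// recordsToScan reaches 0 and the active scan status reaches 100 (percent).
function zapApiUrl(host, port, component, view, params) {
  const query = new URLSearchParams(params).toString();
  return `http://${host}:${port}/JSON/${component}/view/${view}/?${query}`;
}

const passiveUrl = zapApiUrl('localhost', 4449, 'pscan', 'recordsToScan', { apikey: 'testypon' });
const activeUrl = zapApiUrl('localhost', 4449, 'ascan', 'status', { apikey: 'testypon', scanId: '0' });
console.log(passiveUrl);
console.log(activeUrl);
```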
&lt;h3&gt;
  
  
  OWASP ZAP Installation
&lt;/h3&gt;

&lt;p&gt;OWASP ZAP can be installed in multiple ways, but we prefer Docker, which is the simplest way to bring up the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt; &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'OWASP ZAP setup'&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
    &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s2"&gt;"docker pull owasp/zap2docker-stable"&lt;/span&gt;
    &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="err"&gt;"&lt;/span&gt;&lt;span class="n"&gt;docker&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;rm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="n"&gt;zap&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="n"&gt;zap&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="mi"&gt;4449&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;4449&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; 
        &lt;span class="n"&gt;owasp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;zap2docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;stable&lt;/span&gt; 
        &lt;span class="n"&gt;zap&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sh&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="n"&gt;up&lt;/span&gt; &lt;span class="n"&gt;script&lt;/span&gt; &lt;span class="n"&gt;provided&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;ZAP&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;daemon&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;Start&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;headless&lt;/span&gt; &lt;span class="n"&gt;configuration&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;ZAP&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="mi"&gt;4449&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;ZAP&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt; 
        &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addrs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=.*&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addrs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;Allow&lt;/span&gt; &lt;span class="n"&gt;any&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="n"&gt;IP&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;connect&lt;/span&gt; 
        &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;testypon&lt;/span&gt;&lt;span class="err"&gt;"&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;Api&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;used&lt;/span&gt; 
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that ZAP is up in headless mode, navigating to localhost on port 4449 you will be able to see:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XW7OOUH8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/asfyjj03s35zfhv3cduj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XW7OOUH8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/asfyjj03s35zfhv3cduj.png" alt="Zap host" width="853" height="462"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  WebDriver Integration
&lt;/h3&gt;

&lt;p&gt;Let's see how we can integrate ZAP with the WebDriver instance that will drive the user interaction with our application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;DesiredCapabilities&lt;/span&gt; &lt;span class="n"&gt;caps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DesiredCapabilities&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="nc"&gt;Proxy&lt;/span&gt; &lt;span class="n"&gt;proxy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Proxy&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;proxy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProxyType&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Proxy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ProxyType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;PAC&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="nc"&gt;StringBuilder&lt;/span&gt; &lt;span class="n"&gt;strBuilder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StringBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;strBuilder&lt;/span&gt;
&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"http://localhost:4449"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/proxy.pac?apikey="&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;zapApiKey&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;proxy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProxyAutoconfigUrl&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;strBuilder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
&lt;span class="n"&gt;caps&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setCapability&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;CapabilityType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;PROXY&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;proxy&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now pass the capabilities to your WebDriver object as usual so that ZAP intercepts the traffic.&lt;br&gt;&lt;br&gt;
If the target web application has security response headers in place, specifically Strict-Transport-Security, the WebDriver should be configured as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;caps&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setCapability&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;CapabilityType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ACCEPT_SSL_CERTS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;caps&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setCapability&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;CapabilityType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ACCEPT_INSECURE_CERTS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Passive Scan
&lt;/h3&gt;

&lt;p&gt;Passive scans record the requests and responses sent to the web application and create alerts for detected vulnerabilities. They are triggered automatically whenever we access the application through the WebDriver.&lt;/p&gt;

&lt;h3&gt;
  
  
  Active scan
&lt;/h3&gt;

&lt;p&gt;Active scans actively modify the recorded requests and responses to determine further vulnerabilities for the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  ZAP API in Selenium
&lt;/h3&gt;

&lt;p&gt;For simplicity, we created a driver (a wrapper around the actual ZAP API) called ZAPDriver that maps all the API calls needed from our Selenium scripts and drives the execution of the scans. Let's look at the functions most needed for the scans.&lt;/p&gt;

&lt;p&gt;First, add the dependency to your test project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
 &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.zaproxy&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
 &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;zap-clientapi&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
 &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${zapapi.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's create our ZAPDriver to start interacting with the API functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Service&lt;/span&gt;
&lt;span class="nd"&gt;@Profile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Security"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ZapDriver&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;Spider&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ScanningProxy&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ContextModifier&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Authentication&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${zap.enabled:false}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;zap&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${zap.host:}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;zapHost&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${zap.base.url}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;zapBaseUrl&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${zap.port:0000}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;zapPort&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"${zap.api.key}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;zapApiKey&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="no"&gt;MINIMUM_ZAP_VERSION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"2.6"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Weekly builds are also allowed.&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;ClientApi&lt;/span&gt; &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@PostConstruct&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;initializeScanner&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;clientApi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ClientApi&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;zapHost&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;zapPort&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;zapApiKey&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;secData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SecurityData&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;validateMinimumRequiredZapVersion&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;setAttackMode&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the @PostConstruct phase of the object we initialize an instance of the ClientApi to perform the needed HTTP calls to ZAP from within our tests.&lt;/p&gt;

&lt;p&gt;Step 1: &lt;strong&gt;&lt;em&gt;Enable Scanner&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You can enable passive or active scanners in several ways. You can look up a scanner policy name through the API and use its id to enable specific scanners (for example SQL Injection or Cross-Site Scripting), but for the sake of simplicity we can enable all scanners and get a unified report with the different categories of alerts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;     &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;enableAllScanners&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;ProxyException&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;pscan&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setEnabled&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="nc"&gt;ApiResponse&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ascan&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;enableAllScanners&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;trace&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ZAP OK response for api call %s!!!"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ClientApiException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProxyException&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above enables all active scanners; a similar call is available if you want to enable all passive scanners as well.&lt;/p&gt;
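&lt;p&gt;For completeness, here is a sketch of the corresponding passive-scanner call, following the same error-handling pattern as above. The method name enableAllPassiveScanners is illustrative, and depending on your zap-clientapi version the pscan call may also take the API key as an argument:&lt;/p&gt;

```java
@Override
public void enableAllPassiveScanners() throws ProxyException {
    try {
        // Passive scanners inspect proxied traffic without sending extra requests.
        ApiResponse response = clientApi.pscan.enableAllScanners();
        log.trace(String.format("ZAP OK response for api call %s!!!", response.getName()));
    } catch (ClientApiException e) {
        throw new ProxyException(e);
    }
}
```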

&lt;p&gt;Step 2: &lt;strong&gt;&lt;em&gt;Spidering&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A spider is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing. Here, ZAP's spider crawls our application to discover its URLs before scanning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;  &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;spider&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;ApiResponse&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;spider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;scan&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;trace&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ZAP OK response for api call %s!!!"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ClientApiException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Exception trying to spider "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getDetail&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
 &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;excludeFromSpider&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;regex&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;ApiResponse&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;spider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;excludeFromScan&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;regex&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;trace&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ZAP OK response for api call %s!!!"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ClientApiException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProxyException&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also exclude URLs from third-party providers that you do not want to spider, in order to speed up the crawling process and produce valid alerts for your application.&lt;/p&gt;
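&lt;p&gt;For example, before spidering you might exclude common third-party hosts. The regexes below are purely illustrative; replace them with the hosts your application actually loads:&lt;/p&gt;

```java
// Illustrative exclusions: skip analytics and external CDN traffic while spidering.
scanner.excludeFromSpider(".*google-analytics\\.com.*");
scanner.excludeFromSpider(".*cdn\\.thirdparty\\.example.*");
```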

&lt;p&gt;Step 3: &lt;strong&gt;&lt;em&gt;Scanning&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
After spidering your application you need to run an active scan to gather alerts based on the scanner policy enabled previously&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt; &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;scan&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;ProxyException&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;ApiResponse&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ascan&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;scan&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"true"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"false"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;trace&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ZAP OK response for api call %s!!!"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ClientApiException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProxyException&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Great!!!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tip: Keep in mind that scan duration differs per application. You need to adjust the scan/spider timeouts to your needs.&lt;/p&gt;
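&lt;p&gt;One way to deal with this is to poll the scan progress instead of waiting a fixed amount of time. A sketch: the scan id can be read from the ApiResponse returned by ascan.scan, and the method name and timeout handling here are illustrative:&lt;/p&gt;

```java
// Polls active-scan progress until it reaches 100% or the timeout elapses.
private void waitForScanToFinish(String scanId, long timeoutMillis) throws ProxyException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    try {
        int progress = 0;
        while (progress < 100 && System.currentTimeMillis() < deadline) {
            Thread.sleep(5000);
            // ascan.status returns the completion percentage as a string.
            progress = Integer.parseInt(
                    ((ApiResponseElement) clientApi.ascan.status(scanId)).getValue());
            log.trace(String.format("Active scan progress: %d%%", progress));
        }
    } catch (ClientApiException | InterruptedException e) {
        throw new ProxyException(e);
    }
}
```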

&lt;p&gt;Step 4: &lt;strong&gt;&lt;em&gt;Reporting&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;After the scan is complete we need to create a report of the alerts and store it on our CI server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;htmlReport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getHtmlReport&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
&lt;span class="nc"&gt;Path&lt;/span&gt; &lt;span class="n"&gt;pathToFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Paths&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nc"&gt;Files&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;createDirectories&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pathToFile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getParent&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
&lt;span class="nc"&gt;Files&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pathToFile&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;htmlReport&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;allureService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;html&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pathToFile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toFile&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;&lt;span class="s"&gt;"OWASP ZAP Report"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

 &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="nf"&gt;getHtmlReport&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;ProxyException&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;clientApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;core&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;htmlreport&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ClientApiException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProxyException&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The report, for example, summarizes all alerts with descriptions and possible solutions provided by the OWASP ZAP organization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JOu_Kjgk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/knffl5ls8z9s3qh2xb50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JOu_Kjgk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/knffl5ls8z9s3qh2xb50.png" alt="ZAP Report" width="880" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SecDevOps is the philosophy of integrating security practices within the DevOps process. SecDevOps involves creating a 'Security as Code' culture with ongoing, flexible collaboration between release engineers and security teams. There are plenty of tools and platforms that can be used in the release cycles of your application, and we reviewed how to use the best-known open-source tool.&lt;/p&gt;

</description>
      <category>security</category>
      <category>testing</category>
      <category>selenium</category>
      <category>java</category>
    </item>
    <item>
      <title>Visual Automation with Applitools</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Fri, 24 Apr 2020 09:58:50 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/visual-automation-with-applitools-2b6n</link>
      <guid>https://forem.com/giannispapadakis/visual-automation-with-applitools-2b6n</guid>
      <description>&lt;p&gt;The purpose of this article is to review how we can integrate the Applitools platform to add AI/ML to our existing Web UI functional tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Applitools Introduction
&lt;/h2&gt;

&lt;p&gt;Applitools is "yet" another tool that you can use in conjunction with Selenium to handle the complicated visual types of testing you need to get done, and leave the more functionality-type testing to the tools that where designed to handle that, like Selenium.&lt;br&gt;&lt;br&gt;
Applitools provide an easy to use platform to review visual differences between screens that can handle different configurations we will examine later on (e.g view port, device types etc.). It provides integration with multiple test frameworks and programming languages in order for any team to be able to adjust to their needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl60ad8g9dmfupeb45qki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl60ad8g9dmfupeb45qki.png" alt="Platform"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Applitools Integration with Test Frameworks
&lt;/h2&gt;

&lt;p&gt;There are SDKs for most programming languages that allow you to take images and upload them for visual validation in Applitools. We are going to review how to start with the integration in a Java-based test project.&lt;/p&gt;
&lt;h3&gt;
  
  
  Dependencies and SDKs
&lt;/h3&gt;

&lt;p&gt;In your Maven dependency management section, add the following dependencies:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt; &lt;span class="c"&gt;&amp;lt;!-- START Applitools Eyes dependencies --&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;com.applitools&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;eyes-selenium-java3&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${eyes-java3.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;com.applitools&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;eyes-images-java3&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${eyes-java3.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
 &lt;span class="c"&gt;&amp;lt;!-- END Applitools Eyes dependencies --&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before any Selenium test starts executing, we need to initialize the Eyes and EyesRunner objects in order to take images within our tests.&lt;br&gt; &lt;/p&gt;

&lt;p&gt;An example of initializing the runner:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt; &lt;span class="nc"&gt;EyesRunner&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;visualGrid&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="no"&gt;LOG&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Applitools: Running on Visual Grid"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;VisualGridRunner&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concurrency&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="no"&gt;LOG&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Applitools: Running on Classic Runner"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ClassicRunner&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nc"&gt;Configuration&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Configuration&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addBrowser&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1920&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1080&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BrowserType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;CHROME&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addBrowser&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1920&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1080&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BrowserType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;FIREFOX&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addDeviceEmulation&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DeviceName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;iPhone_X&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ScreenOrientation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;PORTRAIT&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setApiKey&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;apiKey&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBatch&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setMatchLevel&lt;/span&gt;&lt;span class="o"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;matchLevel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="nc"&gt;MatchLevel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;valueOf&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matchLevel&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MatchLevel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;LAYOUT&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's review the arguments of the example above. First we need to set the type of runner; Applitools currently supports two flavors:&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Classic Runner&lt;/strong&gt;: Applitools processes the images taken in your locally executed browser/device
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Grid Runner&lt;/strong&gt;: Applitools gives you the ability to send images and process them against different configurations (browsers/devices), scaling your tests across platforms and covering more visual checkpoints
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next we have the &lt;strong&gt;API Key&lt;/strong&gt;, which is provided by an admin user of the platform. The &lt;strong&gt;Batch&lt;/strong&gt; sets the batch name under which this runner's tests are stored, and finally the &lt;strong&gt;Match Level&lt;/strong&gt; is the type of validation used to compare the checkpoints.&lt;/p&gt;

&lt;p&gt;Applitools Eyes can test the UI at 4 different comparison levels:&lt;br&gt; &lt;br&gt;
&lt;strong&gt;Exact (MatchLevel.EXACT)&lt;/strong&gt; - pixel-to-pixel comparison; not recommended.&lt;br&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strict (MatchLevel.STRICT)&lt;/strong&gt; - Strict compares everything including content (text), fonts, layout, colors and position of each of the elements. Strict knows to ignore rendering changes that are not visible to the human eye. Strict is the recommended match level when running regression tests on the same browser/OS (it is not designed for cross-browser comparison).&lt;br&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content (MatchLevel.CONTENT)&lt;/strong&gt; - Content works in a similar way to Strict except that it ignores colors.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layout (MatchLevel.LAYOUT)&lt;/strong&gt; - Layout, as its name implies, compares the layouts (i.e. structure) of the baseline and actual images. It validates the alignment and relative position of all elements on the page, such as: buttons, menus, text areas, paragraphs, images, and columns. It ignores the content, colour and other style changes between the pages.&lt;br&gt; &lt;/p&gt;
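&lt;p&gt;Besides the global setting in the Configuration, the match level can also be overridden per checkpoint through the fluent check API. A sketch (the check names and selector are illustrative; Target comes from the Selenium fluent package of the SDK):&lt;/p&gt;

```java
// The global config may say LAYOUT, but this particular checkpoint compares strictly.
eyes.check("Login page", Target.window().matchLevel(MatchLevel.STRICT));

// A single region can be checked at a different level than the rest of the page.
eyes.check("Header only", Target.region(By.cssSelector("header")).layout());
```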

&lt;p&gt;Now let's see how to initialize the Eyes instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;Eyes&lt;/span&gt; &lt;span class="n"&gt;eyes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Eyes&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setConfiguration&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setForceFullPageScreenshot&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setStitchMode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;StitchMode&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;CSS&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setWaitBeforeScreenshots&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setLogHandler&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FileLogger&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"target/eyes.log"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have initialized Eyes before our test, we can navigate to a page with Selenium, take a baseline image of the page, and send it over to Applitools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Start Visual Check From Applitools"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;matchTimeout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;&lt;span class="c1"&gt;// in milli seconds&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getIsOpen&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;open&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"VisualTest"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"TestName"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;checkWindow&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matchTimeout&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ScreenshotName"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the commands above we create a new test and send an image of the page to Applitools for visual comparison.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Results from Applitools
&lt;/h3&gt;

&lt;p&gt;First we need to close the Eyes instance after a test finishes execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;testFailed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getStatus&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nc"&gt;ITestResult&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;FAILURE&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;testFailed&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;closeAsync&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
     &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;eyes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;abortAsync&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
     &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This terminates the sequence of checkpoints, and then waits asynchronously for the test results.&lt;/p&gt;

&lt;p&gt;We can now retrieve the results by calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;TestResultsSummary&lt;/span&gt; &lt;span class="n"&gt;allTestResults&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAllTestResults&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TestResultContainer&lt;/span&gt; &lt;span class="n"&gt;result1&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;allTestResults&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;Throwable&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;result1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getException&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="nc"&gt;TestResults&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;result1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getTestResults&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="no"&gt;LOG&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"No test results information available\n"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;resultReport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"URL = %s, AppName = %s, testname = %s, Browser = %s,OS = %s, viewport = %dx%d, matched = %d,mismatched = %d, missing = %d,aborted = %s\n"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getUrl&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAppName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getHostApp&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getHostOS&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getHostDisplaySize&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;getWidth&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getHostDisplaySize&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;getHeight&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getMatches&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getMismatches&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getMissing&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isAborted&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="s"&gt;"aborted"&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"no"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
 &lt;span class="no"&gt;LOG&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Applitools Results: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;resultReport&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way we can collect the results and post them to our test reporting platforms.&lt;/p&gt;
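Before forwarding to a reporting platform, the per-test counters above can be reduced to a single verdict. A minimal sketch of that reduction (the `verdict` helper below is our own, not part of the Applitools SDK):

```java
// Hypothetical helper (not part of the Applitools SDK): reduce the
// mismatched/missing counters of a TestResults object to a single
// verdict string for a reporting platform.
public class VisualVerdict {
    public static String verdict(long mismatched, long missing, boolean aborted) {
        if (aborted) {
            return "aborted";
        }
        // A visual test passes only when nothing mismatched and nothing is missing.
        return (mismatched == 0 && missing == 0) ? "passed" : "failed";
    }
}
```

Inside the loop above, one would call something like `verdict(result.getMismatches(), result.getMissing(), result.isAborted())` and forward the resulting string to the reporting platform.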

&lt;h2&gt;
  
  
  Applitools CI/CD Integration
&lt;/h2&gt;

&lt;p&gt;To view test results directly in your Jenkins CI server, you first need to install the Applitools Jenkins plugin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fypwczu9us8x2ie2nwmc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fypwczu9us8x2ie2nwmc7.png" alt="Plugin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then add the following section to the Jenkins pipeline stage that runs the visual tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
     &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Visual Tests'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; 
         &lt;span class="n"&gt;Applitools&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
               &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'mvn clean test'&lt;/span&gt;
         &lt;span class="o"&gt;}&lt;/span&gt;
     &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, the results are available in an embedded iframe in Jenkins:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj2i7gghzlxs9oi5219xi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj2i7gghzlxs9oi5219xi.png" alt="jenkins"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>applitools</category>
      <category>visualtesting</category>
      <category>selenium</category>
      <category>java</category>
    </item>
    <item>
      <title>Integrate ReportPortal.io in Java Projects</title>
      <dc:creator>Giannis Papadakis</dc:creator>
      <pubDate>Thu, 16 Apr 2020 10:45:27 +0000</pubDate>
      <link>https://forem.com/giannispapadakis/integrate-reportportal-io-in-java-projects-4634</link>
      <guid>https://forem.com/giannispapadakis/integrate-reportportal-io-in-java-projects-4634</guid>
<description>&lt;p&gt;In this post we will describe how to integrate the &lt;a href="https://reportportal.io/" rel="noopener noreferrer"&gt;ReportPortal&lt;/a&gt; platform into an existing Java test project in order to improve the reporting of unit/integration tests. ReportPortal is an AI-powered test automation dashboard that visualizes your test suites across multiple components and products and provides a real-time reporting experience.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setup ReportPortal.io
&lt;/h1&gt;

&lt;p&gt;In order to set up ReportPortal we need &lt;a href="https://docs.docker.com/compose/install/" rel="noopener noreferrer"&gt;docker-compose&lt;/a&gt; preinstalled locally.&lt;br&gt;
Download docker-compose.yml from the ReportPortal repository by executing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/reportportal/reportportal/master/docker-compose.yml -o docker-compose.yml 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then start the platform by executing the following docker-compose command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose -p reportportal up -d --force-recreate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you navigate to localhost:8080, you will see the login screen, where you can sign in with the default credentials &lt;strong&gt;superadmin/erebus&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Integrate with Test Frameworks
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fveaqhao9p6xwtdqlfiau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fveaqhao9p6xwtdqlfiau.png" alt="Integration Point"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have the platform up and running, let's review an example test project and how to integrate it with RP to forward test results. For this purpose I will be using TestNG, but there are multiple adapters that you can review in the official RP documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Dependencies
&lt;/h2&gt;

&lt;p&gt;To start, we need to add the ReportPortal dependencies to our test project. Locate your Maven pom.xml and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;

 &lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;com.epam.reportportal&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;logger-java-logback&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;3.0.0&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;com.epam.reportportal&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;agent-java-testng&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;3.0.0&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to add the repository that hosts these external dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;repositories&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;repository&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;snapshots&amp;gt;&lt;/span&gt;
          &lt;span class="nt"&gt;&amp;lt;enabled&amp;gt;&lt;/span&gt;false&lt;span class="nt"&gt;&amp;lt;/enabled&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/snapshots&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;id&amp;gt;&lt;/span&gt;bintray-epam-reportportal&lt;span class="nt"&gt;&amp;lt;/id&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;name&amp;gt;&lt;/span&gt;bintray&lt;span class="nt"&gt;&amp;lt;/name&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;url&amp;gt;&lt;/span&gt;http://dl.bintray.com/epam/reportportal&lt;span class="nt"&gt;&amp;lt;/url&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;/repository&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/repositories&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To send results to your ReportPortal instance, first create a project and store the following parameters in a file called reportportal.properties, placed on your classpath under src/test/resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rp.endpoint = http://localhost:8080
rp.uuid = 2c54960d-cb95-4842-9c9b-5e72e43d1979
rp.launch = Test_Launch
rp.project = superadmin_personal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm0qvdkzu0ut46pxb25d7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm0qvdkzu0ut46pxb25d7.gif" alt="RP Properties"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Posting Data to ReportPortal
&lt;/h2&gt;

&lt;p&gt;Now that we have added the needed dependencies, we need to enable the adapter and review how to send custom data to RP, such as logs and attachments.&lt;/p&gt;

&lt;p&gt;There are two ways to add the adapter in TestNG:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Annotation Listeners:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Listeners&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;&lt;span class="nc"&gt;ReportPortalTestNGListener&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;})&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FailedParameterizedTest&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;…&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;TestNG XML:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;suite&amp;gt;&lt;/span&gt;

  &lt;span class="nt"&gt;&amp;lt;listeners&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;listener&lt;/span&gt; &lt;span class="na"&gt;class-name=&lt;/span&gt;&lt;span class="s"&gt;"com.example.MyListener"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;listener&lt;/span&gt; &lt;span class="na"&gt;class-name=&lt;/span&gt;&lt;span class="s"&gt;"com.epam.reportportal.testng.ReportPortalTestNGListener"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/listeners&amp;gt;&lt;/span&gt;
.....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the listener enabled, launch details are reported to RP every time our tests run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Logging Framework Support
&lt;/h3&gt;

&lt;p&gt;In our case we use logback-classic and slf4j as the logging framework, so to send our test logs to RP, locate logback.xml and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;   &lt;span class="nt"&gt;&amp;lt;appender&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"ReportPortalAppender"&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"com.epam.reportportal.logback.appender.ReportPortalAppender"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;encoder&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;pattern&amp;gt;&lt;/span&gt;%d{HH:mm:ss.SSS} [%t] %-5level - %msg%n&lt;span class="nt"&gt;&amp;lt;/pattern&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/encoder&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/appender&amp;gt;&lt;/span&gt;

    &lt;span class="nt"&gt;&amp;lt;root&lt;/span&gt; &lt;span class="na"&gt;level=&lt;/span&gt;&lt;span class="s"&gt;"info"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;appender-ref&lt;/span&gt; &lt;span class="na"&gt;ref=&lt;/span&gt;&lt;span class="s"&gt;"STDOUT"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;appender-ref&lt;/span&gt; &lt;span class="na"&gt;ref=&lt;/span&gt;&lt;span class="s"&gt;"FILE"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;appender-ref&lt;/span&gt; &lt;span class="na"&gt;ref=&lt;/span&gt;&lt;span class="s"&gt;"ReportPortalAppender"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/root&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's see an example of attaching a screenshot for a failed web test.&lt;br&gt;
To send a file as an attachment, we use our logger with the following arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"RP_MESSAGE#FILE#{}#{}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;TakesScreenshot&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getScreenshotAs&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OutputType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;FILE&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getAbsoluteFile&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="s"&gt;"Screenshot on Failure"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first argument is the file to be attached, and the second is the message we want shown in the report.&lt;/p&gt;
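Under the hood this is an ordinary slf4j parameterized message: slf4j substitutes each {} placeholder in order, and the ReportPortal appender then treats the resulting RP_MESSAGE#FILE#path#text line as a file attachment. A minimal illustration of that substitution (a hand-rolled stand-in for slf4j's formatter, so it runs without slf4j on the classpath):

```java
// Hand-rolled stand-in for slf4j's "{}" substitution, to show what the
// RP appender actually receives from log.info("RP_MESSAGE#FILE#{}#{}", ...).
public class RpMessage {
    public static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            // Replace each "{}" with the next argument, left to right.
            if (i + 1 < pattern.length() && pattern.charAt(i) == '{'
                    && pattern.charAt(i + 1) == '}' && argIndex < args.length) {
                out.append(args[argIndex++]);
                i += 2;
            } else {
                out.append(pattern.charAt(i));
                i++;
            }
        }
        return out.toString();
    }
}
```

So a call with the screenshot path and "Screenshot on Failure" produces a single log line of the form RP_MESSAGE#FILE#/path/to/shot.png#Screenshot on Failure, which the appender uploads as an attachment.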

&lt;p&gt;See an example attached here:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftx23qfk248ldgp7n5oev.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftx23qfk248ldgp7n5oev.gif" alt="Attachments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ReportPortal is one example of how AI/ML is transforming software testing, and it can be easily integrated into your existing test infrastructure.&lt;br&gt;
Use it if you want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manage all your automation results and reports in one place&lt;/li&gt;
&lt;li&gt;Make automation results analysis actionable &amp;amp; collaborative&lt;/li&gt;
&lt;li&gt;Establish fast traceability with defect management&lt;/li&gt;
&lt;li&gt;Accelerate routine results analysis&lt;/li&gt;
&lt;li&gt;Visualize metrics and analytics&lt;/li&gt;
&lt;li&gt;Make smarter decisions together&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try it out!!!&lt;/p&gt;

</description>
      <category>java</category>
      <category>testing</category>
      <category>selenium</category>
      <category>testng</category>
    </item>
  </channel>
</rss>
