<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nithya Sen</title>
    <description>The latest articles on Forem by Nithya Sen (@nithya_sen_806bd7b3dc741b).</description>
    <link>https://forem.com/nithya_sen_806bd7b3dc741b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2441077%2Fbe4148ed-12c6-425a-85a5-8c4946109658.png</url>
      <title>Forem: Nithya Sen</title>
      <link>https://forem.com/nithya_sen_806bd7b3dc741b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nithya_sen_806bd7b3dc741b"/>
    <language>en</language>
    <item>
      <title>Selenium Overview</title>
      <dc:creator>Nithya Sen</dc:creator>
      <pubDate>Wed, 01 Apr 2026 04:01:10 +0000</pubDate>
      <link>https://forem.com/nithya_sen_806bd7b3dc741b/selenium-overview-3oke</link>
      <guid>https://forem.com/nithya_sen_806bd7b3dc741b/selenium-overview-3oke</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Selenium?&lt;/strong&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Selenium is an automation tool widely used for browser automation. For example, suppose we work for Amazon and our task is to make sure a user can place an order successfully. Say this has to be verified every day to confirm that nothing breaks due to the daily code deployments. It's tedious and time-consuming to verify this flow manually every day. This is where Selenium comes in: with it we can place an order on Amazon and verify that the order goes through, without any manual intervention. Once the test run finishes, we get a report showing whether all the coded tests passed. This way we avoid manual testing and redundant tasks.
&lt;/h6&gt;
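&lt;p&gt;The daily order check described above can be sketched with Selenium's Python bindings. This is a minimal illustration, not a real Amazon test: the URL and element locators are hypothetical placeholders, and it assumes Selenium 4.6+ and a matching Chrome browser are installed.&lt;/p&gt;

```python
# Hypothetical smoke test: the URL and locators below are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium Manager fetches the driver (4.6+)
try:
    driver.get("https://example-shop.test/product/123")
    driver.find_element(By.ID, "add-to-cart").click()
    driver.find_element(By.ID, "place-order").click()
    # The run fails loudly if the confirmation never appears.
    message = driver.find_element(By.CSS_SELECTOR, ".order-confirmation").text
    assert "Order placed" in message, "Order flow is broken"
    print("Order smoke test passed")
finally:
    driver.quit()
```

&lt;p&gt;Run daily (for example from a CI scheduler), this replaces the manual check and the assertion failure becomes the "breakage" report.&lt;/p&gt;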




&lt;h2&gt;
  
  
  &lt;strong&gt;Selenium Titbits&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Development:&lt;/strong&gt; Selenium was developed in 2004 by a team at ThoughtWorks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Original Name:&lt;/strong&gt; It was initially called &lt;strong&gt;JavascriptTestRunner&lt;/strong&gt; and was used to automate ThoughtWorks' internal application ‘Time and Expenses’.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How Selenium got its name:&lt;/strong&gt; In 2004, the dominant commercial testing tool was made by a company called Mercury Interactive (later acquired by HP). While brainstorming a name for his new project, creator Jason Huggins joked in an email that "mercury poisoning can be cured by taking selenium supplements." The team loved it and the name stuck.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It is a W3C Standard:&lt;/strong&gt; Selenium WebDriver became a World Wide Web Consortium (W3C) Recommendation in 2018. This is a massive deal because it means that browser vendors (like Google, Apple, and Microsoft) are now responsible for developing the drivers that allow Selenium to communicate with their browsers. This makes the automation much more stable and "native" than it used to be.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Remote Control" Hack:&lt;/strong&gt; In the early days, browsers had a security feature called the Same-Origin Policy, which blocked JavaScript from interacting with a site if the script didn't come from that exact same domain. To "trick" the browser, Paul Hammant created &lt;strong&gt;Selenium RC (Remote Control)&lt;/strong&gt;. It acted as a HTTP proxy, making the browser believe that the Selenium Core (JavaScript) and the website being tested were from the same source. It was a clever, slightly messy workaround that paved the way for modern automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google’s Massive Role:&lt;/strong&gt; While it started at ThoughtWorks, Selenium’s "teenage years" were spent at Google. Jason Huggins joined Google in 2007, and for a long time, Google was the primary driver of the project. They needed a way to test their massive applications (like Gmail and Maps) across every possible browser, so they poured resources into making Selenium scalable.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why do we use Selenium for Automation?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Selenium automates the browser, which lets us replace repetitive manual test execution for any kind of web application.&lt;/li&gt;
&lt;li&gt;Selenium is open-source software with huge developer and community support.&lt;/li&gt;
&lt;li&gt;It supports a wide range of programming languages, such as &lt;strong&gt;Java, JavaScript, C#, Python, and Ruby&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;A single Selenium script can be executed across different environments. This "write once, run anywhere" capability is vital for Cross-Browser Testing. 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Browsers:&lt;/strong&gt; Chrome, Firefox, Safari, Edge, and Opera. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operating Systems:&lt;/strong&gt; Windows, macOS, and Linux.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Support for Parallel Execution:&lt;/strong&gt; With Selenium Grid, we can run multiple tests simultaneously across different browsers and operating systems.&lt;/li&gt;

&lt;li&gt;It integrates seamlessly with:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build Tools:&lt;/strong&gt; Maven, Gradle, Ant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Tools:&lt;/strong&gt; Jenkins, Azure DevOps, GitLab.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing Frameworks:&lt;/strong&gt; JUnit, TestNG, PyTest.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What is the relevance of Selenium in Automation Testing using Python?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The pairing of Selenium and Python is one of the most popular combinations in the world of Quality Assurance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Readability:&lt;/strong&gt; Python allows testers to write scripts that are easy for everyone on the team to understand, as the syntax is close to plain English.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Powerful Library Ecosystem:&lt;/strong&gt; Python has a large library of "packages" that integrate perfectly with Selenium:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PyTest:&lt;/strong&gt; The industry-standard framework for organizing and running tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pandas:&lt;/strong&gt; For Data-Driven Testing (reading thousands of test cases from CSV/Excel).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Behave:&lt;/strong&gt; For Behavior-Driven Development (BDD), allowing you to write tests in "Given/When/Then" format.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Since Python is the leading language for AI and Machine Learning, using Selenium with Python makes it much easier to integrate AI models that can "heal" broken tests or analyze UI screenshots for visual bugs.&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>beginners</category>
      <category>testing</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Common Manual Testing Techniques</title>
      <dc:creator>Nithya Sen</dc:creator>
      <pubDate>Sun, 01 Feb 2026 09:59:03 +0000</pubDate>
      <link>https://forem.com/nithya_sen_806bd7b3dc741b/common-manual-testing-techniques-1l8l</link>
      <guid>https://forem.com/nithya_sen_806bd7b3dc741b/common-manual-testing-techniques-1l8l</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Manual Testing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Manual Testing is the process in which software is tested by hand, without automation tools, to make sure it functions as intended. &lt;/p&gt;

&lt;h2&gt;
  
  
  Below are some common Manual Testing Techniques:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Exploratory Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What is it?: Exploratory testing is done by exploring the app the way an end user would. &lt;/p&gt;

&lt;p&gt;When do we use?: We use Exploratory Testing in the following cases: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When time is short and there is not enough of it to write formal test cases, we use exploratory testing. &lt;/li&gt;
&lt;li&gt;When we want to test the app the way an end user would, without reading the Functional Requirement Document first, we use exploratory testing. This way we explore the app and cover the scenarios an end user would encounter. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How do we do?: We explore the application and note down any discrepancies or errors we see. We also note down all the scenarios we tested and compile Exploratory Testing notes that we can deliver to the client, so everyone knows exactly what has been tested and what sort of bugs have been discovered. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Experience Based Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What is it?: Like the name suggests, Experience Based Testing is done based on one's own experience of the app or the domain. &lt;/p&gt;

&lt;p&gt;When do we use?: We use Experience Based Testing in the following scenarios&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If we don't have a Requirement document, we follow experience based testing to uncover bugs and ensure the app works well&lt;/li&gt;
&lt;li&gt;If we don't have enough time and need to uncover all critical bugs&lt;/li&gt;
&lt;li&gt;In early-stage startups where the requirements are not stable, we use experience based testing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How do we do?: 'Error Guessing' is the most common technique in Experience Based Testing. An experienced tester knows where bugs are likely to occur and where the app might break, so it's easy to uncover them using past experience of the app, skill, and intuition. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blackbox Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What is it?: Blackbox Testing is about testing the functionality of the app without knowing how the application is built underneath. &lt;/p&gt;

&lt;p&gt;When do we use?: Once the application is built, we do blackbox testing by executing test cases and uncovering any bugs. &lt;/p&gt;

&lt;p&gt;How do we do?: First, we prepare test cases based on the Requirement Document. We also prepare a Requirement Traceability Matrix to make sure all the requirements are covered by the test cases. Once the test cases are ready, we execute them and make sure they all pass. In case of failure, we log a bug and report it to the development team and all stakeholders. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test case Design Technique:&lt;/strong&gt;&lt;br&gt;
In Blackbox Testing, we have the following major Test Case Design Technique.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Equivalence Partitioning:&lt;/strong&gt; In Equivalence Partitioning, we divide the input data into partitions that are expected to behave the same way, then choose one value from each partition to test. This is useful when there is a large range of possible inputs. For example, to test an age field we can partition it into 3 categories: negative values, 0 to 18, and above 18. This way we don't have to cover every possible value, which would be impossible and unnecessary. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Equivalence Partitioning:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We avoid redundancy by choosing only one value from each group&lt;/li&gt;
&lt;li&gt;We make testing efficient by testing only what is needed to confirm the result&lt;/li&gt;
&lt;li&gt;It also provides better coverage by testing both valid and invalid data&lt;/li&gt;
&lt;/ol&gt;
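&lt;p&gt;The age example above can be expressed in plain Python. This is an illustrative sketch; treating 0 to 18 as one partition and 19 and above as another (integer ages) is an assumption taken from the example:&lt;/p&gt;

```python
def age_partition(age):
    """Classify an integer age into one of the three partitions above."""
    if age >= 19:
        return "above 18"
    if age >= 0:
        return "0 to 18"
    return "negative"

# One representative value per partition is enough to cover its whole class.
representatives = {-5: "negative", 10: "0 to 18", 30: "above 18"}
for value, expected in representatives.items():
    assert age_partition(value) == expected
print("all partitions covered")
```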

&lt;p&gt;&lt;strong&gt;Boundary Value Analysis:&lt;/strong&gt;&lt;br&gt;
In this technique, we check for errors at the boundaries. For example, if only users aged 13 or older are allowed to create an account, then testers can derive test cases with the values 12, 13, and 14. Checking the boundaries of a valid input range ensures thorough testing, because errors are most likely to occur at the edges of the range. &lt;/p&gt;
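&lt;p&gt;Sketched in Python, assuming (as the 12/13/14 example suggests) that 13 is the minimum allowed age, so 13 itself is valid:&lt;/p&gt;

```python
def can_create_account(age):
    # Assumed policy: users aged 13 or older may sign up.
    return age >= 13

# Boundary value analysis: probe just below, on, and just above the boundary.
boundary_cases = {12: False, 13: True, 14: True}
for age, expected in boundary_cases.items():
    assert can_create_account(age) == expected
print("boundary checks passed")
```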

&lt;p&gt;&lt;strong&gt;Decision Table Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What is it? We use a Decision Table for complex systems that have different outcomes for different combinations of inputs.&lt;br&gt;
For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Conditions         Rule 1   Rule 2  Rule 3  Rule 4
Is New User?             Y    Y   N   N
Has Coupon?          Y    N   Y   N
Actions             
Apply 20% Discount   ✔            
Apply 10% Discount            ✔   
No Discount           ✔             ✔
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How to Read this?&lt;br&gt;
Rule 1: If the user is new and has a coupon, apply a 20% discount&lt;br&gt;
Rule 2: If the user is new but has no coupon, apply no discount&lt;br&gt;
Rule 3: If the user is not new but has a coupon, apply a 10% discount&lt;br&gt;
Rule 4: If the user is not new and has no coupon, apply no discount &lt;/p&gt;

&lt;p&gt;With a decision table, it's easy to read and test complex rules and scenarios. &lt;/p&gt;
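&lt;p&gt;The four rules translate directly into a lookup table in code. A minimal Python sketch (the function name and discount strings are illustrative):&lt;/p&gt;

```python
def discount(is_new_user, has_coupon):
    # Each column (rule) of the decision table becomes one entry.
    table = {
        (True, True): "20% discount",    # Rule 1
        (True, False): "no discount",    # Rule 2
        (False, True): "10% discount",   # Rule 3
        (False, False): "no discount",   # Rule 4
    }
    return table[(is_new_user, has_coupon)]

# One test case per rule gives full decision coverage.
assert discount(True, True) == "20% discount"
assert discount(True, False) == "no discount"
assert discount(False, True) == "10% discount"
assert discount(False, False) == "no discount"
```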

&lt;p&gt;&lt;strong&gt;State Transition Testing:&lt;/strong&gt;&lt;br&gt;
We use this technique when the application moves through several different states depending on previous actions and states. &lt;/p&gt;

&lt;p&gt;For example: during ATM PIN validation, the app displays choices based on its current state. If the current state is the welcome screen and the user enters the right PIN, the application moves to the next state, which shows the options the user can carry out. If the user enters a wrong PIN, it shows a first-attempt-failed screen. Below is the transition for each stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Current state            Event                    Next state           

Welcome screen         Enters right pin          Dashboard is shown

Welcome screen         Enters incorrect pin      First Attempt Failed

First Attempt Failed   Enters incorrect pin      Second AttemptFailed

Second Attempt Failed  Enters incorrect pin      Account Locked

Second Attempt Failed  Enters correct pin        Dashboard is shown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the state transitions clearly described, we can catch issues and make sure the app behaves correctly in every scenario. &lt;/p&gt;
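&lt;p&gt;The ATM table maps naturally to a transition dictionary keyed by (current state, event). A minimal Python sketch; note that the source table has no row for a correct PIN after the first failed attempt, which is exactly the kind of gap this technique helps a tester spot:&lt;/p&gt;

```python
# (current state, event) -> next state, taken from the table above.
TRANSITIONS = {
    ("Welcome screen", "right pin"): "Dashboard is shown",
    ("Welcome screen", "incorrect pin"): "First Attempt Failed",
    ("First Attempt Failed", "incorrect pin"): "Second Attempt Failed",
    ("Second Attempt Failed", "incorrect pin"): "Account Locked",
    ("Second Attempt Failed", "right pin"): "Dashboard is shown",
}

def next_state(current, event):
    """Return the next state; raises KeyError for an unmodelled transition."""
    return TRANSITIONS[(current, event)]

# Walk one path: three wrong PINs in a row must lock the account.
state = "Welcome screen"
for _ in range(3):
    state = next_state(state, "incorrect pin")
assert state == "Account Locked"
```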

&lt;p&gt;&lt;strong&gt;Use-case Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Use-case Testing, we start by writing use-cases from the requirement document. We then perform end-to-end testing, testing from the beginning to the end of the flow and uncovering any integration bugs or design defects. &lt;/p&gt;

&lt;p&gt;Testers start by listing all the possible use-cases that an end-user might perform. &lt;/p&gt;

&lt;p&gt;A benefit of use-case testing is that it reduces the complexity of the system under test, as the paths to test are laid out in the use-case document. Testing from the user's perspective is another benefit, as it helps discover bugs that lie on the typical path. &lt;/p&gt;

&lt;p&gt;Some drawbacks: because use-cases are user-focused, some edge cases can be missed, and 100% of the scenarios cannot be tested. They also cover only the functional part of the requirement; aspects like performance and security are not addressed. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monkey Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the name suggests, in monkey testing the tester deliberately clicks random buttons, visits random pages, and performs random actions, much as a monkey would, with the aim of assessing the application's stability and error-handling capacity. &lt;/p&gt;

&lt;p&gt;This helps discover nasty workflows that would break the system. It's not possible to find these kinds of bugs in normal functional testing, where the focus is on ensuring that each function works.&lt;/p&gt;

&lt;p&gt;Monkey testing is necessary to make sure the app holds up even under unpredictable steps a user might carry out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future of Manual Testing in the Age of AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the age of AI, manual testing has become simpler, because we can use AI to write test cases in minutes. We can also take AI's help in thinking through all the possible scenarios, including every sort of edge case. A tester's efficiency increases with the help of AI tools.&lt;/p&gt;

&lt;p&gt;Following are some of the common AI tools we can use to write test cases in seconds: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT&lt;/li&gt;
&lt;li&gt;Google Gemini&lt;/li&gt;
&lt;li&gt;Claude&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Though AI can analyse requirements, come up with test cases, implement them, and run them, there are certain limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They can't perform Exploratory Testing the way an end user would &lt;/li&gt;
&lt;li&gt;They can't do usability testing the way a human would&lt;/li&gt;
&lt;li&gt;They can't perform testing that requires human intuition&lt;/li&gt;
&lt;li&gt;Though they generate test cases, some manual editing may still be required, as our instructions are not always followed no matter how good the prompts are. These tools are still evolving.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verdict: use AI, but always double-check the actions it performs and the artifacts it creates, as AI is prone to errors. &lt;/p&gt;

</description>
      <category>beginners</category>
      <category>learning</category>
      <category>testing</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
