<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Oleksandra</title>
    <description>The latest articles on Forem by Oleksandra (@oleksandrakordonets).</description>
    <link>https://forem.com/oleksandrakordonets</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3482391%2Fe5e8d0ed-b15e-4c6c-8f76-25d12df2c19d.png</url>
      <title>Forem: Oleksandra</title>
      <link>https://forem.com/oleksandrakordonets</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/oleksandrakordonets"/>
    <language>en</language>
    <item>
      <title>Making TypeScript Tools Safer and Smarter</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Sat, 13 Dec 2025 01:38:25 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/making-typescript-tools-safer-and-smarter-4pgk</link>
      <guid>https://forem.com/oleksandrakordonets/making-typescript-tools-safer-and-smarter-4pgk</guid>
      <description>&lt;p&gt;For the past few weeks, my main goal in my open-source contributions was to work on tools that real TypeScript developers use every day, and to challenge myself with issues that were not just styling fixes. I picked two projects. One was &lt;a href="https://github.com/typescript-eslint/typescript-eslint" rel="noopener noreferrer"&gt;typescript-eslint&lt;/a&gt;, which is a big set of ESLint rules that help keep TypeScript code safe and consistent. In that project I focused on the &lt;code&gt;no-unsafe-*&lt;/code&gt; rules that try to stop the any type from quietly spreading through a codebase. My second project was &lt;a href="https://github.com/publicodes/language-server" rel="noopener noreferrer"&gt;publicodes/language-server&lt;/a&gt;, which is a VS Code extension that gives language server features like diagnostics, completion, hover and go to definition for the Publicodes language. Together these two contributions let me work on both static analysis rules and an actual language server.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;typescript-eslint&lt;/code&gt; I worked on a &lt;a href="https://github.com/typescript-eslint/typescript-eslint/issues/11801" rel="noopener noreferrer"&gt;feature request&lt;/a&gt; that asked the &lt;code&gt;no-unsafe-*&lt;/code&gt; rules to catch unsafe &lt;code&gt;any&lt;/code&gt; values inside object types, not only at the top level. Rules like &lt;code&gt;no-unsafe-argument&lt;/code&gt; and &lt;code&gt;no-unsafe-assignment&lt;/code&gt; already blocked the obvious cases, where a parameter or variable is typed directly as &lt;code&gt;any&lt;/code&gt; or a whole generic type uses &lt;code&gt;any&lt;/code&gt;. The problem described in the issue was that an object with a nested &lt;code&gt;any&lt;/code&gt; property could pass into a stricter object type without any warning. I changed the internal helper these rules share so that it now walks into objects, arrays, and other containers to see whether an unsafe &lt;code&gt;any&lt;/code&gt; is hiding inside. I also added tests to prove that the exact example from the issue is now reported and that a safe version still passes. Finally, I cleaned up a few parts of the repository that started failing under the stricter checks. The full change is in &lt;a href="https://github.com/typescript-eslint/typescript-eslint/pull/11834" rel="noopener noreferrer"&gt;my PR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;publicodes/language-server&lt;/code&gt; I worked on support for multiple models in one workspace. The language server integrates the Publicodes language into editors like VS Code and offers features such as semantic highlighting, diagnostics, and go-to-definition for &lt;code&gt;.publicodes&lt;/code&gt; files. &lt;a href="https://github.com/publicodes/language-server/issues/24" rel="noopener noreferrer"&gt;The issue&lt;/a&gt; was that the server treated all &lt;code&gt;.publicodes&lt;/code&gt; files in a workspace as if they belonged to one giant model. That was a problem in monorepos and multi-folder workspaces, where several separate models can live side by side and should not see each other. &lt;a href="https://github.com/publicodes/language-server/pull/32" rel="noopener noreferrer"&gt;My fix&lt;/a&gt; introduces the idea of a model boundary, for example the nearest &lt;code&gt;package.json&lt;/code&gt; or &lt;code&gt;.publicodes.config&lt;/code&gt; file, and groups files by that boundary. Each group gets its own internal data structures and engine instance, and all diagnostics, hovers, completions, and definitions for a file now go through the model that owns that file instead of a global one.&lt;/p&gt;
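&lt;p&gt;The grouping step can be sketched in a few lines. This is a hypothetical illustration, not the actual code from the PR: &lt;code&gt;findBoundary&lt;/code&gt;, &lt;code&gt;groupByBoundary&lt;/code&gt;, and the &lt;code&gt;markerDirs&lt;/code&gt; input are invented names, and a real implementation would look for marker files on disk instead of taking them as a parameter.&lt;/p&gt;

```typescript
import * as path from "path";

// Walk up from a file until we hit a directory that contains a marker
// file (package.json or .publicodes.config in the post's example).
function findBoundary(file: string, markerDirs: string[]): string {
  let dir = path.dirname(file);
  while (true) {
    if (markerDirs.includes(dir)) return dir;
    const parent = path.dirname(dir);
    if (parent === dir) return dir; // reached the filesystem root
    dir = parent;
  }
}

// Group every .publicodes file under the boundary that owns it; each
// group would then get its own engine instance and diagnostics.
function groupByBoundary(files: string[], markerDirs: string[]) {
  const groups: { [boundary: string]: string[] } = {};
  for (const file of files) {
    const boundary = findBoundary(file, markerDirs);
    (groups[boundary] ??= []).push(file);
  }
  return groups;
}
```

&lt;p&gt;With that grouping in place, a lookup for any file first resolves its boundary and then consults only the model that owns it.&lt;/p&gt;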

&lt;p&gt;I did not build any of this in one sitting. A lot of the work was slowly reading through code I did not write and trying to understand how things fit together. In &lt;code&gt;typescript-eslint&lt;/code&gt;, even though I had worked on the project before, I spent a long time inside the type utilities before writing any code. I ran the test suites many times and kept adjusting my changes when I saw new failures. In the language server I had to trace how files are discovered, how rules are stored, and how diagnostics are produced, and then carefully thread the new model information through those paths. This forced me to break the work into smaller steps, for example first tracking models, then updating parsing, then updating validation, instead of trying to rewrite everything at once.&lt;/p&gt;

&lt;p&gt;Looking back at my original goal, I think I did achieve what I wanted. I stayed in the TypeScript tooling space, I picked tasks that were slightly above my comfort level, and I stuck with them even when the logic became hard to follow. I also saw the downside of picking complicated issues, which is that it is easy to overthink and get stuck, but overall I feel more confident now about working in serious open source projects. For future work I would like to improve how I plan these contributions so I leave more time for reviews and polishing, but I am happy that I pushed myself and produced contributions that feel real and useful.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>tooling</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Catching Nested any in typescript-eslint</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Wed, 10 Dec 2025 04:12:45 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/catching-nested-any-in-typescript-eslint-3gc0</link>
      <guid>https://forem.com/oleksandrakordonets/catching-nested-any-in-typescript-eslint-3gc0</guid>
      <description>&lt;p&gt;For my open source contribution I decided to keep working in the &lt;a href="https://github.com/typescript-eslint" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt; linting space and picked a &lt;a href="https://github.com/typescript-eslint/typescript-eslint/issues/11801" rel="noopener noreferrer"&gt;feature request&lt;/a&gt; that asks the &lt;code&gt;no-unsafe-*&lt;/code&gt; family of rules to catch &lt;code&gt;any&lt;/code&gt; nested inside object arguments, not just at the top level. I read the rule docs for no-unsafe-argument and no-unsafe-assignment to understand how they work today, and why nested any can slip through, for example when you pass &lt;code&gt;{ foo: any }&lt;/code&gt; to a function that expects &lt;code&gt;{ foo: number }&lt;/code&gt;. These rules exist to stop unsafe any from leaking into calls and assignments, so closing this gap matters for real apps, not just theory. I also reviewed the project’s guidance on typed linting performance to stay mindful of cost.&lt;/p&gt;

&lt;p&gt;My main change teaches the checker to look inside structures, not only at the top, so it now walks through objects, arrays, tuples, unions, and similar shapes. If it finds a nested &lt;code&gt;any&lt;/code&gt;, it reports it, and the rule avoids printing two errors for the same line, so the message stays clear. I added tests to prove that the exact repro from the issue is now reported and that a safe version still passes, and I fixed a few small spots in the repo that started failing once the checker became stricter, for example places that were spreading values typed as &lt;code&gt;any&lt;/code&gt;. This was tricky: I spent a lot of time figuring out how to keep the common case fast, and I added short-circuit checks and a guard to skip built-in library types so we do not flood users with noise. The change is in &lt;a href="https://github.com/typescript-eslint/typescript-eslint/pull/11834" rel="noopener noreferrer"&gt;my PR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This turned out to be a very challenging issue: I spent many hours untangling type recursion logic and dealing with edge cases, and it taught me a lot about how typed linting works under the hood. I also learned to measure first, to keep early exits wherever the source type clearly has no &lt;code&gt;any&lt;/code&gt;, and to keep error messages focused on the exact unsafe spot, not the whole object. Reading the typed linting overview and performance notes helped me plan these guardrails.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>tooling</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Planning My Next Open-Source Contributions</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Wed, 10 Dec 2025 03:27:36 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/planning-my-next-open-source-contributions-17pn</link>
      <guid>https://forem.com/oleksandrakordonets/planning-my-next-open-source-contributions-17pn</guid>
      <description>&lt;p&gt;Over the last little while, I’ve been working more actively with open-source projects, especially ones related to the TypeScript ecosystem. In my previous contributions, I focused on tools that are used behind the scenes by editors, like language servers (&lt;a href="https://github.com/typescript-language-server/typescript-language-server" rel="noopener noreferrer"&gt;typescript-language-server&lt;/a&gt;), which help with things such as error reporting, code navigation, and auto completion. That experience pushed me outside my comfort zone, but it also helped me understand how large developer tools are structured and maintained.&lt;/p&gt;

&lt;p&gt;I want to continue working in the TypeScript ecosystem because of the kind of problems these projects deal with. My previous contributions focused on tooling that sits between the editor and the TypeScript compiler, such as language servers and related infrastructure. That work exposed me to real-world challenges like performance, scalability in large codebases, and how design decisions affect many different editors and users. Because of that, I’m planning to keep contributing to similar projects where the focus is on improving developer experience at a system level, not just adding isolated features.&lt;/p&gt;

&lt;p&gt;For the next few weeks, I want to build on what I’ve already explored rather than switch to a completely new stack. I plan to look for issues that are related to TypeScript tooling, editor integration, or infrastructure improvements, and approach them with a better understanding of how these projects evolve over time. Instead of aiming for the biggest possible feature, I want to focus on contributions that are well-scoped, maintenance-friendly, and aligned with how maintainers already structure their code, so that each contribution moves me forward technically while fitting naturally into the project.&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>tooling</category>
      <category>opensource</category>
      <category>typescript</category>
    </item>
    <item>
      <title>My First Software Release: Repo-Context-Packager</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Sat, 22 Nov 2025 05:09:03 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/my-first-software-release-repo-context-packager-dn0</link>
      <guid>https://forem.com/oleksandrakordonets/my-first-software-release-repo-context-packager-dn0</guid>
      <description>&lt;p&gt;Releasing software sounds simple in theory, but I quickly learned how many small decisions are involved in making an app usable by real people. For my &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repo-Context-Packager&lt;/a&gt; project, I needed to choose a release method that would allow non-developers to install and run the tool with as little setup as possible. Since this is a C++ command-line program with no external dependencies, I initially experimented with Conan, a package manager for C++ projects. However, I ran into installation and PATH issues on Windows, and I realized that using a package registry was adding unnecessary complexity. Instead, I chose a more direct and user-friendly approach: distributing a prebuilt executable through GitHub Releases. This gave me a simple and reliable way to make the tool available to anyone with a Windows machine.&lt;/p&gt;

&lt;p&gt;To support distribution, I first needed to ensure the project could be built in a reproducible way. I created a PowerShell script that compiles all the C++ source files into a single executable using g++ with fixed build options. I also updated the program's version output and introduced semantic versioning by tagging the release commit as &lt;code&gt;v1.0.0&lt;/code&gt;. Once everything was stable, I generated the executable and a &lt;code&gt;.zip&lt;/code&gt; file that included the binary along with documentation. With these artifacts ready, I published &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager/releases/tag/v1.0.0" rel="noopener noreferrer"&gt;my GitHub Release&lt;/a&gt; and attached the files so users can download and run the tool without installing a compiler.&lt;/p&gt;

&lt;p&gt;A big part of this lab was understanding how releases represent frozen snapshots of the code at a specific point in time. When I updated my README after publishing the release, the changes weren’t reflected in the release assets, and GitHub showed a message saying that the main branch was a few commits ahead of the release tag. That made it clear that any improvement must go into a new patch release rather than into modifying the old one. I also realized that GitHub automatically provides source archives for every release, which initially confused me because I thought I had uploaded something incorrectly.&lt;/p&gt;

&lt;p&gt;For user testing, I asked a partner to download and run only the release files—no cloning and no compiling allowed. They were initially unsure where to click to download the &lt;code&gt;.exe&lt;/code&gt; and how to run a command-line tool in Windows. Based on their feedback, I revised the README so that installation instructions appear at the very top, and I added simple PowerShell examples that anyone could copy and run. After those changes, the experience went smoothly: they were able to scan a folder using the command &lt;code&gt;repo-context-packager.exe . -o context.txt&lt;/code&gt; and instantly generate a full snapshot of the repository.&lt;/p&gt;

&lt;p&gt;Overall, completing this release taught me how much work goes into making software truly usable for others. I learned practical versioning, packaging, and documentation skills, along with the importance of testing installation instructions with someone unfamiliar with the project. Today, users can simply visit the GitHub Release page, download the prebuilt executable, and run it from a terminal to generate a formatted snapshot of any repository. &lt;/p&gt;

</description>
      <category>cpp</category>
      <category>showdev</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>My Open-Source Contribution: Adding a feature to typescript-language-server</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Fri, 21 Nov 2025 03:56:04 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/my-open-source-contribution-adding-watch-event-support-to-typescript-language-server-3ed5</link>
      <guid>https://forem.com/oleksandrakordonets/my-open-source-contribution-adding-watch-event-support-to-typescript-language-server-3ed5</guid>
      <description>&lt;p&gt;I recently finished an open-source contribution that I’m actually really proud of. This time, I worked on the &lt;a href="https://github.com/typescript-language-server/typescript-language-server" rel="noopener noreferrer"&gt;typescript-language-server&lt;/a&gt; project. I wanted to challenge myself with something more complex than what I normally do, and I came across &lt;a href="https://github.com/typescript-language-server/typescript-language-server/issues/956" rel="noopener noreferrer"&gt;Issue #956&lt;/a&gt;, which asked to support a new &lt;code&gt;tsserver&lt;/code&gt; feature called &lt;code&gt;--canUseWatchEvents&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The idea behind the feature is that, in huge TypeScript projects, &lt;code&gt;tsserver&lt;/code&gt; ends up watching thousands of files on its own, which can slow things down a lot. Newer versions of TypeScript offer a way for the editor to handle file watching instead, and only tell &lt;code&gt;tsserver&lt;/code&gt; when something actually changes. VS Code already does this, but other editors that rely on the language server didn’t have this yet. I thought this would be a medium-level contribution, but it turned out to be a pretty big learning experience for me.&lt;/p&gt;

&lt;p&gt;I spent a lot of time trying to understand the codebase: how the language server talks to &lt;code&gt;tsserver&lt;/code&gt;, how features are enabled, and how file watchers are registered. To support this feature, in &lt;a href="https://github.com/typescript-language-server/typescript-language-server/pull/1057" rel="noopener noreferrer"&gt;my PR&lt;/a&gt; I had to update multiple parts of the project, including: adding a new CLI flag &lt;code&gt;--canUseWatchEvents&lt;/code&gt;, passing the flag to &lt;code&gt;tsserver&lt;/code&gt; when launching it, handling the new events &lt;code&gt;tsserver&lt;/code&gt; sends related to file watching, and forwarding file change events from the editor back to &lt;code&gt;tsserver&lt;/code&gt; properly.&lt;/p&gt;

&lt;p&gt;I also created a new piece called &lt;code&gt;WatchEventManager&lt;/code&gt;, which keeps track of what &lt;code&gt;tsserver&lt;/code&gt; wants to watch and registers the right file watchers with the editor. Then, when files are created, changed, or deleted, it sends a &lt;code&gt;watchChange&lt;/code&gt; message back to &lt;code&gt;tsserver&lt;/code&gt;. And if something isn’t supported, like an old TypeScript version, everything falls back to the old behavior, so the change is safe.&lt;/p&gt;
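&lt;p&gt;Conceptually the manager is a small bookkeeping layer between the editor’s watchers and &lt;code&gt;tsserver&lt;/code&gt;. The sketch below is a simplified illustration, not the code from the PR: &lt;code&gt;WatchEventManagerSketch&lt;/code&gt;, &lt;code&gt;TsServerLike&lt;/code&gt;, and the method names are invented, and the real protocol messages carry more fields than this.&lt;/p&gt;

```typescript
type WatchKind = "created" | "changed" | "deleted";

// Stand-in for the connection to tsserver; the real server object has
// a much richer API than a single notify method.
interface TsServerLike {
  notify(event: string, body: object): void;
}

class WatchEventManagerSketch {
  private watchedIds: number[] = [];

  constructor(private server: TsServerLike) {}

  // tsserver asked us to watch something: remember the watcher id and,
  // in the real extension, register a file watcher with the editor.
  register(id: number): void {
    this.watchedIds.push(id);
  }

  // tsserver no longer cares about this watcher.
  unregister(id: number): void {
    this.watchedIds = this.watchedIds.filter((w) => w !== id);
  }

  // The editor reported a file event: forward it only if tsserver
  // still has a matching watcher.
  onFileEvent(id: number, file: string, kind: WatchKind): void {
    if (!this.watchedIds.includes(id)) return;
    this.server.notify("watchChange", { id, path: file, eventType: kind });
  }
}
```

&lt;p&gt;The key property is that stale or unknown watcher ids are dropped instead of being forwarded, which keeps &lt;code&gt;tsserver&lt;/code&gt; from receiving events it never asked for.&lt;/p&gt;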

&lt;p&gt;I definitely struggled at times, but I learned a lot about how language servers work, so all the time I spent on this wasn’t wasted. Even though the change took longer than I expected, I’m really glad I stuck with it. It feels great knowing that I was able to take this feature from start to finish. Having some previous experience with open source definitely helped me feel more confident. At one point, when I realized the feature was much bigger than I first thought, I honestly considered dropping it and finding something easier. But I pushed through, and I’m proud of that.&lt;/p&gt;

&lt;p&gt;Now I’m looking forward to the maintainer’s feedback. The codebase is complicated, and seeing what I did right and what I can improve will be really valuable for me as I keep learning and contributing.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>typescript</category>
      <category>tooling</category>
      <category>performance</category>
    </item>
    <item>
      <title>My Open-Source Contribution - Working on TypeScript-ESLint</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Thu, 20 Nov 2025 01:46:49 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/my-open-source-contribution-working-on-typescript-eslint-gmd</link>
      <guid>https://forem.com/oleksandrakordonets/my-open-source-contribution-working-on-typescript-eslint-gmd</guid>
      <description>&lt;p&gt;I wanted to share my contributions to open-source, and especially the work I recently did for the &lt;a href="https://github.com/typescript-eslint/typescript-eslint" rel="noopener noreferrer"&gt;TypeScript-ESLint&lt;/a&gt; project. For this week, I challenged myself to contribute to a much bigger and more complex project than I had worked with before. I wanted to push past basic fixes and actually make a change that improves developer experience for a large community. After exploring the repository and reading through open issues, I found &lt;a href="https://github.com/typescript-eslint/typescript-eslint/issues/11758" rel="noopener noreferrer"&gt;Issue #11758&lt;/a&gt;, which described a bug in the &lt;code&gt;no-base-to-string&lt;/code&gt; rule. The rule was mistakenly warning when calling &lt;code&gt;.toString()&lt;/code&gt; on a class that extends &lt;code&gt;Error&lt;/code&gt; and uses generics, even though that should be allowed.&lt;/p&gt;

&lt;p&gt;Before writing any changes, I needed to understand how the rule checked TypeScript types. I spent a lot of time reading through the code and learning how the TypeScript type checker works with inheritance. Eventually, I discovered the cause of the bug: the rule only checked the type’s own name to decide whether it should be ignored, but it never looked at its parent type. Because of that, a type like a generic subclass of &lt;code&gt;Error&lt;/code&gt; was treated as unsafe, even though it should be allowed. To fix the issue, I updated the rule so it now looks not only at the current type but also at all of its base types, even when generics are involved. This means the rule can now correctly recognize inherited behavior.&lt;/p&gt;
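&lt;p&gt;The shape of the bug is easy to show with plain classes (generics are omitted here to keep the sketch short, and the class names are made up). Calling &lt;code&gt;toString()&lt;/code&gt; on anything that ultimately extends &lt;code&gt;Error&lt;/code&gt; is safe, because &lt;code&gt;Error&lt;/code&gt; provides a real &lt;code&gt;toString()&lt;/code&gt; override:&lt;/p&gt;

```typescript
// A chain of subclasses that ultimately extends Error.
class HttpError extends Error {}
class NotFoundError extends HttpError {
  constructor(resource: string) {
    super("not found: " + resource);
  }
}

const err = new NotFoundError("user");

// Error.prototype.toString() produces a useful message here, not the
// unhelpful "[object Object]" that no-base-to-string is meant to catch,
// so the rule should stay quiet on this call.
const text = err.toString();
```

&lt;p&gt;Before the fix, the rule decided by looking only at the type’s own name, so an inherited &lt;code&gt;toString()&lt;/code&gt; like this one could still be reported.&lt;/p&gt;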

&lt;p&gt;In the end, &lt;a href="https://github.com/typescript-eslint/typescript-eslint/pull/11767" rel="noopener noreferrer"&gt;the fix&lt;/a&gt; turned out to be much smaller than I first expected. After updating the rule logic, I then needed to prove that it worked. I added a new valid test case that represents the situation from the issue, where a generic class extends another class which ultimately extends &lt;code&gt;Error&lt;/code&gt;. This test confirms that the rule no longer reports &lt;code&gt;.toString()&lt;/code&gt; in that scenario. Running the full test suite showed that everything passed successfully, which gave me confidence that my fix solved the problem without causing any new issues elsewhere in the project.&lt;/p&gt;

&lt;p&gt;This contribution was a big learning step for me. Compared to earlier work, I gained much deeper experience reading unfamiliar code and learning how to efficiently navigate through large codebases. It also showed me that I am capable of contributing to major developer tools used by many people every day. I hope to continue doing more contributions like this going forward.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>tooling</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Setting Up CI and Writing Tests for Another Project</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Sat, 15 Nov 2025 22:56:06 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/setting-up-ci-and-writing-tests-for-another-project-3a6</link>
      <guid>https://forem.com/oleksandrakordonets/setting-up-ci-and-writing-tests-for-another-project-3a6</guid>
      <description>&lt;p&gt;This week, I continued improving the testing setup for my &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository-Context-Packager&lt;/a&gt; tool and worked on adding tests to my &lt;a href="https://github.com/kphero/repository-context-packager" rel="noopener noreferrer"&gt;partner’s project&lt;/a&gt;. This was good practice in managing project complexity using automated testing and Continuous Integration.&lt;/p&gt;

&lt;p&gt;To automate test execution, I added a &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager/pull/19" rel="noopener noreferrer"&gt;GitHub Actions CI workflow&lt;/a&gt;. The workflow runs on every push and on every pull request. It uses an Ubuntu runner, installs a C++ compiler, builds the project and its tests, and executes all Catch2 test cases. If any test fails, the workflow stops and reports the failure in the pull request. Setting this up helped me understand how automated test pipelines work and how CI ensures the main branch never breaks.&lt;/p&gt;
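&lt;p&gt;A workflow of this shape fits in a few lines of YAML. This is a minimal sketch of the setup described above, not the actual workflow file from the PR; the compiler flags and file paths are illustrative:&lt;/p&gt;

```yaml
# Runs on every push and pull request, as described in the post.
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest   # g++ is already available on this runner
    steps:
      - uses: actions/checkout@v4
      - name: Build project and Catch2 tests
        run: g++ -std=c++17 -I. src/*.cpp tests/*.cpp -o run_tests
      - name: Run tests
        run: ./run_tests   # a non-zero exit code fails the workflow
```

&lt;p&gt;Because the test binary’s exit code drives the job result, a single failing Catch2 assertion is enough to block the pull request.&lt;/p&gt;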

&lt;p&gt;For the collaboration part of the lab, I contributed a test to my partner's Python project. Their repository was structured differently, and there wasn’t a single obvious entry point for testing, so I had to explore the package and understand how the modules were organized before writing a test. &lt;a href="https://github.com/kphero/repository-context-packager/pull/18" rel="noopener noreferrer"&gt;My contribution&lt;/a&gt; was a dynamic import test, &lt;code&gt;tests/test_imports.py&lt;/code&gt;, that walks through every module under the analyzer package and confirms that each one imports correctly. The test automatically discovers all submodules using &lt;code&gt;pkgutil.walk_packages&lt;/code&gt;, tries importing them with &lt;code&gt;importlib.import_module()&lt;/code&gt;, and reports any failures. This ensures the package structure stays consistent and that no module contains syntax errors or missing dependencies.&lt;/p&gt;

&lt;p&gt;Working on someone else’s codebase required slowing down and reading the design closely, not just looking at individual lines. It felt closer to debugging or performing a code review, because I had to decide what the expected behavior of their modules should be before creating the test. Overall, this week strengthened my understanding of automated testing, CI workflows, and how to approach testing in unfamiliar projects.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>cpp</category>
      <category>testing</category>
      <category>github</category>
    </item>
    <item>
      <title>Adding Unit Tests to repo-context-packager</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Fri, 07 Nov 2025 01:21:23 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/adding-unit-tests-to-repo-context-packager-3n2o</link>
      <guid>https://forem.com/oleksandrakordonets/adding-unit-tests-to-repo-context-packager-3n2o</guid>
      <description>&lt;p&gt;To add unit tests into &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager" rel="noopener noreferrer"&gt;my CLI tool&lt;/a&gt; I used &lt;a href="https://github.com/catchorg/Catch2" rel="noopener noreferrer"&gt;Catch2&lt;/a&gt;, a modern C++ unit testing framework that makes writing tests easy and quick. It has a nice single-header option that you can drop into a repo, simple &lt;code&gt;REQUIRE&lt;/code&gt; / &lt;code&gt;CHECK&lt;/code&gt; macros, and flexible test naming and tagging. Also, Catch2 was the library I found interesting when I was looking at open-source projects during the first week of OSD600 cource, so I already liked it.&lt;/p&gt;

&lt;p&gt;To write unit tests, I first created a &lt;code&gt;tests/&lt;/code&gt; folder in my project and added the Catch2 single-header file there. Then I wrote separate test files for different parts of my project, such as &lt;code&gt;test_utils.cpp&lt;/code&gt;, &lt;code&gt;test_file_reader_scanner.cpp&lt;/code&gt;, and &lt;code&gt;test_compressor.cpp&lt;/code&gt;. Each test file included &lt;code&gt;catch.hpp&lt;/code&gt; and my project headers, and I used &lt;code&gt;TEST_CASE&lt;/code&gt; blocks with &lt;code&gt;REQUIRE&lt;/code&gt; or &lt;code&gt;CHECK&lt;/code&gt; statements to verify expected behavior. I compiled the tests together with the project source files to produce a test executable. Running this executable ran all the test cases, and I could filter by tags or test names to run specific tests.&lt;/p&gt;

&lt;p&gt;For this assignment, I also tried to complete the optional tasks, such as Test Runner Improvements and Code Coverage Analysis, but I got stuck. For the test runner, using a file watcher like &lt;code&gt;watchexec&lt;/code&gt; initially caused an infinite loop because my test binary was inside a watched directory, so every rebuild triggered another rebuild. I thought moving the binary to a separate &lt;code&gt;build/&lt;/code&gt; folder would fix it, but it didn’t completely solve the issue. For code coverage, I tried using GCC’s &lt;code&gt;gcov&lt;/code&gt; and &lt;code&gt;lcov&lt;/code&gt;, but generating a proper coverage report was tricky. The &lt;code&gt;.gcno&lt;/code&gt; files are created at compile time, and the &lt;code&gt;.gcda&lt;/code&gt; files are created only when the test executable runs. I ran into issues with missing files and paths, and I wasn’t able to fix them the way I wanted. &lt;/p&gt;

&lt;p&gt;Overall, this assignment gave me my first experience writing unit tests for C++ code. I had never done testing like this before, and it was nice to work with Catch2. Even though I didn’t fully complete the optional tasks, I think I will definitely explore these topics more and use that in my future C++ projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager/commit/bae01f36b28327fbbb8f501c1165bb4f5728d732" rel="noopener noreferrer"&gt;Commit link&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>cpp</category>
      <category>testing</category>
      <category>cli</category>
    </item>
    <item>
      <title>October Recap - my first Hacktoberfest</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Sat, 01 Nov 2025 01:50:38 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/october-recap-my-first-hacktoberfest-5ci8</link>
      <guid>https://forem.com/oleksandrakordonets/october-recap-my-first-hacktoberfest-5ci8</guid>
      <description>&lt;p&gt;October was a busy month. I set out to make useful contributions across a few repos and ended the month with four pulled PRs. Below is a short, friendly summary of what I did each week, how I work, what I learned, and just my thoughts on the whole process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I worked on:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/AryanVBW/NIRMAAN" rel="noopener noreferrer"&gt;NIRMAAN&lt;/a&gt;: it was my first contribution and it was a simple and quick task to compose a README.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tonybaloney/vscode-pets" rel="noopener noreferrer"&gt;vscode-pets&lt;/a&gt;: it was my second cintribution and I added a setting so users can pick the ball color to throw from VS Code Settings instead of being prompted every time.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/physicshub/physicshub.github.io" rel="noopener noreferrer"&gt;physicshub.github.io&lt;/a&gt;: I fixed a visual bug where a canvas would slowly shift color when a “trail” effect was toggled.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tonybaloney/vscode-pets" rel="noopener noreferrer"&gt;vscode-pets&lt;/a&gt;: I worked on this repo again and there was an isuue to add a new feature so additional pets you spawn don’t vanish when you switch folders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before I started coding, I spent a lot of time searching for repos and issues that were a good fit. That part turned out to be the hardest. I’d find a project that looked perfect (right language, interesting project, open issues) and then spend hours trying to implement something, only to realize I couldn’t really work on it.&lt;/p&gt;

&lt;p&gt;One example of that: I spent a whole day poking at the &lt;a href="https://github.com/hediet/vscode-drawio" rel="noopener noreferrer"&gt;hediet/vscode-drawio&lt;/a&gt; extension because I really liked it, and I use draw.io a lot for schoolwork, so it looked like a great place to contribute. The repo embeds the &lt;a href="https://github.com/jgraph/drawio" rel="noopener noreferrer"&gt;draw.io&lt;/a&gt; project, and many parts are tightly coupled to that upstream code. I picked a change that I thought was safe to make inside the extension, but after digging deeper I discovered the behavior I wanted actually required changing draw.io itself. That was frustrating, as I’d spent time understanding the repo only to have to abandon my attempt. Finding the right repo and the right issue is a skill in itself, and I hope that with more time and exposure to open source I’ll get better at spotting good matches faster.&lt;/p&gt;

&lt;p&gt;Another important thing I learned from my mistakes is the story of my first &lt;a href="https://github.com/tonybaloney/vscode-pets/pull/815" rel="noopener noreferrer"&gt;PR&lt;/a&gt; at the &lt;a href="https://github.com/tonybaloney/vscode-pets" rel="noopener noreferrer"&gt;vscode-pets&lt;/a&gt; project. At first everything looked straightforward: I forked the repo, implemented the feature, opened a PR, received feedback, addressed it, and pushed another commit. Then the maintainers tried to merge my changes, and the merge failed because the CI lint check didn’t pass. The repo’s contributing guide explicitly asks contributors to run &lt;code&gt;npm run lint&lt;/code&gt; and &lt;code&gt;npm run lint:fix&lt;/code&gt; before submitting, which I did, but apparently not carefully enough. The lint failure blocked merging.&lt;/p&gt;

&lt;p&gt;When I went back to fix the lint errors, I also discovered my branch was behind the upstream main. That’s normal and easy to fix, but things went sideways and I force-pushed. The force-push replaced the branch history on the remote, and my earlier commits disappeared from the PR, leaving a single commit instead of the sequence I intended. I panicked for a bit because I didn’t fully understand what had happened, but the commits still existed in my reflog. To recover, I inspected the history with &lt;code&gt;git reflog&lt;/code&gt; and &lt;code&gt;git log&lt;/code&gt; to find the SHAs of my lost commits, created a safety backup branch, then reconstructed the branch history by resetting to the correct base commit and cherry-picking the missing commits back in the intended order. The history under the PR ended up a bit messier than I originally wanted, but recovering those commits myself taught me more about Git than any tutorial could. It was stressful in the moment, but I now feel more confident.&lt;/p&gt;
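&lt;p&gt;To make that recovery concrete, here is a minimal sketch of the reflog-and-cherry-pick workflow in a throwaway repo. The file names, commit messages, and branch name are invented for illustration; the actual PR history was different.&lt;/p&gt;

```shell
# Build a throwaway repo with two commits, then simulate losing the
# second one (the kind of state a bad force-push can leave behind).
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "you@example.com"
git config user.name "you"
echo "feature" > ball.ts
git add ball.ts
git commit -qm "feat: ball color setting"
echo "lint fix" >> ball.ts
git commit -qam "fix: address review feedback"

# Simulate the accident: a hard reset drops the review commit.
git reset --hard -q HEAD~1

# 1. The lost commit is still reachable through the reflog.
lost=$(git rev-parse 'HEAD@{1}')
# 2. Create a safety backup branch before touching history again.
git branch backup-before-repair
# 3. Cherry-pick the lost commit back in the intended order.
git cherry-pick "$lost"
git log --oneline
```

The same steps scale to several lost commits: find each SHA with &lt;code&gt;git reflog&lt;/code&gt;, then cherry-pick them oldest-first.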

&lt;p&gt;All my other contributions went pretty smoothly, and despite the hiccups I genuinely learned a lot through trial and error. I was pleasantly surprised by how helpful and collaborative the open-source community is; for some reason I expected things to be more hostile. It is a lot of fun to be part of a community where your work is genuinely welcome. Most maintainers and contributors were patient, left constructive feedback, and gave useful pointers that made my changes better. That made the whole process feel welcoming and encouraged me to keep improving my work.&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>devjournal</category>
      <category>opensource</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How I Fixed Vanishing Pets in vscode-pets</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Sat, 01 Nov 2025 00:14:18 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/how-i-fixed-vanishing-pets-in-vscode-pets-41ep</link>
      <guid>https://forem.com/oleksandrakordonets/how-i-fixed-vanishing-pets-in-vscode-pets-41ep</guid>
      <description>&lt;p&gt;For my Hacktoberfest contribution I fixed a bug in the &lt;a href="https://github.com/tonybaloney/vscode-pets" rel="noopener noreferrer"&gt;vscode-pets repo&lt;/a&gt; where extra pets you spawned in the pets panel would vanish whenever you switched workspaces or opened a different folder. Before the fix, only the single default pet from settings would remain. Now, pets you spawn in the webview are synced to the extension host, saved in the extension’s global storage, and restored when the panel or explorer view is recreated, so they persist across workspace changes.&lt;/p&gt;

&lt;p&gt;I worked on &lt;a href="https://github.com/tonybaloney/vscode-pets/issues/809" rel="noopener noreferrer"&gt;Issue #809&lt;/a&gt;. The problem was easy to reproduce: spawn a few extra pets, switch folders or close and reopen the panel, and those extra pets were gone. My fix makes the webview tell the extension which pets it currently has; the extension saves that list and later sends it back to the webview to recreate the pets. I also added a simple deduplication check (type + name + color) so pets don’t multiply during restores; duplicated pets were something I ran into while testing, and they were very annoying. Lastly, I centralized message handling between the webview and the extension to keep the communication clearer and make future messages easier to add.&lt;/p&gt;

&lt;p&gt;Working on this taught me a bunch: how the webview and the extension host talk to each other, how VS Code’s &lt;code&gt;globalState&lt;/code&gt; works, and how small UX decisions can ripple into bigger problems. Early on I hit a lot of little bugs and tried using a timer to wait before sending the stored pet list to the webview. It did work, but it was brittle: it created a race condition and felt flaky. After a bit more research I replaced the timer with a proper message handshake, which you can see in &lt;a href="https://github.com/tonybaloney/vscode-pets/pull/832" rel="noopener noreferrer"&gt;my PR&lt;/a&gt;. The webview sends &lt;code&gt;webview-ready&lt;/code&gt; when it’s actually loaded, the extension replies with &lt;code&gt;restore-pets&lt;/code&gt;, and we perform a simple dedupe.&lt;/p&gt;
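&lt;p&gt;To illustrate the dedupe idea (this is a sketch, not the PR’s exact code; the names here are invented), it boils down to keying each pet by type, name, and color, and only restoring the stored pets the webview doesn’t already have:&lt;/p&gt;

```typescript
// Hypothetical sketch of the restore-time dedupe; the real extension
// code in the PR is structured differently.
type PetSpec = { petType: string; name: string; color: string };

// Key a pet by the three fields used for deduplication.
function petKey(p: PetSpec): string {
  return p.petType + "/" + p.name + "/" + p.color;
}

// Return only the stored pets not already present in the webview,
// so repeated restores never multiply pets.
function petsToRestore(stored: PetSpec[], current: PetSpec[]): PetSpec[] {
  const seen = new Set(current.map(petKey));
  return stored.filter((p) => !seen.has(petKey(p)));
}
```

On &lt;code&gt;webview-ready&lt;/code&gt;, the extension would filter its saved list this way before sending &lt;code&gt;restore-pets&lt;/code&gt;.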

&lt;p&gt;I’ve worked on this repo before, so getting started was straightforward. That said, I made a few design choices without a strict spec, and I’d like feedback from the repo maintainers on whether this behavior should be global or workspace-specific. A future improvement would be a setting that lets users choose whether pets persist globally or per workspace.&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>showdev</category>
      <category>vscode</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Fixing the BouncingBall “trail” background bug</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Mon, 27 Oct 2025 00:44:29 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/fixing-the-bouncingball-trail-background-bug-4p9f</link>
      <guid>https://forem.com/oleksandrakordonets/fixing-the-bouncingball-trail-background-bug-4p9f</guid>
      <description>&lt;p&gt;For this week of Hacktoberfest I wanted to fix a bug instead of adding a feature. I’d already opened a feature PR in another repo, so this time I looked for a bug to patch. A classmate pointed me to a &lt;a href="https://github.com/physicshub/physicshub.github.io" rel="noopener noreferrer"&gt;repo&lt;/a&gt; they’d worked on, a &lt;a href="https://physicshub.github.io/" rel="noopener noreferrer"&gt;public site&lt;/a&gt; of interactive physics simulations. By chance it had a reproducible visual bug, so I volunteered to take it on.&lt;/p&gt;

&lt;p&gt;The project is a React + p5.js site with explanatory theory pages. Each simulation runs as a p5 sketch embedded in React, and shared drawing helpers live in &lt;code&gt;src/utils/&lt;/code&gt;. On the BouncingBall page the “Enable trail” toggle worked, but after I turned the trail off the canvas looked slightly lighter than the original background. Running the simulation with the trail on for a while, or toggling it repeatedly, biased the canvas toward that lighter color. The bug only showed up on that page because it used the shared &lt;code&gt;drawBackground&lt;/code&gt; helper.&lt;br&gt;
&lt;a href="https://github.com/physicshub/physicshub.github.io/issues/14" rel="noopener noreferrer"&gt;Issue link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two different code paths painted the background: a full-clear path that used &lt;code&gt;p.background(...)&lt;/code&gt; when the trail was off, and a translucent-overlay path that used &lt;code&gt;p.fill(r,g,b,alpha); p.rect(...)&lt;/code&gt; when the trail was on. They constructed colors differently and didn’t enforce the same blend settings. Small differences in color conversion and alpha handling accumulated through repeated compositing, which pushed the canvas color toward the overlay. When the trail was switched off, the full-clear path used a slightly different conversion and the canvas stayed a bit lighter.&lt;/p&gt;

&lt;p&gt;To fix it I centralized and normalized both paths so they come from the same &lt;code&gt;p.color(...)&lt;/code&gt; object and run under a consistent color mode. That way the translucent overlay and the hard clear use identical RGB channels and the same alpha interpretation, so compositing no longer shifts the base color. The fix turned out to be small, but it removes the confusing visual drift.&lt;br&gt;
&lt;a href="https://github.com/physicshub/physicshub.github.io/pull/89" rel="noopener noreferrer"&gt;PR link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tiny inconsistencies in color handling can produce annoying visual bugs, so I checked the shared utilities first. I noticed someone before me had tried to fix the problem by changing the &lt;code&gt;BouncingBall&lt;/code&gt; page directly and that ended up causing more serious bugs because it duplicated logic. Fixing the root utility &lt;code&gt;drawBackground&lt;/code&gt; instead avoided those regressions and turned out to be the safer, cleaner approach.&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>frontend</category>
      <category>opensource</category>
      <category>react</category>
    </item>
    <item>
      <title>Adding a Compress Feature to My CLI Tool</title>
      <dc:creator>Oleksandra</dc:creator>
      <pubDate>Tue, 21 Oct 2025 02:15:26 +0000</pubDate>
      <link>https://forem.com/oleksandrakordonets/adding-a-compress-feature-to-my-cli-tool-28mc</link>
      <guid>https://forem.com/oleksandrakordonets/adding-a-compress-feature-to-my-cli-tool-28mc</guid>
      <description>&lt;p&gt;For my latest update, I added a &lt;code&gt;--compress&lt;/code&gt; feature to my &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository Context Packager&lt;/a&gt;, a CLI tool that packages repositories into readable text files for large language models. This new option allows users to compress their code content before packaging, meaning it removes unnecessary parts and keeps mainly function signatures and their associated comments, while skipping includes, imports, pragmas, and function bodies. Issue &lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager/issues/14" rel="noopener noreferrer"&gt;URL&lt;/a&gt;.&lt;/p&gt;
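&lt;p&gt;As a rough illustration of the idea (a toy sketch, not the tool’s actual implementation; the helper name is invented), a line-based pass can drop import lines and function bodies while keeping comments and signatures:&lt;/p&gt;

```typescript
// Toy line-based compressor: keep comments and function signatures,
// skip import lines and everything inside function bodies.
function compressSketch(source: string): string {
  const kept: string[] = [];
  for (const line of source.split("\n")) {
    const t = line.trim();
    if (t.startsWith("import ")) continue; // skip imports
    if (t.startsWith("//")) {
      kept.push(line); // keep comments
      continue;
    }
    if (/^(export\s+)?(async\s+)?function\s+\w+/.test(t)) {
      // keep the signature, elide the body
      kept.push(line.replace(/\{.*$/, "{ ... }"));
    }
  }
  return kept.join("\n");
}
```

A real implementation would need a parser to handle multi-line bodies and other languages, but the goal is the same: a compact skeleton of the file.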

&lt;p&gt;I was inspired by a similar feature in &lt;a href="https://github.com/yamadashy/repomix" rel="noopener noreferrer"&gt;Repomix&lt;/a&gt;. While exploring its &lt;a href="https://repomix.com" rel="noopener noreferrer"&gt;web version&lt;/a&gt;, I noticed options for compressing code, removing comments, and stripping empty lines. I really liked how simple and effective those features were, so I decided to bring the same idea into my own project. To understand how Repomix achieved this, I studied its code, focusing mainly on the &lt;code&gt;processContent&lt;/code&gt; and &lt;code&gt;parseFile&lt;/code&gt; functions. That helped me trace the logic flow from reading a file to modifying its text and returning a processed version.&lt;/p&gt;

&lt;p&gt;The first major challenge I faced was that Repomix’s compression logic wasn’t perfect, so I didn’t have a flawless example to rely on; as we all know, no software is perfect. I tested its compression feature on my own repository, since I know my files well and could tell exactly what was being skipped. What I noticed was that it sometimes removed too much. For example, my &lt;code&gt;Compressor.h&lt;/code&gt; file ended up completely empty after compression:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvq94ngcmrrqk7goydl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvq94ngcmrrqk7goydl9.png" alt="Repomix’s compression output" width="222" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This showed me that I needed to rethink how compression should work in my own implementation.&lt;/p&gt;

&lt;p&gt;Another big issue came from trying to do too much at once. At first, I attempted to implement &lt;code&gt;--compress&lt;/code&gt;, &lt;code&gt;--remove-comments&lt;/code&gt;, and &lt;code&gt;--remove-empty-lines&lt;/code&gt; all together, assuming they were closely related. In practice, that approach caused a huge number of bugs: fixing one thing would break another. Eventually, I decided to narrow my focus and implement only the &lt;code&gt;--compress&lt;/code&gt; feature for now, leaving the other two for later. I’ve already created GitHub issues for those so I can revisit them in future updates. &lt;br&gt;
&lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager/issues/15" rel="noopener noreferrer"&gt;Remove Comments Issue&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/OleksandraKordonets/Repository-Context-Packager/issues/16" rel="noopener noreferrer"&gt;Remove Empty Lines Issue&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing I particularly like about my implementation, compared to Repomix’s, is the output format. I designed mine to be easier to read while still staying compact. For example:&lt;br&gt;
My tool’s output:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyy90zsn7gtjyudehr39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyy90zsn7gtjyudehr39.png" alt="My tool's output" width="752" height="174"&gt;&lt;/a&gt;&lt;br&gt;
Repomix’s output:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hjpzk7dyn5qrahfhbxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hjpzk7dyn5qrahfhbxr.png" alt="Repomix’s output" width="742" height="144"&gt;&lt;/a&gt;&lt;br&gt;
While the Repomix approach might be more LLM-friendly, I personally prefer my version. It feels easier for me to read while still serving the purpose of summarizing code.&lt;/p&gt;

&lt;p&gt;Overall, this feature taught me a lot about designing transformations carefully and not rushing to combine too many related ideas at once. My next step is to refine the compression logic further and eventually implement the other two options. Once those are in place, I plan to explore similar tools again to see what additional ideas I can bring into my project. My goal is to make the Repository Context Packager a truly powerful and customizable way to generate clean, readable summaries of any repository.&lt;/p&gt;

</description>
      <category>cli</category>
      <category>showdev</category>
      <category>tooling</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
