<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aubrey D</title>
    <description>The latest articles on Forem by Aubrey D (@aubreyddd).</description>
    <link>https://forem.com/aubreyddd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3482416%2Fb178f494-9962-41ee-9c99-5995aa45c3e8.jpeg</url>
      <title>Forem: Aubrey D</title>
      <link>https://forem.com/aubreyddd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aubreyddd"/>
    <language>en</language>
    <item>
      <title>Shipping Meaningful Open Source Work</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 12 Dec 2025 20:20:56 +0000</pubDate>
      <link>https://forem.com/aubreyddd/shipping-meaningful-open-source-work-3p3o</link>
      <guid>https://forem.com/aubreyddd/shipping-meaningful-open-source-work-3p3o</guid>
      <description>&lt;p&gt;Release 0.4 was about doing work I could be proud of, over multiple weeks. I chose to continue contributing to &lt;strong&gt;hiero-sdk-python&lt;/strong&gt; because it aligns with my career goals (Python SDK development, developer experience, and reliable tooling), and because the repo has clear contribution standards and active maintainers. By the end of this release, I delivered two pull requests that together reflect both sides of real SDK work: improving a user-facing example and extending a core API with tests.&lt;/p&gt;

&lt;p&gt;•PR &lt;a href="https://github.com/hiero-ledger/hiero-sdk-python/pull/1044" rel="noopener noreferrer"&gt;#1044&lt;/a&gt;: Fix token association verification in the token airdrop example. This PR fixes a subtle but important developer experience problem: the example previously displayed token balances after association, which can be misleading because balances can remain 0 even when association is correct. The PR refactors the example output to verify association properly by checking whether the token ID exists in the account’s token_balances map and printing a clear “Associated / NOT Associated” result. This is the kind of change that looks small but prevents confusion for new users who copy/paste examples to learn an SDK.&lt;/p&gt;
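&lt;p&gt;The idea behind the fix can be sketched in a few lines of plain Python. This is an illustrative sketch, not the real hiero-sdk-python API: here &lt;code&gt;token_balances&lt;/code&gt; is just a dict keyed by token ID.&lt;/p&gt;

```python
# Sketch of the verification logic behind PR #1044 (illustrative only;
# the real SDK types differ). A zero balance does NOT mean the token is
# unassociated: association is about whether the token ID appears in
# the account's token_balances map at all.

def association_status(token_balances: dict, token_id: str) -> str:
    """Report association by key membership, not by balance value."""
    if token_id in token_balances:
        return f"{token_id}: Associated (balance={token_balances[token_id]})"
    return f"{token_id}: NOT Associated"

balances = {"0.0.1001": 0}  # associated, but the balance is still 0
print(association_status(balances, "0.0.1001"))  # Associated despite 0 balance
print(association_status(balances, "0.0.2002"))  # NOT Associated
```

&lt;p&gt;The key point: association is a question of key membership, so a balance of 0 must never be read as “not associated.”&lt;/p&gt;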

&lt;p&gt;•PR &lt;a href="https://github.com/hiero-ledger/hiero-sdk-python/pull/1051" rel="noopener noreferrer"&gt;#1051&lt;/a&gt;: Support PublicKey for batch_key in the Transaction class with Unit &amp;amp; integration tests. This PR extends the transaction batching API so batch_key can accept both PrivateKey and PublicKey. Previously, only PrivateKey was accepted, which forced developers to provide private key material even in workflows where only a public key should be required. To make the change safe and reviewable, the PR also adds unit tests &amp;amp; integration tests.&lt;/p&gt;

&lt;p&gt;Both PRs were built with the project’s standards in mind: minimal unintended behavior change, changelog updates, signed commits, and passing CI checks. I achieved the “meaningful work” goal by choosing one contribution that improves developer understanding and another that improves SDK capability. I also stayed consistent with weekly progress and worked through review and automation feedback instead of treating PR creation as the finish line.&lt;/p&gt;

&lt;p&gt;This release reinforced that “real” open source work is not just writing code — it’s writing code that fits a system:&lt;br&gt;
•Examples are part of the product. A misleading example can cost users hours and harm trust in the SDK. Fixing #815 taught me to treat examples like public APIs: they need to be correct and unambiguous.&lt;br&gt;
•Small API changes can require broad thinking. Allowing PublicKey for batch_key sounds simple, but it touches typing, serialization paths, and testing expectations across transactions.&lt;br&gt;
•Review feedback is where you level up. The most valuable part wasn’t writing the first version; it was iterating on reviewer and automation feedback to align with repo conventions.&lt;/p&gt;

&lt;p&gt;Automated maintainership tools like WorkflowBot pointed out when the PR couldn’t be merged due to failing checks and provided direct links to contribution guides. Maintainer review helped me see beyond the immediate code change. For example, maintainers suggested expanding related examples and tests around batch transactions, which is the type of ecosystem thinking I want to build into my future contributions. Even AI review comments were useful as a checklist for consistency, but I treated them as suggestions, not authority, and prioritized maintainer expectations and CI rules.&lt;/p&gt;

&lt;p&gt;Release 0.4 gave me a realistic open-source development experience: choosing scope, shipping improvements, handling review cycles, and working with community standards. I’m leaving this term with stronger confidence in contributing to production-grade Python projects, especially SDKs where correctness, developer trust, and maintainability matter.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>career</category>
      <category>api</category>
      <category>python</category>
    </item>
    <item>
      <title>Progress Update To Release 0.4</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 12 Dec 2025 19:55:43 +0000</pubDate>
      <link>https://forem.com/aubreyddd/progress-update-to-release-04-ikl</link>
      <guid>https://forem.com/aubreyddd/progress-update-to-release-04-ikl</guid>
      <description>&lt;p&gt;In the second week of Release 0.4, my focus shifted from planning to execution. Rather than trying to finish everything quickly, I spent this week scoping the work more carefully, understanding the codebase in depth, and validating that the issues I chose were the right size and complexity for a multi-week contribution. I am currently working on two issues in the &lt;a href="https://github.com/hiero-ledger/hiero-sdk-python" rel="noopener noreferrer"&gt;hiero-sdk-python&lt;/a&gt; repository: &lt;a href="https://github.com/hiero-ledger/hiero-sdk-python/issues/815" rel="noopener noreferrer"&gt;#815&lt;/a&gt; and &lt;a href="https://github.com/hiero-ledger/hiero-sdk-python/issues/1015" rel="noopener noreferrer"&gt;#1015&lt;/a&gt;. Although they differ in scope, they are connected by a common goal—improving the developer experience of the SDK.&lt;br&gt;
This week was primarily about research and orientation. For Issue #815, I reviewed the existing example &lt;code&gt;token_airdrop.py&lt;/code&gt; and examined how token association is currently demonstrated. I focused on understanding the expected behavior of token association in the SDK and why the current output could be misleading to users. This issue is relatively small in terms of code changes, but it requires careful handling to ensure the example remains correct, clear, and runnable. For Issue #1015, I spent time reading through the transaction system in the SDK to understand how batching works and how keys are handled internally. This involved tracing how transaction properties are stored, validated, and eventually serialized. At this stage, my goal was not to immediately write code, but to understand the broader design and identify which parts of the system would be affected by the proposed change.&lt;br&gt;
The main challenge this week has been scope management. While Issue #815 is well-contained and straightforward, Issue #1015 touches a more central part of the SDK. Even a small API change can have ripple effects, so I needed to slow down and make sure I fully understood how the transaction pipeline works before committing to an implementation. Another challenge has been balancing progress with caution. Since this is a core SDK feature, I want to avoid rushing changes that could introduce subtle bugs or break existing assumptions.&lt;br&gt;
Based on what I learned this week, I’m adjusting my approach in the following way:&lt;br&gt;
•I will complete Issue #815 first, using it as a focused improvement that delivers immediate value and builds momentum.&lt;br&gt;
•I will continue developing Issue #1015 incrementally, validating assumptions and testing behavior as I go rather than attempting a large change all at once.&lt;br&gt;
•I will prioritize steady weekly progress over rapid completion, aligning with the long-term nature of Release 0.4.&lt;br&gt;
This week confirmed that my chosen scope is appropriate for the remaining weeks of the course. While I’m not finished yet, I now have a much clearer understanding of the problem space and a solid foundation to build on moving forward.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Planning My Final Open Source Contribution</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 12 Dec 2025 19:10:16 +0000</pubDate>
      <link>https://forem.com/aubreyddd/planning-my-final-open-source-contribution-2j85</link>
      <guid>https://forem.com/aubreyddd/planning-my-final-open-source-contribution-2j85</guid>
      <description>&lt;p&gt;For Release 0.4, I want to push myself further and work on a contribution that is meaningful to me and demonstrates everything I’ve learned this term about open-source development. After reviewing the options listed in the assignment, I decided to work on a feature-level contribution or bug fix for the &lt;a href="https://github.com/hiero-ledger/hiero-sdk-python" rel="noopener noreferrer"&gt;hiero-sdk-python&lt;/a&gt; project. This is a repository I have already contributed to in earlier releases, so I am familiar with the structure, coding style, and contribution workflow. More importantly, this project aligns well with my long-term interests in Python development, and SDK design. During Release 0.3, I worked on adding an example script and became more familiar with how the SDK interacts with Hiero’s APIs. While working on the project, I noticed that certain parts of the SDK are still incomplete or could be improved with more examples, extra helper utilities, and better developer-side support. The codebase is active, well-maintained, and has reviewers who give detailed feedback—exactly the kind of environment where I can continue to grow as an open-source contributor. Over the next few days, I will communicate with maintainers to confirm which feature area would be most helpful and ensure my work aligns with their roadmap.&lt;br&gt;
My plan is to take steady, visible steps each week:&lt;br&gt;
Week 1 – Research and proposal&lt;br&gt;
•Explore open issues, discuss with maintainers, and finalize the scope.&lt;br&gt;
•Prepare a technical plan for the feature.&lt;/p&gt;

&lt;p&gt;Week 2 – Development + iteration&lt;br&gt;
•Implement the core part of the feature.&lt;br&gt;
•Write tests, examples, or documentation as needed.&lt;br&gt;
•Submit early PRs to get feedback from maintainers.&lt;/p&gt;

&lt;p&gt;Week 3 – Finalization&lt;br&gt;
•Address all code review comments.&lt;br&gt;
•Polish tests and documentation.&lt;br&gt;
•Publish the final PR and write my final reflection.&lt;/p&gt;

&lt;p&gt;I hope to deliver a feature that the maintainers are willing to merge, and to gain deeper experience reading large Python codebases and contributing at a higher level. This release is my opportunity to take one more step toward becoming a stronger open-source developer, and I’m excited to work on something I can be proud of.&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>opensource</category>
      <category>learning</category>
      <category>python</category>
    </item>
    <item>
      <title>Releasing "Repository-Context-Packager" to npm</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Sat, 22 Nov 2025 00:44:07 +0000</pubDate>
      <link>https://forem.com/aubreyddd/releasing-repository-context-packager-to-npm-507k</link>
      <guid>https://forem.com/aubreyddd/releasing-repository-context-packager-to-npm-507k</guid>
      <description>&lt;p&gt;I recently prepared and published a small CLI tool called &lt;a href="https://github.com/AubreyDDD/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository-Context-Packager&lt;/a&gt; to the npm registry. I used &lt;a href="https://www.npmjs.com" rel="noopener noreferrer"&gt;npm&lt;/a&gt; as my package registry and the standard &lt;a href="https://docs.npmjs.com/cli/v9/commands/npm-publish" rel="noopener noreferrer"&gt;npm publish&lt;/a&gt; for releasing. &lt;/p&gt;

&lt;p&gt;The release process started with preparing my &lt;code&gt;package.json&lt;/code&gt; file. I set the version to &lt;code&gt;1.0.0&lt;/code&gt; and added a &lt;code&gt;files&lt;/code&gt; array to control what gets published. I also added a &lt;code&gt;bin&lt;/code&gt; field for the CLI command, set the &lt;code&gt;engines&lt;/code&gt; requirement, and created a &lt;code&gt;prepublishOnly&lt;/code&gt; script to run tests and linting automatically before publishing.&lt;/p&gt;
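&lt;p&gt;Put together, the relevant &lt;code&gt;package.json&lt;/code&gt; fields look roughly like this (values and paths are illustrative, not my exact file):&lt;/p&gt;

```json
{
  "name": "repository-context-packager",
  "version": "1.0.0",
  "bin": { "repomaster": "./bin/cli.js" },
  "files": ["bin/", "src/", "README.md", "LICENSE"],
  "engines": { "node": ">=18" },
  "scripts": {
    "test": "node --experimental-vm-modules node_modules/jest/bin/jest.js",
    "lint": "eslint .",
    "prepublishOnly": "npm test &amp;&amp; npm run lint"
  }
}
```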

&lt;p&gt;Next, I created a &lt;code&gt;.npmignore&lt;/code&gt; file to exclude test files, GitHub workflows, and dev documentation. This kept the package size small—only about 27 KB with 11 files. I used &lt;code&gt;npm pack --dry-run&lt;/code&gt; to preview the package contents before actually publishing.&lt;/p&gt;
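&lt;p&gt;For reference, a minimal &lt;code&gt;.npmignore&lt;/code&gt; along these lines does the job (entries are illustrative, not my exact file):&lt;/p&gt;

```text
tests/
.github/
docs/
coverage/
jest.config.js
```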

&lt;p&gt;Before publishing, I made sure all 76 tests passed with &lt;code&gt;npm test&lt;/code&gt; and verified the code quality with &lt;code&gt;npm run lint&lt;/code&gt;. Then I committed the &lt;code&gt;package.json&lt;/code&gt; changes and created a git tag using &lt;code&gt;git tag -a v1.0.0 -m "Release v1.0.0"&lt;/code&gt;. After pushing both the commit and tag to GitHub, I ran &lt;code&gt;npm publish&lt;/code&gt; to upload the package to the npm registry. Finally, I verified everything by checking the npm package page and testing the installation with &lt;code&gt;npm install -g repository-context-packager&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;One thing I realized is that git tags and npm package versions are completely separate systems. npm reads the version from &lt;code&gt;package.json&lt;/code&gt;, not from git tags. I initially created a git tag &lt;code&gt;v1.0.0&lt;/code&gt; and expected npm to automatically show that version, but npm still showed &lt;code&gt;0.9.0&lt;/code&gt; because that's what was in &lt;code&gt;package.json&lt;/code&gt;. The correct workflow is to update &lt;code&gt;package.json&lt;/code&gt; first, commit that change, create the git tag, push everything, and then publish.&lt;/p&gt;

&lt;p&gt;I also learned how to handle git push conflicts. When I tried to push my tag, it was rejected because the remote had new commits. I used &lt;code&gt;git pull --rebase origin main&lt;/code&gt; to update my branch and resolved the conflicts. Another issue was that I had created a tag before updating &lt;code&gt;package.json&lt;/code&gt;, so I had to delete it with &lt;code&gt;git tag -d v1.0.0&lt;/code&gt; and &lt;code&gt;git push origin --delete v1.0.0&lt;/code&gt;, then recreate it on the correct commit.&lt;/p&gt;

&lt;p&gt;Adding a &lt;code&gt;prepublishOnly&lt;/code&gt; script that runs &lt;code&gt;npm test &amp;amp;&amp;amp; npm run lint&lt;/code&gt; was really valuable. It prevents me from accidentally publishing broken code because the publish command stops if tests fail.&lt;/p&gt;

&lt;p&gt;The changes I made were minimal and focused on packaging rather than source code. I updated &lt;code&gt;package.json&lt;/code&gt; with the new version, added a &lt;code&gt;files&lt;/code&gt; array, &lt;code&gt;bin&lt;/code&gt; field, &lt;code&gt;engines&lt;/code&gt; requirement, &lt;code&gt;prepublishOnly&lt;/code&gt; script, and keywords for discoverability. I created a &lt;code&gt;.npmignore&lt;/code&gt; file to exclude tests, CI files, and dev documentation. I also updated the &lt;code&gt;README.md&lt;/code&gt; with npm installation instructions and version history. The actual source code in the &lt;code&gt;src/&lt;/code&gt; and &lt;code&gt;bin/&lt;/code&gt; directories didn't need any changes.&lt;/p&gt;

&lt;p&gt;When my partner tested the release process, they got stuck on two main issues. First, they were confused about the version mismatch—they saw the git tag &lt;code&gt;v1.0.0&lt;/code&gt; but npm showed &lt;code&gt;0.9.0&lt;/code&gt;. I explained that git tags and npm versions are separate systems and that npm reads from &lt;code&gt;package.json&lt;/code&gt;. Second, they encountered a git push rejection because of remote commits they didn't have. I showed them how to use &lt;code&gt;git pull --rebase origin main&lt;/code&gt; to handle this situation.&lt;/p&gt;

&lt;p&gt;The testing session revealed that I needed better documentation explaining the relationship between git tags and package versions. What seemed obvious to me after going through the process wasn't clear to someone doing it for the first time.&lt;/p&gt;

&lt;p&gt;Now that the package is published, users can install it globally with &lt;code&gt;npm install -g repository-context-packager&lt;/code&gt;. After installation, they can use the &lt;code&gt;repomaster&lt;/code&gt; command from anywhere. Common commands include &lt;code&gt;repomaster --version&lt;/code&gt; to check the version, &lt;code&gt;repomaster --help&lt;/code&gt; to see available options, &lt;code&gt;repomaster .&lt;/code&gt; to package the current directory, and &lt;code&gt;repomaster . -o output.txt&lt;/code&gt; to save the output to a file.&lt;/p&gt;

</description>
      <category>cli</category>
      <category>npm</category>
      <category>node</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Setting Up CI Workflow: A Journey of Testing and Automation</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Wed, 12 Nov 2025 19:58:21 +0000</pubDate>
      <link>https://forem.com/aubreyddd/setting-up-ci-workflow-a-journey-of-testing-and-automation-dlo</link>
      <guid>https://forem.com/aubreyddd/setting-up-ci-workflow-a-journey-of-testing-and-automation-dlo</guid>
      <description>&lt;p&gt;We set up a GitHub Actions workflow with two separate jobs that run on every push to the main branch and on pull requests. Here's what happens automatically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lint Job&lt;/strong&gt;: This job checks our code quality using ESLint. It runs on the latest Ubuntu environment, installs our dependencies with npm ci, and then runs our linting rules. If any code doesn't meet our style standards, the build fails and we know we need to fix it before merging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unit Tests Job&lt;/strong&gt;: This job runs all our tests using Jest. We're using the experimental ES modules support in Node.js, which required adding some special flags. The tests make sure our code actually works as expected. &lt;/p&gt;
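&lt;p&gt;The two-job layout can be sketched like this. This is a sketch of a GitHub Actions workflow consistent with the description above; the Node version and script names are illustrative, not our exact file.&lt;/p&gt;

```yaml
# .github/workflows/ci.yml (sketch)
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci        # clean, reproducible install
      - run: npm run lint  # fail the build on style violations

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test      # Jest with experimental ESM flags
```

&lt;p&gt;Because the two jobs are independent, a lint failure and a test failure are reported separately and in parallel.&lt;/p&gt;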

&lt;p&gt;My partner's &lt;a href="https://github.com/slyang08/repo-snapshot" rel="noopener noreferrer"&gt;repo-snapshot&lt;/a&gt; project uses a different testing framework called &lt;code&gt;Vitest&lt;/code&gt; instead of &lt;code&gt;Jest&lt;/code&gt;. He has coverage configured in his &lt;code&gt;vitest.config.ts&lt;/code&gt; file, which automatically tracks code coverage whenever tests run. His setup looks clean and modern, since Vitest is built specifically for modern JavaScript projects. He also has a GitHub Actions CI workflow, but it's structured differently from mine. While I split my workflow into two separate jobs (lint and unit-tests) that run in parallel, he combines everything into a single build job. His workflow uses pnpm instead of npm, a faster alternative package manager. In his single job, he runs format checking, linting, and tests one after another, so if the format check fails, the remaining checks don't run until it's fixed.&lt;/p&gt;

&lt;p&gt;When I helped write tests, I focused on testing the &lt;code&gt;parseTomlConfig&lt;/code&gt; function, which reads and parses TOML configuration files. I wrote test cases covering different scenarios - when the config file doesn't exist (should return an empty object), when it contains valid TOML content, when the TOML is malformed (should throw an error), and when using a custom config file path. One thing that made testing easier in his project was that Vitest has really nice mocking capabilities. I could easily mock the file system functions like existsSync and readFileSync so tests wouldn't depend on actual files existing on disk. The beforeEach hooks let me clean up mocks between tests, which kept everything isolated and predictable.&lt;/p&gt;

&lt;p&gt;Every time I push code, I know within a few minutes whether everything still works. I don't have to remember to run tests manually or worry that I forgot to check something. If someone else contributes to the project, the CI workflow automatically checks their code. This is huge because it means I don't have to manually review every detail - the automated checks catch obvious problems. The workflow file itself documents how to run tests and linting, which helps new contributors understand the project setup. Having that green checkmark on pull requests just feels good. It shows the project is well-maintained and takes quality seriously.&lt;/p&gt;

&lt;p&gt;I did two optional challenges. First, I added a linter to the CI workflow. This was actually pretty straightforward once we understood the YAML syntax. We created a separate job specifically for linting that runs ESLint on all our JavaScript files. The key was making sure to install dependencies first with npm ci before running the linter. This catches style issues and potential bugs before they make it into the codebase. It's saved us from some embarrassing typos and inconsistent formatting. Second, I set up a Dev Container. I created a &lt;code&gt;devcontainer.json&lt;/code&gt; file that defines a complete development environment using Node.js 20 on Debian Bookworm. The most important addition was the &lt;code&gt;postCreateCommand&lt;/code&gt; that runs npm install and npm link. This ensures that whenever someone opens the project in a dev container, all dependencies are installed automatically and the repomaster command is available globally. Without npm link, the CLI tool wouldn't work properly inside the container.&lt;/p&gt;
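&lt;p&gt;The dev container definition itself is short. A sketch of the kind of &lt;code&gt;devcontainer.json&lt;/code&gt; described above (the image tag is illustrative, not necessarily my exact file):&lt;/p&gt;

```json
{
  "name": "repository-context-packager",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20-bookworm",
  "postCreateCommand": "npm install &amp;&amp; npm link"
}
```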

&lt;p&gt;Building a CI pipeline isn't just about running tests automatically, it's about creating a development workflow that makes it easy to maintain quality. The linter keeps our code clean, the tests verify functionality, and the dev container ensures everyone has a consistent environment. Together, these tools make collaboration easier and give us confidence that our code works as expected.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>testing</category>
      <category>github</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Setting Up Testing for My CLI Tool</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Sat, 08 Nov 2025 02:47:37 +0000</pubDate>
      <link>https://forem.com/aubreyddd/setting-up-testing-for-my-cli-tool-33j1</link>
      <guid>https://forem.com/aubreyddd/setting-up-testing-for-my-cli-tool-33j1</guid>
      <description>&lt;p&gt;I just finished adding tests to my &lt;a href="https://github.com/AubreyDDD/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository-Context-Packager&lt;/a&gt; project, &lt;br&gt;
I went with &lt;a href="https://jestjs.io/" rel="noopener noreferrer"&gt;Jest&lt;/a&gt; as my testing framework. Jest is probably the most popular JavaScript testing framework out there and it comes with everything built-in, mocking, coverage reports. I didn't need to install a bunch of separate packages like you would with some other frameworks. Plus, Jest has really good documentation and a huge community, so when I got stuck, I could find answers pretty easily. The other reason was that Jest just works out of the box for most projects.&lt;/p&gt;

&lt;p&gt;My project uses ES modules, and Jest doesn't natively support that without some configuration. First, I installed Jest as a dev dependency with &lt;code&gt;npm install --save-dev jest&lt;/code&gt;, but then when I tried to run it with just &lt;code&gt;npm run jest&lt;/code&gt;, it completely failed with a "Cannot use import statement outside a module" error. Turns out, even though Jest 30.x has better ES module support, you still need to run it with Node's experimental VM modules flag. So I had to update my package.json test script to &lt;code&gt;node --experimental-vm-modules node_modules/jest/bin/jest.js&lt;/code&gt; instead of just jest. I also created a &lt;code&gt;jest.config.js&lt;/code&gt; file to tell Jest to use Node as the test environment and specify where my tests are located. The config was pretty simple, just set testEnvironment: 'node', defined my test match patterns for files, and configured coverage settings. One more thing that tripped me up, I had to import jest from &lt;code&gt;@jest/globals&lt;/code&gt; in my test files to use things like &lt;code&gt;jest.spyOn()&lt;/code&gt; for mocking, and I also had to update my &lt;code&gt;ESLint&lt;/code&gt; config to recognize Jest globals like describe, test, and expect, otherwise my editor was full of red squiggly lines complaining that these weren't defined.&lt;/p&gt;
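&lt;p&gt;For reference, the resulting setup looked roughly like this (paths are illustrative, not my exact files):&lt;/p&gt;

```javascript
// package.json (relevant script):
//   "test": "node --experimental-vm-modules node_modules/jest/bin/jest.js"

// jest.config.js — a sketch; assumes the package is an ES module
export default {
  testEnvironment: 'node',                 // run tests in a Node environment
  testMatch: ['**/tests/**/*.test.js'],    // where the test files live
  collectCoverageFrom: ['src/**/*.js'],    // coverage settings
};
```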

&lt;p&gt;I decided to start with three different modules to get good coverage of different testing scenarios. First was &lt;code&gt;toml-config.js&lt;/code&gt;, which loads configuration from TOML files. This was perfect for a first test because it's a simple, pure function with clear inputs and outputs. I wrote 12 test cases covering everything from valid configs to malformed TOML syntax to missing files. I needed to create actual test fixture files instead of mocking the file system. My second target was &lt;code&gt;tree-builder.js&lt;/code&gt;, which has two functions - one that builds a tree structure from file paths and another that renders it as text. This ended up being super fun to test because I could write 30 test cases covering all sorts of edge cases like empty arrays, nested directories, different path separators, dotfiles, and making sure the sorting worked correctly. I even added integration tests that combined both functions. The third module I tested was &lt;code&gt;git-info.js&lt;/code&gt;, which gets Git repository information. This one was tricky because it involves running actual Git commands. I initially tried creating temporary Git repositories in my tests, but that got really complicated really fast with issues about deleted directories and branch names. So I simplified it to just test against the actual project repository, which worked much better. I wrote 13 tests covering things like checking the return value structure, handling non-Git directories, and various edge cases.&lt;/p&gt;
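&lt;p&gt;To show the shape of the tree-builder logic, here is a Python analogue (the actual module is JavaScript, and these function names are illustrative): fold the flat list of paths into a nested dict, then render it as sorted, indented text.&lt;/p&gt;

```python
# Python analogue of the tree-builder idea described above.
# (The real project module is JavaScript; names here are illustrative.)

def build_tree(paths):
    """Fold flat slash-separated paths into a nested dict of dicts."""
    root = {}
    for path in paths:
        node = root
        for part in path.split("/"):
            node = node.setdefault(part, {})
    return root

def render_tree(node, indent=0):
    """Render the nested dict as sorted, indented lines of text."""
    lines = []
    for name in sorted(node):  # deterministic, sorted output
        lines.append("  " * indent + name)
        lines.extend(render_tree(node[name], indent + 1))
    return lines

tree = build_tree(["src/index.js", "src/utils/git-info.js", "README.md"])
print("\n".join(render_tree(tree)))
```

&lt;p&gt;Edge cases like empty input, nested directories, and sort order fall straight out of this structure, which is why that module was so pleasant to test.&lt;/p&gt;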

&lt;p&gt;I discovered that my Git function needed to handle cases where someone runs it from a non-Git directory, and it properly returns null in those cases. The most interesting edge case was dealing with different Git default branch names - some systems use master and newer ones use main, so I had to make my tests flexible enough to handle both. I also found that testing with Unicode characters in author names and special characters in commit messages worked fine. One thing that surprised me was that my TOML parser properly rejected duplicate keys in config files, which I wasn't even sure about until I wrote a test for it.&lt;/p&gt;

&lt;p&gt;I've written unit tests before in other projects, but this was my first time setting up Jest from scratch with ES modules, and honestly, it was more challenging than I expected. The ES modules support in Jest is still marked as "experimental" and it shows - the setup wasn't as smooth as I remembered from working with CommonJS projects. But once I got past the initial configuration hurdles, everything clicked. What really struck me this time was how much more thoughtful I've become about testing. Before, I used to just write tests to make sure functions returned the right values, but now I'm thinking more deeply about edge cases and failure modes. The biggest lesson was about test design. I learned that sometimes testing against real files and real Git repositories is actually better than mocking everything. &lt;/p&gt;

</description>
      <category>tooling</category>
      <category>testing</category>
      <category>cli</category>
      <category>javascript</category>
    </item>
    <item>
      <title>My Hacktoberfest 2025 Journey: From First PR to Four Projects</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 31 Oct 2025 09:00:18 +0000</pubDate>
      <link>https://forem.com/aubreyddd/my-hacktoberfest-2025-journey-from-first-pr-to-four-projects-2c0l</link>
      <guid>https://forem.com/aubreyddd/my-hacktoberfest-2025-journey-from-first-pr-to-four-projects-2c0l</guid>
      <description>&lt;p&gt;October was one of my most productive months so far, I officially completed Hacktoberfest 2025! This was my first real experience contributing to multiple open-source projects, and honestly, it changed the way I think about coding, collaboration, and learning.&lt;/p&gt;

&lt;p&gt;Hacktoberfest 2025 wasn’t just about collecting pull requests.&lt;br&gt;
It was about learning how to read, communicate, and contribute to real-world projects: skills you can’t really get from tutorials alone.&lt;/p&gt;

&lt;p&gt;By the end of the month, I had contributed to four completely different codebases:&lt;br&gt;
    • One taught me backend testing,&lt;br&gt;
    • One taught me frontend motion design,&lt;br&gt;
    • One taught me async logic,&lt;br&gt;
    • And one taught me code clarity.&lt;/p&gt;

&lt;p&gt;I started October nervous about even opening someone else’s repo.&lt;br&gt;
Now I end it confident that I can step into any open-source project, figure things out, and leave it a little better than I found it.&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>learning</category>
      <category>beginners</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Hacktoberfest PR: Cleaning Up Code</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 31 Oct 2025 08:48:44 +0000</pubDate>
      <link>https://forem.com/aubreyddd/hacktoberfest-pr-cleaning-up-code-44d3</link>
      <guid>https://forem.com/aubreyddd/hacktoberfest-pr-cleaning-up-code-44d3</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/hiero-ledger/hiero-sdk-python/issues/377" rel="noopener noreferrer"&gt;Issue&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/hiero-ledger/hiero-sdk-python/pull/413" rel="noopener noreferrer"&gt;PR&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;hiero-sdk-python is a Python SDK for the Hiero blockchain — a toolkit that helps developers interact with smart contracts, tokens, and transactions through Python.&lt;br&gt;
It’s structured a lot like the Hedera SDK, where each type of blockchain transaction (freeze, unfreeze, transfer, etc.) has its own class, all inheriting from a shared base class called Transaction.&lt;/p&gt;

&lt;p&gt;This SDK has a clear focus on clean design and maintainability, which is why this issue existed in the first place. The issue described a small but important cleanup: the TokenUnfreezeTransaction class had some duplicate logic that was already implemented in its parent class. The goal was to remove the extra field and method, and update the internal calls to use the inherited &lt;code&gt;_require_not_frozen()&lt;/code&gt; instead.&lt;/p&gt;

&lt;p&gt;This one was pretty straightforward, but it needed careful attention to detail.&lt;/p&gt;

&lt;p&gt;Here’s what I did:&lt;br&gt;
    1.  Removed the line that declared &lt;code&gt;_is_frozen&lt;/code&gt; in &lt;code&gt;token_unfreeze_transaction.py&lt;/code&gt;.&lt;br&gt;
    2.  Deleted the &lt;code&gt;__require_not_frozen()&lt;/code&gt; method entirely.&lt;br&gt;
    3.  Replaced all instances of &lt;code&gt;self.__require_not_frozen()&lt;/code&gt; with &lt;code&gt;self._require_not_frozen()&lt;/code&gt; to make sure the class now used the inherited version from Transaction.&lt;/p&gt;

&lt;p&gt;Before this PR, I didn’t realize how much method naming conventions matter in large codebases. In Python, a double-underscore method gets “name-mangled”, so it doesn’t behave the same as a single-underscore “protected” method. That subtle difference can lead to bugs if you don’t notice it.&lt;/p&gt;
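&lt;p&gt;To see why that distinction matters, here is a minimal, self-contained sketch. The class and method names echo the SDK, but the bodies are illustrative, not the real hiero-sdk-python code:&lt;/p&gt;

```python
class Transaction:
    def _require_not_frozen(self):
        # Single underscore: a conventional "protected" method, overridable as usual.
        return "Transaction._require_not_frozen"

    def __check(self):
        # Double underscore: stored as _Transaction__check (name-mangled).
        return "Transaction.__check"

    def which_check(self):
        # This call is compiled as self._Transaction__check, so a subclass
        # defining its own __check can never override it.
        return self.__check()


class TokenUnfreezeTransaction(Transaction):
    def __check(self):
        # Mangled to _TokenUnfreezeTransaction__check: a separate attribute,
        # NOT an override of the parent's __check.
        return "TokenUnfreezeTransaction.__check"

    def _require_not_frozen(self):
        # Single-underscore methods override normally.
        return "TokenUnfreezeTransaction._require_not_frozen"


tx = TokenUnfreezeTransaction()
mangled = tx.which_check()            # still reaches the parent's __check
protected = tx._require_not_frozen()  # normal overriding applies
```

&lt;p&gt;Here &lt;code&gt;mangled&lt;/code&gt; still comes from the parent class while &lt;code&gt;protected&lt;/code&gt; comes from the child, which is exactly why switching the SDK from the mangled name to the inherited single-underscore method was the right cleanup.&lt;/p&gt;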

</description>
      <category>hacktoberfest</category>
      <category>python</category>
      <category>blockchain</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Hacktoberfest PR: A Tiny Code Change</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 31 Oct 2025 08:36:32 +0000</pubDate>
      <link>https://forem.com/aubreyddd/hacktoberfest-pr-a-tiny-code-change-5165</link>
      <guid>https://forem.com/aubreyddd/hacktoberfest-pr-a-tiny-code-change-5165</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/erxes/erxes/issues/6423" rel="noopener noreferrer"&gt;issue&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/erxes/erxes/pull/6480" rel="noopener noreferrer"&gt;PR&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;erxes is an open-source customer experience and growth marketing platform — kind of an all-in-one tool for managing conversations, sales pipelines, marketing campaigns, and analytics. It’s a huge project with tons of modules and integrations, so even small fixes matter. This one was a really simple issue compared to my earlier PRs. The problem was a function that didn’t handle async behavior properly.&lt;/p&gt;

&lt;p&gt;The fix was to make the function return a Promise, so callers could wait for its asynchronous work to complete before continuing.&lt;br&gt;
That small change stabilized the flow and made the code cleaner and more predictable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;removeChecklists(contentTypeIds: string[]): Promise&amp;lt;void&amp;gt;;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even a tiny change can make a real difference in a large codebase.&lt;br&gt;
This fix reminded me how easy it is to overlook async behavior — and how important it is to make functions predictable when dealing with network requests, database calls, or delayed responses.&lt;/p&gt;
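&lt;p&gt;The same idea carries over to any async runtime. Here is a hedged Python analogue (the function name mirrors the erxes one, but this is an illustrative asyncio sketch, not the actual TypeScript implementation):&lt;/p&gt;

```python
import asyncio

async def remove_checklists(content_type_ids):
    # Simulates the asynchronous cleanup (e.g. database deletes) and
    # returns an awaitable, so callers can wait for it to finish.
    await asyncio.sleep(0)
    return len(content_type_ids)

async def on_items_deleted():
    # Because remove_checklists returns an awaitable, the caller knows
    # the cleanup is complete before the next step runs.
    removed = await remove_checklists(["task:1", "deal:2"])
    return removed

removed_count = asyncio.run(on_items_deleted())
```

&lt;p&gt;Without the await-able return value, the cleanup would run fire-and-forget and later steps could observe half-deleted state, which is the unpredictability the original TypeScript fix removed.&lt;/p&gt;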

</description>
    </item>
    <item>
      <title>Hacktoberfest Contribution: Implementing a Feature in make-it-oss</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Fri, 31 Oct 2025 08:18:14 +0000</pubDate>
      <link>https://forem.com/aubreyddd/hacktoberfest-contribution-feature-implement-in-make-it-oss-3dco</link>
      <guid>https://forem.com/aubreyddd/hacktoberfest-contribution-feature-implement-in-make-it-oss-3dco</guid>
      <description>&lt;p&gt;This time I jumped into something new. I found this &lt;a href="https://github.com/architech-devs/make-it-oss" rel="noopener noreferrer"&gt;make-it-oss&lt;/a&gt; project. This is a useful tool to help open source maintainers manage the documentation (maybe scaffolding or helping contributions). The &lt;a href="https://github.com/architech-devs/make-it-oss/issues/35" rel="noopener noreferrer"&gt;issue #35&lt;/a&gt; caught my attention because it described a specific details of what they want.&lt;/p&gt;

&lt;p&gt;The issue I worked on was a feature request to improve the user experience. Here’s what the request said (in simple terms):&lt;br&gt;
When the user clicks the arrow button in the GithubInput component and the file list appears, the new content should animate smoothly upward — a slide-up and fade-in motion.&lt;br&gt;
At the same time, the Hero section (the big intro banner) should disappear, leaving the focus on the file list.&lt;br&gt;
A Back button should appear at the top so the user can bring the Hero back if they want to start over. Basically, the goal was to make the whole interaction feel more natural.&lt;/p&gt;

&lt;p&gt;Before jumping into code, I wanted to see how things worked.&lt;br&gt;
The part that handles the submission and file display is the GithubInput component.&lt;br&gt;
It already fetched and displayed files once the repo was processed, but the Landing page, which also contained the Hero section, didn’t know when those files were ready. So the first thing I needed to do was make these two components talk to each other.&lt;br&gt;
That way, when files were ready, the Landing page could hide the Hero and trigger the animation for the results.&lt;/p&gt;

&lt;p&gt;To make all that work, I started by letting the Landing page know when files were ready to show. I added an onFilesReady() callback in the GithubInput component so it could notify the parent once the repo scan finished. When that callback fired, the Landing page set a simple showFiles state to true — which controlled everything: hiding the Hero, showing the file list, and triggering the animation. Instead of creating new components, I reused GithubInput with a small filesOnly prop that hides the input and only renders the file checklist. For the animation itself, I kept it lightweight using Tailwind’s built-in transitions. The Hero fades and slides up out of view, and the file list fades in from below with a 300ms ease-out motion. I also added a Back button that scrolls smoothly to the top and resets the state. To keep it all stable, I avoided unmounting the Hero right away — I just visually hid it, which prevented any layout jumps and made the whole experience feel smooth and focused.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/architech-devs/make-it-oss/pull/38" rel="noopener noreferrer"&gt;PR&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fushwja9pus657f873bsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fushwja9pus657f873bsg.png" alt=" " width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This PR taught me that UI polish is all about state control and simplicity. Smooth animations aren’t about fancy code, they come from clear logic. I learned how to reuse components smartly, how to hide elements without breaking layout flow, and how small visual cues can make a big difference in user experience. It was also a reminder that testing animations on different viewports (especially mobile) is crucial — timing and motion feel completely different on smaller screens.&lt;/p&gt;

</description>
      <category>ux</category>
      <category>hacktoberfest</category>
      <category>showdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Implementing New Feature in Repository Context Packager</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Tue, 28 Oct 2025 00:54:12 +0000</pubDate>
      <link>https://forem.com/aubreyddd/implementing-new-feature-in-repository-context-packager-2n83</link>
      <guid>https://forem.com/aubreyddd/implementing-new-feature-in-repository-context-packager-2n83</guid>
      <description>&lt;p&gt;I extended my &lt;a href="https://github.com/AubreyDDD/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository Context Packager&lt;/a&gt; tool by adding a new feature:&lt;br&gt;
File Filtering by Extension via the &lt;code&gt;--include&lt;/code&gt; flag.&lt;br&gt;
This feature allows users to specify which file types they want to include when generating repository summaries.&lt;br&gt;
For example, users can now do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;repomaster &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt; &lt;span class="s2"&gt;"*.js"&lt;/span&gt;
repomaster &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt; &lt;span class="s2"&gt;"*.js,*.py,*.md"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Many users only want to extract meaningful source code or documentation. By allowing them to filter files by extension, the tool becomes more flexible and efficient when used to generate data for LLM input or AI code assistants. In this feature, the program first parses these patterns from the CLI, then uses the &lt;code&gt;globby&lt;/code&gt; library to quickly find all matching files in the current directory. It supports advanced glob syntax, .gitignore, and cross-platform paths, and it handles most of the complexity automatically. Finally, the tool filters the collected file list so that only files whose absolute paths match the &lt;code&gt;globbySync()&lt;/code&gt; results are included in the final output.  &lt;/p&gt;
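&lt;p&gt;A stripped-down sketch of that parse-then-filter flow (my tool uses the globby library in Node.js; this Python version uses &lt;code&gt;fnmatch&lt;/code&gt; from the standard library just to illustrate the logic, and the helper names are made up for the example):&lt;/p&gt;

```python
from fnmatch import fnmatch

def parse_include(arg):
    # Split the comma-separated --include value into glob patterns.
    return [p.strip() for p in arg.split(",") if p.strip()]

def filter_files(paths, patterns):
    # Keep only paths that match at least one include pattern.
    return [p for p in paths if any(fnmatch(p, pat) for pat in patterns)]

paths = ["index.js", "app.py", "README.md", "logo.png"]
patterns = parse_include("*.js,*.py,*.md")
selected = filter_files(paths, patterns)
```

&lt;p&gt;globby adds a lot on top of this (full glob syntax, .gitignore support, path normalization), which is exactly why delegating to it was simpler than writing custom filtering code.&lt;/p&gt;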

&lt;p&gt;This feature was inspired by the open-source project &lt;a href="https://github.com/yamadashy/repomix" rel="noopener noreferrer"&gt;Repomix&lt;/a&gt;, which also supports &lt;code&gt;--include&lt;/code&gt; and &lt;code&gt;--exclude&lt;/code&gt; options for pattern-based file selection.&lt;br&gt;
Repomix uses the same globby library, so I followed its approach.&lt;/p&gt;

&lt;p&gt;The main difference is that my implementation is intentionally simpler: I focused only on inclusion logic, leaving exclusion and .gitignore merging for future versions.&lt;/p&gt;

&lt;p&gt;After completing this feature, I’ve opened two new issues to continue improving overall flexibility in the tool:&lt;br&gt;
    • &lt;a href="https://github.com/AubreyDDD/Repository-Context-Packager/issues/12" rel="noopener noreferrer"&gt;issue-12&lt;/a&gt;&lt;br&gt;
    • &lt;a href="https://github.com/AubreyDDD/Repository-Context-Packager/issues/13" rel="noopener noreferrer"&gt;issue-13&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These two changes will reduce code complexity, improve performance, and make filtering behavior consistent with modern CLI tools. &lt;/p&gt;

</description>
      <category>cli</category>
      <category>showdev</category>
      <category>tooling</category>
      <category>llm</category>
    </item>
    <item>
      <title>Exploring Repomix and Learning from Its Feature</title>
      <dc:creator>Aubrey D</dc:creator>
      <pubDate>Mon, 27 Oct 2025 23:01:36 +0000</pubDate>
      <link>https://forem.com/aubreyddd/exploring-repomix-and-learning-from-its-feature-733</link>
      <guid>https://forem.com/aubreyddd/exploring-repomix-and-learning-from-its-feature-733</guid>
      <description>&lt;p&gt;This time, I studied an open-source project called Repomix to better understand how professional open-source tools organize their code and implement advanced features. &lt;strong&gt;Repomix&lt;/strong&gt; is a CLI tool that packages repository context for LLMs — similar to my own project &lt;strong&gt;Repository Context Packager&lt;/strong&gt;, but much more feature-complete. To analyze Repomix, I used a tool called &lt;strong&gt;DeepWiki&lt;/strong&gt;. DeepWiki automatically indexes the source code of open-source repositories and presents it in a structured, documentation-style website. It breaks the project down into categories such as Command Line Interface, Configuration System, and Core Packager, and provides direct links to the original GitHub files. This made the process much faster — instead of opening dozens of files manually, I could see how Repomix’s architecture fits together at a high level and then jump straight to the parts I cared about.&lt;/p&gt;

&lt;p&gt;The feature I chose to explore was Include Patterns. By tracing through the DeepWiki documentation and the linked source files, I found that Repomix implements file filtering in three main stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parsing the CLI Arguments
Repomix uses Commander to parse options like &lt;code&gt;--include "*.js,*.py"&lt;/code&gt;. The argument string is split into an array of patterns (for example: &lt;code&gt;["*.js", "*.py"]&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;Merging Configuration Sources
In &lt;a href="https://github.com/yamadashy/repomix/blob/main/src/cli/actions/defaultAction.ts" rel="noopener noreferrer"&gt;defaultAction.ts&lt;/a&gt;, the CLI arguments are merged with configuration file values (repomix.toml or .json) so that defaults are respected and CLI options override them when provided.
&lt;/li&gt;
&lt;li&gt;Applying the Filters with globby
During the repository scanning phase, Repomix uses globby to list all files, passing the include/exclude patterns directly to it.
This allows the user to precisely control which files are read and included in the output. What I liked most was how modular this logic is: instead of writing custom filtering code, Repomix delegates the complex part to globby, which supports glob patterns, .gitignore, and cross-platform path normalization.
&lt;/li&gt;
&lt;/ol&gt;
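&lt;p&gt;The merge step in stage 2 can be sketched in a few lines. This is illustrative Python, not Repomix’s actual TypeScript, and the option names are invented for the example:&lt;/p&gt;

```python
def merge_config(file_config, cli_args):
    # Config-file values are the defaults; CLI options override them
    # only when the user actually provided a value.
    merged = dict(file_config)
    for key, value in cli_args.items():
        if value is not None:
            merged[key] = value
    return merged

file_config = {"include": ["**/*"], "style": "plain"}   # from repomix.toml / .json
cli_args = {"include": ["*.js", "*.py"], "style": None}  # parsed by Commander
merged = merge_config(file_config, cli_args)
```

&lt;p&gt;The point of this precedence order is that project defaults live in the config file, while a one-off CLI flag wins for a single run.&lt;/p&gt;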

&lt;p&gt;From this learning process, I learned that delegating pattern matching to globby, instead of reinventing it, simplifies the entire filtering pipeline. Good naming helps with understanding and searching. And it’s easier to learn a codebase when you have a concrete feature to trace instead of reading everything blindly.&lt;/p&gt;

</description>
      <category>tooling</category>
      <category>learning</category>
      <category>llm</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
