<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alice Chen</title>
    <description>The latest articles on Forem by Alice Chen (@alchen99).</description>
    <link>https://forem.com/alchen99</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1047232%2F6ccbdc38-8463-4e95-8f26-eef89d0c4592.png</url>
      <title>Forem: Alice Chen</title>
      <link>https://forem.com/alchen99</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alchen99"/>
    <language>en</language>
    <item>
      <title>How to get started with GitHub Copilot: Tips for experienced coders</title>
      <dc:creator>Alice Chen</dc:creator>
      <pubDate>Wed, 16 Jul 2025 13:00:00 +0000</pubDate>
      <link>https://forem.com/alchen99/how-to-get-started-with-github-copilot-tips-for-experienced-coders-dai</link>
      <guid>https://forem.com/alchen99/how-to-get-started-with-github-copilot-tips-for-experienced-coders-dai</guid>
      <description>&lt;p&gt;For years I’ve been writing everything from bash scripts to code fixes using my go-to editor: vi / vim. But I also find that an IDE like VSCode is great when I need to see the whole picture. Routine tasks don’t need extra help — but AI tools like GitHub Copilot and Gemini help me when I’m working in a new language or need a quick scaffold to build on. If you’re trying to figure out how to make use of AI in your everyday programming work, here’s where to start.&lt;/p&gt;

&lt;h2&gt;What it’s good for&lt;/h2&gt;

&lt;p&gt;Despite the hype, AI is better at some tasks than others. You’ll have fewer headaches if you play to its strengths. Things I’ve found it useful for:&lt;/p&gt;

&lt;p&gt;Scaffolding a new project in a particular language or quick prototyping of features. Sometimes I need to throw together something in a specific language that I don’t have experience with. I can give the AI a set of tasks and have it generate the needed code.&lt;/p&gt;

&lt;p&gt;Q&amp;amp;A help — the AI can give me a technical description of a piece of code, or show how to construct a specific SQL query based on a partial example.&lt;/p&gt;
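
&lt;p&gt;To make the Q&amp;amp;A use case concrete, here’s a minimal sketch of the kind of completed query the AI might hand back when you give it a partial example. The table, data, and query are invented for illustration, not actual Copilot output:&lt;/p&gt;

```python
import sqlite3

# A tiny in-memory table standing in for real data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (customer TEXT, total REAL);
    INSERT INTO orders VALUES ('acme', 10.0), ('acme', 5.0), ('globex', 7.5);
""")

# The kind of completed query the AI could hand back from a partial
# example like "sum orders per customer, biggest spenders first":
rows = con.execute("""
    SELECT customer, SUM(total) AS spend
    FROM orders
    GROUP BY customer
    ORDER BY spend DESC
""").fetchall()
print(rows)  # [('acme', 15.0), ('globex', 7.5)]
```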

&lt;p&gt;Writing docs and READMEs. I can use it to write inline documentation for a function, or create instructions to build and install the package. Watch: &lt;a href="https://youtu.be/HSP1XrLQxXM" rel="noopener noreferrer"&gt;Prompting GitHub Copilot&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This one is a big time saver: I can use it to generate additional test data from an existing data set. This is great when you’re load-testing and need to go from 3 to 3000.&lt;/p&gt;
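
&lt;p&gt;As a rough sketch of what I mean (the field names and numbers here are made up, not AI output), scaling a seed data set up for a load test can be as simple as cloning records with small variations:&lt;/p&gt;

```python
import random

# Three hand-written seed records.
seed_users = [
    {"name": "Ada", "visits": 12},
    {"name": "Grace", "visits": 7},
    {"name": "Edsger", "visits": 3},
]

def expand(seed, count):
    """Clone seed records with randomized variations until we have count rows."""
    rows = []
    for i in range(count):
        base = seed[i % len(seed)]
        rows.append({
            "name": f"{base['name']}_{i}",                     # keep names unique
            "visits": base["visits"] + random.randint(0, 50),  # vary the numbers
        })
    return rows

load_test_data = expand(seed_users, 3000)
print(len(load_test_data))  # 3000
```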

&lt;h2&gt;How to set things up&lt;/h2&gt;

&lt;p&gt;If you mostly work from a terminal window, the first thing you’ll need is a code editor that integrates with AI tools. VSCode is a popular option because it integrates with basically everything. If you go another route, make sure the tool you pick has an extension for the AI services you intend to use.&lt;/p&gt;

&lt;p&gt;You can use vi key bindings in VSCode too. The GitHub Copilot extension can sometimes get confused by vi macros, though, so keep in mind you may need to disable it temporarily if things are acting up. I find that switching to insert mode before triggering GitHub Copilot also helps. Watch: &lt;a href="https://youtu.be/xYFf7oz6Uic" rel="noopener noreferrer"&gt;Setting up Visual Studio Code with Vi / Vim and GitHub Copilot&lt;/a&gt;&lt;/p&gt;
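
&lt;p&gt;As a starting point, here’s a minimal &lt;code&gt;settings.json&lt;/code&gt; sketch, assuming the VSCodeVim and GitHub Copilot extensions are installed (the language choices are just examples, so adjust to taste):&lt;/p&gt;

```jsonc
{
  // Keep Copilot suggestions on for code, but off for prose files
  // where they tend to get in the way.
  "github.copilot.enable": {
    "*": true,
    "markdown": false,
    "plaintext": false
  },
  // A commonly used VSCodeVim option: share the system clipboard
  // with vi yank/paste.
  "vim.useSystemClipboard": true
}
```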

&lt;p&gt;GitHub Copilot has three modes to switch between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ask&lt;/strong&gt; will let you interact with the chat agent to get information (like the examples above).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent&lt;/strong&gt; has the entire workspace in context, so it may be less effective on bigger projects that haven’t been indexed in some way. This is the mode to use for larger features that span multiple files, or when you don’t know where in the code the change should happen.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edit&lt;/strong&gt; is targeted at selected file(s) or code section(s). You select the context where it can act.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Context is a key issue because it affects token usage, performance, and accuracy. Whatever text or data is in context determines the scope the agent works within when it answers questions, makes edits, or updates files. You’ll get the best results if you limit the context to only what you want the AI to work with. Edit mode gives you the most control; use Agent mode when creating new files or making edits across multiple files at once. Context grows as you work, and past a certain point the AI may start to seem like it’s going in circles instead of doing what you want. If that happens, start a new chat; it will automatically add the current open file to the new context.&lt;/p&gt;

&lt;p&gt;GitHub Copilot can make use of several different AI models, which will also impact how effective it is for your purposes. If you’re using VSCode, there’s a selection box for this to the right of the mode setting in the GitHub Copilot window. Gemini has the largest context window so it’s the best choice for handling multiple large files at once. It also seems to be the best option for Terraform, Kubernetes, and Golang. For other coding tasks, Claude is my pick. Watch: &lt;a href="https://youtu.be/J90oosEIvjc" rel="noopener noreferrer"&gt;GitHub Copilot Settings in Visual Studio Code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don’t assume that newer models will necessarily be more effective. A newer model incorporates additional data and training, which can lead to different, often unexpected, results. Most of the time the fix is a slightly different prompt. If you want to try a newer model, test it out for a while and learn its quirks before switching over to it for everything. And if a particular model is working well for you, stick with it until you’ve found prompts that work just as well with the newer one.&lt;/p&gt;

&lt;p&gt;Finally, you’ll need a ‘rules for AI’ file to feed the agent to ensure it interacts with you in a specific way and uses the specific tools and package versions you need. You can also give it guidelines for the output you expect, like “create clear and readable code”, along with your standard formatting requirements, like ending all lines with a semicolon. Keep adding to it when the AI does something you don’t want, like using &lt;code&gt;async await&lt;/code&gt; instead of &lt;code&gt;.then&lt;/code&gt; or vice versa. Watch: &lt;a href="https://youtu.be/hnDFj7u3HWs" rel="noopener noreferrer"&gt;GitHub Copilot Custom Instructions&lt;/a&gt;&lt;/p&gt;
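
&lt;p&gt;For GitHub Copilot, these rules can live in a &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt; file in your repository. A sketch of what one might look like (the specific rules below are just examples, not a recommendation):&lt;/p&gt;

```markdown
# Copilot instructions (example rules only)

- Create clear and readable code; prefer descriptive names over clever tricks.
- End all statements with a semicolon.
- Use async/await rather than .then() chains.
- Use the package versions already pinned in package.json; do not add new dependencies.
```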

&lt;p&gt;To wrap up: using AI for coding may feel a little fiddly at first, but it can be helpful as part of your workflow. If you get the basics down, you’ll be able to experiment and see what works best for you. Love it or hate it, this is becoming the industry standard for software engineering. I hope these tips help you move past the pain points so you can get work done.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>githubcopilot</category>
      <category>vscode</category>
      <category>tooling</category>
    </item>
    <item>
      <title>GitHub Universe, GenAI, and the Current Future</title>
      <dc:creator>Alice Chen</dc:creator>
      <pubDate>Mon, 15 Jan 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/opencontextinc/github-universe-genai-and-the-current-future-4kl4</link>
      <guid>https://forem.com/opencontextinc/github-universe-genai-and-the-current-future-4kl4</guid>
      <description>&lt;p&gt;I went to GitHub Universe in October, and I was surprised by how heavily GitHub and Microsoft are leaning into AI. I expected it to be a feature, but they’re weaving it into their offerings at every level. In fact, they just announced that Microsoft keyboards are going to have a physical Copilot button! At Universe, Github said that this was the year they refounded themselves, based on Copilot.&lt;/p&gt;

&lt;p&gt;AI is &lt;em&gt;not the next thing coming&lt;/em&gt; in that environment, and it’s not a track, it’s part of every track and every product. And given how much GitHub penetrates the market, there’s no hiding from it. And there are a lot of good parts. You can ask Copilot to help you generate a regex for very complicated patterns. We used to have to look them up and experiment, but now there’s enough of a base of input that Copilot can help you generate a tuned regex for exactly what you need. &lt;/p&gt;
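
&lt;p&gt;For example, here’s the kind of tuned pattern you might ask for. This regex and the version strings are my own illustration, not Copilot output:&lt;/p&gt;

```python
import re

# Matching semantic version strings like "1.2.3-beta.1+build.5" --
# the sort of pattern we used to look up and experiment with by hand.
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"          # major.minor.patch
    r"(?:-((?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*)"      # optional pre-release
    r"(?:\.(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*))*))?"
    r"(?:\+([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?$"        # optional build metadata
)

print(bool(SEMVER.match("1.2.3-beta.1+build.5")))  # True
print(bool(SEMVER.match("01.2.3")))                # False (leading zero)
```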

&lt;p&gt;You can also use Copilot to help you write more detailed pull request descriptions. Programmers are sometimes lazy about that, so if we can make it easier to write, we end up with something easier to read and review, also. That might be especially useful in an open-source or community-centered codebase, where there is not a manager who can enforce pull request standards with authority.&lt;/p&gt;

&lt;p&gt;Copilot Enterprise can also be trained just on your internal code. If you have a relatively clean codebase, you can stay aligned with it, instead of importing possibly-dubious practices from the open-source code Copilot is trained on -- practices that might not work for more secure or enterprise-focused development. If your codebase has Fortran or COBOL, you want to respect that existing code and the structures built around it at your organization. Your standards become self-reinforcing.&lt;/p&gt;

&lt;h2&gt;We say AI, we mean LLM&lt;/h2&gt;

&lt;p&gt;GitHub kept saying AI, and I think it’s important that we’re really clear on what we’re talking about. What most people mean by “AI” is actually something like ChatGPT, which is a Large Language Model, and they’re not quite the same. And neither of them is the same as an Artificial General Intelligence, which is the sci-fi, &lt;em&gt;I, Robot&lt;/em&gt; kind of thing. A conversation where the artificial intelligence actually understands you, instead of just statistically modeling the right response, is a long way from where we are.&lt;/p&gt;

&lt;p&gt;We are not there yet. &lt;/p&gt;

&lt;p&gt;LLMs are very powerful, but they’re not smart. They don’t understand. They pattern match. I sometimes say AI, because that’s the language we’re all using, but I don’t think we have any AI yet, in the magical sense.&lt;/p&gt;

&lt;h2&gt;We still need experts&lt;/h2&gt;

&lt;p&gt;Copilot isn’t full automation. It’s an assistant, not a replacement. You still need to be a subject matter expert in the business problem you’re trying to solve, because you need to be able to spot wrong or damaging solutions. AI/LLM is statistics and word association. It can detect patterns and match them, but if the pattern is not correct, then you have a problem. And you can’t tell if the pattern is incorrect if you don’t know what it should look like.&lt;/p&gt;

&lt;p&gt;You still need to know how to fix the pattern or how to solve it, more than just fixing the parts. For someone like me, who’s a subject matter expert in Terraform and shell scripting, Copilot is useful for me to quickly generate crud code, because I can ask for quick code and when I look at it, I can see the parts that work or the parts that I don’t like. And so I can take it as a starting point and fix it. I generally don’t bother to try to fix the LLM unless the answer is totally off and I need a better answer, because that’s a distraction from what I’m trying to do, I usually just want a starting point, or template.&lt;/p&gt;

&lt;p&gt;But I worry about newer developers, who may rely on LLMs to generate their code -- how will they be at debugging that code, when they don’t always know which lines cause which actions? What if ChatGPT or OpenAI shuts down, like lots of new apps shut down? The LLM hides the process, and you don’t know what the process is or how to repeat it because you’ve never done it by hand.&lt;/p&gt;

&lt;p&gt;Think of it like a cashier. Their cash register calculates the sales tax, knows what it is in that exact municipality, and knows whether or not clothing is taxed. If the power goes out, there’s no way for them to calculate the sales tax correctly. Someone else set up that calculation, and they may not even know what the tax rate is. Newer devs may end up in the same place, where they are perfectly competent, until their tool is unavailable.&lt;/p&gt;

&lt;p&gt;I don’t really 100% trust technology will always be working, so I like to know how to do things myself, even if I don’t do it most of the time. I at least want to be able to look something up!&lt;/p&gt;

&lt;h2&gt;The basics of automation&lt;/h2&gt;

&lt;p&gt;Before you can automate a process, you need to know how the process is done manually. Then you can automate it. Then you can refine it or make it more efficient. You can’t do all those things at the same time; it’s not going to work.&lt;/p&gt;

&lt;p&gt;In the same way, we can’t trust LLMs/AI through our whole stack without understanding what we’re automating.&lt;/p&gt;

&lt;p&gt;There are things that automation and AI will be great at. If AI does a vulnerability scan of your system and sees things that should be fixed, and the remediation path is pretty obvious, we could just agree to fix them automatically. Or you can choose not to apply those fixes, because maybe you know better than GitHub what you need to keep stable. If you have proper testing in place, it may not be a bad idea to just have AI apply the fix.&lt;/p&gt;

&lt;p&gt;But you really do need to review the code before applying a fix automatically. I don’t think anyone will deliberately introduce anything harmful, but a fix might interact strangely with your code base if you don’t understand what it’s doing.&lt;/p&gt;

&lt;h2&gt;Third parties and GitHub&lt;/h2&gt;

&lt;p&gt;GitHub has a lot of actions and other add-ons from third-party companies, and I think if the main thing that action does is something Copilot can do, those companies need to be thinking about their business plan and long-term sustainability. Microsoft has so much software it can draw on to add Copilot value. Word? Copilot now. Excel? Copilot! They already have the market base to do that. I don’t think Microsoft/GitHub will get into really complex things, but if they already have it, they might as well roll it out with Copilot.&lt;/p&gt;

&lt;h1&gt;How OpenContext fits in&lt;/h1&gt;

&lt;p&gt;When I came back from GitHub Universe, I started thinking about what OpenContext needed to do to respond to this big move from GitHub. &lt;/p&gt;

&lt;p&gt;I think we need to start investigating more about what things we can detect about generative AI, or the use of generative AI, or the way models are used. The point of OpenContext is not to track all the things, but track the relationships of things that matter.&lt;br&gt;
For example, if the majority of your codebase is Python, but you’re also using generative AI because you have a call out to OpenAI or HuggingFace, maybe we don’t need to do much about it, but I think highlighting it, or showing it to the user would be useful, so that everyone knows.&lt;/p&gt;

&lt;h2&gt;Audits are coming&lt;/h2&gt;

&lt;p&gt;Right now we are having a lot of copyright fights about what these LLMs can scrape. But audits are coming to make sure we’re in compliance with whatever rules we settle on. If you are regulated or responsible, you need to be able to point at what model you’re using, what data you’re feeding it, where that data came from, and whether the data is being sent back, either raw or as results. It’s not going to be acceptable to ship a commercial product without at least some provenance, and the more sensitive the product is, the more precisely you’re going to need to pinpoint what is happening with the LLM.&lt;/p&gt;

&lt;p&gt;OpenContext can’t track all of that. Right now, no one can track all of it due to the complexity. But we think we could at least &lt;em&gt;help&lt;/em&gt; track some of the interaction points and show them. This could help you create guardrails.&lt;/p&gt;

&lt;p&gt;I didn’t think OpenContext was going to have to work on that this year, and now I feel like it’s going to get urgent pretty quickly.&lt;/p&gt;

&lt;p&gt;It’s not always easy to see if a human, bot, or automated process created content or data. Being able to track that means you can apply different kinds of guardrails and standards to it. Think of it like automated testing: it can’t and won’t catch everything, but it will catch a lot of things before you break production. That’s why we require automated testing in most code deployment pipelines. If we’re using AI to generate code many times faster, we need to upgrade the guardrails for that pipeline. It’s obvious to us that people can’t, or shouldn’t, approve their own PRs for production code, so why would we let AIs do it?&lt;/p&gt;

&lt;p&gt;Also, our data scientists are going to be moving in ways they’re not used to. They used to be able to treat production models as static, but now if you’re feeding user data back into the model, and implementing it, data scientists are going to have to get used to being on call. I’m not sure they’re ready for that!&lt;/p&gt;

&lt;h1&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;It’s time to stop talking about what will happen when LLMs show up in our workplaces and codebases. They’re already here. They’ve been here for a while - every time a sentence or a command line autocompletes, that’s the technology that underlies it. What’s going to become important is using the technology in a way that’s safe, auditable, and trustworthy. I think OpenContext is going to have a lot of opportunities to help people map and understand their internal systems, and how those systems interact with LLMs and external systems.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>genai</category>
    </item>
    <item>
      <title>Who is the ideal OpenContext user?</title>
      <dc:creator>Alice Chen</dc:creator>
      <pubDate>Thu, 18 May 2023 12:00:00 +0000</pubDate>
      <link>https://forem.com/opencontextinc/who-is-the-ideal-opencontext-user-37n7</link>
      <guid>https://forem.com/opencontextinc/who-is-the-ideal-opencontext-user-37n7</guid>
      <description>&lt;p&gt;What happens when you leave your tech stack unmapped? Time-bombs…&lt;/p&gt;




&lt;p&gt;I think our ideal users are engineering teams. It’s not just one person getting value, it’s everyone on the team. For example, if you’re a new hire, OpenContext will help because you would be able to understand from your team’s point of view, what your team does, and what different code repositories they work in. And if you’re working in a monorepo, you’d see the different code paths that you're responsible for. It’s sometimes hard for people to know that! You learn it over time, but it’s seldom explicitly taught when you join a team.&lt;/p&gt;

&lt;p&gt;OpenContext makes it easier to onboard a new person on a team, but it also makes things clearer for existing members. It’s rare that someone works on a single team for their entire time at a company. OpenContext also makes it easier to see when the things that you use are moving or changing, even if you’re not directly told by the team that’s changing them. You also really benefit from being able to ask about context when you’re on call or troubleshooting a problem. You don’t have to ask someone else what the codepath or structure is, you can use self-service to see what is affected.&lt;/p&gt;

&lt;p&gt;If you can free up brain space or cognitive capacity for people, that brain power can be used to go do something much more innovative and productive than trying to deduce who owns a piece of code.&lt;/p&gt;

&lt;h2&gt;What can I use OpenContext for?&lt;/h2&gt;

&lt;p&gt;The main focus of OpenContext is to help people see the whole picture and get an understanding of all the interdependencies of things. Whether we like it or not, our work rubs up against and affects other people’s work. It’s easy to think you’re just working on this one piece of code or single feature, and it won’t affect anything beyond itself. It’s easy to think that, but it’s wrong.&lt;/p&gt;

&lt;p&gt;Inevitably that piece of code does affect other people, either upstream or downstream. That’s true whether or not you know it at the time. It’s better if you make changes while being aware of what your changes may affect. That kind of awareness and notification is a given for a big upgrade - we build in notification and alerts for a version-level change. But sometimes, a small change for you can be a big change for someone else. That’s when all kinds of interesting things happen. We’d like to think that we did enough end-to-end and performance testing, but there’s often something we miss because of time constraints. If we knew ahead of time who our code was affecting, we might be able to do better planning.&lt;/p&gt;

&lt;p&gt;When we first started OpenContext, we were thinking that people would use it to do modeling and get a broader picture of where they were, but as people really started using it, we realized that auto-discovery was a huge benefit. Auto-discovery gives anyone in the organization the ability to see how their work connects with the rest of the system. That’s especially important when all sorts of individual roles like release manager or QA are getting squished into the developer role. We don’t have people or time dedicated to creating and maintaining a system for explicit system mapping - it needs to be built-in. Every company’s tech stack is different enough that it needs to be mapped, but most companies don’t have the resources to do it as a separate task.&lt;/p&gt;

&lt;p&gt;People will only manually map things that they remember, and sadly, most of us don’t remember every tiny detail of our systems. Manual mapping is useful, but not sufficient. Auto-discovery starts showing you dependencies you thought you’d gotten rid of, services that have been running fine for years so no one actually owns them anymore -- that type of thing.&lt;/p&gt;

&lt;h2&gt;What happens when you leave your tech stack unmapped? Time bombs…&lt;/h2&gt;

&lt;p&gt;If a service is mature and stable, we tend to not think about it, but then if there’s a hardware failure or something happens to it, you can have an outage of epic proportions because no one knows anything about it or its connections. You lack the context for it, and if you don’t have auto-discovery, it’s really hard to find out what depends on a service that is broken.&lt;/p&gt;

&lt;p&gt;Orphaned services may not have a subject matter expert in the organization, and the person who got paged for it needs to be able to find as much information as possible on their own. Is there a runbook? Is the runbook up-to-date? Is it possible you’re intentionally or unintentionally running multiple versions of the same service? That context should be available when you need it, without having to escalate to someone else. Everyone on call should be able to find out basic information about your repository, who owns it, are there past pages for it, is it in the CI/CD pipeline? Most importantly, you should be able to find out who owns it or is likely to know something about it.&lt;/p&gt;

&lt;p&gt;GitHub has most of this information. We can see who has contributed to a code repository, and what folders or sub-paths in your repository are getting changed at the same time. That all helps with self-service and faster triage. If I get an alert about a service, I can see exactly where it lives and what it’s connected to. The bigger your codebase is, the more important it is to have pointers to the exact things you need.&lt;/p&gt;

</description>
      <category>sre</category>
      <category>systems</category>
      <category>startup</category>
    </item>
    <item>
      <title>Autodiscovery in a time of layoffs</title>
      <dc:creator>Alice Chen</dc:creator>
      <pubDate>Wed, 17 May 2023 17:58:45 +0000</pubDate>
      <link>https://forem.com/opencontextinc/autodiscovery-in-a-time-of-layoffs-h05</link>
      <guid>https://forem.com/opencontextinc/autodiscovery-in-a-time-of-layoffs-h05</guid>
      <description>&lt;p&gt;Software is a complex sociotechnical system. It’s not just code, or servers, or people, it’s all of those things and how they relate.&lt;/p&gt;




&lt;p&gt;Layoffs are a really hard time for both employees and companies. We always feel for the people who are out of work --  it’s scary. We even feel bad for management who cannot get around the need to lay people off, and have to select their teammates, and sometimes themselves, as the people to lay off. We also should spare a thought for the people who stay behind, who have to keep everything going with fewer resources and fewer teammates.&lt;/p&gt;

&lt;p&gt;Software is a complex sociotechnical system. It’s not just code, or servers, or people, it’s all of those things and how they relate to each other. Layoffs are a significant disruption to that system, and we need to figure out how to respond to that in both human and technical ways.&lt;/p&gt;

&lt;h2&gt;Who owns this code?&lt;/h2&gt;

&lt;p&gt;You can use OpenContext to figure out who owns what, once the dust settles. What services, what code paths, what infrastructure rolled up to someone, and how do we need to rebalance that load? If you lose half your team, the half that remains can’t automatically pick up everything at once, they need time to transition and reprioritize.&lt;/p&gt;

&lt;p&gt;Managers and technical leads need to be able to assess their code structure in light of the remaining developers and team members, and redistribute the work fairly and logically. What things should get dropped due to lack of resources?&lt;/p&gt;

&lt;p&gt;It needs to be easier for teams to see all the interdependencies between services and teams, so that they can make better decisions about how to divvy up responsibilities mindfully.&lt;/p&gt;

&lt;h2&gt;Managing change in difficult times&lt;/h2&gt;

&lt;p&gt;Having a good map hopefully helps teams understand how to keep things running after a re-organization. Onboarding is easier if you can see the context of the new service you’re managing. Theoretically, people have time to train for their new responsibilities as part of the transition, but that doesn’t always work out in practice.&lt;/p&gt;

&lt;p&gt;We need to handle continuity in a way that doesn’t entirely depend on humans.&lt;/p&gt;

&lt;p&gt;If we make it easier to self-serve and self-discover context with autodiscovery, we are empowering everyone dealing with a new system, whether it’s because they are taking over a project, switching teams, or starting a new position.&lt;/p&gt;

&lt;h2&gt;Avoiding surprises&lt;/h2&gt;

&lt;p&gt;Code autodiscovery is a way to avoid surprises. Even people familiar with the codebase need to check that their dependencies have stayed stable. Sometimes other teams move things that you use, and it’s great to be able to check and find the things you depend on.&lt;/p&gt;

&lt;p&gt;Autodiscovery may also surface things that you have either forgotten about or didn't realize were in there. As time goes on projects change and evolve, and sometimes things are left behind. Priorities change. The entire organization may gradually forget about some parts of the code base. You need mapping to be able to tell if they could just be deleted, or if they’re important to some other part of the system.&lt;/p&gt;

&lt;p&gt;How often have you pushed a new release, and after deployment, something obscure breaks? It happens because we can’t keep track of all the dependencies. But if you had known your dependencies during product development, you could have planned with the team in charge of that dependency, and collaborated on any upgrades or updates or security changes that your new code needed.&lt;/p&gt;

&lt;h2&gt;Reducing the system impact&lt;/h2&gt;

&lt;p&gt;It’s never going to be easy to lose teammates, or to have to take over work that was abruptly dropped. There is a lot of emotional work that goes into carrying on the work of the company. One of the things that we hope is that creating tools to help map your system makes the technical side of that work easier.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>sociotechnical</category>
      <category>productivity</category>
    </item>
    <item>
      <title>After 20 years in tech, here’s what I wish I had all along (and how I’m building it)</title>
      <dc:creator>Alice Chen</dc:creator>
      <pubDate>Tue, 16 May 2023 22:49:43 +0000</pubDate>
      <link>https://forem.com/opencontextinc/after-20-years-in-tech-heres-what-i-wish-i-had-all-along-and-how-im-building-it-4f1f</link>
      <guid>https://forem.com/opencontextinc/after-20-years-in-tech-heres-what-i-wish-i-had-all-along-and-how-im-building-it-4f1f</guid>
      <description>&lt;p&gt;Why did I create OpenContext? There are many reasons outlined here, but mostly I hate wasting my time.&lt;/p&gt;




&lt;p&gt;Hi, I’m Alice. Co-founder and CTO at OpenContext here to talk about why I’m passionate about building a tool like OpenContext for engineers and their teams and organizations. I’m a systems engineer. I went into QA, build engineering, operations, DevOps, and solutions architecture. I got mixed up in all kinds of interesting things. All that different experience meant I had a lot different perspectives.&lt;/p&gt;

&lt;p&gt;I realized that in all those jobs, I spent a lot of time telling people where things were, or who to talk to. Who made that breaking change? Who’s responsible for this module? When you’re working with remote teams or teams across the world, it’s even harder because of time zones and distribution. It’s just so much wasted time trying to find out who owns something.&lt;/p&gt;

&lt;p&gt;I also think there are just so many proliferating tools that we use. More API tools, more cloud tools, more CI tools, and the open source versions, it’s so much. And in a way it makes sense, because these tools are specialized, or are doing things a new way. But it gets hard to track all these things.&lt;/p&gt;

&lt;p&gt;Back when I was at HP, it was a very simple setup, there was just the one tool, but even then, each team might use the same tool in a slightly different way. So if you moved between teams, you still had to relearn it, you couldn’t swap teams seamlessly. It seems like it should be easy, but it almost never is.&lt;/p&gt;

&lt;p&gt;It seemed to me like there had to be a better way to understand context.&lt;/p&gt;

&lt;h2&gt;What's the problem OpenContext is solving?&lt;/h2&gt;

&lt;p&gt;People don’t have systems context. For example, development doesn’t share language or focus with operations or security. Context in this case is what people are doing, and metadata helps identify that work.&lt;/p&gt;

&lt;h3&gt;A good example of this would be GitHub Actions&lt;/h3&gt;

&lt;p&gt;Typically, security or operations don't have any insight into the CI process or the GitHub Actions runs happening in development. They don’t need the details, but if they know that seven builds have previously failed, that information might be useful and give them context for events that affect them. It’s not that they need to know the workflow; having insight into who is doing what, and which tools and teams are involved, makes later troubleshooting easier. It’s not easy to keep track of the interdependencies between tools, people, code, customization, delivery, and other variables.&lt;/p&gt;

&lt;p&gt;Our processes are applications. Everything is getting more and more complex as time goes on, which means those dependencies get even more complicated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rethinking the water cooler for remote work
&lt;/h3&gt;

&lt;p&gt;Work from home is great, and I like it a lot because I can concentrate more, but one of the things I miss about office life is picking up context casually. You’re walking around, talking to other teams, getting coffee, and you chat with someone and find out what’s happening on their team. If your job is a lot more cross-functional than most people’s, that kind of awareness is really valuable: being aware that Team A is running into issues with something, or that Team B is trying to create the same kind of service and maybe the two should collaborate.&lt;/p&gt;

&lt;p&gt;I used to be part of a combined cloud operations and security team, and I learned a lot about security by sitting next to them and hearing what they complained about. I came to realize most of their time is actually not spent finding the problems; those get found pretty quickly. It’s spent finding the people, the tools, and the code that are affected by the security problem.&lt;/p&gt;

&lt;p&gt;They’re trying to express these dependencies as massive spreadsheets, and then contacting the general operations or developer teams to find out who is going to fix the problem. But if it’s not a direct request, people tend to assume it’s someone else’s problem. No one gets assigned to it, because the security team can’t tell who owns that specific code.&lt;/p&gt;

&lt;p&gt;There are so many inefficiencies in the whole software development lifecycle, and we keep trying to collapse everything down and make it part of a developer’s job. As if, miraculously, all the problems will be fixed if we push the responsibility onto developers. We’re saying developers have to worry about finance, security, operations, tools, APIs, and coding. It’s a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  How context helps
&lt;/h2&gt;

&lt;p&gt;OpenContext aims to add context, or metadata, to people’s questions about their system, and to show what their system actually is versus what they imagine it to be. Dependency maps are important, but they frequently don’t have everything; part of OpenContext is discovering those dependencies and relationships. We want to show the connections between the person, the code, the repository, and the deployed artifacts, and give you a high-level view of the whole system.&lt;/p&gt;
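&lt;p&gt;To make that idea concrete, here’s a minimal sketch in Python: treat people, code, repositories, and deployed artifacts as nodes in a graph with labeled relationships, so ownership questions become lookups instead of conversations. The entity names and relations are hypothetical, purely for illustration; this is not OpenContext’s actual data model.&lt;/p&gt;

```python
# Hypothetical sketch: model system context as a graph of entities
# (people, code, repos, deployed artifacts) and labeled relationships.
# Illustrative only -- this is not OpenContext's actual data model.
from collections import defaultdict

class ContextGraph:
    def __init__(self):
        # relation name mapped to a set of (source, target) pairs
        self.edges = defaultdict(set)

    def relate(self, src, relation, dst):
        self.edges[relation].add((src, dst))

    def who(self, relation, dst):
        """Answer 'who has this relation to the target?' by lookup."""
        return sorted(src for src, d in self.edges[relation] if d == dst)

graph = ContextGraph()
graph.relate("alice", "owns", "payments-service")
graph.relate("payments-service", "built_from", "repo/payments")
graph.relate("bob", "owns", "repo/payments")

# A security engineer who finds a bug in repo/payments can self-serve
# the ownership question instead of asking around:
print(graph.who("owns", "repo/payments"))  # prints ['bob']
```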

&lt;p&gt;Part of the problem with asking developers to do and understand so many parts of our software is that everyone’s understanding of the system is limited to what they know. It’s very hard to see everything when you are in the middle of one part of it.&lt;/p&gt;

&lt;p&gt;Security knows security, but they don’t always know how their security recommendations or rules affect operations or development. This isn’t malice or laziness; everyone is just coming from a different knowledge base. Security may not know how code looks once it’s deployed to the cloud. It’s easier to prioritize fixing a bug if they understand that it’s in a front-end application, the bug is SQL injection, and there’s a SQL database they need to protect.&lt;/p&gt;

&lt;p&gt;Without having the context of how operations set up the database or how all the pieces work together, how will they make the right decision about fix priority? And operations experiences these questions as an interruption because they’re trying to handle other emergencies that security isn’t aware of.&lt;/p&gt;

&lt;p&gt;We just want help so we can self-serve, and we want the content to be freely accessible across the company. We want to be able to answer questions like “is this expected behavior?” by ourselves, without having to cross team boundaries. Or at least have enough information to ask better questions.&lt;/p&gt;

&lt;p&gt;We created OpenContext as a way to help teams get high-level understanding of their systems, and give them the opportunity to find answers for themselves. If having context helps people make better choices in their work, then we’re doing what we hoped for.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>career</category>
      <category>development</category>
    </item>
  </channel>
</rss>
