<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Karsens</title>
    <description>The latest articles on Forem by Karsens (@karsens).</description>
    <link>https://forem.com/karsens</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F46305%2F68f22e61-d0aa-4671-acd9-f73789a7d304.jpg</url>
      <title>Forem: Karsens</title>
      <link>https://forem.com/karsens</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/karsens"/>
    <language>en</language>
    <item>
      <title>Quickly ship your changes to git? Use 'ship'</title>
      <dc:creator>Karsens</dc:creator>
      <pubDate>Wed, 29 Dec 2021 10:25:31 +0000</pubDate>
      <link>https://forem.com/karsens/quickly-ship-your-changes-to-git-use-ship-1i4c</link>
      <guid>https://forem.com/karsens/quickly-ship-your-changes-to-git-use-ship-1i4c</guid>
      <description>&lt;p&gt;Ship is a small cli command I wrote that automatically adds and commits all your changes and pushes them to your current branch. It then lets you know if it succeeded or failed.&lt;/p&gt;

&lt;p&gt;To add ship to your CLI, copy and paste this into your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo "ship () { BRANCH=$(git branch --show-current); git add . &amp;amp;&amp;amp; git commit -m \"${1:-Improvements}\" &amp;amp;&amp;amp; git push -u origin \"$BRANCH\" &amp;amp;&amp;amp; say you shipped it || say something went wrong }" &amp;gt;&amp;gt; ~/.zshrc &amp;amp;&amp;amp; source ~&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Go ahead and try it! Hope you like it ;)&lt;/p&gt;

&lt;p&gt;P.S. Only tested on macOS; the &lt;code&gt;say&lt;/code&gt; command is macOS-specific.&lt;/p&gt;

</description>
      <category>bash</category>
    </item>
    <item>
      <title>Scaling to one million RPS</title>
      <dc:creator>Karsens</dc:creator>
      <pubDate>Fri, 11 Jan 2019 20:49:31 +0000</pubDate>
      <link>https://forem.com/karsens/scaling-to-one-million-rps-19il</link>
      <guid>https://forem.com/karsens/scaling-to-one-million-rps-19il</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fmcs1cny71liy2bx2306p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fmcs1cny71liy2bx2306p.jpg" alt="scale"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://www.forbes.com/sites/reuvencohen/2013/11/26/google-shows-how-to-scale-apps-from-zero-to-one-million-requests-per-second-for-10/#7de604137ad9" rel="noopener noreferrer"&gt;Forbes Article&lt;/a&gt; and its &lt;a href="https://cloudplatform.googleblog.com/2013/11/compute-engine-load-balancing-hits-1-million-requests-per-second.html" rel="noopener noreferrer"&gt;Original Post&lt;/a&gt; describe how a guy managed to hit 1Million RPS for a while with one load balancer and 10$. This post also contains a gist to reproduce the experiments, which is pretty cool. This &lt;a href="https://www.facebook.com/notes/facebook-engineering/scaling-facebook-to-500-million-users-and-beyond/409881258919/" rel="noopener noreferrer"&gt;Facebook blog&lt;/a&gt; shows how they scaled up to 500 million users in 2010. In &lt;a href="http://highscalability.com/blog/2010/11/4/facebook-at-13-million-queries-per-second-recommends-minimiz.html" rel="noopener noreferrer"&gt;this article&lt;/a&gt;, which is also from 2010, Facebook gives an insight about their statistics and scaling strategies. Stories like these excite me to think about future scaling of &lt;a href="https://communify.cc/" rel="noopener noreferrer"&gt;Communify&lt;/a&gt;. With 500 Million users, Facebook had 13M requests per second. Now, with 4 billion users, they probably handle ±100M requests per second. I wonder how proper scaling could end up there.&lt;/p&gt;

&lt;h1&gt;
  
  
  My Geo Scaling Plan: scaling without bottlenecks
&lt;/h1&gt;

&lt;p&gt;Since we have global communities now, it's no longer possible to serve all of one user's data from a single server. Therefore, the load balancer needs an overview of which communities are hosted where, so it can send each query to the right server based on the request. &lt;/p&gt;

&lt;p&gt;If this weren't the case, it would be easy, as described in &lt;a href="https://medium.com/leckr-react-native-graphql-apollo-tutorials/the-benefits-and-drawbacks-of-decentralised-geo-scaling-thinking-of-2019-and-beyond-infinite-9faa5ad465c8" rel="noopener noreferrer"&gt;my previous article about geo-scaling&lt;/a&gt;. But now that we have global communities too, we need a few extra tricks. This is what I came up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PULL&lt;/strong&gt; Automatically pull my GitHub repo on every server whenever a new commit is pushed: pull, restart pm2, and automatically apply any schema changes to the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SPLIT&lt;/strong&gt; Automatically split a server into two whenever load is too high. Shard on community: draw a line on a map (cluster by location) so that the total membership is split in two based on location. There is an edge case in which a user is a member of a community in both clusters; in that case, put the user on both servers. This is no big deal. The new server should then let the load balancer know it exists, and expose its communities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MERGE&lt;/strong&gt; Automatically merge a server with another server whenever load is too low. Because we merge servers, we can't use incremental IDs; we need a UUID or GUID to keep table rows unique. Since the sharding was done on communities, we can simply merge all rows in all tables together. But because users may be in both clusters, there can be duplicate users. If this happens, keep the copy that was updated last, because that's where the user was active most recently. The deleted server should let the load balancer know it's gone, so it gets removed from the global server list. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BALANCE&lt;/strong&gt; Put a load balancer in front of all of it that directs every request to the right server, based on community id. It knows which server has which communities because every server exposes its communities via &lt;code&gt;communities(){id}&lt;/code&gt;, and since we know every server, we can look up all community ids from all servers every minute or so. This is pretty light. However, it can also be done the other way around: when a community gets created or removed, or when a server splits or merges, the load balancer gets notified with the new composition. That would be instant and much cheaper.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
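
&lt;p&gt;The BALANCE step can be sketched as a small routing table that maps community ids to servers and gets refreshed by announcements. This is only an illustrative sketch with made-up names, not code from any real load balancer:&lt;/p&gt;

```typescript
// Hypothetical BALANCE sketch: community id to server routing table.
// Servers call announce() after a SPLIT or MERGE (or on a timer).
type ServerId = string;

class CommunityRouter {
  private table: { [communityId: string]: ServerId } = {};

  // A server announces (or re-announces) the communities it hosts.
  announce(server: ServerId, communityIds: string[]): void {
    for (const id of communityIds) {
      this.table[id] = server;
    }
  }

  // Route a request to the server hosting the target community.
  route(communityId: string): ServerId | undefined {
    return this.table[communityId];
  }
}
```

&lt;p&gt;A later announcement simply overwrites earlier entries, which is exactly what a SPLIT needs: the new server claims its share of the communities.&lt;/p&gt;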

&lt;h1&gt;
  
  
  Double Models
&lt;/h1&gt;

&lt;p&gt;Models that can be sharded on community, and thus only need to live on one server with a single copy: Posts, Subs, Roles, Communities, Channels. &lt;/p&gt;

&lt;p&gt;Models with some problems: Users, CommunitySubs, Locations&lt;/p&gt;

&lt;p&gt;A user can be subbed to two communities that live on different servers at the same time. There are a few possibilities to deal with this: &lt;/p&gt;

&lt;p&gt;1) &lt;strong&gt;Copy/paste&lt;/strong&gt; When a user switches their current community to one on a different server, copy that user, together with all of their CommunitySubs and Locations, to the new server. That server then becomes the single server the user gets their data from. CommunitySubs get notification increments via mutation calls from the other servers where the user still exists. When a user changes community, all servers that know the user should be notified, so that if a server knows about a user, it also knows which community that user is in... This can get heavy, but it doesn't happen that often. A side effect of this strategy is that users, CommunitySubs and Locations can get outdated on servers where the user isn't active. However, since all servers that need to know, know on which server a user currently lives, they can pull updates about that user to stay current, for example every hour, or every day. I don't know how often would be useful. In principle, a user doesn't change much from another community's point of view if they're not active in that community. &lt;/p&gt;
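
&lt;p&gt;A minimal sketch of this copy/paste migration, assuming a hypothetical &lt;code&gt;notify&lt;/code&gt; transport; every name here is an illustration, not an existing API:&lt;/p&gt;

```typescript
// Hypothetical option-1 sketch: move a user's home server and tell the
// servers that already know the user where they now live.
interface MigratableUser {
  id: string;
  homeServer: string; // the single server this user reads from
  updatedAt: number;
}

function migrateUser(
  user: MigratableUser,
  targetServer: string,
  serversKnowingUser: string[],
  notify: (server: string, userId: string, newHome: string) => void
): MigratableUser {
  // Only the servers that know this user need the update.
  for (const server of serversKnowingUser) {
    notify(server, user.id, targetServer);
  }
  return { ...user, homeServer: targetServer, updatedAt: Date.now() };
}
```

&lt;p&gt;The same "ask who needs to know, then notify only them" shape is what keeps the fan-out small once there are thousands of servers.&lt;/p&gt;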

&lt;p&gt;2) &lt;strong&gt;Global, separate database for users, communities and locations&lt;/strong&gt; This can be nice because it's a single source of truth that is always up to date. However, the drawback is that the app has to connect to multiple servers, and there is one global server, which is bad for availability (risk) and can't scale infinitely. &lt;/p&gt;

&lt;p&gt;I think these are the options I have to choose from, and I think option one is the best. I still have to discuss this with an expert. I'm quite impressed with this idea, because my app can scale infinitely without bottlenecks, based on a few assumptions that my design of the app can guarantee:&lt;/p&gt;

&lt;p&gt;1) It doesn't get bigger than one load balancer can handle (around 1M RPS)&lt;br&gt;
2) A single community never has to be sharded&lt;/p&gt;

&lt;p&gt;This whole architecture is, I think, very interesting and would also work for my &lt;a href="https://karsens.com/chat-baas/" rel="noopener noreferrer"&gt;Chat-BaaS idea&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  From 1M to 100M RPS.
&lt;/h2&gt;

&lt;p&gt;100M RPS, 4 billion users, ±40M communities, and ±20,000 servers... That's the dream! &lt;/p&gt;

&lt;p&gt;On a single well-balanced server, I don't think any problems will arise. The only problem is that, when a user changes community, every server that knows this user should hear about it, so the server would have to send 20,000 requests! Right? Well, it would, unless it knows on which servers the user's other subscribed communities are hosted. And the load balancers know this, right? So let's ask a load balancer, and then notify only the servers that care! Great, problem solved.&lt;/p&gt;

&lt;p&gt;The other thing is that we have to balance the traffic with load balancers, and one load balancer doesn't cut it anymore. 100 times as many RPS should mean 100 times as many load balancers, but because the load balancers have some extra work to do (telling servers which communities are hosted where), I think 1,000 load balancers would be better, just to be sure. With 40M communities, is it still doable to let all load balancers know which communities are hosted where? &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If yes, the problem gets easy. Just have one 'Master Load Balancer' that assigns any new visitor to one of the 1,000 load balancers, and keeps them there. From there, the visitor knows where to go, because every load balancer knows everything. Searching through ±40M rows of communities should be doable, but the bottleneck is probably that every load balancer gets notified each time a user changes to a community on a different server. The question is: how many times per second does this happen with 4 billion users? If it's more often than ±5,000 times per second, the bottleneck is too small and it won't fit, and we'd have to find another solution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If no, it gets complicated. Splitting the communities over all load balancers may be an option, but it's a complicated mess that I would have to think through. Let's save that for another time!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
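
&lt;p&gt;A back-of-the-envelope check on that ±5,000-per-second bound, using the numbers above (the switch frequency is purely an assumption):&lt;/p&gt;

```typescript
// Each cross-server community switch notifies every load balancer, so a
// balancer receives one notification per switch, platform-wide.
const USERS = 4_000_000_000;

// Assume each user switches to a community on a different server once
// every `secondsBetweenSwitches` seconds, on average.
function switchesPerSecond(secondsBetweenSwitches: number): number {
  return USERS / secondsBetweenSwitches;
}

// Even one cross-server switch per user per day blows past the bound:
const perDay = switchesPerSecond(24 * 60 * 60); // ≈ 46,296 per second
```

&lt;p&gt;So unless users switch to cross-server communities far less than once a day, broadcasting to every balancer won't fit, which is exactly the open question above.&lt;/p&gt;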

&lt;p&gt;Originally published at &lt;a href="https://karsens.com/scaling/" rel="noopener noreferrer"&gt;my website&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Chat-BaaS: Chat Backend as a Service</title>
      <dc:creator>Karsens</dc:creator>
      <pubDate>Fri, 11 Jan 2019 00:27:47 +0000</pubDate>
      <link>https://forem.com/karsens/chat-baas-chat-backend-as-a-service-1khk</link>
      <guid>https://forem.com/karsens/chat-baas-chat-backend-as-a-service-1khk</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flerr1dtahkjt1jsde49h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flerr1dtahkjt1jsde49h.jpg" alt="baas" width="550" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Chats are very universal. With &lt;a href="https://communify.cc" rel="noopener noreferrer"&gt;Communify&lt;/a&gt;, I'm solving a problem that many software companies have solved, are solving, and will solve. I am solving it for React-Native + Node JS + GraphQL + MySQL, on Linux. A very nice combination.&lt;/p&gt;

&lt;p&gt;What if I could create a BaaS for chats, and solve this problem once and for all?&lt;/p&gt;

&lt;p&gt;There are already chat BaaSes out there. Check &lt;a href="https://sendbird.com/" rel="noopener noreferrer"&gt;https://sendbird.com/&lt;/a&gt;. If you read their blog, you'll find others, too. But they're super expensive, and I don't know if they've taken the right approach.&lt;/p&gt;

&lt;p&gt;So, in this post, I will elaborate on what I think the right approach would be for creating a Chat-BaaS. &lt;/p&gt;

&lt;h1&gt;
  
  
  Model, front-end and back-end
&lt;/h1&gt;

&lt;p&gt;Every chat-post has { user (id,name,avatar,pushtoken), channel, group, information (type, text, likes, etc.), createdAt, updatedAt } and every channel has { id, channel, group, user, createdAt, updatedAt, lastMessage, lastMessageText, etc... }&lt;/p&gt;
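
&lt;p&gt;Written out as TypeScript interfaces, the model above might look like this; the field names follow the description, while the exact types are assumptions:&lt;/p&gt;

```typescript
// Sketch of the chat data model; names from the post, types are guesses.
interface ChatUser {
  id: string;
  name: string;
  avatar: string;
  pushtoken: string;
}

interface ChatPost {
  user: ChatUser;
  channel: string;
  group: string;
  information: { type: string; text: string; likes: number };
  createdAt: number;
  updatedAt: number;
}

interface ChatChannel {
  id: string;
  channel: string;
  group: string;
  user: ChatUser;
  createdAt: number;
  updatedAt: number;
  lastMessage: number; // timestamp of the most recent message
  lastMessageText: string;
}
```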

&lt;p&gt;The whole front-end is super convenient. I could open source the whole front-end of the chat-list, chat, chat-details, actions, and everything and couple this to a default BaaS where people can create an account, get 1M messages/month for free, and pay a lot more for more.&lt;/p&gt;

&lt;p&gt;The cool thing would be that I open source the front-end as one data-coupled and navigation-coupled react-native component. You only have to provide your token and it connects with your backend.&lt;/p&gt;

&lt;p&gt;On the backend, it has a few queries and mutations:&lt;/p&gt;

&lt;p&gt;Queries: serverInfo, posts, subs, mysubs;&lt;br&gt;
Mutations: createPost, likePost, createSub, createPM, deleteSub, toggleSub, setRead, upsertSub.&lt;/p&gt;

&lt;p&gt;On the front-end, it can actually handle all the push-notification stuff. This is a big deal: lots of code a startup can spare by using this. The user model is very lightweight and is meant to be extended by the startup itself.&lt;/p&gt;

&lt;p&gt;The cool thing about a BaaS is that you can create multiple front-ends for it. I could also create an app in which you can create your own 'app', which then creates the whole chat interface inside that app. With react-native-web, I could also expose that chat interface on the web.&lt;/p&gt;

&lt;p&gt;This idea could take a huge part out of my codebase. On its own, it's a big startup, yet just a small refactor for me to create and start using myself... Since the biggest part of many applications is the chat, it's a great way to shorten the programming time needed to create useful apps.&lt;/p&gt;

&lt;p&gt;This app will still be useful in a thousand years. Think about that!&lt;/p&gt;

&lt;h1&gt;
  
  
  Scaling
&lt;/h1&gt;

&lt;p&gt;It gets even cooler when you think about where the bottlenecks are. If it's a truly good product, you don't need marketing, or even sales. All you need is good scaling. The product is so universal that the feature set is something you can actually finish: after a few years of programming, it just has everything you need. So features are not a bottleneck you'll need many people for. What's left? SCALING. So scaling is the bottleneck. So how do we scale successfully? The great thing is that every platform that wants to use this is independent of the others, so they scale independently. Another great thing is that even scaling can be automated: databases can be created automatically, and for small customers a single database server plus a few backend servers will do. Once we have super big clients that need sharding, they have two options:&lt;/p&gt;

&lt;p&gt;1) Pay us for experts that automate the sharding process&lt;/p&gt;

&lt;p&gt;2) Split up their app into separate communities, for example, location-based.&lt;/p&gt;

&lt;p&gt;So if I get the hang of basic CI automation, I can automate up to 10k online users per community or so, maybe even more.&lt;/p&gt;

&lt;p&gt;This is a great idea.&lt;/p&gt;

&lt;h1&gt;
  
  
  Should I do it?
&lt;/h1&gt;

&lt;p&gt;I think that, if Communify doesn't pick up as quickly as I expect, it may be smart to have a look at this. It seems that MessageBird is doing incredibly well. To start, a PoC should be easy: just extract the server from Communify, automatically create a new SQLite database for every backend user, and create a nice onboarding page that asks for your app name and shows you your app on the web. It should just be a screen that you can place in your RN app. Super simple. If this proves successful, I could maybe create more building blocks for apps. More small BaaSes: one for a timeline, one for friends, one for pages, one for photo albums. The possibilities are endless. &lt;/p&gt;

&lt;p&gt;To further check whether it's really that easy to create a BaaS out of Communify (there may be some things I didn't see because I didn't look at the code), I should have a look at the backend code and see if I really thought of all the connections....&lt;/p&gt;

&lt;h1&gt;
  
  
  Join me!
&lt;/h1&gt;

&lt;p&gt;Did you read it all and think it's a great idea? Then join me, I could use some hands! As of now, I have so many things I want to build, but just so little time! Yet, I already have a great start. I basically have all the code (front-end and back-end) already, I just need to rewrite it a bit, and make it look fancy. Let me know if you're interested, or let me know if you're interested in using parts of my codebase! Everything's negotiable!&lt;/p&gt;

&lt;p&gt;This was originally published &lt;a href="https://karsens.com/chat-baas/" rel="noopener noreferrer"&gt;on my website&lt;/a&gt;&lt;br&gt;
Follow my &lt;a href="https://github.com/EAT-CODE-KITE-REPEAT/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>baas</category>
      <category>saas</category>
    </item>
    <item>
      <title>Big reason to use React navigation over Wix navigation (React Native)</title>
      <dc:creator>Karsens</dc:creator>
      <pubDate>Thu, 10 Jan 2019 15:07:42 +0000</pubDate>
      <link>https://forem.com/karsens/big-reason-to-use-react-navigation-over-wix-navigation-react-native-4cdd</link>
      <guid>https://forem.com/karsens/big-reason-to-use-react-navigation-over-wix-navigation-react-native-4cdd</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fw872x980resi7ddbqwdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fw872x980resi7ddbqwdq.png" alt="header"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today I discovered a really big reason why React navigation is fundamentally better than Wix navigation. React navigation has functions like createBottomTabNavigator/createStackNavigator, etc., which return a component. Wix navigation, however, just has registerComponent, which takes the screen you want to render (with a wrapper around it, if you want one), returns void, and creates the screen natively.&lt;br&gt;
The problem with the latter is that, if you use a wrapper around your screen, the wrapper is mounted for every screen that is registered. For our app this meant that our LocalAuthWrapper was mounted 5 times on app start, because we had an app with 5 tabs, and all tabs get mounted. As a result, the dialog to authorize with local auth mounted 5 times, which is 4 times too many. Because of how this library works, it didn't function at all after the first time, so we had to make native changes to the library.&lt;/p&gt;

&lt;p&gt;I think it's kind of ugly that our wrapper gets mounted every single time a new screen loads... It's also less efficient. With React navigation, you don't have this problem, because you can wrap your wrapper around the whole navigation tree, since the navigation is a component too.&lt;/p&gt;

&lt;p&gt;This fundamental difference should not be overlooked! It can be very important.&lt;/p&gt;

&lt;p&gt;Originally published &lt;a href="https://karsens.com/big-reason-to-use-react-navigation-over-wix-navigation/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>reactnative</category>
      <category>navigation</category>
    </item>
    <item>
      <title>Why code ownership is a must for agile development</title>
      <dc:creator>Karsens</dc:creator>
      <pubDate>Sat, 05 Jan 2019 13:56:29 +0000</pubDate>
      <link>https://forem.com/karsens/why-code-ownership-is-a-must-for-agile-development-cfl</link>
      <guid>https://forem.com/karsens/why-code-ownership-is-a-must-for-agile-development-cfl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OUoLQzxw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cwyclhpver7v1syw8z4u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OUoLQzxw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cwyclhpver7v1syw8z4u.jpg" alt="Agile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It takes a while to get to know a codebase&lt;/li&gt;
&lt;li&gt;The more people have to know and edit one piece of code, the less efficient development gets, because of confusion and errors&lt;/li&gt;
&lt;li&gt;However, strong code ownership also brings another bottleneck into play&lt;/li&gt;
&lt;li&gt;Strike the golden mean! Strive for feature ownership to diminish inefficiencies and bottlenecks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's optimal to assign one person to a piece of code if you want optimal development speed. This is also one of the reasons why Communify is developing so quickly: I have ownership of the entire codebase. I know what every script does. Searching for where things are is never an issue. Having to figure out how something works or what something does never happens.&lt;/p&gt;

&lt;p&gt;If you're in a team, it may be useful to sometimes have access to change someone else's code. This is called weak code ownership: you are responsible for your own piece of the codebase, but others can make changes in it. You review those changes and make sure the codebase stays the way you like. I think this is the best approach, because waiting on other developers to make changes for you can also be a bitch. It can slow things down and force you to work on something else. That's also why I don't like the back-end front-end distinction much. I know Python and React Native, yet, at my current day job, I'm not allowed to work on the back-end and make changes there. Very often I'm forced to stop what I'm working on and start something else, just because the back-end people don't have time to make a tiny change. If I could handle the back-end too, I could make the change within seconds. Now I have to wait hours.&lt;/p&gt;

&lt;p&gt;A lot of new features in an app involve changes in all layers of the code and other work: UI/UX, front-end, back-end, and testing. Therefore, if you have a front-end back-end separation in your team, a new feature has to be done by two people. If you also have separate UX/UI and testing, that number becomes four. Often, one is done before the other, so one person has to wait on the other to complete! This is, in my humble opinion, a huge bottleneck for development speed and being agile... And I haven't even talked about UX and testing yet! These functions are very often also done by separate teams. If this is the way you do things as a team, and everyone has to wait for one another all the time, a new feature can take days, if not weeks. The bottleneck gets bigger and bigger.&lt;/p&gt;

&lt;p&gt;Therefore, in a company that wants to be truly agile, I think you have to strike the golden mean: give everyone responsibility for a part of the codebase and process, yet allow everyone to access and edit all of it, and, in the end, strive to have people work on their own code through proper issue/feature distribution by the team lead. This also shows why full-stack developers are so valuable, especially when they can also handle a bit of UI, UX and testing. Ideally, you shouldn't separate code and tasks by the technology and skills used to create them, but by features. This ensures code ownership, yet makes it possible for one person to add a new feature, all by themselves, from start to finish. You can even take it further: let the person who crafts the feature also evaluate its introduction into production and see its impact on the product. See what customers think of it, and if the results are not good, improve. I think this would be great for many reasons: you improve agility, and you also get to see the result of your own work. As a front-end developer myself, it can be very boring sometimes. I never get to think about what something must look like, I just have to make it. I never get to see the outcome of what I make, I just have to make it. So boring....&lt;/p&gt;

&lt;p&gt;So, in conclusion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be agile&lt;/li&gt;
&lt;li&gt;Strive for feature ownership along the entire production-line to diminish bottlenecks, inefficiencies, and waiting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(originally published &lt;a href="https://karsens.com/code-ownership/"&gt;here&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>agile</category>
    </item>
  </channel>
</rss>
