This post is the first in a series detailing the various methods we've used to scale TypeScript in our large, ever-growing repository.
The idea behind this series is to share the solutions we've implemented and the thinking that has guided us, with the aim of inspiring other teams, fostering exchanges with the community and, why not, sparking enriching feedback or discussions. Now, on to today's topic: why did scaling TypeScript become a necessity at Datadog?
At Datadog, our frontend codebase has historically enjoyed quick developer cycle times: fast CI jobs, responsive IDEs, and well-performing TypeScript tooling. This allowed teams to ship products and iterate quickly.
As members of developer experience teams whose clients are internal developers, one of our goals is to ensure that development remains fast and safe as the organization and its codebase continue to grow.
A Codebase That Keeps Growing
To give some perspective on the growth of our repository: in June 2025, our frontend codebase was made up of over 11 million lines of TypeScript across more than 95,000 `.ts` and `.tsx` files. One year earlier, we had only around 6 million lines in 65,000 files.
That doubling trend (roughly every 12 to 18 months) has been consistent since at least 2021, and we are still looking at significant growth over the next few years.
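As a rough illustration of how numbers like these can be gathered, here is a minimal Node.js sketch that counts `.ts`/`.tsx` files and their total lines under a directory. This is a hypothetical helper for illustration only; the post does not describe Datadog's actual measurement tooling.

```typescript
// Hypothetical sketch: counting .ts/.tsx files and total lines under a
// directory, in the spirit of the growth numbers above. Not Datadog's tooling.
import * as fs from "node:fs";
import * as path from "node:path";

function countTypeScript(dir: string): { files: number; lines: number } {
  let files = 0;
  let lines = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      // recurse into subdirectories
      const sub = countTypeScript(full);
      files += sub.files;
      lines += sub.lines;
    } else if (/\.tsx?$/.test(entry.name)) {
      files += 1;
      // count newline characters, matching `wc -l` semantics
      lines += (fs.readFileSync(full, "utf8").match(/\n/g) ?? []).length;
    }
  }
  return { files, lines };
}
```

In practice, a repository of this size would call for something faster (and for skipping `node_modules`), but the sketch shows the shape of the measurement.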
This graph, with its forecast, is from one of our Datadog monitoring dashboards.
The Impact of Growth
This growth brought a number of challenges:
- Typechecking, along with the other jobs that are mandatory for any change to reach production (linting, running tests, and so on), was taking increasingly longer in CI, slowing down our ability to ship features or fixes (see the following graph). These durations grew in proportion to the number of lines of code in the codebase, which is a clear scalability problem.
This graph is from one of our Datadog monitoring dashboards.
- In IDEs, the TS server was getting slower, resulting in delays for core features like type error display, autocompletion, “Go to definition,” and “Find references.” This led to a noticeably poorer developer experience.
To give a clearer picture, the P90 loading time of the TS server in our IDEs is currently 6s, but has peaked as high as 2min. Similarly, autocompletion latency currently sits at a P90 of 662ms, with past peaks reaching 6.6s.
- Support requests from developers related to TS performance in CI or in IDEs were increasing.
- CI jobs became more costly and consumed more resources, as they ran on a bigger codebase and took longer.
- Understanding dependencies across teams got harder. As the number of teams and products grew, so did the complexity of the dependency graph. This led to larger bundles, performance regressions, and a higher risk of incidents due to hidden coupling.
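Since several of the numbers above are P90s, here is a minimal sketch of how such a percentile can be computed from raw latency samples, using the nearest-rank method. This is a hypothetical helper; the post does not show how these metrics are actually aggregated.

```typescript
// Hypothetical sketch: nearest-rank percentile over latency samples (in ms).
// A P90 of 662ms means that 90% of samples are at or below 662ms.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) {
    throw new Error("percentile of an empty sample set is undefined");
  }
  const sorted = [...samples].sort((a, b) => a - b);
  // smallest value such that at least p% of samples are <= it
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

Compared to averages, high percentiles like P90 surface the slow tail of the experience, which is usually what developers actually complain about.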
Understanding the Signals
All of these signals showed us that we needed to decorrelate the size of the codebase from the performance of our dev tooling. In the case of TypeScript, that means keeping CI jobs performant and maintaining a responsive TS server in IDEs.
With our forecast of the codebase doubling every 12 to 18 months for the next few years, it was imperative that we take proactive steps — not only to address the current situation, but also to support our future growth.
While everything was still manageable, we knew that waiting would only make the problems harder to solve. We decided to act early, investing in structural, setup, and tooling improvements to keep the development experience strong as we scale.
In the next post in this series, before looking at the concrete ideas and solutions we've put in place, we'll focus on an essential subject (especially for us, since we're Datadog): monitoring. We'll detail how, and to what extent, we observe the performance of our TS server and CI jobs to detect signals and guide our decisions.
Top comments (1)
Great introduction to our core problem, Valentin!
To other readers: if you run into such issues, or have insights into how you've fixed performance problems, do reach out! We'd love to talk to you, be it over email or video calls.