30 engineers. You lose half of that to people managing the infrastructure alone. That leaves 15 code monkeys. If 2 are dedicated to deployment and 3 to setting up unit tests (that's not many, by the way), you are left with 10 people. For a global platform, say, that's not many at all.
If you have separate developers for writing unit tests, and not every developer writing them as they code, something is already very wrong in your project.
Deployment and infra should also mostly be "set up and forget", by which I mean general DevOps work like setting up CI and infrastructure-as-code. Using modern practices, which lean towards continuous deployment, releasing a feature should just be a matter of toggling a feature flag. Any dev can do this.
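To make that concrete, here's a minimal sketch of what "release = flip a flag" looks like (the flag name and lookup function are hypothetical; in a real setup the flags live in a config service or a tool like Unleash or LaunchDarkly, so flipping one needs no redeploy):

```python
# Minimal feature-flag sketch. The flag store is hardcoded to keep the
# example self-contained; in practice it would be read from a config
# service at runtime.
FLAGS = {"new_onboarding_flow": False}

def is_enabled(flag: str, default: bool = False) -> bool:
    return FLAGS.get(flag, default)

def legacy_onboarding(user: str) -> str:
    return f"legacy onboarding for {user}"

def new_onboarding(user: str) -> str:
    return f"new onboarding for {user}"

def handle_signup(user: str) -> str:
    # Both code paths are already deployed; the flag decides which one runs.
    if is_enabled("new_onboarding_flow"):
        return new_onboarding(user)
    return legacy_onboarding(user)

print(handle_signup("alice"))  # flip the flag to switch paths, no redeploy
```

The point being: the deploy itself becomes routine, and the "release" is a toggle any dev on the team can flip.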
Finally, if your developers are ‘code monkeys’, you’re not ready for a project of this scale.
Infra "set up and forget"… this is a large system with plenty of components that cyclically need to be deployed, updated and so on. Even with automation, the sheer volume and range of tech in use requires breadth of knowledge. Sure, you could do it with less, I guess. But with changes on the supplier side etc., it's still a lot of work.
And for tests, sure, you write them as you go along, but it usually helps to have people going over them, making sure it all stays functional and meets standards, and fixing things.
I have never, in my decade as a software dev, seen a role dedicated to “making sure unit tests stay functional, meet standards and fixing them”. That is the developer’s job, and the job of the code review.
The tests must be up to standards and functional before the functionality they’re testing gets merged into main. Otherwise, yes, you may actually need hundreds of engineers just to keep your application somewhat functional.
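To be clear about what "written as they code" means in practice, a minimal sketch (hypothetical function, pytest-style test; any test runner works the same way) where the tests land in the same PR as the code they cover:

```python
# Sketch: the feature and its tests ship together in one PR.
def slugify(title: str) -> str:
    """Turn 'Hello, World!' into 'hello-world'."""
    cleaned = "".join(c for c in title if c.isalnum() or c == " ")
    return "-".join(cleaned.lower().split())

def test_slugify_strips_punctuation_and_spaces():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_handles_empty_input():
    assert slugify("") == ""
```

If CI runs the suite on every PR and blocks merges on failure, the "keeping tests functional and up to standards" work happens where it belongs: before the code hits main.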
Finally, 30 engineers can cover a vast breadth of knowledge.
So cool that you got to work with teams of devs that were able to do that. Was it for software used in an OT environment? Because stuff like Telegram seems a lot more like that, imho.
And the breadth… 30 people can cover it all, yes. Doing that in a 24/7 global environment means three people for each of several competences, in shifts, covering timezones; say five competences, and that's fifteen heads before anyone writes a feature. It's not as if you can just clock out at 5 and come back tomorrow.
I have no idea why you’re even bringing up OT. We’re not talking about PLCs or scientific equipment here, we’re talking about glorified web apps.
Web apps that need to be secure and highly available, for sure, but web apps all the same. It’s mainly just a messenger app, after all.
"So cool that you got to work with teams of devs that were able to do that."
Just because, as I assume from this quote, you weren’t able to work with teams like that, does not mean that there are no teams like that, or that Telegram doesn’t operate that way. Following modern practices, complex projects can be successfully done by relatively small teams. Yes, a lot of projects are not run that way, but that just means that it’s all the more a valid point of pride for Telegram.
A point of pride, sure, but also a risk. Responding to incidents requires coverage. And the OT comparison was more about the uptime requirements and redundancies than anything else.
It’s no more a risk than throwing more developers at it when they’re not needed.
"Too many devs" can be, and often is, a significant bottleneck in and of itself. The codebase may simply not be big enough to fit more.
Besides, I still don't see what all those additional engineers would actually be doing. "Responding to incidents" presupposes a large number of incidents. In other words, the assumption is that the application will be buggy or insecure enough that 30 engineers will not be enough to apply the duct tape. I stand by the claim that an application adhering to modern standards and practices will not have that many bugs or security breaches, and therefore 30 engineers sounds like a completely reasonable number.
Fair enough, we can disagree there. It's impressive that Telegram pulls it off. I'd be worried about burning people out and losing them to that. And there is a lot of room between working flawlessly and being a buggy mess. Fixing issues in a live operational system usually takes time.
Maintenance vs new functionality. Infra vs application. A lot to spread 30 people across.
15 engineers for managing infrastructure?? Are they setting up servers by hand?
I would not want you as my boss, that’s for sure.
Try covering a 24/7 global service window. I’d think this is on the low end.
And you also need full infra stack knowledge: server, database, network, connectivity.
And probably some of these schmucks will get stuck managing the corporate environment too.
This comment smells of outdated software development practices.