I am currently the only developer at the company where I work. Despite the fact that we’re a tech company selling SaaS software, it’s going somewhat well.

I am in charge of pretty much anything technical (aside from internal network, internet and phones, thank God). Around 75% of my time is spent on development of new features and R&D. The other 25% is meetings, hosting, strategy, a little pre-sales etc.

The big question is productivity. We have made efforts to increase my productivity as much as possible. I’m not a sysadmin, so for hosting we used a consultant who set up our cloud in a way that is both efficient and makes it easy to upgrade our apps. On development, I made a number of small technical migrations to increase productivity (custom CSS to Bootstrap, Scriptaculous to jQuery), and we are moving forward with new small migrations to bypass the limitations of our legacy framework. Effective continuous integration has been set up to make QA easier and faster. Some tech documentation has also been written to enable me to be as productive as possible in finding the best approach to a problem: a number of quick hints for recurring problems, plus UML diagrams for the most complex parts of the app as well as for the database.

It has also been our strategy to limit, and eventually block altogether, specific features our clients might want. Let me explain this one: we sell software to Enterprise(y) customers. Most of them are used to custom-made software to which any change they want can be made. We don’t allow that - or rather we no longer allow that. Instead, we find solutions to their problems using existing features, and sometimes develop features that are in keeping with our product strategy. There is now just one codebase for all clients, instead of one codebase per client.

Also important: being the sole developer, there is no overhead. Having been in and led many teams, I know how productive a three-person team would be. In theory, I should be roughly one third as productive as a team of that size. In reality, there is so little overhead that I believe I’m about half as productive as a three-person team. This is invaluable, and we take advantage of it. The mythical 5-minute change is possible when you’re a one-person team.

Is it a good idea?

Fuck no! This situation is due to difficult circumstances in our key market. It has been emphasized by the fact that our company has moved from half-product, half-service to being an actual product-oriented SaaS startup. While our business model is sound, it has meant that revenue dropped two years ago and is only now coming back to its original level. The situation is untenable long term, and my goal is to have a fully operational dev team within the next two years.

Paradoxically, my being able to handle the situation has led to it carrying on. There hasn’t yet been a crisis I couldn’t handle. This has meant that growing the dev team has been a “nice to have” - a “very very very nice to have” one might say - but has had to come after paying off our debts.

Now, everyone is aware that this situation isn’t ideal for Alveos. There are a number of risks we try to mitigate. For example, I have to remain reachable when on vacation. To avoid sudden critical bugs, I forbid the delivery of new features in the three weeks before I go on vacation.

The biggest risk, of course, is my leaving or, worse, a sudden absence due to accidental death/disease/finding my true calling as a hobo - pick one. Leaving is somewhat mitigated by the three-month notice period that is the norm in France, but the market being what it is, finding a replacement would be difficult. Regardless, any sudden absence is a difficult risk to live with and tough to mitigate, as hard as I try. I don’t want to leave, but I also don’t want to put the company in jeopardy by breaking my leg. Therefore, we have a number of contractors who are up to speed on our app’s development and whom we can call if need be. This is not ideal, but it is better than the alternative.

My coworkers are also very aware of the “burnout” risk. It has happened to previous devs, and to me as well at a former company. I feel I’m way below my maximum working rate, to be honest, and we are keeping it as sustainable as possible.

Obviously, this situation can’t go on forever. While budget is an issue, it is my goal to have a resilient tech team that can move faster than I can alone. Of course, given the overhead a team brings, to double productivity we would probably need to be three. We are getting interns as a start, and depending on success, we will see where that takes us.

Note: I worked in China for some time on French offshore software projects. The company’s business model was to have a French customer-facing front, with the development work done in China. One project in particular was high priority - as well as large and complex. What follows is a proposal I wrote to try and ensure quality for future developments.

There have been many times when management’s answer to the quality issues we’ve had on the project was: “Be careful”, “Failure is not an option”, or even “We’re just asking you to do your job”. It seems that, seen from France, the main reasons for our quality problems are that the China team is not careful, does not understand the needs of the client, or simply does not understand the need for quality. This could not be further from the truth.

The need for quality and the needs of the client are completely understood by the China team. This remains the only project where every level of higher-ups can come crashing down on the China team. This has long been taken into account, and these issues are not the cause of the quality problems we face.

This project is an extremely complex one: 250 tables in the database, 1000 classes, and that’s for just one app. We have no fewer than 30 independent apps, all interconnected one way or another. The result is simple: no matter how careful the developer is, it is impossible for him to take into account all the risks when solving a task.

This is also a rather old project, on which around 20 different developers have worked, from the very worst to the very best of the company. This means that, whatever the level of the current developers, the project’s history is the main thing impeding quality developments. The character encoding issue, for example, is first and foremost due to a bad encoding choice at the project’s genesis, much more than to all the later attempts to make do with that bad choice.

I am not saying the different people working on the project shouldn’t be careful. Everybody is giving their all. My point is this: if the first line of defense is that nobody should make mistakes, we’re on a crash course.

Quality is, before being a people issue, a process issue.

Of course, you can tell developers to “be careful” when working on files on a shared server, but you’ll have far fewer problems if you use source control.

Of course, you can tell the person doing deliveries in production to “be careful” when delivering code line by line and task by task, but you’ll have far fewer problems if you deliver a whole sprint at once.

The objective is therefore to have a fool-proof process - or as close as we can get.

Process improvement proposals

Development process

Task estimation is still a relatively important problem. We have no feedback on the quality of estimates. It seems, however, that most estimates are too low, regardless of who makes them. To get more accurate estimates, time should be set aside to check exactly what needs to be done.

There should also be an effort to set up code quality measurement tools. This would allow us to follow industry-standard coding conventions, which should increase the readability of the code. Current code should gradually be brought up to the standard. This would yield a clear gain in productivity, as the current codebase is, to put it simply, a huge mess.

Quality process

I believe it is foolish to try and hire more competent developers: for the last few years, the best developers in the company have been moved from their original teams to this project’s team. It is better to find a process that will work with the current team, or even a less competent one.

Currently, there are only 2 test levels: developer and QA engineer. The minimum needed to limit development side effects would be 5: developer, regression tests before committing, full regression testing on the regression server, QA engineer in preprod, and QA engineer in prod.

To avoid problems during delivery, there should also be perfect coherence between the different environments, and regression tests should be included in the delivery process.

Crisis management

The project often encounters crises caused by quality problems or unforeseen changes in how the application is used (system load). Every single time, these problems are handled in an ad hoc manner.

It should be easy to set up a crisis management process to handle these crises in the most efficient way. To do so, after every crisis, the “5 whys” questions should be asked to find the root cause of the problem, and a diagnosis method and a resolution process should be documented each time.

Furthermore, it is sometimes impossible to handle a crisis rapidly because some people do not have access to the system, or the people who know how to act are unavailable. To avoid that, there should be triple redundancy on the people who hold hard-to-know information: prime, backup, fallback. That way, the probability that nobody can act becomes almost nil.

A schedule should be set up so everyone knows who can be contacted and when. That should make information available on each type of issue: system, development, etc.


Communication is a huge issue at our company, as it is for any offshore company. It is even more of a problem on this project because there are more people working on it: 6 developers, 2 QA engineers, 1 architect, 1 team leader, 1 HTML integrator, 2 sysadmins, 1 French developer, 1 project manager, 1 project director. That’s not even mentioning the General Manager, Operations Manager, Integration Team Leader, or the client and its partners.

The result is simple: nobody knows everything about what’s going on in this project, and a lot of decisions are taken without knowing whether they are tenable. Issue analysis can’t be considered exact, and a lot of effort is spent trying to uphold what turns out to be a bad decision. One example is the release date of the last project: the client chose the date before seeing a schedule, which meant all the good decisions from V4’s post-mortem had to be ignored.

The action plan on this is simple. All the information should be centralised by the client-side Project Manager, including choices on system architecture, and the results of the CEO-level meetings. If nobody has all the information, it becomes impossible to correctly anticipate which action should be taken. Also, before decisions are taken, all the info should be obtained: production load, client imperatives, resource availability, etc. This information centralisation should be set up before new developments start.

Furthermore, contact between France and China via Skype is not enough to ensure fluid communication. The solution implemented by numerous offshore companies is the concept of the “Ambassador”: every three months, one person from France should spend a week in China, and vice versa.

To make day-to-day communication easier, webcams should be used to make Skype communication more efficient.


Team motivation has a direct and evident link with quality: a passionate developer will of course do better than one who doesn’t care. Motivation has been slipping over the last few months. Overtime has a very bad effect on morale, especially when it is useless. Overtime should only be used with a concrete goal in mind; more importantly, it should be avoided and considered a last resort. Currently, overtime is seen as the solution to all problems. This must change.

Risk management

Ever since V4, a primitive risk management system has been in place. To go further, risks should be taken into account further upstream, by quantifying them in the schedule. There should also be a dedicated risk-discovery activity, with dedicated time, at project initialisation. This would enable us to validate risks with the people in charge of development, delivery, systems, and the client. The objective would be to encounter only a very small number of unforeseen problems.


Of course, the project evolves so fast, and the client’s requirements evolve so fast, that it is unrealistic to believe we can keep documentation that is up to date, complete, and reliable. However, it is important to pursue the documentation of processes to ensure that everybody will be able to act. This includes complete documentation of the system architecture both in China and in France, a list of stakeholders on the client side, the partner side, and our side, as well as exhaustive documentation of the development and delivery processes.

Continuous improvement

To maintain client satisfaction, we should be constantly improving. To do so, we should have a few metrics to follow in three areas: development quality, delivery quality, and system architecture quality. We should follow the evolution of these metrics closely. This will enable us to act when they move the wrong way.

And in conclusion

All this seems like quite an investment, but as Philip Crosby stated, “Quality is free”. The return on investment is such that gaining quality will be free in the end. Quality problems are simple problems with known solutions. However, the ability to act does not lie on the China side. The ball is now in your court.

When I joined Alveos, the overall feeling at the company was that our main product was impossible to move forward. It had originally been developed 4 years before, and had grown organically. Previous tech leads felt that we had hit a dead end development-wise and that continuing development was dangerous. These people were very, very competent developers who knew the product from end to end. I should know, I live in their code every day. Yet, it turned out, they were wrong.

I know where their feeling came from. For one thing, developers like to start fresh, start a new project and “do it right this time”. But more importantly in this case, the maintainability was particularly bad. Or rather it was the worst they’d ever seen. Unlike most of their previous projects, this project had rapid and ongoing development. This meant the ripple effects of small, quick and bad decisions were felt regularly for years. “We should redo that part” had become a description of more and more parts of the project, and radical changes had to be put together the MacGyver way rather than with careful analysis. Most developers hate that, and with good reason.

Also, the feeling of “if we do anything, it’s all going to explode” came from some particularly difficult times. During those times, changes brought new bugs with them and clients were exceedingly unhappy. The job was basically putting out fire after fire. Adding to that, the hosting solution (one server per app for frontend and backend) was reaching its limits. On top of that, the upgrade scripts were unwieldy and sometimes required manual intervention or failed altogether. All this added to the clients’ perception that nothing was working right.

When I came to the company after the previous tech lead had left, this impression was pervasive even among the non-tech staff. Despite my early good impressions of the app’s maintainability (regardless of its obsolete framework), it was decided to move forward with a complete rewrite. As I stated before, this was stopped for a number of reasons: cost, risks, and a strategy that would have prevented migrating existing clients. Paradoxically, rewriting from scratch while permitting migration was deemed impossible/too hard/too time-consuming: the kernel of the app was considered too hard to port, as it was deeply entangled with the framework. So the rewrite stopped. It was clear we wouldn’t get a new “clean” app. Yet we were at risk of becoming obsolete.

That’s when I tried to make us go forward on the existing app despite the perceived risks. Just small risk-free changes at first. Pretty simple stuff, but often important enough to be noticed: showing the number of items in the shopping cart for example. Changing the position of buttons to be more consistent across the app. Gradually, the changes I added became large ones. It turned out the app was quite maintainable, even though evidently flawed architecture-wise in some places.

I believe my different impression came from previous experiences maintaining much older code, written by inexperienced developers before PHP frameworks became commonplace. Compared to those projects, this app was almost a breath of fresh air. The guys before me were good, and their worst was better than most. Having worked on really bad projects, I’m pretty used to working with bad code. Bad projects also teach you to work even when not everything is as it should be. What also helped me quite a bit was that I wasn’t as productive as my predecessors, given that my knowledge of the app’s architecture was still maturing. So, instead of introducing new changes regularly, I’d introduce new ones every few weeks. It turned out that spacing out the start of the fires made them easier to put out.

We underwent a hosting migration that allowed us to scale our app much better. The app was once again fast enough (around 6 times faster on worst case scenarios), and began to be considered stable again. Our contractor for that did a really incredible job, moving us to scalable, relatively cheap hosting, with a Chef delivery system that was both reliable and easy to use.

With these successes, I began to grow bolder as my knowledge of the app’s maintainability grew. I had encountered my share of bad code before, and this wasn’t nearly the worst. We started implementing a real roadmap, and moving forward. Our current pace is faster than it has ever been, I believe, save for the original release of the app. This whole evolution took about one year, culminating in the complete overhaul of the application’s interface.

We lost a lot of time because of the feeling that nothing could be done. If I encounter a development dead end again, I believe the best approach would be, instead of immediately going for a costly rewrite, to ask these questions:

  • what do we want to do?
  • what can’t we do that we want to do?
  • why can’t we do it?

If I had asked these questions in the beginning, I think we could have saved six months. It turned out that what we really couldn’t do, we could live with. And what we can do is really a lot.

Our application’s interface hadn’t changed since 2008. Even back then, it wasn’t winning any prizes for usability or design. That interface was designed, as many are, by the original developers. Like most developer-“designed” interfaces, it showed a clear lack of foresight. Often, it was implemented in a way that made it easier for the developer to maintain than for the user to use. It was easily breakable, and small changes could render whole features impossible to use.

Also problematic: it used Scriptaculous (Prototype), as that library was then integrated into the server-side framework (Symfony). Over time, as jQuery overtook Scriptaculous in both popularity and developer usability, the problems with this integration became more and more evident. While jQuery has a thriving plugin community, it has become tough to even find good documentation for Scriptaculous apps. See this post for more on the risks of working with an older framework.

The interface did have a lot of good qualities, though. There is something to be said for its consistency across the whole application. It was, even if not always intuitive, heavily consistent between features. The same pattern was repeated over and over, which made it somewhat easy to guess where you could find one functionality or another. It was also heavily optimised for performance through its extensive use of Ajax instead of a more traditional “reload the whole page” pattern that would have been easier to set up. Interestingly too, it aimed to be a CSS Zen Garden of sorts, with a custom CSS for each client. The issue with that, of course, was that it demanded heavy CSS editing whenever we got a new client, and often didn’t have the hoped-for modularity. This meant that sometimes, functionality that worked with one CSS didn’t work with another.

For the last 5 years, we made do with it.

We still sold to some clients, still managed to train new users, and still kept maintaining the app. We sometimes tried to make some small incremental improvements, but there was a fear we could easily break the app. Not unjustified, this fear mainly stemmed from the fact that what worked with one client often failed with another.

Then in early 2012, we lost a big bid for a very big prospect. It was a brand everybody has heard of, and would have been the crown jewel of our client roster. We lost to a competitor over usability. We matched feature for feature, were cheaper, but looked harder to use - and really were harder to use to an extent. We also lacked eye-candy, which does make a very big difference, especially when you sell to marketing. The app was to be a flagship of sorts for their brand to their resellers. An ugly app hurts the brand image.

Well, that was a big wake up call.

We decided to redo everything. Our goal was to have a new interface that:

  • looked modern
  • allowed for some customization per client
  • was easily maintainable
  • was easily expanded

We ended up choosing to use Bootstrap, as it meant we had a complete toolkit that we could adapt to the different situations we would face. We liked the look and feel, and easily could customize the look per client.

Bootstrap uses jQuery for its javascript components, which meant we would have to either make our whole app work with both jQuery and Scriptaculous or rewrite all our frontend code to use jQuery instead of Scriptaculous. I chose the second option: it would take more time, but we would end up with something more easily maintainable. While there is no automatic way to go from Prototype to jQuery, and everything has to be done by hand, it is fairly straightforward. That part took about two weeks, which left me wondering why it hadn’t been done before.

Then, I set about moving from the custom CSS classes and HTML structure to Bootstrap classes and HTML. That part was the longest, and took around 3 months. The only significant hurdle was ensuring Ajax features didn’t break in the move. This was actually time-consuming, as our app made heavy use of in-app popins. We had created custom popin classes that enabled us to superimpose multiple popins; Bootstrap doesn’t let us do that, so we had to come up with simpler ways to let the user interact with the application. During debugging, this was the part with the most bugs, especially since it didn’t degrade gracefully: our original HTML was OK-looking under Bootstrap, and could still be used. For the popins, however, if we didn’t change the behavior, they wouldn’t work as-is.

Finally, I set up customization. The idea was pretty simple: changing the color scheme for each client. Bootstrap gave us an easy way to do that. We let the user change the color scheme, then recompiled the Bootstrap Less files with the new color variable values. Fairly easy. That way, only colors changed, and no customization could break the app.
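To give an idea of what that recompilation step can look like, here is a small sketch in Python. It is illustrative rather than our actual build: the file paths and the variable name are hypothetical, and it assumes the lessc command-line compiler from the Less.js project is installed.

```python
# Per-client Bootstrap theming sketch (hypothetical paths and variable
# names; assumes the `lessc` command-line compiler is available).
import subprocess

def less_source(colors):
    """Build the Less source for one client: import stock Bootstrap,
    then override its color variables. In Less, the last definition
    of a variable wins, so the overrides replace the defaults."""
    overrides = "\n".join(f"@{name}: {value};" for name, value in colors.items())
    return f'@import "bootstrap/less/bootstrap.less";\n{overrides}\n'

def build_client_css(client, colors):
    """Write the per-client Less file and compile it to CSS."""
    less_path = f"{client}.less"
    with open(less_path, "w") as f:
        f.write(less_source(colors))
    subprocess.run(["lessc", less_path, f"{client}.css"], check=True)

# e.g. build_client_css("acme", {"linkColor": "#c0392b"})
```

Because each client’s stylesheet is generated from the same Bootstrap source plus a handful of variable overrides, nothing structural can diverge between clients - which is exactly why only the colors could break, and they can’t.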

In the end, redoing everything took about three months for a single developer. It is not that big an undertaking, even for a company with limited resources, and it was clearly worth it. It took another three months of use in production before all the wrinkles were ironed out. We are currently migrating each of our clients to the new interface.

In the year since the rewrite, we have signed 4 times more clients than the year before. Of course, not all of that can be attributed to the user interface: the recession was in full swing in France before, and our sales approach has evolved greatly. Still, it certainly helped, and the original cost of redoing the interface has been recouped now.

We are all certain that, had we had this interface, we wouldn’t have lost that one prospect.

This is how I first started to program, in 1997 to be exact. It was the year I entered high school. Every student was required to have a TI80 or higher, an entry-level programmable graphing calculator. This marvelous little machine cost 100 French francs at the time, roughly $20 USD or 15 euros.


I started programming with it as soon as one of my friends showed me how he had created a rock-paper-scissors program on his TI80, and I could never program enough. I programmed before, during, and after class - much to the irritation of my teachers (using a calculator in French class is somewhat conspicuous). Like many people who ended up as programmers, I had just found something I loved. My TI80 is dead now; I tore it apart to try and put it back together as soon as I upgraded, the next year.

Programming on the TI80 was, in retrospect, a peculiar experience. The language was TI-BASIC, a quirky language to say the least, especially in its TI80 form:

  • the keyboard was calculator style, with the letters ordered alphabetically
  • you didn’t type the program text but inputted lexemes directly. For example, to use a for loop, you didn’t type the letters F, O, and R; you went into the “Program” menu and chose the seventh item from the list. “FOR(” would then be displayed in your program code, and you could input your parameters.
  • there was no way to comment your code
  • variable names were exactly one letter long and always in caps (in fact, everything was always in caps), and variables could only hold a float value. Some of these variables could be overwritten by the system: X and Y notably got set to coordinates chosen on the screen, so you could never use them in any program with graphical capabilities. This meant you could use at most 25 variables (the Greek letter theta θ was also available) in a program, all with nondescript names. All of these variables were system-wide globals.
  • variables only held floating point values, no strings, objects, arrays…
  • you couldn’t create macros or functions in a program. You could call a program from another program, but there was no direct way to pass data from one program to the other; you used the variables, since they were all global
  • the only data structure available was the array, called a list. There were only 6 lists available, with a max size of 99 items, and of course they could only store floats.
  • there was no way to get data from the user during the program run apart from a prompt. This meant there was no way for the calculator to tell which key was being pressed, so any real time interaction was out of the question
  • storing data in variables was done in the opposite way to the standard. Instead of typing: A = 1, you typed: 1->A
  • gotos were available, but you were limited to 36 goto destinations.

Of course, there were also strong hardware limitations. There was only 7 KB of RAM on the calculator and no ROM, so you counted every byte to make sure your program would fit and could run without hitting “ERR: MEMORY”.

The screen was two-color, of course, and very small (76 by 80 pixels, if I remember correctly).

There was no data input port. This meant you were limited to TI-BASIC and couldn’t use any Assembly, like other TIs could. This also meant that if you wanted to use a program a friend had or that you found on the Internet, the only way to get it was to retype it entirely.

This led to a number of hacks to get code running. I wouldn’t put the closing parenthesis at the end of my FOR loop definition, because the interpreter would add it implicitly. That way I’d gain one byte of memory.

Two-dimensional matrices were handled by putting the array[n][m] data at position n*(row_length)+m in a one-dimensional list.
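In a modern language the same trick is a one-line index calculation. Here is a small illustrative sketch in Python (the function and variable names are mine, not from the original TI-BASIC programs):

```python
# Emulating a 2-D matrix with a flat 1-D list, row-major style,
# as was necessary on the TI80 (which only offered 1-D lists).

def flat_index(n, m, row_length):
    """Element [n][m] of a matrix with row_length columns lives
    at index n * row_length + m in the flat list."""
    return n * row_length + m

# A 3x4 "matrix" stored in a 12-element list:
ROWS, COLS = 3, 4
grid = [0.0] * (ROWS * COLS)

grid[flat_index(1, 2, COLS)] = 7.0   # acts like grid[1][2] = 7.0
```

On the calculator, the same arithmetic was simply written inline against one of the six lists.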

Often, I’d go to great lengths coming up with clever mathematical formulas for things I would today put in a separate function with a bunch of if/else. That saved me a few lines of code as well as keeping the program flow a little clearer.

When I wanted the program flow to wait a few seconds, I’d make the calculator compute the hardest operation it could without running out of memory: 69!. If it tried to compute 70!, it crashed.

At the time of course, I didn’t realise that was weird in any way. It was just how you programmed.

Here is an example of a program I wrote. In this program, you would play a game similar to Invaders, except turn-based instead of real-time. Enemies would appear randomly at the top and come one step closer every turn, while the player tried to kill them off one by one on each of his turns. I advise you not to try to understand this code…

:INPUT "(0<X<95)",W
:IF W>94
:IF W<0
:PLOT1(**,L3,L4,°) (**= STAT PLOT, TYPE, 1)
:5->DIM L3
:5->DIM L4
:LBL 1
:IF DIM L3=W+5
:IF MIN(L4)<=10
:IF X=L3(A
:LBL 7
:RANDINT(1,53->L3(DIM L3+1
:47->L4(DIM L4+1)
:LBL 6
:LBL 9

Edsger W. Dijkstra famously said that those who start learning programming with Basic become brain-damaged. With hindsight, I can see what he meant. Good coding practices were few and far between in the TI-BASIC world. I’m glad to report, though, that there seem to be few lasting repercussions on my programming style. At least, I don’t think there are; you’d have to ask the people who maintain my code.

What I mainly programmed was games. I started with some glorified rock-paper-scissors and went on to more advanced territory. The one I liked most was Minesweeper. Programming has always seemed more fun to me than actually playing the games I developed, but Minesweeper was one of the few games I wrote that I actually used more than a few times.

After that, I tried to make even more advanced games, but the wall I hit was the calculator’s inability to allow real-time interaction. This limited me to turn-based games, which often (in my opinion) are no fun - especially from a low-level game designer like me. I tried strategy games, but couldn’t come up with an AI advanced enough to make the game any fun to play.

That’s when I tried something else: I actually developed my own version of Sim-City for the TI80. It was very limited, both graphically and in functionality, but it was pretty fun to play. I was somewhat proud of myself for that one.

A friend of mine had started his own website on a Geocities-type service provided by his ISP. He asked me if he could put my programs on his website. I, of course, was okay with it. So he typed the whole source code manually into his computer and put it on his website. In essence, before I had any real idea what open source was, I had released my first open source software.

A few years later, I randomly googled “TI80 games” and found a list of a few games on a website I had never heard of. (The website is still online today: http://www.ti80.online.fr/jeux.php3 - in French.) The last one on the list was Sim-City. The one I had programmed. Apparently, someone had stumbled onto my friend’s page, copied the game onto their calculator, played it, and found it good enough to put on their website. The only slightly annoying part was that they had removed my name from the program (pretty easy to do: remove DISP “BY YANNICK” from the code).

I guess I got “pirated”, but it was a pretty awesome feeling: somebody, somewhere, was actually playing a game I had created! But the best part was when I found that somebody had used the game and liked it so much… they wrote a strategy guide.

Further discussion

Reddit discussion on /r/coding: link