2011
01.18

At my new job, the network architecture is something like this:

Old Network Architecture

Developer 1 has a XAMPP server installed ( Windows Vista ), and Developer 2 has a MAMP server installed ( a MacBook Pro?! Or something like that.. )

All developers work on their own computers and keep all the project files on their computers only. No backups are made… no source control is used… so if, for example, a disk fails, all their work is going to be lost. Mine isn't, because I'm backing up everything to my 2.5″ external hard disk..

The external hard disk only has some files for the various projects, organized in a strange way. I can't seem to figure out why it's organized like that; some things make sense.. but others, no sense at all… I had to correct a bug in a Flash site, and it took me about 5 minutes to correct the bug and 5 hours to publish the Flash file: libraries were missing, fonts were missing.. well, everything was missing..

The remote webserver is our development server ( so to speak )… every time we have to show an almost finished site to a customer, we have to upload everything to a remote webserver so the client can see it..

Last month I was working on a new site for one of our customers. The time came to demo the site and the back office, so I uploaded everything to our testing server.. the client saw it and asked for a lot of modifications ( that's a story for another post ). Most of those changes mess with the database, so I have to change the database twice: once on my local machine, to test, and then I have to upload everything to the server again.. which can cause serious bugs, because if I forget to update a field on one table, something is going to go very wrong..

So I proposed the following architecture ( if I can call it architecture ):

New network architecture

What we have here is pretty simple: the development server is basically a Windows machine with XAMPP Server installed. We really should go for Ubuntu Server or something like that, but nobody ( including me ) knows a lot about Unix, so configuring one was going to take a lot of time…

It's going to be accessible from outside the network, which means I will point a domain to that machine, and every client can see their site using subdomains ( client1, client2, etc ).
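The subdomain-per-client idea maps onto name-based virtual hosts in the XAMPP machine's Apache config. A minimal sketch, assuming a hypothetical domain example.com and a per-client folder under htdocs ( Apache 2.2 syntax, which is what XAMPP shipped with at the time ):

```apacheconf
# httpd-vhosts.conf — hypothetical names and paths, one vhost per client,
# all answering on the same machine:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName client1.example.com
    DocumentRoot "C:/xampp/htdocs/client1"
</VirtualHost>

<VirtualHost *:80>
    ServerName client2.example.com
    DocumentRoot "C:/xampp/htdocs/client2"
</VirtualHost>
```

Adding a new client is then just one more vhost block plus a DNS entry for the subdomain.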

This server is also going to be the source control server. We are discussing options right now, but we are thinking about Mercurial ( I tested it at home, and in 30 minutes I was committing, pushing and pulling with no problems whatsoever.. )… and we are going to use a DVCS ( Distributed Version Control System ), no doubt about that..

The database on the server is going to be shared. This means that when I'm developing on my local machine, the database on the server is the database I'm using.. so any change I make to the database will automatically be visible to the client. For the code, I just have to do a push to the main repository, and the client can see everything…
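In practice the shared database just means every checkout points its config at the development server instead of localhost. A sketch, with all host names and credentials hypothetical:

```php
<?php
// config.php — hypothetical shared settings, kept under source control.
// Every developer's checkout connects to the SAME MySQL instance on the
// development server, so a schema change made by one person is
// immediately what everyone else ( and the client demo ) is running on.
define('DB_HOST', 'devserver.local'); // the XAMPP machine, not 127.0.0.1
define('DB_NAME', 'client1');
define('DB_USER', 'dev');
define('DB_PASS', 'secret');

function db_connect()
{
    $db = new mysqli(DB_HOST, DB_USER, DB_PASS, DB_NAME);
    if ($db->connect_errno) {
        die('Could not connect: ' . $db->connect_error);
    }
    return $db;
}
```

The trade-off is the one the first comment below points out: with a single shared schema, two developers altering the same table can step on each other.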

The external disk is there for the backups, of course… this way everything is saved somewhere.. ( I just need to find out how to restore MySQL backups… )
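For reference, backing up and restoring with the stock MySQL tools is a one-liner each way. A sketch, with all paths hypothetical and the real mysqldump/mysql calls left as comments since they need a running server and credentials:

```shell
# Hypothetical backup target on the external disk:
BACKUP_DIR="/mnt/external-disk/mysql"
STAMP="$(date +%Y-%m-%d)"
DUMP="$BACKUP_DIR/all-databases-$STAMP.sql"

# Taking the backup ( --single-transaction keeps InnoDB tables consistent ):
#   mysqldump --all-databases --single-transaction > "$DUMP"
# Restoring is just feeding the dump back into the client:
#   mysql < "$DUMP"

echo "$DUMP"
```

Run from a scheduled task on the server, this gives one dated dump per day on the external disk.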

There are some kinks that need to be sorted out, for example the repositories on the development server: I'm thinking of using two of them..

One is the main repository, and the other is the WWW repository. Synchronizing them when I change the code is pretty easy ( at least in Mercurial ): I just need to add a hook to the push action, and when the push finishes, run an update on the WWW repository..
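In Mercurial that hook is a one-liner in the main repository's hgrc. A sketch, assuming the WWW repository is a clone living at a hypothetical /var/www/htdocs whose default path points back at the main repository:

```ini
; .hg/hgrc of the main repository on the development server
[hooks]
; runs after every push into this repo: pull the new changesets into the
; WWW clone and update its working copy in one go ( -R targets the clone )
changegroup = hg pull -u -R /var/www/htdocs
```

So a developer's `hg push` is all it takes for the client-visible site to reflect the latest code.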

The problem is synchronizing in the other direction, I'm talking about images uploaded by the client, etc. With this method, if an image is uploaded on the server, I'm not going to have that image on my local development machine.. some people say that dynamically uploaded files shouldn't be in source control… and I tend to agree with them.. but solving this problem in PHP is going to take a while. I've got to find a simple way to access a UNC share from PHP, while still using the move_uploaded_file function ( etc ) with no problems… I probably need a class to handle all of this..
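The class in question could be fairly small: PHP's filesystem functions, including move_uploaded_file, accept UNC paths on Windows as long as the share is reachable by the webserver's user. A sketch under those assumptions; the class name and share path are hypothetical:

```php
<?php
// Sketch: store client uploads on a shared UNC path so every machine
// sees the same files, instead of each developer's local htdocs.
class UncUploadStore
{
    private $root;

    public function __construct($root)
    {
        // e.g. '\\\\devserver\\uploads' — a UNC share, or any local path.
        $this->root = rtrim($root, '\\/');
    }

    // Build the destination path, namespaced per project so client sites
    // don't clash; basename() strips any sneaky directory components.
    public function destinationFor($project, $filename)
    {
        return $this->root . DIRECTORY_SEPARATOR . $project
             . DIRECTORY_SEPARATOR . basename($filename);
    }

    // Move a just-uploaded file ( one entry of $_FILES ) onto the share.
    public function store($project, array $file)
    {
        $dest = $this->destinationFor($project, $file['name']);
        $dir  = dirname($dest);
        if (!is_dir($dir)) {
            mkdir($dir, 0775, true);
        }
        return move_uploaded_file($file['tmp_name'], $dest);
    }
}
```

With uploads living on the share, nothing dynamically uploaded needs to be in source control, and no developer's checkout is missing the client's images.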

For me, this seems the best architecture to use in the office, at least for a small team of programmers.. I could change a few things; the perfect solution would be to use a webserver ( only Apache and PHP ), a database server ( MySQL ), and a source control server.. one machine for every job.. that would be perfect.. but we really don't need that yet.. down the road, maybe.. but for now this is going to be just fine… and it's going to be a big leap forward on the organizational side of things..

2 comments so far

  1. Think of the following scenario…

    You have two developers modifying the same table on the same database, at the same time.

    By coincidence, they are in different locations: one in the main office, and one at the customer's office.

    Also imagine that developer #2, by mistake, deletes a field that wasn't supposed to be deleted, or renames some other field, and makes developer #1's code go berserk?

    What would happen then? They email each other? They call each other? They go to the source code and try to find out what happened?

    I think that a more “elegant” solution ( unless you are using RoR migrations ) is to implement a method that keeps the SQL scripts for creating, updating, altering, etc., in text files.

    Each developer has their own localhost database and, after testing, runs the SQL scripts needed on the staging server. Those scripts are also, of course, under source control.

    Just my 2 cents…

    Meanwhile, since this post was written, has the situation changed?

  2. Hi João,

    Nice, you found my blog…

    In this particular case, working from two different locations doesn't happen, trust me on this one, really. The most probable case is one person working from home and another at the office, but this is solved pretty easily with MySQL Workbench.

    In case someone deletes something that isn't supposed to be deleted, we are currently using MySQL Workbench, which has a function to synchronize the database in the sketch with the testing database. If, for example, I delete a field that I'm not supposed to, the other members of the team only have to synchronize the DB and update the table again, but this implies communication between the two people. And the means of communication is talking.

    What is really missing there is CI ( a Continuous Integration server ), and yes, RoR Migrations are a pretty good solution, and I liked them very much. I think the only problem is synchronizing the data between the local and staging servers.

    I wanted to implement RoR Migrations, but I haven't had the time… and now is not the time, of course..

    Check this one for some insight on CI: http://programmers.stackexchange.com/questions/33477/best-development-architecture-for-a-small-team-of-programmers-wamp-stack

    And PHP equivalents to RoR Migrations:
    http://stackoverflow.com/questions/3324571/is-there-a-php-equivalent-of-rails-migrations

    And yes, the situation has changed ( of course ). After one month there I started using the staging server, with the shared MySQL database and Mercurial on it. But I haven't had the time to turn it into the Unix server ( that I wanted ), or even think about using RoR Migrations.
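    The migration approach suggested in the first comment can be sketched without any framework at all: numbered SQL scripts kept under source control and replayed in order. File names and the mysql invocation here are hypothetical:

```shell
# Create two example migration scripts ( normally these live in the repo ):
MIGRATIONS="$(mktemp -d)"
printf 'CREATE TABLE users (id INT);\n'         > "$MIGRATIONS/001-create-users.sql"
printf 'ALTER TABLE users ADD name CHAR(40);\n' > "$MIGRATIONS/002-add-name.sql"

# Replay them in order against the staging server; the glob sorts
# lexically, which is why the numeric prefixes matter:
for f in "$MIGRATIONS"/*.sql; do
    # real usage: mysql -h staging-server client1_db < "$f"
    echo "applying $(basename "$f")"
done
```

    Each developer runs the same scripts against their localhost database, so every copy of the schema drifts in lockstep instead of by hand-made edits.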