How do y'all share files among multiple programmers?
Jeff
Posts: 374
My company has several programmers, and we all try to write on the same standard. However, none of us really has a computer science background, so we've never set up any sort of version management system to keep track of things for us, and I've had trouble figuring out what the best practice for something like this would be. Here's what I'm looking for.
We have a bunch of shared modules and include files that get used on every project. When somebody updates one, we rename it. So if I take HoppSTRUCT Rev5-01.axi and make changes to it, I rename it HoppSTRUCT Rev5-02.axi. Essentially we're doing version management ourselves. These files don't change all that often, but when they do change, it's important that everyone get the new version. I also have a bunch of templates for different panel sizes that I update as I make changes to our panel standard (adding features or fixing bugs), and it's important that those get distributed as well.
The additional element that matters is that we all spend a lot of time with no internet connection available. We do a lot of government work in sites where we don't have access to the internet and aren't allowed to bring our phones in, and we do a lot of work underground in places where we get no cell reception anyway. This needs to be a solution that doesn't at all rely on network access.
Originally, I just used Windows Live Sync, then it became Windows Live Mesh, and now it's Windows SkyDrive. All I was doing was sharing a giant folder between all of our computers. This meant there was essentially no backup if someone were to accidentally (or on purpose) delete everything in the folder. Right now I'm using Microsoft SharePoint shared folders, but there are some real problems with it. For one, it doesn't always update, even though it's set for automatic updating, and there's virtually no status available, so you can't see what's going on or why it's not doing what you want. Beyond that, it also locks files while it reads them. This wouldn't be a problem, but when I hit F7 to compile in NetLinx Studio, it often manages to lock the .src file while NetLinx is trying to compile source files, and then the compile fails and I have to recompile. It really sucks.
So with all that behind me, basic question: What is the best way for multiple programmers to share files across multiple computers when all programmers are not regularly connected to the internet? I'd like it to update everyone whenever they DO connect to the internet.
I assume I could ask elsewhere but I figured this forum might best understand where I'm coming from in a way that a computer programmer's forum wouldn't. My background is in AV, not programming, which means I don't have the experience with something like SVN to figure it out.
Thanks in advance!
Comments
I'd recommend having a look at git. It's one of the more popular options, is completely free and there's loads of awesome tutorials and info to help you get started. When you have 15 minutes spare start with the try git course.
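To make that concrete, here's a minimal sketch of day-to-day git use applied to the shared-includes workflow from the original post. The file and folder names are invented for illustration, and it assumes git is installed; the point is that the file keeps one name forever and git tracks the revisions.

```shell
# Minimal sketch: one file name forever, git tracks the revisions.
rm -rf /tmp/hopp-demo && mkdir -p /tmp/hopp-demo && cd /tmp/hopp-demo
git init -q shared-modules && cd shared-modules
git config user.name "Jeff" && git config user.email "jeff@example.com"  # identity required before committing
echo "(* structure definitions *)" > HoppSTRUCT.axi   # no "Rev5-01" in the name
git add HoppSTRUCT.axi
git commit -q -m "Rev 5-01: initial import of HoppSTRUCT"
echo "(* fixed a field size *)" >> HoppSTRUCT.axi
git commit -q -am "Rev 5-02: fix field size"
git log --oneline        # the whole revision history, no renamed files
```

The revision labels move into the commit messages, so "make sure everyone has the latest" becomes "everyone pulls when they're back online".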
For managing modules and includes that you use across a number of projects git has a feature called submodules. This page does a much better job at explaining it than I can.
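A hypothetical sketch of that submodule idea, with invented repo and file names. The `protocol.file.allow` override is only needed because recent git versions block local-path submodules by default; with a real shared server you wouldn't need it.

```shell
# Sketch: a shared module repo pulled into a project as a submodule.
rm -rf /tmp/subdemo && mkdir -p /tmp/subdemo && cd /tmp/subdemo
git init -q modules && cd modules
git config user.name demo && git config user.email demo@example.com
echo "module source" > device_comm.axs
git add -A && git commit -q -m "initial module"
cd .. && git init -q project && cd project
git config user.name demo && git config user.email demo@example.com
echo "main program" > main.axs && git add -A && git commit -q -m "project skeleton"
# Record the modules repo (at its current commit) inside the project:
git -c protocol.file.allow=always submodule add -q ../modules Modules
git commit -q -m "track shared modules at a fixed commit"
git submodule status     # shows exactly which module commit this project uses
```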
I haven't tried git, but did want to check it out.
I'll try Git in the next few days, thanks!
It also does its own version control: when you check a file in, it keeps the previous version, and you don't need to rename the file.
I was using it on our server, but I have heard there is a cloud based version available as well
Paul
Or you can just have multiple free accounts and refer each other to boost the storage a bit.
The un-delete / previous versions options work really well.
You can use it offline and it updates the next time you connect to the internet.
Different solutions handle the methodology of making this happen in various ways. But the one thing a lot of people who program but have never used versioning software seem to think is that it's all about just having a centralized storage place. That part is easy. The hard part is handling the changes that can unknowingly occur across thousands of lines of code over a long period of time, with multiple people touching the project.
We can all get into a pissing match on which flavor of version management is best. But, you are treading on very thin ice if you think that cloud storage is the same thing.
Those are my opinions. If you don't like them, I have others.
One thing that I had to think about when first going to code versioning was how to structure the code within the system. In the end, I put modules in one branch and projects in another branch. Modules are organised by version number, while projects are organised by date. "trunk" is essentially "temp", and "stable" is code that has been rolled out. In the Modules branch, "current" is a copy of the most recent stable version.
I set up all my projects to use /Modules/<model>/current/<module>.tko. This ensures all my projects have the most recent stable version of the module. If the newest version of a module is found to be buggy, I can overwrite the /current version with an older stable version. If there are radical changes with a module which would cause some older projects to break, those older projects can be set up to compile against a specific version of the module, until such time as I can update that project to work with the latest module version.
And because I like to make things easy for myself, here is another method that I use. When I am satisfied with all my project and/or module modifications that I've made in the /trunk directories, I duplicate the files into the appropriate /stable directory with date (for projects) or version (for modules) in the file path. After re-pointing my project file-paths to the new location, I then compile once against this "snapshot" file which compiles the current file-path into the module. The resulting .tko is then copied into the /current directory for mass deployment. This may seem like overkill, but it is really handy when you use PROGRAM INFO in a telnet session. Every module and even the main code shows you what version you are running since the original file path is compiled into the .tko.
When you are trying to keep hundreds of rooms synchronised to the same module versions, this approach is worth the effort. I also put my project documentation (schematics) into a /Projects/<project>/Documentation directory and replicate it when making a project snapshot. This is really handy when the room spec changes over time - you've got a snapshot of the "current doc" for each of your code snapshots.
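The snapshot step described above can be sketched as a few copy commands. The paths and module names here are invented for illustration, not Roger's actual layout:

```shell
# Sketch: promote a trunk build to a dated stable snapshot, then to /current.
ROOT=/tmp/annuello-demo
rm -rf "$ROOT"
mkdir -p "$ROOT/Modules/DenonAVR/trunk" "$ROOT/Modules/DenonAVR/current"
echo "compiled module" > "$ROOT/Modules/DenonAVR/trunk/denon_comm.tko"
STAMP=$(date +%Y-%m-%d)
SNAP="$ROOT/Modules/DenonAVR/stable/$STAMP"
mkdir -p "$SNAP"
cp "$ROOT/Modules/DenonAVR/trunk/"* "$SNAP/"                  # the dated snapshot
cp "$SNAP/denon_comm.tko" "$ROOT/Modules/DenonAVR/current/"   # the mass-deploy copy
ls "$ROOT/Modules/DenonAVR/current"
```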
Roger McLean
Swinburne University.
Is it no longer hard to set up and use on Windows?
I'm using TortoiseHG, but I can't convince anyone else to deal with version control. Showing people diffs from any point in the project is helping with that a little.
The GIT part was pretty easy.
Setting up on the local machines was not difficult at all.
Personally, I didn't see much to choose from between Git and Mercurial. They're both great, and I personally think Mercurial has better tools available on Windows, like TortoiseHg (though I haven't looked at GitHub's client, since it wasn't out yet last time I looked).
There's a TortoiseGit rig too. Pretty much the same deal. In fact I prefer it to the built-in stuff.
It's important to note that Git != GitHub. Git is a version control system. GitHub is a (completely and utterly amazing and awesome) web service that makes it super easy to publish, share and track git repositories. You need absolutely no infrastructure to use git. Even if you're not collaborating with other developers you can use git on your laptop to track changes, experiment whilst still having a safety net (use branches for this) or handle revision control between a couple of machines. If you do collaborate, it's all peer to peer.
A super nice advantage that you will notice if you switch to version control is that you can remove a whole bunch of useless comments from your code. Check-in notes in your version control system can house all the "I changed x because of y" type comments. Your code then purely has to contain comments relevant to the code that is there: small, succinct pieces of information that fill in the gaps of what someone can understand from just reading the code itself.
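A tiny sketch of what that looks like in practice (file name and messages invented):

```shell
# Sketch: the "I changed x because of y" notes live in the history, not the code.
rm -rf /tmp/logdemo && mkdir -p /tmp/logdemo && cd /tmp/logdemo
git init -q && git config user.name demo && git config user.email d@example.com
echo "volume_max = 100" > config.axi
git add -A && git commit -q -m "Set volume_max to 100"
echo "volume_max = 85" > config.axi
git commit -q -am "Cap volume_max at 85: client reported clipping above that"
git log --oneline -- config.axi   # the full why-history, with zero comments in the file
```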
I don't know about versioning apps but I doubt they need to sync since you're working from a central location (server) on a version or a branch and when you compile or save that's it, you're ready to go.
Basically, what I'm trying to say is that even for solo programmers Dropbox kind of sucks, so I constantly have to remember to pause syncing before working, or wait x amount of time before transferring a newly saved or compiled file.
Of course I'm also due for a new PC so there may be other issues with my constant crashing other than DropBox like having a large structure open in debug which NS3 doesn't like.
We had our "IT guy" try and use SugarSync to back up our CVS system, and it ended up wiping a lot of it out. It was a mess, and CVS had to be rebuilt by hand by the programmers, which is a nightmare if anyone has been through that. I think their syncing software is very poorly written, certainly not reliable enough to protect critical documents like program code, and it doesn't provide any history of what it has done if things go wrong. Unbelievably, the software will modify and merge files on its own if it thinks it should, like if two files are almost the same. Maybe this is good for storing your holiday pictures, but I would never use it in any type of commercial endeavor, imho. Personally, the idea of sending my documents to some server somewhere to be maintained by some people, under a terms-of-service agreement that can change at any time, gives me the creeps. It'll render AXW files unreadable as well, so beware. They also say they are not responsible if they lose all your documents, so you'd better have backups. If you need a backup of your backup, it's not a backup.
Paul
I love Git for my personal use but am disappointed in the ability to "share" single files. I've used Visual SourceSafe in the past and liked how just the tko could be shared into my project's modules folder and any updates would automatically be retrieved on the next pull.
Though [post=64680]annuello's method[/post] would be the easiest with Git, I prefer to keep all files necessary for a project in the project folder. I know this leads to numerous duplicates, but when it comes time to archive a project, NetLinx's AXW/workspace management is just too frustrating. At the end of the day I want to be able to just zip up the whole project folder and know that everything I need for that project is in that zip.
I know Git has "submodules", but I've found that managing submodules is a pain.
My perfect world would look like this:
Huh??. We do this all the time.
As I understand submodules, each module would require its own folder. So if I have 10 modules in my program, my project will require 10 submodule folders, since each would need its own tracking data, correct? And since I only want the tkos, I'm assuming I'd have to maintain a separate branch in the module repository wherein I delete all the source code so that it just contains the tko, so that the submodule doesn't report a difference between the working folder and the commit it points to. Or can you add a submodule that says, "this file points to that commit"?
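For what it's worth, a submodule entry is essentially exactly that: a record saying "this folder points to that commit". A sketch with invented repo names (the `protocol.file.allow` override is for recent git versions that block local-path submodules):

```shell
# Sketch: pin a project to an older commit of a shared module repo.
rm -rf /tmp/pindemo && mkdir -p /tmp/pindemo && cd /tmp/pindemo
git init -q modlib && cd modlib
git config user.name demo && git config user.email d@example.com
echo "v1" > device.tko && git add -A && git commit -q -m "module v1"
OLD=$(git rev-parse HEAD)                      # remember the v1 commit
echo "v2" > device.tko && git commit -q -am "module v2"
cd .. && git init -q project && cd project
git config user.name demo && git config user.email d@example.com
git -c protocol.file.allow=always submodule add -q ../modlib Modules
git -C Modules checkout -q "$OLD"              # point the folder at the v1 commit
git commit -q -am "pin Modules to module v1"
cat Modules/device.tko                          # prints "v1"
```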
This is not possible, right?:
Let's think about this a little less AMX-centrically.
Make a master directory (but not a repository)
In that directory create the following directories
Master\MyProject_1
Master\MyProject_2
etc...
Now, most us AMX folks usually put all our project files in our project folders, something along the lines of
Master\MyProject_1\Modules\device_comm.tko
Master\MyProject_1\Includes\device_setup.axi
Typically the include is what is project-specific. It changes from one project to another depending upon the settings peculiar to that project. But the module stays the same.
So, what you do is create in your Master Directory a repository for these kinds of things
Master\Modules_common\device_comm.tko
Then, in NetLinx Studio, when importing that module, navigate out to the Modules_common folder and pull it from there. You can do this repeatedly in as many projects as you wish. If you make changes to the module file in the Modules_common folder, it will propagate to every project referencing it when you next compile.
How you deal with multiple programmers/techs is to put the same master folder on their computers and they just sync up with the current common folder. You can setup permissions and so forth so they can only pull and not edit or push back up.
Also, if you screwed up and realize the new version of the module is crap, it's easy-peasy to roll back every project with one simple file fix.
It does get rather goofy if you try and keep stuff that is by definition global in a local folder. But since most of us do not live in a multiple-programmer world, there is just no pressure to adhere to what most of the big programming shops deal with every day. If you have 50 programmers working on a project with tons of potential to screw things up, there's much more pressure to do it the way the versioning solutions require.
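The roll-back part of this is what version control makes trivial. A sketch with invented names, keeping Modules_common as the only repository:

```shell
# Sketch: one shared Modules_common repo; rolling back fixes every project at once.
M=/tmp/master-demo
rm -rf "$M"
mkdir -p "$M/MyProject_1/Includes" "$M/MyProject_2/Includes"
git init -q "$M/Modules_common" && cd "$M/Modules_common"
git config user.name demo && git config user.email d@example.com
echo "rev 1" > device_comm.tko && git add -A && git commit -q -m "device_comm rev 1"
echo "rev 2 (broken)" > device_comm.tko && git commit -q -am "device_comm rev 2"
# Every project references this one copy, so one restore fixes them all:
git checkout -q HEAD~1 -- device_comm.tko
git commit -q -am "roll device_comm back to rev 1"
cat device_comm.tko      # prints "rev 1"
```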
Correct. I didn't notice that in your file structure above.
Eric's suggestion above is a great one and an approach to this problem that I've used in the past. It's good practice to keep .tkos (and all other compiled files) out of your module project repositories and just let these track source; you can then place the compiled modules into a separate 'Common' repository. When you compile a module, tag the head of the module's repository with a version number (I recommend following semver) and mention this in the commit of the compiled tko / jar to your compiled-modules repository. That way, if you ever need to check why something is funky with a system running version x of a module, you can reference the source of that version.
Now, in your common modules directory this is where you can start getting funky with branching models. Rather than embedding the module version in its file name, use the version control system for this. That way, when you initially compile a module you might place it in say a 'dev' or 'beta' branch of your modules repo. Once you're happy with its stability you can then migrate it to your 'master' branch. Whenever you do any updates they are always committed to the 'dev' branch, requiring everything that ever enters 'master' to come via this path. As your module file names are the same, this means that as you are doing system development you just check out 'dev' and write your system as you normally would. When it comes to system commissioning time, just check out 'master' and compile your project against this.
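A sketch of that dev-to-master promotion flow (module name invented; `git branch -M master` just pins the branch name regardless of git's default):

```shell
# Sketch: compiled modules live on 'dev' until proven, then get promoted to 'master'.
rm -rf /tmp/branchdemo && mkdir -p /tmp/branchdemo && cd /tmp/branchdemo
git init -q && git config user.name demo && git config user.email d@example.com
echo "stable build" > denon.tko && git add -A && git commit -q -m "denon 1.0.0"
git branch -q -M master
git checkout -q -b dev
echo "beta build" > denon.tko && git commit -q -am "denon 1.1.0-beta"
git checkout -q master
cat denon.tko                  # commissioning still sees the stable build
git merge -q --no-edit dev     # the beta proved itself: promote it
cat denon.tko                  # now the promoted build
```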
We are setting up CVS from scratch but does anyone have experience of this for C* files?
Ta
This is great in theory, but in practice it's a big headache and not worth it to me. For instance, I'll have 3 or 4 different Denon A1HDCI modules because different customers are running different firmware on each, so they all have slightly different commands. Sometimes I'll have to hack a module to add an obscure feature I didn't bother to implement, or change a level range to suit an interface, and that will break something in other projects.
So we have many almost-duplicate files, but who cares? It all works, and each project is its own little universe, always guaranteed to compile and work right. It has its cons too, but it's not too bad. So we never propagate changes across a bunch of projects at once. That kind of thing makes me break out in a cold sweat in AMX land. NetLinx code is just too brittle for that kind of thing, and I have too many one-off hacks in projects to deal with some wonky issue or bizarre client request to ever feel confident about doing things that way.
Paul
I don't program that brand myself, but a lot of our work is with them. Submodules always "work", as long as you can put discrete items into a subfolder. It is also possible to keep a company-wide shared User/modules folder that any project can pull from. The main problem using any content versioning system with the other guys is that most of the created files (compiled or not, archived or not) are binaries. And most content revisioning systems are geared to efficiently save the minor differences between text files. When you make a minor change to a binary blob of data, the resulting changes are irrelevant and show nothing about what you actually did. You essentially have to save nearly a whole new copy of the file every time you make one little change, so you get a really large amount of bloat in your repo.
Some things to help mitigate this bloat:
1) Make a new repo for every project.
With git this is the de facto standard of operation, but with things like SVN and CVS you _could_ create one master repo for the company and just make folders for the projects, since SVN allows checking out just one sub-folder. Don't fall into this trap, because all that binary-file bloat will eventually make the repo too cumbersome to be usable and you'll waste a lot of time splitting it out. (PS: if anyone needs a Linux shell script for splitting a master git repo into smaller repos for each sub-folder, we can make a deal.)
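On that splitting point: plain git can carve a sub-folder out into its own repo along with its history. A sketch with invented project names; `git filter-branch` is deprecated in favour of git-filter-repo but still ships with git:

```shell
# Sketch: extract ProjectA from a company monorepo into a standalone repo.
rm -rf /tmp/splitdemo && mkdir -p /tmp/splitdemo && cd /tmp/splitdemo
git init -q mono && cd mono
git config user.name demo && git config user.email d@example.com
mkdir ProjectA ProjectB
echo "a" > ProjectA/main.axs && echo "b" > ProjectB/main.axs
git add -A && git commit -q -m "two projects in one repo"
cd .. && git clone -q mono projectA-only && cd projectA-only
# Rewrite history so only ProjectA's contents (at the root) remain:
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --subdirectory-filter ProjectA
ls     # only ProjectA's files remain, with their commit history
```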
2) Try not to commit too frequently for large binary files.
One thing some people don't realize is that revision control systems don't have a delete function. If you "accidentally" commit a DVD ISO image file, deleting it from the repo folder and making another commit will not make the repo any smaller, because the system has to give you the ability to revert back to the version of the folder that had the deleted monster file, so it keeps a copy forever. Our repo rule is that the other company's projects only ever get the packed archive files committed after a full feature is added or something has been debugged or fixed for certain. With text code like AMX, the opposite is the rule: commit every little bitty change, early and often.
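The no-delete behaviour is easy to demonstrate (file names invented; the 1 MB file stands in for the accidental ISO):

```shell
# Sketch: a deleted file is gone from the folder but kept in history forever.
rm -rf /tmp/keepdemo && mkdir -p /tmp/keepdemo && cd /tmp/keepdemo
git init -q && git config user.name demo && git config user.email d@example.com
head -c 1048576 /dev/zero > big.iso     # stand-in for an accidental large binary
git add -A && git commit -q -m "oops: committed an ISO"
git rm -q big.iso && git commit -q -m "delete the ISO"
[ ! -f big.iso ] && echo "gone from the working folder"
git cat-file -e "HEAD~1:big.iso" && echo "but still stored in the repo"
```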
3) Use the darn commit messages.
Make them as detailed as you can. Keep a notepad file open while coding if you are forgetful. This is the only way you can know what has been changed in the other company's code files without pulling each version from the repo, opening each up, and comparing them by hand. With text files like AMX has, you can use a nice online repo web interface which will tell you every line of code that was edited from one version to the next. We use Redmine for this. http://www.redmine.org/
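For the text-file case, git itself can produce that line-by-line report without a web interface (Redmine just makes it browsable). File name and edits here are invented:

```shell
# Sketch: per-commit, per-line change report for one file.
rm -rf /tmp/diffdemo && mkdir -p /tmp/diffdemo && cd /tmp/diffdemo
git init -q && git config user.name demo && git config user.email d@example.com
printf 'volume = 100\nmute = off\n' > room.axs
git add -A && git commit -q -m "initial room code"
printf 'volume = 85\nmute = off\n' > room.axs
git commit -q -am "lower default volume per client request"
git log -p -- room.axs   # each commit with the exact lines removed (-) and added (+)
```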