Home AMX User Forum AMX General Discussion

How do y'all share files among multiple programmers?

My company has several programmers, and we all try to write on the same standard. However, none of us really has a computer science background, so we've never set up any sort of version management system to keep track of things for us, and I've had trouble figuring out what the best practice for something like this would be. Here's what I'm looking for.

We have a bunch of shared modules and include files that get used on every project. When somebody updates one, we rename it. So if I take HoppSTRUCT Rev5-01.axi and make changes to it, I rename it HoppSTRUCT Rev5-02.axi. Essentially we're doing version management ourselves. These files don't change all that often, but when they do change, it's important that everyone get the new version. I also have a bunch of templates for different panel sizes that I update as I make changes to our panel standard (adding features or fixing bugs), and it's important that those get distributed as well.

The additional element that matters is that we all spend a lot of time with no internet connection available. We do a lot of government work in sites where we don't have access to the internet and aren't allowed to bring our phones in, and we do a lot of work underground in places where we get no cell reception anyway. This needs to be a solution that doesn't at all rely on network access.

Originally, I just used Windows Live Sync, then it became Windows Live Mesh, and now it's Windows SkyDrive. All I was doing was sharing a giant folder between all of our computers. This meant there was essentially no backup if someone were to accidentally (or on purpose) delete everything in the folder. Right now I'm using Microsoft SharePoint shared folders, but there are some real problems with it. For one, it doesn't always update, even though it's set for automatic updating, and there's virtually no status available, so you can't see what's going on or why it's not doing what you want. Beyond that, it also locks files while it reads them. This wouldn't be a problem, but when I hit F7 to compile in NetLinx Studio, it often manages to lock the .src file while NetLinx is trying to compile source files, the compile fails, and I have to recompile. It really sucks.

So with all that behind me, basic question: What is the best way for multiple programmers to share files across multiple computers when all programmers are not regularly connected to the internet? I'd like it to update everyone whenever they DO connect to the internet.

I assume I could ask elsewhere but I figured this forum might best understand where I'm coming from in a way that a computer programmer's forum wouldn't. My background is in AV, not programming, which means I don't have the experience with something like SVN to figure it out.

Thanks in advance!

Comments

  • PhreaK Posts: 966
    Hi Jeff, what you are looking for is called a distributed version control system. These are a class of development tools designed for this exact problem.

    I'd recommend having a look at git. It's one of the more popular options, is completely free and there's loads of awesome tutorials and info to help you get started. When you have 15 minutes spare start with the try git course.

    For managing modules and includes that you use across a number of projects git has a feature called submodules. This page does a much better job at explaining it than I can.
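    If you want a feel for it before the tutorial, the core loop really is only a few commands. Here's a rough sketch (I've borrowed Jeff's include file name as the example; the demo runs in a throwaway folder, and the identity lines are just so the demo commits work):

```shell
set -e
cd "$(mktemp -d)"                         # throwaway folder for the demo
git init -q                               # make this folder a repository
git config user.name "Demo"               # identity for the demo commits
git config user.email "demo@example.com"
echo "// struct definitions" > HoppSTRUCT.axi
git add HoppSTRUCT.axi                    # tell git to track the file
git commit -q -m "Add shared struct include"
echo "// panel fix" >> HoppSTRUCT.axi
git commit -q -am "Fix panel bug"         # a new version, same file name
git log --oneline                         # the whole history, no Rev5-02 renames
```

    Every version stays recoverable, so the Rev5-01/Rev5-02 renaming scheme goes away: the file keeps one name and git keeps the history.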
  • Dropbox is also a better solution than the way you've been doing it. It syncs whenever there is an internet connection, and also over LAN when possible. Plus, if someone deletes a file, it can be recovered. Same goes for recovering an older version in the case of bad overwrites.

    I haven't tried git, but I do want to check it out.
  • ericmedley Posts: 4,177
    We use GIT. It's wonderful.
  • Jeff Posts: 374
    I wanted to use Dropbox, but we need more space than a regular Dropbox account will hold, and Dropbox for Teams starts at $800/year, which I won't be able to convince management is worth it.

    I'll try Git in the next few days, thanks!
  • troberts Posts: 228
    Another great one is SourceGear Vault: http://www.sourcegear.com/vault/ It allows you to check files in and out. Only one person can check a file out at a time; no one else can check it out until the file is checked in again, which means there are no multiple copies floating around and no wondering who modified it last.
    It also does its own version control: when you check a file in it keeps the previous version, so you don't need to rename the file.
    I was using it on our server, but I have heard there is a cloud-based version available as well.
  • a_riot42 Posts: 1,624
    We use a local TortoiseCVS installation. I set it up so programmers could collaborate, and it's worked well. It's fast, easy, and free. Programmers can work on the same files at the same time and merge changes. My company wanted to use Dropbox, but I was able to talk them out of it. I think Dropbox renders AXW files unopenable due to the way it handles paths; we tried storing AXW files in Dropbox, but once downloaded they wouldn't open. Plus I don't trust Dropbox in the least. Their syncing software is crap as well, imho, and when it deletes your files without you knowing, there's no way to figure out what happened or get your files back.
    Paul
  • rfletcher Posts: 217
    I use Mercurial together with Bitbucket.org and TortoiseHg. I went with Mercurial over Git because I was already using TortoiseSVN with our old SVN version control and liked the type of Windows integration it provides. TortoiseGit just didn't seem up to the quality level of TortoiseSVN and TortoiseHg.
  • You can use one Dropbox account on multiple computers. I have the 100GB option on my laptop and desktop, cheaper than using Teams, but you do lose some of the tools.
    Or you can just have multiple free accounts and recommend each other to boost the storage a bit.
    The un-delete / previous versions options work really well.
    You can use it offline and it updates the next time you connect to the internet.
  • vining Posts: 4,368
    You can use one Dropbox account on multiple computers. I have the 100GB option on my laptop and desktop, cheaper than using Teams, but you do lose some of the tools.
    Or you can just have multiple free accounts and recommend each other to boost the storage a bit.
    The un-delete / previous versions options work really well.
    You can use it offline and it updates the next time you connect to the internet.
    The idea of multiple users working on a common file system is scary. I'm solo, so it works fine for me working between computers, but I would never recommend multiple users all having access to the same files without versioning of some sort.
  • ericmedley Posts: 4,177
    This is why you don't want to use what is basically just a form of cloud storage (Dropbox or the like). The major difference between cloud storage and true version management is that something/someone is in charge of what the 'working' version is, and something/someone can roll back to previous versions in case something goes wrong. In addition, you have a running dialog that tells everyone just what exact changes have happened as the thread advances.

    Different solutions handle the methodology of making this happen in various ways. But the one thing a lot of people who program but have never used versioning software seem to think is that it's all about just having a centralized storage place. That part is easy. The hard part is handling the changes that can unknowingly occur across thousands of lines of code over a long period of time, with multiple people touching the project.

    We can all get into a pissing match on which flavor of version management is best. But, you are treading on very thin ice if you think that cloud storage is the same thing.

    Those are my opinions. If you don't like them, I have others. ;)
  • annuello Posts: 294
    I've been using Subversion for the past few years. It has all the usual version-control features, including file diffing and checking for changes made while you were offline. The whole code tree resides on my laptop, so no live remote access is required; I sync with the content on the server when appropriate. The data on the Subversion server is backed up by our server admins, so there is my code backup plan.

    One thing that I had to think about when first going to code versioning was how to structure the code within the system. In the end, I put modules in one branch and projects in another branch. Modules are organised by version number, while projects are organised by date. "trunk" is essentially "temp", and "stable" is code that has been rolled out. In the Modules branch, "current" is a copy of the most recent stable version.
    /
     Modules/
       EpsonProjector/
         stable/
           current/
             EpsonProjector.axs
             EpsonProjector.tko
           v1.0/
             EpsonProjector.axs
             EpsonProjector.tko
           v1.1/
             EpsonProjector.axs
             EpsonProjector.tko
         trunk/
           EpsonProjector.axs
     Projects/
       BigRoom/
         stable/
           20120503/
             BigRoom.apw
             BigRoom.axs
           20121127/
             BigRoom.apw
             BigRoom.axs
         trunk/
             BigRoom.apw
             BigRoom.axs
       SmallRoom/
         stable/
           20120901/
             SmallRoom.apw
             SmallRoom.axs
           20130105/
             SmallRoom.apw
             SmallRoom.axs
         trunk/
           SmallRoom.apw
           SmallRoom.axs
    

    I set up all my projects to use /Modules/<model>/current/<module>.tko. This ensures all my projects have the most recent stable version of the module. If the newest version of a module is found to be buggy, I can overwrite the /current version with an older stable version. If there are radical changes with a module which would cause some older projects to break, those older projects can be set up to compile against a specific version of the module, until such time as I can update that project to work with the latest module version.

    And because I like to make things easy for myself, here is another method that I use. When I am satisfied with all my project and/or module modifications that I've made in the /trunk directories, I duplicate the files into the appropriate /stable directory with date (for projects) or version (for modules) in the file path. After re-pointing my project file-paths to the new location, I then compile once against this "snapshot" file which compiles the current file-path into the module. The resulting .tko is then copied into the /current directory for mass deployment. This may seem like overkill, but it is really handy when you use PROGRAM INFO in a telnet session. Every module and even the main code shows you what version you are running since the original file path is compiled into the .tko.
    >program info
    Show Program Info
    -----------------
    
    -- Program Name Info
    
    -- Module Count = 1
           1  Name is SmallRoom
    
    -- File Names = 2
             1  C:\Program Files\Common Files\AMXShare\AXIs\NetLinx.axi
             2  C:\AMX\subversion\Projects\SmallRoom\stable\20130105\SmallRoom.axs
           2  Name is MODPROJ
    
    -- File Names = 2
             1  C:\Program Files\Common Files\AMXShare\AXIs\NetLinx.axi
             2  C:\AMX\subversion\Modules\EpsonProjector\stable\v1.1\EpsonProjector.axs
    
    

    When you are trying to keep hundreds of rooms synchronised to the same module versions, this approach is worth the effort. I also put my project documentation (schematics) into a /Projects/<project>/Documentation directory and replicate it when making a project snapshot. This is really handy when the room spec changes over time - you've got a snapshot of the "current doc" for each of your code snapshots.
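    In case it helps anyone replicate this, the snapshot/promote step can be sketched with the stock svn client like so. This is just an illustration against a throwaway local repository (the module name and v1.2 are made up; in real use the URL would point at your server):

```shell
set -e
cd "$(mktemp -d)"
svnadmin create repo                      # throwaway local repository
REPO="file://$PWD/repo"
svn mkdir -q -m "layout" --parents \
    "$REPO/Modules/EpsonProjector/trunk" \
    "$REPO/Modules/EpsonProjector/stable"
# Promote trunk to a versioned stable snapshot (server-side copy, cheap in svn):
svn copy -q -m "Snapshot EpsonProjector v1.2" \
    "$REPO/Modules/EpsonProjector/trunk" \
    "$REPO/Modules/EpsonProjector/stable/v1.2"
# Point "current" (what every project compiles against) at the snapshot:
svn copy -q -m "current -> v1.2" \
    "$REPO/Modules/EpsonProjector/stable/v1.2" \
    "$REPO/Modules/EpsonProjector/stable/current"
svn ls "$REPO/Modules/EpsonProjector/stable"
```

    Because svn copies are cheap, keeping a /stable snapshot per version costs almost nothing on the server side.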

    Roger McLean
    Swinburne University.
  • travis Posts: 180
    ericmedley wrote: »
    We use GIT. It's wonderful.

    Is it not hard to set up/use on Windows anymore?

    I'm using TortoiseHg, but I can't convince anyone else to deal with version control. Showing people diffs from any point in the project is helping with that a little.
  • ericmedley Posts: 4,177
    We have a GIT hub server. That took a bit to set up, but honestly it had more to do with getting the router set up for outside access.

    The GIT part was pretty easy.

    Setting up on the local machines was not difficult at all.
  • rfletcher Posts: 217
    travis wrote: »
    Is it not hard to set up/use on Windows anymore?

    I'm using TortoiseHg, but I can't convince anyone else to deal with version control. Showing people diffs from any point in the project is helping with that a little.

    Personally, I didn't see much to choose between Git and Mercurial. They're both great, and I personally think Mercurial has better tools available on Windows, like TortoiseHg (though I haven't looked at GitHub's client, since it wasn't out yet last time I looked).
  • ericmedley Posts: 4,177
    rfletcher wrote: »
    Personally, I didn't see much to choose between Git and Mercurial. They're both great, and I personally think Mercurial has better tools available on Windows, like TortoiseHg (though I haven't looked at GitHub's client, since it wasn't out yet last time I looked).

    There's a TortoiseGit rig too. Pretty much the same deal. In fact I prefer it to the built-in stuff.
  • PhreaK Posts: 966
    Yep, Tortoise<insert acronym of version control system here> is just an open-source graphical front end to a few different version control systems (TortoiseGit, TortoiseSVN, TortoiseCVS, TortoiseHg, TortoiseBzr). They are not the version control client; all they do is wrap up what you can do from the client's command line into a nice Windows shell extension. If you are learning git, learn the command line interface (again, see try.github.com). There are really only a couple of commands you need to know, and it's really quite a beautiful and well-designed interface.

    It's important to note that Git != GitHub. Git is a version control system. GitHub is a (completely and utterly amazing and awesome) web service that makes it super easy to publish, share and track git repositories. You need absolutely no infrastructure to use git. Even if you're not collaborating with other developers you can use git on your laptop to track changes, experiment whilst still having a safety net (use branches for this), or handle revision control between a couple of machines. If you do collaborate, it's all peer to peer.

    A super nice advantage you'll notice if you switch to version control is that you can remove a whole bunch of useless comments from your code. Check-in notes in your version control system can house all the "I changed x because of y" type comments. Your code then only has to contain comments relevant to the code that is there: small, succinct pieces of information that fill in the gaps of what someone can understand from just reading the code itself.
  • vining Posts: 4,368
    Another problem I experience with Dropbox, even as a single programmer, is that the constant syncing causes major headaches. If I save a file in NS3 or TPD4 and immediately go to transfer it, either program will likely crash or throw an error: Dropbox has already started to sync, and if it hasn't finished yet the app will probably crash or complain that the file is "in use".

    I don't know about versioning apps but I doubt they need to sync since you're working from a central location (server) on a version or a branch and when you compile or save that's it, you're ready to go.

    Basically what I'm trying to say is that even for solo programmers Dropbox kind of sucks, so I constantly have to remember to pause syncing before working, or wait x amount of time before transferring a newly saved or compiled file.

    Of course, I'm also due for a new PC, so my constant crashing may have causes other than Dropbox, like having a large structure open in debug, which NS3 doesn't like.
  • a_riot42 Posts: 1,624
    vining wrote: »
    Basically what I'm trying to say is even for solo programmers DropBox kind of sucks

    We had our "IT guy" try using SugarSync to back up our CVS system, and it ended up wiping a lot of it out. It was a mess, and CVS had to be rebuilt by hand by the programmers, which is a nightmare if anyone has been through that. I think their syncing software is very poorly written, certainly not reliable enough to protect critical documents like program code, and it doesn't provide any history of what it has done if things go wrong. Unbelievably, the software will modify and merge files on its own if it thinks it should, like if two files are almost the same. Maybe this is good for storing your holiday pictures, but I would never use it in any type of commercial endeavor, imho. Personally, the idea of sending my documents to some server somewhere to be maintained by some people, under a terms of service agreement that can change at any time, gives me the creeps. It'll render AXW files unreadable as well, so beware. They also say they are not responsible if they lose all your documents, so you'd better have backups. If you need a backup of your backup, it's not a backup.
    Paul
  • Any good suggestions for "linking" files?

    I love Git for my personal use but am disappointed in the ability to "share" single files. I've used Visual SourceSafe in the past and liked how just the tko could be shared in to my project's modules folder and any updates would automatically be retrieved on the next pull.

    Though [post=64680]annuello's method[/post] would be the easiest with Git, I prefer to keep all files necessary for a project in the project folder. I know this leads to numerous duplicates, but when it comes time to archive a project, NetLinx's AXW/workspace management is just too frustrating. At the end of the day I want to be able to just zip up the whole project folder and know that everything I need for that project is in that zip.

    I know Git has "submodules", but I've found that managing submodules is a pain.

    My perfect world would look like this:
    Projects/
       Project A/
          <Project A code...>
          Modules/
             Module1.tko <----- LINKED
    Modules/
       Module 1/
          <Module 1 Code...>
          Module1.tko <-------- LINKED
    
  • ericmedley Posts: 4,177
    I love Git for my personal use but am disappointed in the ability to "share" single files. I've used Visual SourceSafe in the past and liked how just the tko could be shared in to my project's modules folder and any updates would automatically be retrieved on the next pull.

    Though [post=64680]annuello's method[/post] would be the easiest with Git, I prefer to keep all files necessary for a project in the project folder. I know this leads to numerous duplicates, but when it comes time to archive a project, NetLinx's AXW/workspace management is just too frustrating. At the end of the day I want to be able to just zip up the whole project folder and know that everything I need for that project is in that zip.

    I know Git has "submodules", but I've found that managing submodules is a pain.

    My perfect world would look like this:
    Projects/
       Project A/
          <Project A code...>
          Modules/
             Module1.tko <----- LINKED
    Modules/
       Module 1/
          <Module 1 Code...>
          Module1.tko <-------- LINKED
    

    Huh?? We do this all the time.
  • PhreaK Posts: 966
    This is exactly what submodules are for. I know you mentioned that you've looked into them before, but they are well and truly worth getting your head around. Invest the time to learn how to use them and it will pay off many, many times over.
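    For anyone who wants to see the moving parts, here's a minimal self-contained sketch. Both repositories are throwaway local ones standing in for whatever server you'd really use, the module name is made up, and the protocol.file.allow bit is only needed because the demo uses local paths:

```shell
set -e
ROOT="$(mktemp -d)"; cd "$ROOT"

# Stand-in for the shared module repository on your server:
git init -q --bare module.git
git clone -q module.git seed
git -C seed -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Module v1"
git -C seed push -q origin HEAD

# The project repository, pinning the module in as a submodule:
git init -q project && cd project
git -c protocol.file.allow=always \
    submodule --quiet add "$ROOT/module.git" Modules/Module1
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Pin Module1 at its current version"
git submodule status                      # shows the exact pinned commit
```

    The project records a specific commit of the module, so everyone who runs `git submodule update --init` after cloning gets exactly that version; bumping the pin later is just a normal commit in the project repo.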
  • DHawthorne Posts: 4,584
    I do all the coding myself and handle the version control on my own. But I do have guys out in the field loading and testing; they don't generally make any changes, though. So when I need to get a file to them, or make it accessible to them, I use Evernote. It's also a great way to store customer information like logins and WiFi passwords. I prefer it to Dropbox because I can put pretty much any kind of data or file I want in there, and organize the heck out of it with different notebooks, keywords, etc.
  • You know, I think I wasn't clear, or maybe I don't really understand how submodules can work?

    As I understand submodules, each module would require its own folder. So if I have 10 modules in my program, my project will require 10 submodule folders, since each would need its own tracking data, correct? And since I only want the .tkos, I'm assuming I'd have to maintain a separate branch in the module repository wherein I delete all the source code, so that it contains just the .tko and the submodule doesn't report a difference between the working folder and the commit it points to. Or can you add a submodule that says, "this file points to that commit"?

    This is not possible, right?:
    Projects/
       Project A/
          ProjectA.apw
          ProjectA.axs
          ProjectA.tkn
          Modules/
             Module1.tko <----- LINKED
             Module2.tko <----- LINKED
             Module3.tko <----- LINKED
             Module4.tko <----- LINKED
    Modules/
       Module 1/
          Module1.apw
          Module1.axs
          Module1.tko <-------- LINKED
       Module 2/
          Module2.apw
          Module2.axs
          Module2.tko <-------- LINKED
       Module 3/
          Module3.apw
          Module3.axs
          Module3.tko <-------- LINKED
       Module 4/
          Module4.apw
          Module4.axs
          Module4.tko <-------- LINKED
    
  • ericmedley Posts: 4,177
    You know, I think I wasn't clear, or maybe I don't really understand how submodules can work?

    As I understand submodules, each module would require its own folder. So if I have 10 modules in my program, my project will require 10 submodule folders, since each would need its own tracking data, correct? And since I only want the .tkos, I'm assuming I'd have to maintain a separate branch in the module repository wherein I delete all the source code, so that it contains just the .tko and the submodule doesn't report a difference between the working folder and the commit it points to. Or can you add a submodule that says, "this file points to that commit"?

    This is not possible, right?:
    Projects/
       Project A/
          ProjectA.apw
          ProjectA.axs
          ProjectA.tkn
          Modules/
             Module1.tko <----- LINKED
             Module2.tko <----- LINKED
             Module3.tko <----- LINKED
             Module4.tko <----- LINKED
    Modules/
       Module 1/
          Module1.apw
          Module1.axs
          Module1.tko <-------- LINKED
       Module 2/
          Module2.apw
          Module2.axs
          Module2.tko <-------- LINKED
       Module 3/
          Module3.apw
          Module3.axs
          Module3.tko <-------- LINKED
       Module 4/
          Module4.apw
          Module4.axs
          Module4.tko <-------- LINKED
    

    Let's think about this a little less AMX-centrically.

    Make a master directory (but not a repository)
    In that directory create the following directories

    Master\MyProject_1
    Master\MyProject_2
    etc...
    Now, most of us AMX folks usually put all our project files in our project folders, something along the lines of

    Master\MyProject_1\Modules\device_comm.tko
    Master\MyProject_1\Includes\device_setup.axi

    Typically the include is what is project specific. It changes from one project to another depending upon the settings peculiar to that project. But the module stays the same.

    So, what you do is create in your Master Directory a repository for these kinds of things

    Master\Modules_common\device_comm.tko

    Then, in NetLinx Studio, when importing that module, navigate out to the Modules_common folder and pull it from there. You can do this repeatedly in as many projects as you wish. If you make changes to the module file in the Modules_common folder, it will propagate to every project referencing it when you next compile.

    How you deal with multiple programmers/techs is to put the same master folder on their computers, and they just sync up with the current common folder. You can set up permissions and so forth so they can only pull, not edit or push back up.

    Also, if you screwed up and realize the new version of the module is crap, it's easy-peasy to roll back every project with one simple file fix.

    It does get rather goofy if you try to keep stuff that is by definition global in a local folder. But since most of us do not live in a multiple-programmer world, there is just no pressure to adhere to what most of the big programming shops deal with every day. If you have 50 programmers working on a project with tons of potential to screw things up, there's much more pressure to do it the way the versioning solutions require.
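    If you're curious what that looks like in git terms, here's a rough self-contained sketch of the pull-only sync and the roll-back. Modules_common and device_comm.tko are my example names from above; a real setup would clone from a server rather than a local path:

```shell
set -e
ROOT="$(mktemp -d)"; cd "$ROOT"
git init -q --bare Modules_common.git     # stand-in for the server copy

git clone -q Modules_common.git mine && cd mine   # one programmer's clone
git config user.name "Demo"; git config user.email "demo@example.com"
printf 'compiled v1' > device_comm.tko
git add device_comm.tko
git commit -q -m "device_comm v1"
printf 'compiled v2' > device_comm.tko
git commit -q -am "device_comm v2"
git push -q origin HEAD

# Teammates sync with a plain `git pull` whenever they're online.
# And if v2 turns out to be crap, rolling back is one command:
git checkout -q HEAD~1 -- device_comm.tko
git commit -q -m "Roll device_comm back to v1"
cat device_comm.tko                       # back to the v1 contents
```

    The roll-back is itself a new commit, so the history still shows that v2 existed and why it was backed out.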
  • PhreaK Posts: 966
    As I understand submodules, each module would require its own folder. So if I have 10 modules in my program, my project will require 10 submodule folders since each would need its own tracking data, correct?

    Correct. I didn't notice that in your file structure above.

    Eric's suggestion above is a great one and an approach to this problem that I've used in the past. It's good practice to keep .tkos (and all other compiled files) out of your module project repositories and just let those track source; you can then place the compiled modules into a separate 'Common' repository. When you compile a module, tag the head of the module's repository with a version number (I recommend following semver) and mention this in the commit of the compiled .tko / .jar to your compiled-modules repository. That way, if you ever need to check why something is funky with a system running version x of a module, you can reference the source of that version.

    Now, your common modules directory is where you can start getting funky with branching models. Rather than embedding the module version in its file name, use the version control system for this. That way, when you initially compile a module you might place it in, say, a 'dev' or 'beta' branch of your modules repo. Once you're happy with its stability, you can then migrate it to your 'master' branch. Whenever you do any updates they are always committed to the 'dev' branch, requiring everything that ever enters 'master' to come via this path. As your module file names are the same, this means that during system development you just check out 'dev' and write your system as you normally would. When it comes to system commissioning time, just check out 'master' and compile your project against that.
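    Sketching that flow (the module name and version numbers are arbitrary, and `-b master` just forces the branch name used above; throwaway demo):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.name "Demo"; git config user.email "demo@example.com"
git commit -q --allow-empty -m "repo start"

# All work lands on dev first:
git checkout -q -b dev
printf 'tko build 1' > EpsonProjector.tko
git add EpsonProjector.tko
git commit -q -m "EpsonProjector beta build"
git tag v1.2.0-beta                       # semver-style tag at the compile point

# Once it's proven stable, promote dev into master:
git checkout -q master
git merge -q --no-edit dev
git tag v1.2.0
git log --oneline master                  # commissioning systems build from here
```

    The file name never changes; the branch you have checked out decides whether you're compiling against the beta or the stable build.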
  • Ant Posts: 54
    Dare I ask - will GIT, Github, submodules work when saving C*****n files too? Oh and also GUIs and DSP files etc..

    We are setting up CVS from scratch but does anyone have experience of this for C* files?

    Ta
  • a_riot42 Posts: 1,624
    ericmedley wrote: »
    Typically the include is what is project specific. It changes from one project to another depending upon the settings peculiar to that project. But the module stays the same.

    So, what you do is create in your Master Directory a repository for these kinds of things

    Master\Modules_common\device_comm.tko

    Then, in NetLinx Studio, when importing that module, navigate out to the Modules_common folder and pull it from there. You can do this repeatedly in as many projects as you wish. If you make changes to the module file in the Modules_common folder, it will propagate to every project referencing it when you next compile.

    This is great in theory, but in practice it's a big headache and not worth it to me. For instance, I'll have 3 or 4 different Denon A1HDCI modules because different customers are running different firmware, so they all have slightly different commands. Sometimes I'll have to hack a module to add an obscure feature I didn't bother to implement, or change a level range to suit an interface, and that would break something in other projects.

    So we have many almost-duplicate files, but who cares; it all works, and each project is its own little universe, always guaranteed to compile and work right. It has its cons too, but it's not too bad. So we never propagate changes across a bunch of projects at once. That kind of thing makes me break out in a cold sweat in AMX land. NetLinx code is just too brittle for that kind of thing, and I have too many one-off hacks in projects to deal with some wonky issue or bizarre client request to ever feel confident about doing things that way.
    Paul
  • GregG Posts: 251
    Ant wrote: »
    Dare I ask - will GIT, Github, submodules work when saving C*****n files too? Oh and also GUIs and DSP files etc..

    We are setting up CVS from scratch but does anyone have experience of this for C* files?

    Ta

    I don't program that brand myself, but a lot of our work is with them. Submodules always "work", as long as you can put discrete items into a subfolder. It is also possible to keep a company-wide shared User/modules folder that any project can pull from. The main problem with using any version control system with the other guys is that most of the created files (compiled or not, archived or not) are binaries, and most revision control systems are geared to efficiently save the minor differences between text files. When you make a minor change to a binary blob of data, the resulting diff is irrelevant and shows nothing about what you actually did. You essentially have to save nearly a whole new copy of the file every time you make one little change, so you get a really large amount of bloat in your repo.

    Some things to help mitigate this bloat:

    1) Make a new repo for every project.
    With git this is the de facto standard of operation, but with things like SVN and CVS you _could_ create one master repo for the company and just make folders for the projects, since SVN allows checking out just one sub-folder. Don't fall into this trap, because all that binary file bloat will eventually make the repo too cumbersome to be usable and you'll waste a lot of time splitting it out. (ps - if anyone needs a linux shell script for splitting a master git repo into smaller repos for each sub-folder, we can make a deal ;)

    2) Try not to commit too frequently for large binary files.
    One thing some people don't realize is that revision control systems don't have a delete function. If you "accidentally" commit a DVD ISO image file, deleting it from the repo folder and making another commit will not make the repo any smaller, because the system has to give you the ability to revert back to the version of the folder that had the deleted monster file, so it keeps a copy forever. Our repo rule is that the other company's projects only ever get the packed archive files committed after a full feature is added or something has been debugged or fixed for certain. With text code like AMX, the opposite is the rule: commit every little bitty change, early and often.

    3) Use the darn commit messages.
    Make them as detailed as you can. Keep a notepad file open while coding if you are forgetful. This is the only way you can know what has been changed in the other company's code files without pulling each version from the repo, opening each up, and comparing them by hand. With text files like AMX has, you can use a nice online repo web interface which will tell you every line of code that was edited from one version to the next. We use Redmine for this: http://www.redmine.org/
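    A quick illustration of what those messages buy you on the text-file side (throwaway demo; the code lines are just filler):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
printf 'SEND_STRING dvDVD, "PLAY"\n' > room.axs
git add room.axs
git commit -q -m "Initial DVD transport control"
printf 'SEND_STRING dvDVD, "PLAY"\nSEND_STRING dvDVD, "STOP"\n' > room.axs
git commit -q -am "Add STOP: client reported the deck kept playing after power-off"
git log --oneline                         # the running why-log
git diff HEAD~1 HEAD                      # exact lines changed between versions
```

    With binaries you only get the message; with text you get the message plus the line-by-line diff, which is why the commit-early rule flips between the two.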