
Netlinx vs. files on a network drive

At this point, we have 3 different people who access AMX files, so we've got everything we can on the server. For any network-attached system, this is fine. But... I also have a number of installations where the clients are either too cheap to pay our IT department for more network ports, or there's some other reason the system needs to be standalone.

The problem is, every time I end up needing to work on one of these standalone systems - where I've got all the files stored locally - if my workspace file has any references to network drive files, it just spins and spins and spins its wheels for 5 minutes every single time I try to do anything that involves file management, because it can't find those files. It's driving me nuts.

I'm sort of assuming the only real solution here is to have two different workspace files, one that references network resources, and the other for local systems that doesn't?

Any other ideas out there?

Thanks!

Comments

  • ericmedley Posts: 4,177
    You need to use GIT or some other flavor of versioning software. Git is perfect for the application you speak of since it creates a repository on both the local machine(s) and also the server. A user can go to a non-connected installation, do their work, make whatever changes, and then "Commit" the changes to their local machine. Then when they get back to a connected-to-the-internet place, they "Push" those changes up to the main repository (there's a quick command sketch at the end of this post).

    Bear in mind, this is non-destructive. So, if you later decide that whatever changes you made were not good, you can always roll back to a previous version, bring in the parts of the changes that were good and just edit out the bad ones, etc...

    Versioning essentially does what you're doing already, with the exception that you have someone (the software) tracking every change and giving you the ability to roll back if needed.
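
    If it helps, that disconnected loop boils down to a handful of commands. This is only a sketch; the repo URL, branch name, and commit message are placeholders:

        # one time, on the laptop: clone the main repository
        git clone https://your-server/amx-jobs.git
        # on site, after making changes: record them locally
        git add .
        git commit -m "Job 42: fixed IR handling"
        # back on the network: push the local commits up to the main repository
        git push origin master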
  • I'm not sure how I've managed to avoid dealing with versioning software until now, but I have. I've always just worked in 'one man shop' situations before; never in big team programming environments. I cringe at the thought of trying to implement a versioning system in my current work environment. It's a lot more than just the work; it's politics too! Dealing with the delays, and charging the client for that, seems to be a much simpler solution.

    About a year ago, I tried to hire another programmer to replace me. The first thing he did was set up some kind of versioning software (not GIT, I don't remember what it actually was). It *NEVER* worked. Not once. In the 6 or 7 times I needed to do something to the system, never once did I get the "correct" version from the versioning system. There was always one reason or another why the sync never quite worked right. Up to, and including, our own servers crashing and losing data on us. When he left, I ripped all that out and went back to my old way. And pretty much just re-downloaded every program from every controller I manage, to make sure I actually had the right version on file as current. It works, it's just very frustratingly slow in these scenarios.

    What I've done for now is just keep 2 different workspace files: one that references server files, one that only references the local files on standalone systems. As long as I remember to swap workspace files, it works. The one where everything is kept local is blazingly fast compared to the one that references server files, even when I've got good solid connectivity to the server.

    Still gonna have to cross that versioning bridge eventually though.

    Thanks for the tips!
  • a_riot42 Posts: 1,624
    Even if I was the only programmer in my company, I would still implement a versioning system. It's not that difficult, and it has saved me a lot of grief over the years with lost code. I've seen what some others have implemented without versioning: multiple files or directories, multiple file names, dates in file names, numerous workspace files, etc. Anyone who thinks that's easier than a versioning system is nuts!
    Paul
  • GregG Posts: 251
    Thing 1) It is possible to mount a network share into a folder on the c: (or whatever) drive from the command line: mklink /d "C:\Users\Me\Desktop\AMX\IR" "\\server-ip\shared\ir"
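
    (A couple of practical notes, assuming a stock Windows setup: mklink needs an elevated command prompt by default, and removing the link later only deletes the link, not the shared files.)

        rem Run from an elevated (Administrator) command prompt
        mklink /d "C:\Users\Me\Desktop\AMX\IR" "\\server-ip\shared\ir"
        rem Later, this removes just the link; the files on the share stay put
        rmdir "C:\Users\Me\Desktop\AMX\IR"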

    Thing 2) It is possible to then edit the project apw (since it's just xml) and change all the file references from network drives to relative links (if needed):
    <File CompileType="None" Type="IR">
      <Identifier>Sony,SATA65</Identifier>
      <FilePathName>..\..\AMX\IR\Sony,SATA65.irl</FilePathName>
      <Comments></Comments>
    </File>
    <File CompileType="Netlinx" Type="MasterSrc">
      <Identifier>Job 42</Identifier>
      <FilePathName>Job 42,Rev 2.axs</FilePathName>
      <Comments></Comments>
    </File>

    The location is relative to where the apw itself is sitting, so files in the same folder will have no path specified.

    I used to do this a decade or more ago, before Studio created proper relative links. (Even though it didn't create them, it could use them.)

    Now that the apw files are created with relative links and we use git, everything seems to work just fine without any hand patching.
  • ericmedley Posts: 4,177
    a_riot42 wrote: »
    Even if I was the only programmer in my company, I would still implement a versioning system. It's not that difficult, and it has saved me a lot of grief over the years with lost code. I've seen what some others have implemented without versioning: multiple files or directories, multiple file names, dates in file names, numerous workspace files, etc. Anyone who thinks that's easier than a versioning system is nuts!
    Paul


    Agreed. When I first went indie, I started using it. I will say that the major players (SVN, Mercurial, GIT) all suffer from "IT GUY" syndrome: the learning curve can seem a bit steep because the people making the help videos and help posts are exactly the kind of people who have no business making help videos or help files. There is an assumption that you already know how to use it, and why, and under what framework you'd need such a thing. There is little explanation of what terms mean and how they came about. You're just supposed to know why "Commit" is different from "Push" (stuff like that).

    But, all that to say, if you do go through the process it is worth it in the end and it has saved my bacon on more occasions than I can count. Imagine a situation where you're under the gun and have been coding feverishly for hours - trying to deal with more than one problem at a time. And at 11PM you realize your brain is tired and you actually have no idea what the hell you just did for the past 5 hours, and now everything's jacked up and you can't even figure out why.

    With normal file management, you're stuck just parsing through files hoping you can remember why you did something an hour ago.

    With versioning (here again, I use GIT) you can actually go back through the versions you've committed during the day. It really loads all the files back in as they were at whatever time you "committed". Then you can call up a side-by-side "Diff" to see what all has changed. Once you see it go, it makes good sense.
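
    For anyone curious, that flow is only a few commands from a Git Bash (or similar) prompt; the commit ID below is made up:

        # list the day's commits, newest first
        git log --oneline
        # see exactly what changed between that commit and now
        git diff 1a2b3c4
        # load every file back in as it was at that commit
        git checkout 1a2b3c4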

    One thing I can highly recommend as well - for getting started - is to go ahead and use one of the online versioning services like Bitbucket to manage a server for you. They are web based and generally do a good job of hosting and managing the server for you. Then you can experiment and spend time figuring out how it all works. Most of these services have a free level that is pretty robust and gives you tons of tools and help.
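
    Once you have a local repository, pointing it at a hosted service is basically one command (the URL here is a made-up example):

        # connect the local repo to the hosted one, then push for the first time
        git remote add origin https://bitbucket.org/youruser/amx-jobs.git
        git push -u origin master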

    Then once you feel you have a good handle on being a "user" you can tackle managing your own server.

    Bitbucket is nice since they are pretty agnostic. You can set up multiple repos based upon SVN, Git, Mercurial, etc... That way you can try each one.

    Just a thought.
    Thanks for the tips. Like I said, I ended up having a really bad experience with the implementation of versioning, but I tend to think that was as much purposeful sabotage by the guy who implemented it as anything. He was a major jackass. If I ever saw him again, I would have trouble restraining myself from just physically beating the shit out of him. My inability to forget or forgive is a very dark part of my soul.

    I would rate needing to recall a previous version of code as fairly low; the things that have bitten me in the past are straight-up file loss due to server crashes, and bad file references in workspace files due to people moving files around on me without my knowledge, or any knowledge themselves of what they are actually doing.

    Use of versioning tools is nearly universal in the programming world though; I'm just a throwback. Cloud-based file storage would have been a godsend about a year ago too. Unfortunately I keep feeling like I'm too damned busy getting things done to improve the way I do things. I'm frustrated enough with other problems and slowdowns that I'm afraid I'd rage at the learning curve and the time spent implementing, or worse, end up missing a deadline because I got sidetracked. Summer is coming, and my back is to the wall.

    Sigh... sorry to grouse. Point taken. I'll work on it.
    You could also use Microsoft's built-in 'offline files' feature. When you do this, Windows makes a local copy of the network share (or part of it, as you wish), and when offline you can continue to use the share as if you were connected to it. When you make changes while offline, they will be automatically synced with the real network share as soon as you are connected. I have used this feature for more than 10 years now and it has never let me down.
    It isn't a replacement for a versioning system, but it is much simpler to implement: just right-click on a folder and choose 'make available offline'. That's all.

    Richard
  • Thanks Richard. Now that you mention it, I think what my former jackass programmer was using was some 3rd party equivalent of what you describe. Unfortunately, my experience was very, very different: Never once did I get the current version of the file, when I needed it. The fact that I should have canned the guy after the first week was a very hard lesson to learn. That experience basically ruined my job.
  • vining Posts: 4,368
    My inability to forget or forgive is a very dark part of my soul.
    I hear ya!
    Unfortunately I keep feeling like I'm too damned busy getting things done, to improve the way I do things
    Don't ya just hate that, especially when you know just taking a little time off from "real work" to change a few minor things would make everything more efficient and more enjoyable? I think it's a common curse that a lot of us suffer from. I know I didn't have these problems when I was younger and self-medicating :)
  • a_riot42a_riot42 Posts: 1,624
    I started using CVSNT many years ago, and it was simple enough that anyone could get the hang of it easily. I used TortoiseCVS as the client, for integration with Windows Explorer. It worked great and was free, until they changed the business model. Now they charge fees, I believe. Still, it's probably worth it in the end, especially if your company will pay for it. Stay away from PVCS. I now use SVN, and it's OK, but I preferred the CVSNT implementation for ease of use.

    https://www.march-hare.com/cvspro/

    Paul
  • I, like Richard, have been using the Windows Offline Files feature for about 10 years now. I have only once had a problem with it, when the IT guy did something horribly wrong; I had to delete my offline files and make the share available offline again. I am still using Win7, so I don't know how it will work in Win10.
    It syncs over VPN and when I am at the office. We do have a little system that makes sure we don't mess with any old revisions: we make a copy of the current project and work on that, just in case something happens and we lose something.
  • NZRob Posts: 70
    I personally use Dropbox as my versioning / backup / send-someone-code-with-one-click one-stop shop. It's been great for me - all AMX files are stored in the Dropbox folder. I have 2 laptops and both always have the latest of any code that I work on. I use it to work closely with my on-site techs, and any changes they make, I get straight away if they are online. For versioning, you can go back to any change to look at it or revert, etc.
    I used to use SVN, which is great if it's just me, but others struggle with it and need access to the central storage. Everyone has Dropbox, though, or can be sent a one-shot download.
  • pdabrowski Posts: 184
    NZRob wrote: »
    I personally use Dropbox as my versioning / backup / send-someone-code-with-one-click one-stop shop. It's been great for me - all AMX files are stored in the Dropbox folder. I have 2 laptops and both always have the latest of any code that I work on. I use it to work closely with my on-site techs, and any changes they make, I get straight away if they are online. For versioning, you can go back to any change to look at it or revert, etc.
    I used to use SVN, which is great if it's just me, but others struggle with it and need access to the central storage. Everyone has Dropbox, though, or can be sent a one-shot download.
    All of these things make GIT the more attractive option for working offsite while also having the benefit of a versioning system.

    There was also a bit of talk here a little while ago about issues when compiling projects inside Dropbox folders while online.

  • Thanks again everyone, I'm pretty sure I'm going to get started with GIT, and go from there.
  • travis Posts: 180
    Last time I tried to compile on a network share, it was super slow, even on our fancy gigabit LAN.
    Another thing git will save you from.

    I still think hg is easier to get running on Windows (TortoiseHg).

    bitbucket.com has free private repositories for git or hg.