
Posted

Hi

We are currently working on a project in UE4 using Plastic as our main (and only) source control solution.
Since files from Unreal Engine cannot be merged, we are using exclusive checkouts on most files. It happens quite often that we want to test some features or create a slightly divergent version of the project for a demo.

The issue we experience is that doing version control on these branches gets quite annoying. Even though exclusive checkouts check on a per-branch basis whether you have the latest version, it's impossible to check out or push a file that's already checked out on another branch. This behavior feels odd, because the checkout/checkin on the other branch will not affect your current branch in any way. Once the file is checked in again on the other branch, your options are exactly the same as before it was checked out there.

I may be misunderstanding something here, but it feels weird to me at least.
The issue is that branches are never completely separated in terms of checkouts, so making a child branch for a different release or demo, with its own history and locks, is not possible as far as I know.
Specifically for our use case, a one-way child branch that's excluded from locking would be enough. Since this is basically a fork, a way to set up forks would also solve this issue.

Whether this behavior is designed like this on purpose or we simply missed an existing feature, we're open to suggestions on how to set up source control in situations like these.


Thanks in advance!
Wouter

Posted

Hi,

You are right, the locks are only useful if you are working on a single branch (normally when working with binary/non-mergeable files).
This "single branch + locks" workflow fits well for artists or developers working with assets (non-mergeable files).
By the way, are you using Gluon? 
https://www.plasticscm.com/gluon

If you are going to create branches, it normally means that at some point you will need to run a merge, so you would be editing text/mergeable files.
We generally recommend removing from the lock rules the files that are going to be modified on task branches and eventually merged.

I guess your scenario is very specific because you are creating branches where you also modify the binary files and you don't plan to merge later?

1) If this "fork" doesn't need to be integrated later, you can push this branch to a different repo where you can make your changes. This way, the locks from the original repo won't affect you (a minimal sketch of this follows below).
2) If this "fork" needs to be integrated at some point, you will need to make the changes on a task branch, and we recommend removing those files from the "lock.conf" rules. If you are modifying the files on a task branch, it normally means they are mergeable, so locks shouldn't be necessary.
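
If it helps, option 1 can also be scripted with the cm CLI. The following is only a minimal sketch: the repo and server names are made up, and the exact branch/repository spec syntax should be verified against "cm push --help" and "cm mkrep --help" for your version.

#!/usr/bin/env python3
# Minimal sketch of option 1: replicate the fork branch to a separate repo so
# that the original repo's lock rules no longer apply to the work done there.
# All repo/server names are hypothetical.
import subprocess

SRC_BRANCH = "br:/main/demo-fork@project@central.example.com:8087"  # hypothetical
DST_REPO = "project-demo@central.example.com:8087"                  # hypothetical

# The destination repository has to exist before pushing to it.
subprocess.run(["cm", "mkrep", DST_REPO], check=True)

# Replicate the branch; further demo work then happens in "project-demo",
# outside the original repository's exclusive-checkout rules.
subprocess.run(["cm", "push", SRC_BRANCH, DST_REPO], check=True)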

PS: In the following blog post you can also read about some improvements we have on our roadmap regarding locks and task branches (your feedback is welcome!):
http://blog.plasticscm.com/2018/04/2018-backlog.html

 
 
 
 
Quote

Distributed locking: Or traveling locks. For example, I want to lock this file on main/task127 but only unlock it when it reaches main (otherwise, as it is today, the lock is not useful unless you are on a single branch). 
This enables branching for teams with lock restrictions (unmergeable content). It goes even further: I lock it, but it should only unlock when it reaches main@centralserver. 
Awesome, isn't it? Game devs love it, but micro-electronics teams too

Regards,
Carlos.

Posted

Hi,

First off, some of the artists are using Gluon, although most are using the classic UI because it gives a more detailed view (which is handy IF you understand it).
Secondly, the single-branch workflow is indeed working out very well for us.
Pushing our branch to a new repository and pulling it in again every time we need to update it will indeed work in our use case, since we don't want to merge later on.

Though I find it hard to believe that this isn't a fairly common use case. Even building for different platforms might need changes in settings, shaders and sometimes even logic. For big differences like those, it makes sense to make a new repository and pull in any changes you need. For smaller things, though, it might add quite a bit of overhead; making a new repository for each small temporary task might be a bit too much.

While thinking further about this and about the locking system, it makes less and less sense to me why the acquisition of a lock is global while the effects of a lock are separated from branch to branch. I'll illustrate with another example:

Firstly, note that we do not in any situation want to stop using locks; they are key to making sure no work gets lost.
Suppose we are two weeks before a major deadline but, due to some unexpected issues, we still have a whole new feature to implement. Problem: we haven't decided in which of two ways we will make it work. We thus split our dev team of 6 programmers into 2 teams of 3, each of which will implement one of the two approaches in the project. Also suppose their changes cannot be merged because, for example, everything is done in Blueprints.
The two teams start working on two different branches and keep using the checkout system to make sure they are not editing the same file within their own team (and because they are used to using the UE4 Plastic plugin to check things out). But by doing so, they unwittingly block the other team from editing those files at the same time, stalling their progress.

I think there are many other use cases where lock acquisition should not be global, and I can't think of any cases where having it global would be useful (although there probably are some; I'm just biased).
I can definitely see the use of the distributed locking system, as it would make proper feature branching possible. I just don't see why globally acquired locks are the default behavior currently.

Regards,
Wouter

  • 2 weeks later...
Posted

Just hit the same issue. A global lock doesn't seem very useful and has the fundamental flaw of not actually preventing conflicts between branches (until "traveling locks" are implemented). We were also planning to branch one way at some point and never merge back, for porting reasons; "traveling locks" might make that impossible. I suppose we'll look into pushing the branch to a different repo, but it is always nice to have the option to merge later, in case we didn't plan ahead perfectly.

Ideally locking would be branch specific. For our use case, I'd rather have merge conflicts that I manually resolve for binary files than global locking. Locking is nice for our main branch for day to day work, though.

thanks

  • 2 weeks later...
Posted

@calbzam We just found out that locks actually affect other repositories that the files get pushed to. The locks created in the main repository and in the 'child repository' (where we push our changes) share the same identifier (as seen in cm listlocks). Even though the listlocks command shows which repository a lock belongs to, Plastic doesn't use that information when handling locks or creating their identifiers. Ideally these identifiers would be repository- and branch-specific.

Posted

Hi @wouter,

There are two things here: regarding the locks that get wrongly applied between repos, we are currently checking this. I think you also requested this in Zendesk. We are on it.

 

Regarding your suggestion about keeping locks per branch: yes, it makes a lot of sense. It is not our final goal (we'd like to have locks applied across branches and released only when the file reaches a given branch in a given repo), but for now, being able to keep locks per branch makes a lot of sense. Not sure, though, whether we'll be able to develop it soon.

 

 

  • 4 months later...
Posted

@psantosl Can you provide an update on this topic? It's still a big drawback in our workflow with Plastic and often creates situations where work gets lost, because changes on another branch are not taken into account and get overwritten.

  • 7 months later...
Posted

Howdy,

I'm also curious: are there any concrete plans to implement distributed locking?


We are growing from ~20 to ~50 people on our project over the next 12 months. We use task branches, and the lack of distributed locking is making life difficult for content creators. The lack of an effective locking mechanism that works well with multi-branch workflows may force us to move to a single-branch workflow for content creators in the future.

 

I have thought a bit about what Wouter has mentioned (100% global locks causing problems with release branches etc). A mechanism that organizes the branches within a repository into groups could act as a solution:

- I think that the core concept of a branch + sub-branches (+ sub-sub-branches) represents one version of a project well, with sub-branches being tools to collaborate on that particular version of the project. Locks/unlocks should behave according to the "intuitive" distributed-locking model within this group of branches.

Every version of the code that is not by definition going to be merged into another version should then be a separate group. That would apply to each long-lived release branch (if they exist at all) and each long-lived maintenance branch (imagine v1.x / v2.x  / v3.x branches here).

Lock/unlock operations should be contained within a group, and not propagate between groups.

 

One way of modelling this in Plastic would be to use multiple top-level branches. Each top-level branch is the root of a branch group. Locks propagate within that branch group, but not over to other top-level branches.

 

Another way of modelling it in Plastic is to allow marking certain branches as "root branch for locking group" (illustrated with a small sketch further below). Branches marked as such act as propagation stops for locking information. If we create release branches off of main (e.g. /main/release1, /main/release2) and mark /main/release1 & /main/release2 as root branches for locking groups, then we have three locking groups in the tree: /main (excluding those release branches), /main/release1 and /main/release2. We can use all the workflows that are described in the Plastic SCM book without any modifications; it's just that when a file is locked under /main/release1, it won't affect locking elsewhere.

 

A third way of modelling it in Plastic is to take a cue from Perforce's streams model, where the branch tree extends both _upward_ and _downward_ from the /main branch: feature branches are located below /main, and release branches are above /main. This gives a simple visual presentation, but comes with other problems, like: why is there exactly one main branch? And what if someone wants to create a task branch off one of the release branches, which ought to go _below_ the release branch? Many questions appear.

 

I think options 1 and 2 are more appealing than option 3 in the case of Plastic.
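
To make option 2 a bit more concrete, here is a purely illustrative sketch of the propagation rule I have in mind. Nothing in it is an existing Plastic feature or API; the branch names and the "root branch for locking group" configuration are made up.

# Illustrative sketch of the "locking group" idea from option 2. Not an
# existing Plastic feature or API; it only encodes the intended rule: a lock
# taken on one branch blocks another branch only if both branches resolve to
# the same locking group.

# Branches marked as "root branch for locking group" (hypothetical config).
LOCKING_GROUP_ROOTS = {"/main/release1", "/main/release2"}

def locking_group(branch: str) -> str:
    # The group is the deepest marked root that is a prefix of the branch
    # path; branches under no marked root fall into the implicit /main group.
    best = "/main"
    for root in LOCKING_GROUP_ROOTS:
        if (branch == root or branch.startswith(root + "/")) and len(root) > len(best):
            best = root
    return best

def lock_applies(lock_branch: str, other_branch: str) -> bool:
    # A lock only has an effect within its own locking group.
    return locking_group(lock_branch) == locking_group(other_branch)

# A lock taken on a task branch under release1 blocks release1 itself...
assert lock_applies("/main/release1/task042", "/main/release1")
# ...but it does not block task branches that belong to the /main group.
assert not lock_applies("/main/release1/task042", "/main/task007")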

 

 

Anyhow -- reiterating: are there plans to implement distributed locking? Our beautiful workflows are beginning to suffer due to the lack of this feature.

  • 2 months later...
Posted

Hello,

The roadmap is currently being discussed. Locking across branches is being discussed as well, and it is something we are planning to work on eventually.
But I cannot tell you when right now (it was originally planned for 2022; there are now talks of 2021).
I'm afraid we cannot give you a specific timeframe yet, but I can tell you we are looking at where to fit it into the plans.

Sorry for the inconveniences,
Carlos.

  • 3 months later...
  • 2 months later...
Posted

I'm cross-posting this from an old post to keep all conversations about distributed locks in the same place:
 

Quote

 

Hello there! Sorry for reviving this really old post, but it was the first that showed up when googling the topic of exclusive checkouts in a distributed environment. Is there a way to make exclusive locks work in distributed environments right now? E.g. something similar to this workflow:

  1. I request a Lock and Checkout on a file on my local server.
  2. That operation, before running locally, triggers a Lock and Checkout operation on the central server.
  3. The central server sends back a response stating if the Lock and Checkout operation on the central repo succeeded.
    1. If it succeeded, then the file is also checked out and locked locally.
    2. If it failed, the user is warned that someone else already has that file locked and the local operation doesn't go through.

Similar functionality would happen for checking in files. Effectively, "local exclusive checkout/checkin" would become a mirror of "server exclusive checkout/checkin".

Motivation for this ask: We are using Plastic Cloud (paid customers), and several of us work on both binaries and text files, binaries rarely being less than 30% of the work. Not being able to easily check out binaries exclusively is enough of a reason to ditch the distributed workflow entirely and be forced to use Gluon with a centralized repo workflow. This kind of defeats the purpose of us having switched over from SVN, as one of the (if not THE) biggest reasons we decided to give Plastic a try was to get a distributed workflow for "mergeable" files as a possibility. So, even if this is not officially supported yet, could you give me a hint about ways to achieve this via some workarounds? Using triggers maybe? How can I kick off operations on the central repo from a script or something like that? Thanks for your understanding!

 

Hi @calbzam, thanks for your quick answer! After reading this post, I now see that the one issue with my trigger-based solution is that someone can request a valid yet non-useful lock from a local repo. If user Alice pushes a change to file A, thereby releasing the lock for A, and then user Bob requests the lock for file A without pulling changes first, Bob would acquire the lock successfully despite not having the latest version of A.

I guess the before-checkout operation should verify both 1) that the file is not locked on the central repo, and 2) that the requesting user has the most up-to-date version of the file in comparison to main.

Any ideas on how I can implement a before-checkout trigger that accomplishes these 2 checks? Assuming everyone with a local server uses the same triggers pointing at the same central server, I think that would cover my team's needs. Am I missing something?

Regards,

Nahuel

Posted

Hi @nahuelarjonadev,

- The problem is that, currently, the locks are designed to be enabled on the central server (centralized workflow). This way, you can be sure whether somebody has a lock on a file, as long as everyone is working against the central repos.

- I think that, potentially, you could create a "before-checkout" trigger that checks on the central server whether the file is locked or not (e.g., by parsing the output of "cm listlocks"; see "cm listlocks --help" for the available options). A rough sketch follows below.

The problem is that this may not be enough. You would also need to check that you have loaded the latest revision of the item, and this is very difficult to handle considering that new revisions of the item could be created in any of the multiple distributed servers.

- Hopefully the new features to improve the lock mechanism can be scheduled soon.
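
For anyone who wants to experiment with the trigger idea above, here is a rough sketch of such a "before-checkout" trigger. It relies on assumptions you should verify against the triggers documentation for your version: that the paths being checked out arrive on standard input (one per line), that "cm listlocks" run from the client reports the locks held on the central lock server, and that a non-zero exit code aborts the operation. The output format of "cm listlocks" also varies between versions, so the naive substring match would need adjusting.

#!/usr/bin/env python3
# Rough sketch of a "before-checkout" trigger that refuses the checkout when
# an item already appears in the server's lock list. See the assumptions in
# the paragraph above; none of this is an official example.
import subprocess
import sys

def main() -> int:
    # Current lock list as reported by the server; the exact output format
    # differs between versions, so only a naive substring match is done here.
    locks = subprocess.run(["cm", "listlocks"],
                           capture_output=True, text=True, check=True).stdout

    for line in sys.stdin:
        item = line.strip()
        if item and item in locks:
            print(f"'{item}' is already locked on the central server.",
                  file=sys.stderr)
            return 1  # a non-zero exit code aborts the checkout

    return 0

if __name__ == "__main__":
    sys.exit(main())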

Sorry for the inconveniences,

Carlos.

  • 2 weeks later...
Posted

Hi, as a quick update, we are moving back to SVN for now. The concept of Plastic sounds great, but our team concluded it is not ready to handle big projects with binary files well, and definitely not a big Unreal Engine project like ours. I gotta say the Plastic team was always blazing fast to answer our questions, so thanks for that.

We may consider giving Plastic another try in the future, if features like traveling locks or a better integration with Rider get implemented.

I'll keep a close eye on Plastic as the idea is so promising, but due to its current limitations we ended up having to use single-branch development without distributed repos. I'll keep an eye on how it develops.

  • 4 months later...
Posted
On 6/7/2021 at 10:41 AM, nahuelarjonadev said:

I guess the before-checkout operation should verify both 1) that the file is not locked on the central repo, and 2) that the requesting user has the most up-to-date version of the file in comparison to main.

We're working with Cloud, so this type of trigger still seems useful. Though, for check 2, we'd want to ensure that the file in our current branch is the most up-to-date across all branches. I looked at the trigger documentation, but I did not see a good example of how to do something like this.

Posted

Hi,
I do it here by having a "before-clientcheckout" trigger that warns the user if the version of the file they have locally is not the latest across all branches. The user can then decide whether to keep working locally. I also have a "before-clientcheckin" trigger that blocks the checkin if it's not the latest.

I use "cm hist rev:thefile" to get the latest revision on a given file in all branches and "cm ls thefile --tree-br:/mybranch" to get the local file revision... then you compare the two. I skip if it's a new file or not in the lock file list.

