Mikael Kalms

Members
  • Content Count: 151
  • Joined
  • Last visited
  • Days Won: 15

Mikael Kalms last won the day on June 7

Mikael Kalms had the most liked content!

Community Reputation: 6 Neutral

About Mikael Kalms
  • Rank: Advanced Member


  1. Side note: one of my colleagues reports that this happens to him multiple times per day. Once I have some more logs, I will provide them to you via a support ticket.
  2. We are not able to reproduce it easily, sorry. The general repro steps are:
     1. do a bunch of work in other applications
     2. switch to Plastic SCM (which is already running)
     3. switch tabs within Plastic
     Then, at step 3, there is sometimes a multi-second freeze. I have attached a redacted and cut-down version of the Plastic SCM log file. Since I shut down the Plastic SCM client shortly after experiencing the freeze, I believe that the interesting part starts at 2020-10-13 22:15:20,498. If you want the full, unredacted log, let me know and I will file a support ticket and include the log file. log_redacted.txt
  3. Hi, over the past month or two, several of the developers on my team have noticed that the Plastic SCM GUI occasionally freezes for a couple of seconds when switching between tabs in the UI. I have observed this myself perhaps 2-3 times over the past month, but I only rarely work within the client; others experience it more often. In the most recent case, the freeze occurred when I switched from a Branch Explorer view to a Pending Changes view by pressing Shift-Tab. The GUI froze for about 7 seconds. After that, I could switch back and forth freely without seeing any freezes.
     I run Plastic SCM with file system watching enabled, and I have Plastic configured to determine file differences based on file hash, not just timestamp. I had a sum total of 1 changed file on the machine.
     When I look in the Plastic SCM application log, I notice a sequence - about 7 seconds in length, which corresponds to the freeze I experienced - with entries like this:

        2020-10-13 22:15:22,240 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change b for <REDACTED2>
        2020-10-13 22:15:22,240 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 1 events/s
        2020-10-13 22:15:22,312 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED3> type:Changed
        2020-10-13 22:15:22,312 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 2 events/s
        2020-10-13 22:15:22,312 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change a for <REDACTED4>
        2020-10-13 22:15:22,685 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:'<REDACTED2>' type:Changed
        2020-10-13 22:15:22,685 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 3 events/s
        2020-10-13 22:15:22,685 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change a for <REDACTED2>
        2020-10-13 22:15:22,689 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED5> type:Deleted
        2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 4 events/s
        2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED5> type:Renamed
        2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 5 events/s
        2020-10-13 22:15:22,691 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED6> type:Changed
        2020-10-13 22:15:22,691 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 6 events/s
        2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change c for <REDACTED7>
        2020-10-13 22:15:22,691 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change d for <REDACTED7>
        2020-10-13 22:15:22,692 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change a for <REDACTED8>
        2020-10-13 22:15:22,936 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED3> type:Changed

     No associated errors. It appears that the FsWatcher processed 33 changes in 7 seconds. That sounds low to me. 6 of those 33 changes are for a file that is ~45 MB in size; all the other changes are for much smaller files (1 MB or smaller), as far as I can tell. Is there any chance that the FsWatcher is related to the stalls we are experiencing? (See the rough cost sketch below.)
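     To illustrate why I suspect the hashing path (this is speculation on my part; I don't know Plastic's internals, and the re-hash-per-event behaviour below is an assumption): if the client re-hashes a file on every change event, the six events for the ~45 MB file would mean six full re-reads plus hashes. A minimal Python sketch of the hashing cost alone, using an in-memory 45 MB buffer as a stand-in for the asset file:

        import hashlib
        import os
        import time

        # Stand-in for the ~45 MB file that produced 6 of the 33 change events.
        data = os.urandom(45 * 1024 * 1024)

        start = time.perf_counter()
        for _ in range(6):                    # one full hash per change event
            hashlib.sha1(data).hexdigest()
        elapsed = time.perf_counter() - start
        print(f"6 hashes over 45 MB took {elapsed:.2f} s")

     On typical desktop hardware this should complete in well under a second, so if re-hashing is involved in the stall, the time presumably goes to disk reads or to contention with the UI thread rather than to the hash computation itself.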
  4. Howdy, I'm also curious: are there any concrete plans to implement distributed locking? We are growing from ~20 to ~50 people on our project over the next 12 months. We use task branches, and the lack of distributed locking is making life difficult for content creators; the lack of an effective locking mechanism that works well with multi-branch workflows may force us to move content creators to a single-branch workflow in the future.
     I have thought a bit about what Wouter has mentioned (100% global locks causing problems with release branches etc.). A mechanism that organizes the branches within a repository into groups could act as a solution. I think the core concept of a branch + sub-branches (+ sub-sub-branches) represents one version of a project well, with sub-branches being tools for collaborating on that particular version of the project. Locks/unlocks should behave according to the "intuitive" distributed-locking model within this group of branches. Every version of the code that is not, by definition, going to be merged into another version should then be a separate group. That would apply to each long-lived release branch (if they exist at all) and each long-lived maintenance branch (imagine v1.x / v2.x / v3.x branches here). Lock/unlock operations should be contained within a group, and not propagate between groups.
     One way of modelling this in Plastic would be to use multiple top-level branches. Each top-level branch is the root of a branch group. Locks propagate within that branch group, but not over to other top-level branches.
     Another way of modelling it is to allow marking certain branches as "root branch for locking group". Branches marked this way act as propagation stops for locking information. If we create release branches off of main (e.g. /main/release1, /main/release2) and mark /main/release1 and /main/release2 as root branches for locking groups, then we have three locking groups in the tree: /main (excluding those release branches), /main/release1 and /main/release2. We can use all the workflows described in the Plastic SCM book without modification; it's just that when a file is locked under /main/release1, it won't affect locking elsewhere. (See the sketch below for how the group resolution could work.)
     A third way of modelling it is to take a note from Perforce's streams model, where the branch tree extends both _upward_ and _downward_ from the /main branch: feature branches are located below /main, and release branches above /main. This gives a simple visual presentation, but comes with other problems: why is there exactly one main branch? And what if someone wants to create a task branch off of one of the release branches, which ought to go _below_ the release branch...? Many questions appear.
     I think options 1 and 2 are more appealing than option 3 in the case of Plastic. Anyhow -- reiterating: are there plans to implement distributed locking? Our beautiful workflows are beginning to suffer due to the lack of this feature.
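     To make option 2 concrete, here is a rough sketch of how lock-group resolution could work (my own illustration, not Plastic code; the names and the set of marked branches are invented): walk up the branch hierarchy from the branch requesting the lock until you hit a branch marked as a lock-group root, or a top-level branch.

        # Branches explicitly marked as "root branch for locking group" (example data).
        LOCK_GROUP_ROOTS = {"/main/release1", "/main/release2"}

        def lock_group(branch: str) -> str:
            """Return the branch that anchors the locking group `branch` belongs to."""
            current = branch
            while True:
                if current in LOCK_GROUP_ROOTS:
                    return current
                parent = current.rsplit("/", 1)[0]
                if not parent:            # reached a top-level branch such as /main
                    return current
                current = parent

        # A lock taken on a task branch under /main does not affect work under
        # /main/release1, because the two resolve to different groups:
        assert lock_group("/main/task001") == "/main"
        assert lock_group("/main/release1/fix002") == "/main/release1"

     A lock request would then only conflict with existing locks whose branches resolve to the same group.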
  5. If there are lots of changes since the last build, the Plastic plugin for Jenkins fails during communication. The typical error message is: "FATAL: Parse error: XML document structures must start and end within the same entity." We have a manual workaround, but it requires making dummy commits. It would be nice to eventually have this resolved, as it is one of the things that makes our Jenkins build jobs require manual maintenance on an irregular basis. https://issues.jenkins-ci.org/browse/JENKINS-62442?jql=project%20%3D%20JENKINS%20AND%20component%20%3D%20plasticscm-plugin
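     For what it's worth, that error message is what Java XML parsers emit when they are handed a document that ends mid-element, which makes me suspect (an assumption, not something I have verified) that the plugin's command output gets truncated when the changeset list is very large. A tiny Python analogue of the failure mode:

        import xml.etree.ElementTree as ET

        complete = "<changesets><cs id='1'/><cs id='2'/></changesets>"
        truncated = complete[:30]        # document cut off mid-element

        ET.fromstring(complete)          # parses fine
        try:
            ET.fromstring(truncated)
        except ET.ParseError as e:
            # Python's wording differs from the Java message, but it is the
            # same class of failure: the document ends before all open
            # elements are closed.
            print("parse failed:", e)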
  6. I can't speak to the existence (or lack thereof) of an Unreal forum, but in my experience using Plastic Cloud with Unreal, I have never needed to enter an actual URL anywhere. Perhaps you can provide a bit more background? My experience is as follows:
     I begin by creating a repository for the game in Plastic Cloud, through the Plastic SCM client. The organization identifier is 'companyname@cloud', and the repository identifier is 'gamename@companyname@cloud'.
     After that, I create a workspace on my local machine. I typically do this via the Plastic SCM client (as I intend to use that client to work against the repository). Here, I typically need to enter 'companyname@cloud' in at least one place when I browse the list of existing repositories. The workspace identifier is then 'C:\folder_where_workspace_is_located'. If I had wanted to work on the project using Gluon and its simplified workflows, I would have created the workspace using Gluon instead.
     After that, I create a UE4 project in 'C:\folder_where_workspace_is_located', or copy in an existing project.
     After that, I add an 'ignore.conf' file to the workspace root folder, listing files/folders that should not be added to source control. The syntax of the file is similar to that of .gitignore files. (See the example below.)
     After that, I launch the Unreal Editor and open the .uproject. UE is initially in source-control-disabled mode. Sometimes I choose to activate it; then I select "Connect to source control...", select Plastic SCM, and accept the defaults (the plugin figures out which folder is the workspace root, and which user I am authenticating as against Plastic Cloud).
     After that, I check in Plastic SCM (or Gluon) whether the list of files-to-be-added looks sensible, and continue to iterate on ignore.conf until no temp files/folders are visible in the list. Then I shut down the Unreal Editor and submit the project, including the ignore.conf file.
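     For reference, a reasonable starting point for ignore.conf in a UE4 project might look like the following. (This is an illustrative example based on the standard UE4 generated folders, not an official template; adjust it to your project.)

        Binaries
        DerivedDataCache
        Intermediate
        Saved
        .vs
        *.sln
        *.suo
        *.VC.db
        *.opensdf
        *.sdf

     The Config, Content and Source folders (plus the .uproject file) are what you generally do want under source control.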
  7. Yes. Your proposal would give the behaviour that I desire: it mimics the result I would get if I used only read-only xlinks, made changes to the parent and xlinked repos separately, and manually updated the xlink target when necessary. (I haven't thought much about traceability and the question "does the wxlinked repo structure make sense when viewed in relation to the parent repo structure?". I have only thought about whether the wxlinked repo structure makes sense on its own. I think the former perspective makes empty changesets good, and the latter makes them bad.)
  8. I tried following your repro steps. Using merge-to rather than merge-from does not result in any extra changesets for me. This is what my A repo looks like at the end: [screenshot] And this is what my B repo looks like at the end: [screenshot] However -- I suspect that we are both seeing problems caused by the same root behaviour (any merge that involves a wxlink change results in a commit via the wxlink). We are both surprised because we think "hey, this case could be solved without touching the child repo -- after all, that's how it is handled with read-only xlinks, and without requiring manual conflict resolution". Right?
  9. Ok. So, my desired behaviour, summarized even more briefly. During a merge, when there is a wxlink change involved:
     - If the corresponding read-only xlink change would have resulted in an automatic merge, then I would expect the wxlink to be handled the same way: the wxlink is updated, but no changes are made within the wxlinked repo.
     - If the corresponding read-only xlink change would have resulted in a merge conflict requiring manual resolution, then I would expect the wxlink to be handled by automatic branch creation + a merge performed within the wxlinked repo.
     The above logic fulfils two key criteria: 1) merges will no longer produce branches with empty changesets in the wxlinked repo, and 2) any merge that is handled by the current Plastic SCM logic will continue to be handled by the proposed logic. A sketch of the decision rule follows below. (For the time being, we will convert all our shared plug-in repos to be referenced via read-only xlinks. The wxlinks were nice, but the extra "noise" in the wxlinked repos isn't worth it.)
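     Here is the decision rule spelled out as a sketch (my own pseudocode; the function and parameter names are invented). base, source and target are the csetids that the wxlink points at in the common ancestor, the merge source, and the merge target:

        # Hypothetical sketch of the proposed wxlink merge rule.
        def resolve_wxlink(base: int, source: int, target: int):
            """Return (new csetid for the wxlink, action in the wxlinked repo)."""
            if source == target:
                return source, None              # nothing to do
            if source == base:
                return target, None              # only the target side moved the link
            if target == base:
                return source, None              # only the source side moved the link
            # Both sides moved the link: a genuine conflict, so fall back to
            # today's behaviour -- create a branch in the wxlinked repo and
            # merge the two csets there.
            return None, "branch-and-merge-in-wxlinked-repo"

        # The problem case from my repro: only one side changed the link, so
        # the pointer is updated and the wxlinked repo stays untouched.
        assert resolve_wxlink(base=3, source=3, target=7) == (7, None)

     The key property is that the final branch-and-merge case is the only one that touches the wxlinked repo, which matches how read-only xlinks behave today.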
  10. Thanks for the explanation, Carlos. I understand the reasoning. I have a problem with how Plastic handles the wxlink merge (items 3 & 4).
      If the B2 link had been a read-only xlink, then people would have done the work manually within the B2 workspace. The B2-related changes within the A2 workspace would only be changes to the xlink's csetid. This would result in an xlink change first committed in cs:4@A2; this would get merged back to /main in cs:5@A2, and that same change would propagate to /main/A2_1_task001 in cs:6@A2. Since no extra branches/csets would be created in B2, there would be no change to the xlink in cs:8@A2.
      This changes when we use a wxlink for the B2 link. As soon as _anyone_ changes the csetid which B2 points to, and there is at least one more active branch in parallel, at least one extra set of branches (and corresponding null commits) will be created when people begin to merge their branches. If people manage to always have overlapping branches, the extra branches and null commits will continue to occur on B2 until the end of time.
      This is my desired behaviour. During a merge where either the source or the target, but not both, contains an update to the wxlinked csetid, I want:
      - no new branch created in the wxlinked repo
      - no empty cset created in the wxlinked repo
      - the wxlinked csetid to be updated in the parent repo, just as if it had been a read-only xlink.
      I want this behaviour during branch merges. I want this behaviour during cset cherry-picks.
      I think I am missing something here. Will get back to this tomorrow; I need to think more about it.
  11. Okay, I have a short repro case for my problem (which may or may not be related to M-Pixel's problem). It involves concurrent branches. When someone changes something within a wxlinked repo on one task branch and then merges that up to /main, then, under certain circumstances, this results -- a couple of steps further away -- in a merge operation from /main to another task branch producing a wxlink update _and_ an empty task branch in the wxlinked repo. This keeps perpetuating itself as long as there are overlapping task branches.
      Here is the first situation where it becomes obvious that there is a problem:
      Screenshot 1 shows that I am on /main/A2_2_task003, with no local changes. I am about to merge from /main to my branch, using the "Merge from this branch..." feature.
      Screenshot 2 shows that the pending merge is about to update the cset for the wxlink. It is also going to change the branch expansion rule, despite there being no pending changes listed within the wxlinked repo.
      Screenshot 3 shows that an additional branch has been created within the wxlinked repo. This branch is created when I start the "Merge from this branch..." operation.
      If I undo the merge changes and delete the additional branch within the wxlinked repo, I can recreate the problem over again. I will provide a snapshot of the two test repos to Codice via a support ticket.
  12. @M-Pixel I have tried following the repro steps you posted in the Uservoice, to see whether it really is the same problem I have been seeing. However, from what I can tell, the "good" and the "bad" case in the Uservoice are identical procedures. There's a numbering mistake in the "bad" case, and the wording for the xlink creation step is different -- but, again, I can't tell any functional difference between "good" and "bad". Can you please clarify this?
  13. All the commits in the Xlinked repo are empty. I will file a support ticket for further troubleshooting.
  14. Hi, you should be aware that GitSync (synchronization directly between a Git repo and a Plastic repo) has some surprising limitations. For your use case, GitServer (which allows you to use a Git client to work against a Plastic repo) sounds more like what you want to use. I haven't tried it myself.
      You may, however, find that you are happy working directly with the Plastic SCM client. You get lightweight branches, a reasonable GUI client, good diff views, and a good history view. People who want to rewrite their commit history before pushing may find Plastic limiting. Personally, I'm happy with Plastic's model; a strong GUI client + lightweight branches covers most of my (and my colleagues') needs.
      There are indeed locks. Be aware that Plastic doesn't support travelling locks yet ( https://plasticscm.uservoice.com/forums/15467-general/suggestions/37148053-go-ahead-and-implement-traveling-locks ), but as long as your artists stay on /main all the time and work "mostly like P4", they should be good. Our art staff use branches, do not use locking, and manage to avoid stepping on each other's toes for the most part... but it is not for everyone.
      We keep dumping stuff into Plastic Cloud perpetually, and haven't really considered doing anything else, because the pricing for extra GBs of cloud storage (https://www.plasticscm.com/cloud/pricing) is relatively cheap for a commercial operation. How much space do we use? I don't know, 300-500 GB I guess?
  15. You can do it with the current UI. First, use click + shift-click to mark a range of files. (You get a blue box-type mark over each of the marked files.) Then, click the checked/unchecked box next to one of the marked files. All the marked files will change their checked/unchecked status to match that of the file whose box you just clicked.