Mikael Kalms

Members
  • Content count: 157
  • Days won: 15

Mikael Kalms last won the day on June 7 2020

Mikael Kalms had the most liked content!

Community reputation: 6 Neutral

Rank: Advanced Member

  1. I find it difficult to understand in what situations a user would want to download on-the-other-side-of-xlink changes on demand. I would understand if Plastic had a mode where the user can choose whether or not to download the contents of a folder (i.e. partial workspaces), and would use that to not download the xlink's contents at all. I would understand if Plastic had a configuration where you could declare, per workspace, which xlinks should be 'expanded' in the workspace and which should not. (This is partial workspaces, but at the granularity of xlinks instead of every possible folder.) I find it far-fetched, however, that a user would want to edit an xlink but keep the xlink's contents from a previous changeset within the workspace. What's the use case? The user effectively has an inconsistent local workspace at that point. The only use case I can imagine is "... so that the user can change the xlink, and check that in, without needing to download the contents", which I guess some CI processes would be interested in doing. Do you know of _any_ other use cases?

     I understand if you don't want to change behavior because some existing customers rely on the current behavior. I find it hard to accept the current behavior as a good user experience, however.

     For 1), a pop-up might help. For 2), I'm not so sure: the user is not consciously updating the xlink. The user is merging in others' changes, which may or may not include xlink changes. When would the pop-up show? After every merge? After merges that include xlink modifications? For 3), I'll report back with details in case I find a repro case in the future.
  2. I'm not sure we are talking about the same thing. Allow me to elaborate a bit.

     -----------------------

     Fundamentally, everyone on our team expects Plastic to update the contents of an XLinked folder at the same time that the XLink changes. Plastic doesn't work that way, and it causes frustration ("I merged from /main and now files are missing locally, yet /main is good for everyone else on the team") within our team. Here are three scenarios where that is problematic for us when we use read-only XLinks.

     1) When a user creates or modifies an XLink, they need to check it in before they can validate that the xlink was set up correctly. The user needs to learn that the correct procedure is to create/modify the xlink, then check it in, then "Update Workspace". This is not great, but it is something that we can train people to do.

     2) When a user merges, and the merge source includes an XLink creation/update, the user's workspace will not always get updated. This is problematic for the entire team: it means that when merging from /main to a task branch, then depending on what is in the merge source, the user may also need to perform an "Update Workspace" after the merge. In our case, it means that 1 out of 100 merges from /main to a task branch requires the user to click "Update Workspace" afterward or else they get strange errors. They don't care that someone else has modified an XLink on /main; they want a merge from /main to their current task branch to provide them with an up-to-date workspace, always.

     3) When a user merges from /main to a task branch, and the merge source includes an XLink deletion, _under some circumstances_ the XLinked folder will remain on-disk in the workspace. I don't know the exact conditions yet. In our case, it results in people occasionally re-adding the content from a deleted XLink (now as regular files rather than as XLinked files) because they don't always pay enough attention to exactly what they check in.
("Oh, I guess Unreal Editor also wanted to do something to those other files. I don't know why, but I trust UE knows what it is doing...")

     -----------------------------

     Below is a long 2-user journey, which demonstrates items 1 and 2.

     -----------------------------

     User A creates a repository, MainRepo, and a workspace. User A adds the file MainRepo-file1.txt to the repository, on /main. <image 1>

     User A creates another repository, SubRepo, and a workspace. User A adds the file SubRepo-file1.txt to the repository, on /main. <image 2>

     User A creates a feature branch, 'CreateXLink', in MainRepo. <image 3>

     User A adds a read-only XLink from MainRepo to SubRepo. <image 4> <image 5> <image 6>

     Above is the first situation that I find problematic: since Plastic does not download the contents of the SubRepo changeset to MainRepo\SubRepo, there is no way for User A to validate that the configuration is correct without checking in the results first. (For example, if MainRepo was a C++ application and SubRepo contained a library/plugin, User A could not test-build the project on his/her local machine - the test-build would fail since no files are present in SubRepo yet.)

     User A checks in the XLink add. <image 7> <image 8> <image 9>

     Above is the next situation that I find problematic: my local workspace is _not_ up-to-date with cs:2 - the SubRepo folder is still empty - yet there is nothing in the Plastic UI that indicates to me that my workspace is different from the repository. Nothing tells me that clicking "Update Workspace" will have any effect in this situation. Nothing tells me that switching the workspace to cs:1, and back to cs:2, will make any difference to my workspace contents.

     -----------------------------------

     Now we introduce User B. User B will do some work in parallel with User A, directly on the main branch.

     User B creates a workspace, MainRepo_2. User B switches to cs:1. User B adds the file MainRepo-file2.txt to the repository, on /main.
<image 10>

     User B edits MainRepo-file2.txt but does not yet check in the results. <image 11>

     User A switches to /main and merges the CreateXLink branch into /main. <image 12>

     User B notices that there are incoming changes, and clicks on the "view" button. <image 13> <image 14>

     Here is the next problematic situation: User B is encouraged to use the 'Incoming changes' view, but is presented with a red error message. If User B ignores the red error message, goes back, and checks in the file, then this results in the file being marked as Checked-Out, and the user is sent back to the red error message. <image 15> <image 16>

     User B goes to the Branch Explorer view, performs a merge from cs:4, and checks in the result. <image 17> <image 18> <image 19>

     Here is the next problematic situation: User B's workspace is again out-of-sync with the repository contents: the MainRepo_2\SubRepo folder exists but is empty. User B's workspace does not match the repository contents, despite the fact that all the user has done is add a file and perform a merge when instructed to do so by the Plastic UI. Again, if this was a C++ application with xlinks to libraries, then User B is now in a situation where fixing a one-line change in a source file, and then performing a merge as instructed by the Plastic UI, leads to compilation/linker errors on the user's machine until the user performs "Update Workspace" or switches the changeset back and forth. <image 20> <image 21> <image 22>

     User B edits MainRepo-file2.txt again, but does not yet check in the results. <image 23> <image 24>

     User A now moves to cs:5, deletes the XLink locally (choosing to delete the items on disk), and checks in the change. <image 25>

     Next, a slightly problematic situation: User B has learned from their mistake (the Incoming Changes red-text view) and merges immediately from cs:6. This results in a checkin window ON TOP OF the merge window. The user faithfully fills in a change description and then checks in the change.
The user clicks 'update workspace'. After this, the user is returned to the original checkin window, and apparently the file didn't get checked in in the previous step? <image 26> <image 27> <image 28> <image 29> User B attempts to check in the file again. This time, it succeeds. <image 30> <image 31>
  3. Hi, when I change an xlink, I need to first check in the xlink change, and then click "update workspace" to make the contents of the xlinked folder update. I find the extra required step unintuitive, but accept that there are probably reasons for this. (Side question: Do I need to check in the xlink change? Or is it enough to change the xlink locally, and then click 'update workspace'?) However -- if someone else merges from a branch with such an xlink change, they too need to perform an extra "Update workspace" to get the contents of their xlinked folder updated. This means that, sometimes, "merge latest from main to your task branch" is not enough to ensure that your workspace is up-to-date; sometimes you need to perform an additional "Update workspace" step to ensure that, and this depends on whether or not the merge introduced any xlink changes. Am I missing something? If no, is there some way to make Plastic automatically update the xlinked folders within the workspace during such a merge?
  4. I don't work at Codice, but I have used Plastic and other VCSes for some time. My advice:

     ---

     Think of Plastic and Git as "file systems with history, and support for multiple parallel timelines". They provide you with four things that are valuable when working on a UE4 project, compared to when working via Dropbox:

     - You get exact control of when files are uploaded/downloaded. No more "person X changed a file so I guess I'll wait a while and the latest version ought to show up on my machine as well".
     - You get tools that help you work collaboratively with a set of files. There are locking mechanisms which prevent two people from working on the same file at the same time, there is a standard for how and when to do conflict resolution (when two people have worked on the same file without locking), and there are strategies for merging several people's changes within text files.
     - You get detailed history. If someone says "the game is not working for me" and you know which exact version they are on, you too can jump back in time to that version on your machine and reproduce the problem without needing access to their machine.
     - You get the ability to temporarily split (branch) the timeline for the set of files into multiple parallel timelines, work independently on these timelines, and decide later when it's time to merge two timelines back into one; this enables you to easily collaborate with a colleague (or share your work across two machines) before you share your work with the entire team.

     ---

     Plastic is the most useful when you use it to manage a set of files that are tightly dependent on each other. All source code - regardless of language, be it C++ / C# / Lua / ... - is of this kind. The *.uasset/*.umap files in an Unreal project are also of this kind, to an extent. Non-engine-format assets (*.mb / *.blend / *.psd / *.png / *.fbx / *.wav / *.mp3 / ...) are usually not as tightly coupled.
You would still benefit from using Plastic IMO, but the gain is smaller there. Therefore, I recommend that if you do not have enough space for your entire project, put only the Unreal project into Plastic. If you do not have enough space to put the entire Unreal project into Plastic, you are moving into tricky territory ... yes, you can split the Unreal project so that some parts are in Plastic and some are in Dropbox, but you will need to pick a stable split. Moving parts back and forth between Plastic and Dropbox will not help you conserve space, because of what's in the next paragraph.

     Plastic is designed to retain the entire history of all files that it manages. This means that if you put an Unreal project into Plastic, and then continue to work on it, all the old revisions of *.uasset/*.umap files will continue to take up space within the Plastic repository. Every time you upload a modified version of a file, the size of the repository grows. There is currently no mechanism for erasing old revisions of files from Plastic's history. Erasing entire files will not erase their history, so that doesn't free up space either. This will eat into your 5GB, and the size of the repository will keep growing as you work. If your project size is currently, say, 1GB, then you can probably continue to work on your project for another year until you hit the 5GB cap. Otherwise, you will hit it quicker. Once you hit that cap you will want to either A) start paying per-month for Plastic to retain more than 5GB, or B) move to something else - I guess back to Dropbox, since other version control options aren't free at that scale either - perhaps you could run your own Git server, on a machine of your own ... with all the maintenance & backup woes associated?

     Do not use Plastic to distribute packaged builds of the game. The history that Plastic provides is not worth the extra space. Continue to use Dropbox for those.
On our previous game, we kept our "source-format assets" in Google Drive, and the entire game project in a Plastic repository. For our current game, we keep our source-format assets in another Plastic repository, for the sake of convenience. We have ~500GB of files+history in Plastic Cloud today. We use a combination of Google Drive and Google Cloud Storage (+ a bunch of scripts) to distribute packaged game builds.

     ---

     If you want Plastic to manage only some of the files in a folder structure, then you place a file named "ignore.conf" in the root of the Plastic workspace. You will see that Plastic's UI hides the things that you have ignored. Notice that this ignoring mechanism is protection against accidentally adding unwanted files for Plastic to manage - once you have added a file, Plastic will continue to manage it, regardless of what ignore.conf says. The ignore mechanism is useful if you want, for example, the entire Content folder to be managed by Dropbox, or you want to ignore a "Builds" folder where you typically put packaged builds on your machine. See docs here.

     ---

     What you describe above - create a repository in <your org@Cloud>, create a workspace on your machine that is linked to <your repository@your org@Cloud>, configure your workspace in Gluon to include the entire root folder, perform some actions in your workspace, then click "check in" - should be enough to make files upload to the repository. This should also result in an entry in the "Changesets" tab within Gluon. Your colleague should be able to see your changeset listed within "Changesets" too. Your colleague should be able to fetch your files by clicking "Update workspace", in either the "Explore workspace" or the "Incoming changes" tab. If something goes awry there, then I typically use the full Plastic SCM client. It allows browsing the server-side history and seeing which actions have been performed, in more detail.
I would advise that, once you have figured out how Gluon works, you create a blank repository in the cloud + a clean workspace on one machine, copy the entire project from Dropbox to the workspace folder, edit ignore.conf appropriately (ignore the Binaries + Intermediate + DerivedDataCache + Saved folders), and check in the entire workspace from that machine. Then create workspaces (pointing to the same, now no-longer-blank repository) on the other machines. At that point, everybody has the same set of files on their machines and you are in sync; you are now ready to start collaborating. Gluon's UI should guide you from that point onward.
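For reference, a minimal ignore.conf along the lines described above might look like this. Treat it as a sketch - the pattern syntax is gitignore-like, but check Plastic's ignore documentation for the exact rules:

```
# ignore.conf in the workspace root.
# Ignore Unreal's regenerable folders so they never get added
# to the repository:
/Binaries
/Intermediate
/DerivedDataCache
/Saved
```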
  5. We are using a cloud server. I have provided four logs via ticket #23929.
  6. One person on our team disabled the FsWatcher. It made no difference. We do not have a reproducible scenario. We observe it happening during regular work. It always occurs after having been busy in other programs and switching back to Plastic SCM - sometimes directly after alt-tab, sometimes at the first tab switch within Plastic SCM. Once the freeze has occurred, it will not happen again until we switch to other programs and work elsewhere for a while. I am personally experiencing this 1-3 times per day now (because I am doing more hands-on work than I used to). Freezes typically last 2-4 seconds for me.
  7. Side note: one of my colleagues reports that this happens to him multiple times per day. Once I have some more logs I will provide them to you via a support ticket.
  8. We are not able to easily reproduce it, sorry. The general repro steps are:

     1. do a bunch of work in other applications
     2. switch to Plastic SCM (which is already running)
     3. switch tab within Plastic SCM

     Then, at step 3, sometimes there is a multi-second freeze. I have attached a redacted & cut-down version of the Plastic SCM logfile. Since I shut down the Plastic SCM client shortly after experiencing the freeze, I believe that the interesting part starts at 2020-10-13 22:15:20,498. If you want the full, unredacted log, let me know and I will file a support ticket + include the log file. log_redacted.txt
  9. Hi, over the past month or two several of the developers on my team have noticed that, occasionally, the Plastic SCM GUI freezes for a couple of seconds when switching between tabs in the UI. The frequency at which I observe this myself is, say, 2-3 times over the past month, but I only rarely work within the editor. Others experience it more often.

     In the most recent case, the freeze occurred when I switched from a Branch Explorer view to a Pending Changes view by pressing Shift-Tab. The GUI froze for about 7 seconds. After that, I could switch back and forth freely without seeing any freezes. I run Plastic SCM with file system watching enabled. I have Plastic configured to determine file differences based on file hash, not just timestamp. I had a sum total of 1 changed file on the machine.

     When I look within the Plastic SCM application log, I notice a sequence - about 7 seconds in length, which corresponds to the freeze I experienced - with entries like this:

     2020-10-13 22:15:22,240 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change b for <REDACTED2>
     2020-10-13 22:15:22,240 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 1 events/s
     2020-10-13 22:15:22,312 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED3> type:Changed
     2020-10-13 22:15:22,312 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 2 events/s
     2020-10-13 22:15:22,312 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change a for <REDACTED4>
     2020-10-13 22:15:22,685 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:'<REDACTED2>' type:Changed
     2020-10-13 22:15:22,685 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 3 events/s
     2020-10-13 22:15:22,685 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change a for <REDACTED2>
     2020-10-13 22:15:22,689 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED5> type:Deleted
     2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 4 events/s
     2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED5> type:Renamed
     2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 5 events/s
     2020-10-13 22:15:22,691 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED6> type:Changed
     2020-10-13 22:15:22,691 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher <REDACTED1>. Speed: 6 events/s
     2020-10-13 22:15:22,690 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change c for <REDACTED7>
     2020-10-13 22:15:22,691 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change d for <REDACTED7>
     2020-10-13 22:15:22,692 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - Watcher <REDACTED1> - Processing change a for <REDACTED8>
     2020-10-13 22:15:22,936 KALMSHOMEDESKTO\Kalms DEBUG WatcherFsNodeReader - FsWatcher. Event path:<REDACTED3> type:Changed

     No associated errors. It appears that the FsWatcher processed 33 changes in 7 seconds. That sounds low to me. 6 of those 33 changes are for a file that is ~45MB in size. All the other changes are for much smaller files (1MB or smaller), as far as I can tell. Is there any chance that the FsWatcher is related to the stalls we are experiencing?
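As a side note, the events/s figure can be cross-checked directly from the timestamps. Here is a small sketch that estimates throughput from log lines shaped like the ones above (assuming only the leading 'YYYY-MM-DD HH:MM:SS,mmm' timestamp format; the rest of each line is irrelevant to the calculation):

```python
from datetime import datetime

def fswatcher_event_rate(lines):
    """Estimate event throughput from log lines with a leading timestamp.

    Each line is assumed to start with 'YYYY-MM-DD HH:MM:SS,mmm'
    (23 characters). Returns (event count, span in seconds, events/s).
    """
    fmt = "%Y-%m-%d %H:%M:%S,%f"
    stamps = [datetime.strptime(line[:23], fmt) for line in lines]
    span = (max(stamps) - min(stamps)).total_seconds()
    rate = len(stamps) / span if span > 0 else float("inf")
    return len(stamps), span, rate

# Three of the (redacted) entries from the excerpt above:
sample = [
    "2020-10-13 22:15:22,240 DEBUG WatcherFsNodeReader - Processing change b",
    "2020-10-13 22:15:22,685 DEBUG WatcherFsNodeReader - Processing change a",
    "2020-10-13 22:15:22,936 DEBUG WatcherFsNodeReader - Event type:Changed",
]
count, span, rate = fswatcher_event_rate(sample)
# Over these three entries: 3 events in 0.696 s, roughly 4.3 events/s.
```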
  10. Howdy, I'm also curious: are there any concrete plans to implement distributed locking? We are growing from ~20 to ~50 people on our project over the next 12 months. We use task branches, and the lack of distributed locking is making life difficult for content creators. The lack of an effective locking mechanism that works well with multi-branch workflows may force us to move to a single-branch workflow for content creators in the future.

      I have thought a bit about what Wouter has mentioned (100% global locks causing problems with release branches etc). A mechanism that organizes the branches within a repository into groups could act as a solution:

      - I think that the core concept of a branch + sub-branches (+ sub-sub-branches) represents one version of a project well, with sub-branches being tools to collaborate on that particular version of the project. Locks/unlocks should behave according to the "intuitive" distributed-locking model within this group of branches.
      - Every version of the code that is not by definition going to be merged into another version should then be a separate group. That would apply to each long-lived release branch (if they exist at all) and each long-lived maintenance branch (imagine v1.x / v2.x / v3.x branches here). Lock/unlock operations should be contained within a group, and not propagate between groups.

      One way of modelling this in Plastic would be to use multiple top-level branches. Each top-level branch is the root of a branch group. Locks propagate within that branch group, but not over to other top-level branches.

      Another way of modelling it in Plastic is to allow marking certain branches as "root branch for locking group". Branches marked this way act as propagation stops for locking information. If we create release branches off of main (i.e. /main/release1, /main/release2) and mark /main/release1 & /main/release2 as root branches for locking groups, then we have three locking groups in the tree: /main (excluding those release branches), /main/release1, and /main/release2. We can use all the workflows that are described in the Plastic SCM book without any modifications; it's just that when a file is locked under /main/release1, it won't affect locking elsewhere.

      A third way of modelling it in Plastic is to take a cue from Perforce's streams model, where the branch tree extends both _upward_ and _downward_ from the /main branch: feature branches are located below /main, and release branches are above /main. This gives a simple visual presentation, but comes with other problems - like, why is there exactly 1 main branch? And what if someone wants to create a task branch off of one of the release branches? That ought to go _below_ the release branch ... Many questions appear.

      I think options 1 and 2 are more appealing than option 3 in the case of Plastic. Anyhow, reiterating: are there plans to implement distributed locking? Our beautiful workflows are beginning to suffer due to the lack of this feature.
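The second scheme (marked "root branches for locking groups") is simple enough to sketch in code. This is purely illustrative - the function names and branch-path representation are my own, not anything in Plastic:

```python
def lock_group(branch, lock_roots):
    """Return the root of the locking group for `branch`.

    Walks from the branch itself up toward the top of the hierarchy;
    the nearest ancestor (including the branch itself) that is marked
    as a lock root defines the group. If none is marked, the top-level
    branch is the group root. Locks are visible throughout one group
    but never cross into another.
    """
    parts = branch.strip("/").split("/")
    for depth in range(len(parts), 0, -1):  # deepest candidate first
        candidate = "/" + "/".join(parts[:depth])
        if candidate in lock_roots:
            return candidate
    return "/" + parts[0]  # default: top-level branch roots the group

def same_lock_group(a, b, lock_roots):
    """True if a lock taken on branch `a` should be honoured on branch `b`."""
    return lock_group(a, lock_roots) == lock_group(b, lock_roots)
```

With /main/release1 and /main/release2 marked as roots, a lock taken on /main/release1/fix-crash is honoured on /main/release1 but not on /main or its task branches, which is exactly the containment described above.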
  11. If there are lots of changes since the last build, the Plastic plugin for Jenkins fails while communicating with the server. The typical error message is: "FATAL: Parse error: XML document structures must start and end within the same entity." We have a manual workaround, but it requires making dummy commits. It would be nice to eventually have this resolved, as it is one of the things that makes our Jenkins build jobs require manual maintenance on an irregular basis. https://issues.jenkins-ci.org/browse/JENKINS-62442?jql=project %3D JENKINS AND component %3D plasticscm-plugin
  12. I can't speak to the existence (or lack thereof) of an Unreal forum, but my experience when using Plastic Cloud with Unreal is that I have never needed to add an actual URL anywhere. Perhaps you can provide a bit more background? My experience is as follows:

      I have begun by creating a repository for the game in Plastic Cloud. I have done this through the Plastic SCM client. The organization identifier is 'companyname@cloud', and the repository identifier is 'gamename@companyname@cloud'.

      After that, I have created a workspace on my local machine. I have typically done this via the Plastic SCM client (as I have intended to use that client to work against the repository). Here, I typically need to enter 'companyname@cloud' in at least one place when I browse the list of existing repositories. The workspace identifier is then 'C:\folder_where_workspace_is_located'. If I had wanted to work on the project using Gluon and its simplified workflows, I would have created the workspace using Gluon instead.

      After that, I have created a UE4 project in 'C:\folder_where_workspace_is_located', or copied in an existing project.

      After that, I have added an 'ignore.conf' file to the workspace root folder, and listed the files/folders that should not be added to source control. The syntax of the file is similar to that of .gitignore files.

      After that, I have launched the Unreal Editor and opened the .uproject. UE has initially been in source-control-disabled mode. Sometimes I choose to activate it; then I select "Connect to source control...", select Plastic SCM, and accept the defaults (the plugin figures out which folder is the workspace root, and which user I am authenticating as against Plastic Cloud).

      After that, I have checked in Plastic SCM (or Gluon) whether the list of files-to-be-added looks sensible, and continued to iterate on ignore.conf until no temp files/folders are visible in the list.
Then I'd shut down the Unreal Editor, and submit the project including the ignore.conf file.
  13. Yes. Your proposal would give the behaviour that I desire: it mimics the result that I would get if I used only read-only xlinks, made changes to the parent and xlinked repo separately, and manually updated the xlink target when necessary. (I haven't thought much about traceability and the question "does the wxlinked repo structure make sense when viewed in relation to the parent repo structure?". I have only thought about whether the wxlinked repo structure makes sense on its own. I think the former makes empty changesets good, and the latter makes empty changesets bad.)
  14. I tried following your repro steps. Using merge-to rather than merge-from does not result in any extra changesets for me. This is what my A repo looks like at the end: and this is what my B repo looks like at the end: However -- I suspect that we are both seeing problems caused by the same root behaviour (any merge that involves a wxlink change results in a commit via the wxlink). We are both surprised because we think "hey, this case could be solved without touching the child repo -- after all, that's how it is handled with read-only xlinks, and without requiring manual conflict resolution". Right?
  15. Ok. So, my desired behaviour, summarized even shorter. During a merge, when there is a wxlink change involved:

      - If the corresponding read-only xlink change would have resulted in an automatic merge, then I would expect the wxlink to be handled the same way: the wxlink is updated, but no changes are made within the wxlinked repo.
      - If the corresponding read-only xlink change would have resulted in a merge conflict that required manual resolution, then I would expect the wxlink to be handled by automatic branch creation + a merge performed within the wxlinked repo.

      The above logic fulfils two key criteria:

      1) Merges will no longer produce branches with empty changesets in the wxlinked repo.
      2) Any merges that were handled by the current Plastic SCM logic will continue to be handled by the proposed logic.

      (For the time being, we will convert all our shared plug-in repos to be referenced via read-only xlinks. The wxlinks were nice, but the extra "noise" in the wxlinked repos isn't worth it.)
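The proposed rule boils down to a single branch point, which can be written out as a tiny decision function. All names here are hypothetical illustrations, not Plastic SCM APIs:

```python
from enum import Enum, auto

class WxlinkAction(Enum):
    """Possible ways to handle a wxlink change during a merge (illustrative)."""
    UPDATE_XLINK_ONLY = auto()       # repoint the wxlink; child repo untouched
    BRANCH_AND_MERGE_CHILD = auto()  # create a branch + merge inside the child repo

def resolve_wxlink_merge(read_only_equivalent_would_conflict: bool) -> WxlinkAction:
    """Decide how to handle a wxlink change, per the rule above.

    If the same change expressed as a read-only xlink would merge
    automatically, just update the wxlink target and leave the child
    repo alone. Only when manual conflict resolution would be needed
    does the child repo get an automatic branch + merge.
    """
    if read_only_equivalent_would_conflict:
        return WxlinkAction.BRANCH_AND_MERGE_CHILD
    return WxlinkAction.UPDATE_XLINK_ONLY
```

Criterion 1 follows because the automatic-merge path never touches the child repo, and criterion 2 follows because the conflict path is exactly what Plastic does today.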