Mikael Kalms


Mikael Kalms last won the day on February 20


  1. We are in a similar situation. We have chosen to put data into different repositories with names like 'OurGame.UnityProject', 'OurGame.SourceAssets', etc. We then use the full Plastic SCM client against the game project, and Gluon against the source assets repo. Gluon allows partial workspaces (you choose which folders to sync). We do not use replication & distributed workflows at all, so I don't know how those would fare together with partial workspaces.
  2. I like a lot of the changes you are planning. Here is one thing I have noticed with the current design, which I'm not sure the new design improves upon: I find it strange that all tab views have to exist within the context of a workspace.

Whenever I work with Plastic I often end up with a tab list like this: [screenshot] The first three tabs are related to the current workspace. So far so good. The last three tabs are entirely unrelated to the current workspace. I wanted to browse the history of an unrelated repository, but the only way for me to do so is to open a "Cloud repositories" tab, and subsequent detail views for the cloud-side repository, within the current workspace's view. I do this a lot, and I spend a lot of time (carefully) cleaning up my tab list afterwards.

Two things feel unnatural to me. The first is that "Cloud repositories" opens as a tab within a workspace view. The second is that when I begin to drill down into another repository, the additional tabs also continue to open within a workspace view. If there was a way for me to get a new view - possibly via the top-left workspace selector - tied to the (server-side) repository I have chosen, and not to any workspace, then I would set up a couple of tabs within that view and keep it around for a long time.

Summary: I would like to be able to create workspace-less views. For example, imagine an "Open repository" alternative at the bottom of this dropdown: [screenshot] The "Open repository" option would ask me to choose a repository, either on my local server or on the cloud side. After choosing a repository, I would then have access to a subset of tabs within that view (only the tabs that make sense when browsing without an associated workspace).
I would set up a couple of these workspace-less views, for projects that I don't actively work on but follow discussions on, and keep those views for a long time. This would reduce the amount of time I spend creating & cleaning up tabs.
  3. Branch, upgrade, merge back to /main. You want to keep check-in history past the upgrade point.
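That workflow might look roughly like this with the cm command line. This is a hedged sketch from memory: the branch name and check-in comments are made up, and exact option spellings may differ between Plastic SCM versions - check `cm help merge` before relying on it.

```
cm mkbranch br:/main/engine-upgrade      # create a child branch of /main
cm switch br:/main/engine-upgrade        # switch the workspace to it

# ...perform the upgrade, checking in as you go; history stays on the branch
cm checkin -c "Upgrade engine"

cm switch br:/main                       # go back to /main
cm merge br:/main/engine-upgrade --merge # merge the branch down to /main
cm checkin -c "Merge engine upgrade into /main"
```

The merge changeset on /main keeps a link back to the branch, so the check-in history made during the upgrade remains browsable after the merge.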
  4. Short version: We would like to see a feature where Plastic Cloud can export backups to a customer-controlled cloud storage bucket. A combination of incremental and full backups would be nice. This would put us in control of the backup situation, with a minimum of operational overhead: we just need to validate the backups regularly. These backups would be in addition to whatever processes Codice Software have for backups internally. I'd expect any additional costs for these backups (processing & network transfer) to be put onto our monthly bill. Full description below. Others who find that this would make sense, please go and vote! https://plasticscm.uservoice.com/forums/15467-general/suggestions/39121939-implement-cloud-based-backups-for-plastic-cloud
  5. Update: I have a complete replica of the UE4 Git repository on my workstation's local Plastic server. However, my local UE4 repo cannot be replicated in its entirety. /main can be replicated to Plastic Cloud just fine. When I attempt to replicate all branches from my local server to Plastic Cloud, however, I encounter this error: [error screenshot] This is the problematic location on the '4.4' branch: [screenshot] As the screenshot shows, there are indeed two heads on the branch. The changesets were created by the 'cm sync' command; I haven't touched the repository myself. I will proceed by replicating all remaining branches and see how far that takes me.
  6. Re ignore.conf - good, good. It does version like a normal file. Now regarding similarity: no, it is not supposed to operate just on file sizes. I don't know the exact algorithm, but it does operate on content. If you package up one or two 'source' files and corresponding 'target' files that got suspiciously high match rates and submit them to Codice Software (or just attach them to a post here), then they can take a look and establish why the matching produced false positives.
  7. From what I can tell, yes, Plastic SCM works just fine with a Unity project without the plugin. If you are capable of working on a Unity project against GitHub without using a Git plugin, then Plastic should not cause any extra hindrances.
  8. My guess: I think it has to do with the algorithm used to convert a single commit into a changeset. I don't think it depends on previously synced changesets on the Plastic side. Rather, it depends on how much previous history walking happens within the Git library - and I presume there is a gradual increase in the amount of Git-side history walking necessary for Plastic to convert a commit into a changeset. This is either due to more calls to the Git library, or because each individual call results in longer walks, on average.
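The guess above can be illustrated with a toy model (my assumption, not Plastic's or libgit2's actual algorithm, and the constants are made up): if converting commit n requires walking an amount of Git history proportional to n, the per-changeset cost rises linearly and the total import time grows quadratically.

```python
# Toy cost model (assumed constants, not measured from Plastic or libgit2):
# importing changeset n costs a fixed base plus a history walk over the
# n commits that precede it.

def import_cost(n, base=0.5, walk_cost_per_commit=1e-4):
    """Assumed seconds to convert commit number n into a changeset."""
    return base + walk_cost_per_commit * n

def total_import_time(num_changesets):
    """Total seconds to import the whole history under this model."""
    return sum(import_cost(n) for n in range(num_changesets))

print(import_cost(50_000))    # ~5.5 s per changeset at the 50k mark
print(import_cost(140_000))   # ~14.5 s per changeset at the 140k mark
```

Under this model the per-changeset trend line rises linearly, which matches the gradual slowdown reported in the sync threads.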
  9. Hi jwvanderbeck, we built a game (15 people over 24 months) using Plastic SCM and Unity, without using the Unity plugin at all. We were happy building the game that way. To me, the big differentiator is that the Unity plugin provides a workflow where you check out / edit / check in files all from within the Unity editor. If you work without the plugin, you do all of those operations either via the Plastic SCM GUI or via Gluon. If you use the Plastic SCM / Gluon GUI then you can work without explicit checkout - you can configure the Plastic SCM client to scan the hard drive for changed files (like Git does, when you have modified files locally but not yet staged anything). We preferred working this way. The team needed more training to use the Plastic SCM client effectively, but it allowed us to work better when the team grew in size. We used branching & merging a lot, both for code & content. Mikael
  10. Hi jwvanderbeck, I've used Plastic when developing a game in Unity previously. Perhaps this helps a bit:

1) It appears that your client is not configured to ignore the Library/ folder. Git keeps ignore rules in a .gitignore file; Plastic keeps them in an ignore.conf file (docs here). The syntax is similar but not identical: complex nested exclusion/inclusion rules do not work the same, and the bracket [ABC] notation does not work the same. Also, ignore.conf must be at the root of the repository. If you have already added the Library/ folder, either delete it - or nuke the repository and reimport your entire project, so you get a clean history when starting off.

2) It appears that Plastic's automatic move detection is not working like it should for you. See the 'similarity %' in the right-hand column? If there are both delete and add operations on your local machine, then Plastic tries to pair them up by looking for delete/add pairs with similar content. I'm surprised that it arrives at such high similarity percentages for those file pairs -- matches like "from sfw_x86.dll to UnityEditor.TestRunner.dll" do not make sense to me; perhaps someone from Codice Software can explain that if necessary. Mikael
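For reference, a minimal ignore.conf for a Unity project might look something like the sketch below. This is an assumption based on common Unity setups, not an official list - the exact folders depend on your Unity version, and Plastic's pattern syntax differs slightly from .gitignore, so verify against the ignore.conf docs.

```
/Library
/Temp
/Obj
/Build
/Builds
/Logs
*.csproj
*.sln
```

Leading slashes anchor the patterns to the workspace root, so an asset folder that happens to be named Library deeper in the tree is not ignored by accident.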
  11. After another hiccup (I needed to restart the machine and forgot I had the sync running in the background), I restarted the sync, and it completed quicker than I had expected - in ~41 hours. Statistics:

Stage                 Duration (seconds)
Compressing objects   5
Downloading           203
Processing objects    2816
Importing             145680

[graph: avg time per changeset during importing]

From the look of this, it appears that the time spent per changeset is a function of two things: 1) the previous number of changesets (because there is more history to walk in the Git tree), causing a gradual increase in processing time - compare 0-50k vs 80k-115k; and 2) the content of individual changesets (perhaps many changed files -> more tree walks need to be done?), causing the two major bumps at 60k-80k and 115k-140k. Anyway -- my fourth, or fifth, attempt to sync a very large Git repository to Plastic has been successful.
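The totals above can be sanity-checked quickly. The ~140k changeset count is taken from the related sync threads and is an assumption here:

```python
# Stage durations from the table above, in seconds.
stage_seconds = {
    "Compressing objects": 5,
    "Downloading": 203,
    "Processing objects": 2816,
    "Importing": 145680,
}

total_hours = sum(stage_seconds.values()) / 3600
avg_import_seconds = stage_seconds["Importing"] / 140_000  # assumed count

print(f"total: {total_hours:.1f} h")                  # total: 41.3 h
print(f"avg import: {avg_import_seconds:.2f} s/cs")   # avg import: 1.04 s/cs
```

So importing dominates the run almost entirely, and the overall average of ~1 s/changeset hides the much slower rates seen late in the import.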
  12. I think you are missing the full story by not running the entire test. My machine reached 45k imported changesets after 2.5 hours - it is around the 50k-60k mark that I begin to see real performance problems. Here are some tools for profiling the situation, in case you are interested: https://github.com/kalmalyzer/profile-gitsync

I have let the sync step run for approximately 12 hours now, syncing to a local server. I interrupted it after it had imported 72k changesets out of 140k. Here is where the time has been spent:

Stage                 Duration (seconds)
Compressing objects   6.402
Downloading           219.25
Processing objects    2823.818
Importing             42205.704

[graph: average time (in seconds) to import each changeset; the light-blue line shows individual values, the dark-blue line is a trend line formed as a moving average over the 10 nearest samples]

What this means is that importing the remaining 68k changesets will take a long time. Even if the trend were to suddenly stop and land at 2 seconds/changeset, it would take another 37 hours to import the remaining changesets. More likely, the trend continues to rise by 1 second per 10k changesets, and it will take 4 days and 20 hours to import the remainder. I will disable Windows Update for a month and continue running this on my machine, just to see what the trend looks like.
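The two estimates above can be reproduced as follows. The starting rate of ~2.8 s/changeset at the 72k mark is my assumption, read off the trend line:

```python
remaining = 140_000 - 72_000   # changesets still to import
slope = 1.0 / 10_000           # observed trend: +1 second per 10k changesets
start_rate = 2.8               # s/changeset at the 72k mark (assumed)

# Optimistic case: the trend stops and the rate stays flat at 2 s/changeset.
flat_hours = remaining * 2 / 3600
print(f"flat 2 s/cs: {flat_hours:.1f} h")            # ~37.8 h

# Likely case: the rate keeps rising linearly; integrate over the remainder.
rising_seconds = remaining * start_rate + slope * remaining ** 2 / 2
print(f"rising: {rising_seconds / 86400:.1f} days")  # ~4.9 days
```

The quadratic term (slope * remaining ** 2 / 2) is what makes the tail of the import so expensive: the second half of a large repository costs far more than the first half.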
  13. I'm trying out the sync against a local server. After 24 hours it has processed 70k of 133k changesets. Quicker than against Plastic Cloud, _but_ it will still take multiple days to complete. I am using a reasonably high-end workstation:

CPU: AMD Ryzen 7 1700X 8-core @ 3.4GHz
Disk: two SSDs (boot partition: Samsung 960 EVO, repo partition: Samsung 860 QVO)
RAM: 32GB
OS: Windows 10 Home

Most time seems to be spent in LibGit2Sharp. I have disabled logging for cm.exe, but that did not seem to make any major performance difference. The only attribute which stands out for the cm process is memory usage. Breaking into it with a debugger, the bulk of the time is spent inside ForeignScm.dll!Codice.Foreign.GitPuller.GetForeignTree. The decompiled av9.b() method spends a lot of time constructing a tree -- just by breakpointing I noticed that the while-loops in there could sometimes take 1+ second to complete:

```csharp
// Decompiled: iterative depth-first traversal of a LibGit2Sharp Tree,
// building a matching TreeForeignNode hierarchy.
internal TreeForeignNode b(Tree A_0)
{
    Stack<TreeForeignNode> stack = new Stack<TreeForeignNode>();
    Stack<Tree> stack2 = new Stack<Tree>();
    TreeForeignNode treeForeignNode = new TreeForeignNode(A_0.Sha, string.Empty, TreeForeignNode.Mode.Directory, false);
    stack.Push(treeForeignNode);
    stack2.Push(A_0);
    TreeEntryInfo treeEntryInfo = new TreeEntryInfo();
    while (stack2.Count > 0)
    {
        TreeForeignNode treeForeignNode2 = stack.Pop();
        Tree a_ = stack2.Pop();
        TreeIterator treeIterator = this.a(a_);
        if (treeIterator != null)
        {
            while (this.a(treeIterator, treeEntryInfo))
            {
                if (av9.h(treeEntryInfo))
                {
                    TreeForeignNode treeForeignNode3;
                    if (this.a(treeEntryInfo, out treeForeignNode3))
                    {
                        treeForeignNode2.AddChild(treeForeignNode3);
                    }
                    else
                    {
                        treeForeignNode3 = this.g(treeEntryInfo);
                        if (treeForeignNode3 != null)
                        {
                            treeForeignNode2.AddChild(treeForeignNode3);
                            if (treeForeignNode3.IsDirectory)
                            {
                                Tree tree = this.b(treeEntryInfo);
                                if (!(tree == null) && tree.Count != 0)
                                {
                                    // Descend into the subdirectory on the next iteration.
                                    stack.Push(treeForeignNode3);
                                    stack2.Push(tree);
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    this.a.a(treeForeignNode);
    return treeForeignNode;
}
```

I really don't know why this completes in a few hours for Manu but takes >24h for me. I could imagine it is one of these three things:

1) The CLR is somehow running a lot slower on my workstation than on Manu's workstation, in general?
2) There are a lot more changesets in the UnrealEngine repo than there used to be (but the repo has existed for 4 years, so that doesn't make sense)?
3) Interop is a lot slower on my machine than on yours, because of configuration reasons on my machine?

I have stopped the replication for the time being. If you have ideas on what to test, let me know. I would like to find a solution for this in the next month(s), but it is not urgent for us.
  14. Hi, I was looking to change the database path for my local server, but had quite a bit of trouble finding the appropriate documentation. It took me three or four rounds of searching until I had connected the dots. Here are things that would have helped me get an answer more quickly, in the https://www.plasticscm.com/documentation/administration/plastic-scm-version-control-administrator-guide document...

* In "Chapter 16: Configuration files", the links in the 'db.conf' section are broken (they lead to anchors in the guide that have since been renamed - #Chapter8:Databasesetup vs #Chapter8:Repositorystorageconfiguration). It would have helped me if 1) this section had correct links, and 2) it mentioned that db.conf is for configuring SQL-type databases.
* In "Chapter 16: Configuration files", there is no mention of the 'jet.conf' file (there is a section for it in Chapter 8, however). It would have helped me if 1) there was a separate section about jet.conf in Chapter 16, and 2) it mentioned that jet.conf is for configuring Jet-type databases.
* It would have helped me if there was a default 'jet.conf' present in "c:\Program Files\PlasticSCM5\server", as that would have allowed me to discover the setting(s) by looking through all the server's config files.

Mikael
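For anyone else landing here: once found, the jet.conf itself is tiny. A sketch, assuming the `basepath` key that Chapter 8 describes (key name from memory - verify against the guide), with a made-up path:

```
basepath=D:\plastic\jet-databases
```

Place the file next to the other config files in the server directory and restart the server for it to take effect.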