Mikael Kalms

  1. If you are happy to just use the features from Gluon (i.e. you will not do any branching/merging), then you can put everything into a single repo, just as you speculate at the beginning.

We made a turn-based strategy game in Unity. We really wanted to use task branches for the Unity project itself, but were concerned about repo size and workspace size, just like you. For that project we kept all source assets in Google Drive and the Unity project in Plastic, using the full Plastic SCM client (not Gluon). Google Drive provided a simple way of sharing files, and a backup in case of catastrophic disk failure. The Plastic repo ended up at 85 GB including history. Overall, this worked well for us. The main problems with using Google Drive for source assets are: 1) it is not obvious to users when synchronization is complete, 2) it is cumbersome to look at history, and 3) there is no linkage between "source asset version X" and "imported asset version X".

We are now making a larger FPS in Unreal. We still want to use task branches for the Unreal project, but our source asset collection is so large that if we were to put it in Plastic, we would not want to fetch all of it when getting latest. For this project we keep all source assets in one Plastic repo, and the Unreal project in another. We use Gluon (with partial checkout) for the former repo, and Plastic SCM (using branches) for the latter repo. The source assets repo is currently at 650 GB including history; a full checkout would be >100 GB, but no-one does that. The Unreal project repo is currently at 300 GB including history; a checkout is ~25 GB. Overall, this works well for us. The main problem with using two separate repos is that there is no linkage between "source asset version X" and "imported asset version X", other than looking at timestamps. It does not cause us much trouble in practice.

We are evaluating the Dynamic Workspaces feature. In the future, it may allow us to combine both repos into one, while still allowing us to use branch-based workflows, yet only requiring people to download the files that they access. It is not flexible enough for us yet; we need it to work on multiple OSes, and possibly in Docker containers, to satisfy our build system needs.
  2. This does not block us. It used to make clean builds in Jenkins fail, until we found that adjusting client.conf appropriately made the error go away.
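To clarify what "adjusting client.conf appropriately" meant for us: we pointed WorkspaceServer at a server that actually exists (our cloud org, as also shown in the repro steps further down this page) instead of a nonexistent local server. A minimal fragment, assuming you substitute your own organization name:

```xml
<!-- in client.conf: point WorkspaceServer at a reachable server,
     e.g. a cloud organization, rather than a stopped/absent local server -->
<WorkspaceServer>ue_jenkins_buildsystem@cloud</WorkspaceServer>
```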
  3. Side note: it might be possible to simulate this just by stopping the local Plastic server in a regular installation, ensuring that WorkspaceServer in client.conf points to the local server, and then attempting a 'cm workspace create' like above.
  4. Sorry for the delay. Here are reproduction steps:

1) Ensure you have a dynamic workspace, in this example at C:\MyDynamicWorkspace.
2) Create a batch file at C:\MyDynamicWorkspace\Hello.bat, with the following content: "echo hello".
3) Download and install Total Commander from https://www.ghisler.com/ .
4) Open a command prompt; run C:\MyDynamicWorkspace\Hello.bat; observe that the command window runs the batch file successfully.
5) Open Total Commander; navigate to C:\MyDynamicWorkspace; double-click on Hello.bat; observe that you get the following error message:

(The above screenshot says D drive -- when I tested this, I used an existing dynamic workspace on my D drive. I have also moved my Plasticfs cache folder to the D drive.)
  5. Thank you, that works.

Bug report: Plasticfs does not cooperate well with Total Commander ( https://www.ghisler.com/ ). If I have a *.bat file within the plasticfs file system and I attempt to launch it via Total Commander, I get the following error message:

Similarly, if I try to launch an .exe file:

Also, when deleting a file, Total Commander first attempts to move it to the recycle bin -- which fails, prompting Total Commander to display a dialog asking if I want to delete it permanently. The permanent delete succeeds.

I can launch batch files & executables via Windows Explorer. A "move to recycle bin" delete operation via Windows Explorer triggers a permanent delete directly (with a popup). I presume this means that plasticfs simply doesn't support the concept of the recycle bin.
  6. How do I change plasticfs' backing storage location from %LOCALAPPDATA%\plastic4 to somewhere else? We typically set up our workstations like this: C: for applications, D: for development. When we begin to use plasticfs, any files that are fetched or created locally (think "build output") will take up space on the C: drive instead of D:. We will run out of space on C: quickly, due to the intermediate files that get produced when building.
  7. Bug report #2: Five minutes ago, I logged in to the WebUI. I can browse the files in our primary repo, but I cannot see the contents of any of the files. It appears that the web client is getting '400' responses from the backend when it requests actual data blobs; for example, https://euwest4-00-cloud.plasticscm.com:7178/api/v1/organizations/<org>/repos/<project>/revisions/1678635/data fails for me according to the Console log in Chrome. The backend response is:

{ "error": { "message": "An invalid user api token was provided." } }
  8. Bug report: I think that folders and files with "+" in the name are not handled correctly in the web UI. I have a repo containing folders with "+" in the name; I can't enter those folders in the web UI. I also have files with "+" in the name; I can't get info on those files, and I can't download their contents.
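My guess (an assumption, not verified against the web UI code) is a URL-encoding issue: a raw "+" in a URL is commonly decoded as a space by web stacks unless it is percent-encoded as %2B. A quick sketch of the difference, using a hypothetical folder name "Art+Extra":

```shell
# A raw "+" turns into a space under form-style decoding,
# while percent-encoding it as %2B preserves it.
python3 - <<'EOF'
from urllib.parse import quote, unquote_plus
print(quote("Art+Extra", safe=""))   # percent-encoded form: Art%2BExtra
print(unquote_plus("Art+Extra"))     # naive form-style decode: Art Extra
EOF
```

If the web UI builds API request URLs without percent-encoding path segments, that would explain both the folder navigation and the file download failures.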
  9. I have seen this problem both on Ubuntu 20.04 (when run in WSL2) and on Debian Buster (when run within a Docker container). I have been using cm version throughout all testing.

Here is the install script that the Docker container used: https://github.com/falldamagestudio/UE-Jenkins-Images/blob/aa13570e86edb7f31258f35c545b5475144cd457/ue-jenkins-agents/linux/ue-jenkins-ssh-agent/Dockerfile#L20-L25

And here is the (templated) client.conf used: https://github.com/falldamagestudio/UE-Jenkins-BuildSystem/blob/895eb9b2dfbb36772e17328a0cbf50c6e98abcd2/application/plastic/client.conf.template

When I reproduce this locally in Ubuntu 20.04, I notice that, with <WorkspaceServer>local</WorkspaceServer> in client.conf, a 'create workspace' command like this fails:

$ cm workspace create test1234 test1234 --repository=UE-Jenkins-Game@ue_jenkins_buildsystem@cloud
Error: Connection refused

... but with <WorkspaceServer>ue_jenkins_buildsystem@cloud</WorkspaceServer> in client.conf, it works just fine:

$ cm workspace create test1234 test1234 --repository=UE-Jenkins-Game@ue_jenkins_buildsystem@cloud
Workspace test1234 has been correctly created

From the above, I think the repro steps are:

1) Install plasticscm-client-core only, not any of the full client+server packages.
2) Create a client.conf that does not point WorkspaceServer to any existing server, plus cryptedservers.conf + *.key in case you intend to talk to a server with content encryption on.
3) Try to create a workspace for a remote repository. This will fail with 'connection refused'.

The same thing can be simulated on a Windows machine with a full install. Just stop the "Plastic SCM Server" service, then try to create a workspace:

PS C:\x> cm workspace create test1234 test1234 --repository=UE-Jenkins-Game@ue_jenkins_buildsystem@cloud
Error: No connection could be made because the target machine actively refused it

... but the workspace folder appears on disk, and it's perfectly possible to enter the folder and perform a 'cm update' to fetch the repo contents.
  10. Very glad to see this go Alpha! I understand this is a rather niche question, and I understand if you aren't yet in a position to give a relevant answer, but: do you intend to make the Dynamic Workspaces feature (including the plasticfs driver) operate within Windows containers? What about Linux containers?
  11. I'm very glad to see this. A few small niggles from trying it for just a few minutes:

* I don't see any way to 'discover' the web UI by clicking my way from www.plasticscm.com. I presume you'll add an entry for it somewhere under https://www.plasticscm.com/dashboard in the future.
* The org name in the URL is case-sensitive. If I enter "https://www.plasticscm.com/orgs/<our_org_name_all_lowercase>", then I get a 404 page. Some other services allow typing the URL with the user-chosen entity in the wrong case, and either roll with it or redirect to the properly-cased version. (Example: both https://www.github.com/epicgames and https://www.github.com/EpicGames work.)
* The repos view (https://www.plasticscm.com/orgs/<orgname>/repos) always prints the total number of repos, even when it shows filtered results. I would expect the view to print the number of repos that are shown on-screen; most other views work like that.

Aside from this -- very glad to see this in action. The #1 use case for me, initially, is to share a (read-only) link to a source file or a changeset with a colleague, who might not have the Plastic SCM client ready, or who might not have their workspace updated properly. Here are the questions I'll ask myself:

* I'm in Visual Studio, looking at FooComponent.cpp. How do I create a Plastic web URL to that file most easily?
* I'm in the Plastic SCM client, looking at a view that has FooComponent.cpp in a list of files. How do I create a Plastic web URL to that file most easily?
* I'm in the Plastic SCM client, looking at a changeset. How do I create a Plastic web URL to that changeset most easily?
* Someone has told me "um, there is some junk in FooComponent" over a Slack message, without making the effort to create a web URL for me. How do I access the web UI and navigate to the latest version of that file most easily? What if I'm on a computer? (Less important -- what if I'm on a mobile phone?)
* I'm in the web UI, looking at a file. How do I jump to it in any kind of native program -- Visual Studio, the Plastic SCM client, ...?
  12. Hi, I'm setting up a Jenkins build system. Both Linux and Windows agents are involved, and I run all of this within Docker containers.

For Linux, I install the `plasticscm-client-core` package (so only a client - no server). It _seems_ to work well for checkout & incremental updates - except for one odd thing: in order to do `cm workspace create`, the client.conf WorkspaceServer setting needs to point to a valid server (for example, pointing to a cloud org works well). If I don't do this, then workspace creation succeeds, but the `cm workspace create` command itself errors ("Connection refused", nonzero exit code). Subsequent Jenkins commands (`cm update` etc.) work just fine though.

I don't know whether `cm` is intended to operate correctly in such a "client-only" configuration (no local server). Is that the case? If so, you may want to investigate why the WorkspaceServer setting matters to `cm` when creating a workspace.

(Side note: for Windows containers, I am currently running the whole Cloud Edition installer, and configure a local server etc. within the container. I hope to move to a client-only configuration there as well, sometime in the future.)

Mikael
  13. I'm not the original author, but perhaps my commentary is useful.

> This is excellent feedback. It is the Plastic philosophy that history should not be deleted or changed, but we understand there are circumstances where it may be desired or beneficial. We are currently looking into options for allowing the trimming of old branches and content. I will be sure to bring up the points you have raised here as they are very valid points.

I think there are two different things at play here: trimming of old branches & content to reduce repository size, and trimming of old branches to make the repository easier to navigate. If the goal is to make the repository easier to navigate, then rather than making it easy to delete branches (as you would in Git), I think a better option for Plastic would be to make it easy to mark branches as 'archived'/'inactive'.

> I may be misunderstanding your request here, but I believe this can be done. If you want to check in only the content in one file, then you can select only that one file in the Pending Changes view and check it in. Another alternative would be to create a Gluon workspace for this repo, configure this repo to only download the content you need, make the two-line change, and check it in. Please do let me know if these solutions do not meet this requirement and why, because I would really like to understand what we could do to improve in this area.

Git power users want a finer granularity than "which of my currently checked-out files do I want to check in now?". They want to be able to implement a larger change to a set of files locally, and then check that in as a series of commits, where individual commits only involve _some_ of the changes from _some_ of the files. In other words, they want to be able to mark not only a set of files for checking in: they want to mark individual blocks of code within those files for checking in.
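For comparison, this is what that hunk-level granularity looks like in Git. `git add -p` is the interactive form; the sketch below stages just one of two edits non-interactively via `git apply --cached` (the file name, contents, and patch are made up for illustration):

```shell
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# commit a file, then make two edits far enough apart to form two diff hunks
seq 1 20 > f.txt
git add f.txt
git commit -qm init
{ echo 1; echo 2-edited; seq 3 18; echo 19-edited; echo 20; } > f.txt

# stage only the first hunk -- the non-interactive equivalent of
# answering "y" then "n" in `git add -p`
cat > first-hunk.patch <<'EOF'
--- a/f.txt
+++ b/f.txt
@@ -1,3 +1,3 @@
 1
-2
+2-edited
 3
EOF
git apply --cached first-hunk.patch
git commit -qm "first part only"
```

After the commit, HEAD contains only the first edit; `git diff` still shows the second edit pending in the working tree. That per-hunk selection is the granularity the quoted feedback is asking Plastic to match.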
  14. We have finally decided to stop using Xlinks, and instead include all plugins verbatim in our main repository. The costs and confusion associated with Xlink updates, plus difficult life-cycle management (how do you know whether a repo can be safely renamed/deleted? Perhaps it used to be xlinked by another repo a year ago?), finally made us decide that the upside of having separate change histories for each sub-repo isn't worth it.
  15. This week we are once again running into a wave of these "I merged from main and now stuff isn't working on my local branch" moments of confusion from people on our team. Yes, we have updated an Xlink. Yes, we have reminded people to do "update workspace" after merging from main. No, not everyone remembers to do so.