Mikael Kalms Posted March 16, 2017

Hi, I am trying to replicate a branch from a repo in Plastic Cloud to a local Plastic server. This is part of my https://github.com/falldamagestudio/plastic-cloud-to-ucb/ project.

I am having problems with the replication step. It has worked in the past, but I am unable to get it to work now: the 'cm replicate' command appears to wait forever for the Plastic Cloud side to deliver something. The biggest change since then is that the Plastic server used to run on Azure, but now it runs on a Google Compute Engine VM instead.

I create a new repo on the local Plastic server, and then I start replication from our Plastic Cloud organization to my local Plastic server. Here is what I see in the console:

Quote
root@29ed4f4f80c0:/# cm replicate /main@PongSP@FallDamage@Cloud PongSP --authfile=/conf/authentication.conf
OperationStartingFetch
CalculatingInitialChangeset
CalculatingInitialChangeset
CalculatingInitialChangeset
... etc, forever...

The repository is small: 100 kB of content, 14 changesets, only the /main branch. I have left the command running for a long time (1 hour +) and it continues to print that line, over and over, without making progress. I suspect the problem is within the Plastic Cloud backend, or with the local SQLite database, but I don't know how to debug further. Can you help out?
Attached is a plastic.server.log with all logging settings changed to "DEBUG". Let me know if you need other info for debugging and I will be happy to provide it.

Attachment: plastic.server.log
Mikael Kalms Posted March 16, 2017 (Author)

Additional info which might be important:

The file /conf/authentication.conf contains 2 lines: "LDAPWorkingMode" and a username/password string in a special format (copied from profiles.conf). I believe these are in the correct format, because otherwise I would get an error message very early on when connecting to Plastic Cloud for the replication operation.

The file cryptedservers.conf (and the associated .key file) contain info which I copied from my own workstation. I used this successfully on another VM last week, but it is possible that the contents of these files are somehow different on this VM.

Client & server version: 6.0.16.884
Mikael Kalms Posted March 16, 2017 (Author)

Wait -- version 6.0.16.884: is version 6.x perhaps not compatible with Plastic Cloud today? If so, the solution might be to pull a different version when installing Plastic on the Linux VM. These are the installation lines I perform today:

Quote
RUN echo "deb http://www.plasticscm.com/plasticrepo/plasticscm-common/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/plastic.list
RUN echo "deb http://www.plasticscm.com/plasticrepo/plasticscm-latest/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/plastic.list
RUN wget -q http://www.plasticscm.com/plasticrepo/plasticscm-common/Ubuntu_14.04/Release.key -O - | apt-key add -
RUN wget -q http://www.plasticscm.com/plasticrepo/plasticscm-latest/Ubuntu_14.04/Release.key -O - | apt-key add -
RUN DEBIAN_FRONTEND=noninteractive apt-get -q update && apt-get install -y -q plasticscm-complete && plasticsd stop

I will test installing plasticscm-* version 5.4.x instead of 6.0.x.
Mikael Kalms Posted March 16, 2017 (Author)

Same behaviour with plasticscm-complete version 5.4.16.867, unfortunately.
manu Posted March 16, 2017

Hi Mikael, we can't see any requests enqueued for your organization. Can you try again? Could we schedule a quick online meeting to check the issue with you?
Mikael Kalms Posted March 16, 2017 (Author)

I am testing again now, with negative results. Time: 15:56 UTC. My local Plastic server runs at IP 130.211.108.218.

Happy to do an online meeting tomorrow; I will PM you my contact details.
Mikael Kalms Posted March 17, 2017 (Author)

Results after the support session:

Running the Plastic software inside a Docker container, on a Docker host, on a VM, on Google Compute Engine, has networking problems when communicating with Plastic Cloud. It can connect to the Plastic Cloud servers and execute some commands, but "cm replicate" fails with obscure timeouts.

Running the Plastic software inside a Docker container, on a Docker host, on a VM, on Azure, works fine when communicating with Plastic Cloud. This is what we will do for the time being.
Mikael Kalms Posted March 21, 2017 (Author)

Update: This is indeed an interaction between Docker and Google Compute Engine.

GCE has a network-wide MTU of 1460. Docker ignores this and sets up a number of extra network interfaces with MTU 1500. I am not certain, but I also expect that GCE machines have Large Segment Offload active. These three factors combined mean that if an application (such as Plastic) attempts to send a lot of data via TCP, some fragments of the TCP communication get dropped. Net result: communication from the local Plastic server to Plastic Cloud becomes unreliable.

I have made things work in our case by manually forcing the MTU for all network interfaces related to the Docker containers to 1460: https://github.com/falldamagestudio/plastic-cloud-to-ucb/commit/91faf02257ea4d41531541d9000277d36f268668

Issue log: https://github.com/falldamagestudio/plastic-cloud-to-ucb/issues/15
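For anyone hitting the same symptom: below is a rough sketch (not the exact script from the commit above) of how this kind of MTU mismatch can be diagnosed and worked around. The hostname cloud.plasticscm.com, the interface name docker0, and the value 1460 are assumptions; substitute whatever matches your own cloud server and network.

```shell
# Diagnose: send a full-size frame with the Don't Fragment bit set.
# A 1472-byte ICMP payload plus 28 bytes of headers is a 1500-byte packet,
# which should fail on a 1460-MTU network, while a 1432-byte payload
# (a 1460-byte packet) should get through.
ping -M do -s 1472 -c 3 cloud.plasticscm.com
ping -M do -s 1432 -c 3 cloud.plasticscm.com

# Fix, option 1: tell the Docker daemon to create its default bridge
# with the host network's MTU, then restart it.
echo '{ "mtu": 1460 }' > /etc/docker/daemon.json
systemctl restart docker

# Fix, option 2: force the MTU on an already-existing bridge interface
# (this does not persist across daemon restarts).
ip link set dev docker0 mtu 1460

# Verify.
ip link show docker0 | grep mtu
```

Option 1 is the cleaner route for the default bridge, but note that user-defined networks may need the MTU set per-network (e.g. with `docker network create --opt com.docker.network.driver.mtu=1460 ...`), which is effectively what forcing the MTU on every Docker-related interface achieves.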
Archived: This topic is now archived and is closed to further replies.