Posts posted by calbzam

  1. Hi @cloudwalker

    For now, I'm afraid this is the only feature available to reduce the size of cloud repositories:

    https://www.plasticscm.com/download/releasenotes/10.0.16.6241
    
    All platforms - Cloud: Archiving revisions is now available in Cloud!
    
    You can reduce the size (and the costs :)) of your cloud repositories by archiving revisions to an external storage.
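
    In case the command line is more convenient for you, archiving is driven by the "cm archive" command. This is only a rough sketch (the revision spec and the destination path below are examples; please double-check the exact options with "cm help archive" for your version):

    cm archive rev:bigasset.bin#cs:4520 -c="moved to external storage" --file=c:\archives\volume00

    The archived revision data is moved out of the cloud repository into the external file, which is what reduces the repository size (and the cost). If I remember correctly, an archived revision can be brought back later with the --restore option of the same command.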

    Regards,

    Carlos.

  2. Hi,

    "The files have too many differences, there aren't enough resources to complete the operation" message, it means that you have reached the limit of the maximum difference. The limit is 32k differences per operation. The file size shouldn´t be a problem.

    Diffing at that limit needs roughly 1.5 GB of memory. We decided to cap the maximum number of differences because beyond that limit the memory usage grows a lot (the algorithm is optimized up to those values).

    Is this the error message you are seeing?

    Regards,

    Carlos.

  3. Hi,

    To disable Git Sync you need to remove the Git Sync attribute. We also advise removing the GitSync local mappings folder from the client (as you did), as establishing the same sync after changeset deletion will cause Git Sync to fail. Any subsequent Git Sync will need to be created from scratch.
    C:\Users\USERNAME\AppData\Local\plastic4\sync\git\<guid>

    Did you remove the git-repo-sync attribute from the repo?
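
    In case it is easier from the CLI, something along these lines should let you check for and delete the attribute (the repository spec is only an example, and the exact subcommands can vary between versions, so please double-check with "cm help find" and "cm help attribute" first):

    cm find attribute "where name = 'git-repo-sync'" on repository 'myrepo@myorg@cloud'
    cm attribute delete att:git-repo-sync@myrepo@myorg@cloud

    Deleting the attribute object should also remove its values from the changesets of that repository.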

    Regards,

    Carlos.

  4. Hi,

    We currently store our repository data on Google Cloud Platform.

    We perform nightly backups of the repository metadata and retain those for at least 7 days. 

    The repository data itself is stored in Binary Large Object (BLOB) files on Google Cloud Storage persistent disks. According to Google's own documentation:

    Objects written to Cloud Storage must be redundantly stored in at least two different availability zones before the write is acknowledged as successful. Checksums are stored and regularly revalidated to proactively verify the data integrity of all data at rest as well as to detect corruption of data in transit. If required, corrections are automatically made using redundant data.

    Data integrity and error recovery from those redundant copies are handled by Google automatically.

    In the event of a hardware or system failure in a GCP data center, there could be some downtime; the GCP SLA for our configuration is >= 99.9% uptime. However, the integrity of the data, as noted above, is protected. 99.9% uptime translates to a maximum total downtime of fewer than 9 hours per year (0.1% of the roughly 8,760 hours in a year is about 8.8 hours).

    Regards,

    Carlos.
