
hypotheticalEric

Members
  • Content Count: 30
  • Joined
  • Last visited
  • Days Won: 2

hypotheticalEric last won the day on September 14 2015

hypotheticalEric had the most liked content!

Community Reputation

2 Neutral

About hypotheticalEric

  • Rank
    Advanced Member

Recent Profile Visitors

1,436 profile views
  1. I figured that wouldn't be possible. Yes, I updated, I think, a couple of weeks ago, so I may not be on the very latest, but I'm not more than a version behind. If you have any other ideas I'd like to give them a try, but if it's a Github restriction I can continue using Bitbucket instead. Thanks, - Eric
  2. Hi Carlos, Thanks for the quick reply. It does work with Bitbucket, whoop! Ideally I'd like to get it working with Github, but that may be too much work for what I need (mostly as an additional backup location). I did cm find revs where "size > 104857600" --format={item}:{size} on repository '<repository name>' and found exactly one file over 100 MB (a sketch for running that check across several repositories is below). Of course! It was actually included by mistake, and it's quite a few revisions back in the history. I know this would be rewriting history, but is there any way to delete that file now? The file was added in changeset 296 and deleted in 298 (nothing changed with it in 297), and now I'm on 498, so it's a little way back.
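     In case it helps anyone else hitting GitHub's 100 MB file limit, here's a minimal sketch of how that check can be scripted across several repositories. It just wraps the cm find query above with Python's subprocess module; the repository names are placeholders for my setup, and the parsing assumes the {item}:{size} output format, so adjust both to your own configuration.

        import subprocess

        # GitHub rejects any single file larger than 100 MB.
        GITHUB_LIMIT_BYTES = 100 * 1024 * 1024  # 104857600

        # Placeholder repository names -- substitute your own.
        REPOSITORIES = ["myrepo", "anotherrepo"]

        for repo in REPOSITORIES:
            # Mirrors the command line above:
            #   cm find revs where "size > 104857600" --format={item}:{size} on repository '<name>'
            result = subprocess.run(
                ["cm", "find", "revs", "where",
                 "size > %d" % GITHUB_LIMIT_BYTES,
                 "--format={item}:{size}",
                 "on", "repository", repo],
                capture_output=True, text=True)

            offenders = []
            for line in result.stdout.splitlines():
                # Result lines look like "<item path>:<size in bytes>"; keep only
                # lines whose trailing number is actually over the limit, so any
                # summary line cm prints is ignored.
                parts = line.rsplit(":", 1)
                if len(parts) == 2 and parts[1].strip().isdigit() \
                        and int(parts[1]) > GITHUB_LIMIT_BYTES:
                    offenders.append(line.strip())

            if offenders:
                print("%s has files over GitHub's limit:" % repo)
                for line in offenders:
                    print("  " + line)
            else:
                print("%s looks fine" % repo)
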
  3. I'm trying to sync an existing Plastic repository with Github using GitSync, and I'm always getting an error: This happens after it goes through the steps of exporting, packaging, and sending. I'm trying to push to a private Github repository, brand new, just created, with no commits at all. I have my Github user set as an admin (owner of the repository). I have my gitsync.conf file in my user/Appdata/local/plastic4 directory as described in the documentation. There's no gitsync.conf in the Plastic client directory. The repository is pretty small (478 changesets, almost all code and no large binary files), and even branches with almost nothing in them are being rejected. My gitsync.conf reads: (With the xxx.com part as my real email address used on Github.) Has anyone come across errors like this? Is there anywhere Plastic saves additional log info when it does the sync? It sounds like the "pre-receive hook declined" error is from Github, not Plastic, but it's not terribly helpful since it doesn't say what the error is. Maybe there are some built-in pre-receive hooks across all of Github that are rejecting me?
  4. The server is on the same network, typically just on the office LAN, though sometimes over VPN if I'm out of the office. Physically the server is about 12 feet away, so there shouldn't be any connection problems.
  5. Oh great! Adding the plasticpipelineprotocol.conf file fixed it on my Windows machine. Apologies for not spotting that earlier. I'm now finding that it breaks the connection with my main server; if I delete that file, I can use the Branch Explorer, etc. I'm using Jet as my database backend and found this thread, which sounds relevant: Is it possible for these two configurations to coexist, or do I need to update the configuration file when I need to pull from Github, then reset it after I'm done? (Not that big of a deal, because I will only occasionally push / pull to Github compared to my local server.)
  6. Hey Manu, Here are the versions I tried (I'm terrible about keeping these in sync):
     Linux 6.0.16.1078: success
     Mac 6.0.16.1614: failed
     Windows 6.0.16.1600: failed
     The repository is public; it's at https://github.com/hypotheticalinc/gaffer.git. That one is a fork, so I tried it with another non-forked repo and got the same failures.
  7. I'm trying to sync my Plastic repo with a Github repository and I'm seeing some strange behavior. If I try to sync from my laptop or workstation, I get this error (run from the command line): So it can connect to Github but can't seem to pull the changes. The strange part is that when I logged into the machine my Plastic server is running on (Linux) and ran the same command, it worked great! So I'm halfway there, but it would be nice to be able to sync with the GUI and not have to log into that server. I think, though I'm not certain it's related, that the error on the server side when I tried to sync from the client is this: I'm guessing it's a configuration problem on my client side?
  8. Plastic 6

    My migration went fairly well, though it would be nice to have a dedicated tool on Linux for changing database backends. I had 130+ repositories, so I created a shell script to call the RepliKate tool for each one. On Linux you can use the mono installed in the Plastic directory to run RepliKate. It does have trouble (at least on Linux) with spaces in repository names. I also had trouble with timeouts on the database it was reading from (MySQL); oddly, I hit these even when trying to replicate some of the problematic repos in the Windows GUI. Maybe it's a hardware / disk capabilities problem? Ultimately I ended up writing a Python script to do more or less what RepliKate does, with an added function to record which repositories had errors so I could try them again (a rough sketch is below). Still working on a few of the last problem children, but it's mostly done.
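    In case the shape of that wrapper is useful to anyone, here is roughly what it does. The replication call itself is a placeholder: I run RepliKate through the mono that ships with Plastic, but the executable path and arguments below are hypothetical, so check RepliKate's own usage text for the real ones. The part that mattered to me is recording the failures so the problem repositories can be retried later.

        import subprocess

        # Repositories to migrate; in my case this list came from the old server.
        REPOSITORIES = ["repo_one", "repo_two", "repo with spaces"]

        def replicate(repo):
            """Replicate one repository; returns True on success."""
            cmd = [
                "/opt/plasticscm5/mono/bin/mono",  # mono from my Plastic install; adjust the path
                "RepliKate.exe",                   # hypothetical invocation
                repo,                              # source repository (placeholder argument)
                repo,                              # destination repository (placeholder argument)
            ]
            return subprocess.run(cmd).returncode == 0

        failed = []
        for repo in REPOSITORIES:
            print("Replicating %s ..." % repo)
            if not replicate(repo):
                failed.append(repo)

        # Save the failures so the problem repositories can be retried later.
        with open("failed_repos.txt", "w") as f:
            f.write("\n".join(failed))

        print("%d of %d repositories failed; see failed_repos.txt"
              % (len(failed), len(REPOSITORIES)))
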
  9. Plastic 6

    Perfect, thanks Manu! I've done a quick bit of testing and it went well. I'll probably kick off the migration tonight.
  10. Plastic 6

    Is there any information on the jet.conf file? I need to make sure all storage is done in a specific directory (which is where the large virtual disk is mounted) so I'm guessing I'll need to configure that correctly for the jet backend?
  11. Plastic 6

    Right now it's just me; the server is running on Linux. Performance is great as-is, but it seems like I can't upgrade MySQL without something going wrong, so if I could have an easier upgrade process and even better performance, it seems like a no-brainer.
  12. Plastic 6

    I'm in a similar position and would love to know how to migrate to Jet. Is it simple enough that you could put it in this forum or another how-to, or is it a more complicated procedure that is specific to the individual installation? We have about 1.6 terabytes in Plastic (using MySQL) distributed among a bunch of repositories. Is Jet still a good choice for this case?
  13. That's a huge help, thanks Manu! I'll put in a request on the uservoice and see if anybody else needs this. In the meantime, I'll play with getting it to work using the query. thanks, - Eric
  14. I'm looking for a way to change the comment attached to a checkin via triggers. Here's the use case: We use an online image review site called ProofHQ where clients can leave comments on our work as we go. These comments are essentially to-do items like "make this car less red", etc. When artists go to check in files to Plastic, rather than have them rewrite or copy / paste all the done items (which we know nobody will do when they are busy, so we get checkin comments like "updates"), I'd like to let them just put the ProofHQ serial number in the changeset comment, something like "proof:800282". A Plastic before-checkin trigger would then look for that serial number string, pull in the comments through the ProofHQ API, and add them to the changeset comment. I tried a really simple example using the before-checkin trigger and changing the PLASTIC_COMMENT environment variable, but it doesn't seem to work. (I assume Plastic doesn't read this variable back in, and it's just a publishing mechanism?) Is there a way to either change the comment through the before-checkin trigger, or via "cm" commands and an after-checkin trigger? A rough sketch of what I have so far is below. thanks!
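     Here's roughly what the trigger script looks like so far, to make the idea concrete. It reads PLASTIC_COMMENT, pulls the proof:NNNNNN serial out of it, and fetches the review comments. The ProofHQ URL and the shape of the response are placeholders (I haven't wired up the real API yet), and the final step of getting the expanded text back into the changeset is exactly the part I'm asking about.

        import os
        import re
        import json
        import urllib.request

        # The before-checkin trigger receives the pending comment in this variable.
        comment = os.environ.get("PLASTIC_COMMENT", "")

        # Look for a ProofHQ serial number such as "proof:800282" in the comment.
        match = re.search(r"proof:(\d+)", comment)
        if match:
            serial = match.group(1)

            # Placeholder endpoint -- the real ProofHQ API call will differ.
            url = "https://api.proofhq.example/proofs/%s/comments" % serial
            with urllib.request.urlopen(url) as response:
                data = json.loads(response.read().decode("utf-8"))

            # Assume the response is a list of comment objects with a "text" field
            # (again, a placeholder for whatever ProofHQ actually returns).
            done_items = "\n".join("- " + item["text"] for item in data)
            expanded = comment + "\n\nProofHQ notes:\n" + done_items

            # Open question: reassigning PLASTIC_COMMENT here doesn't seem to be
            # read back by Plastic, so for now this just shows the comment I want
            # the checkin to end up with.
            print(expanded)
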
  15. You are right, they don't need to be sequential. But how do I get the changeset ID of the commit I will be performing in the future? Here's my workflow:
     1. The artist checks out a Photoshop file that will be worked on; let's say at this point it is changeset 27.
     2. In that Photoshop file is a text layer that shows the changeset on the image. Because the changeset in the image is always out of date, it could be anything below 27.
     3. The artist makes some changes and prepares to send the file to the client. Part of the prep is to run a script that updates the changeset number in the text layer from step 2. Right now this returns 27, but I want it to return the next changeset. It doesn't matter if it's 28 or 200, just that it is the changeset that will be assigned when the checkin is done.
     4. After saving out a copy of the file (let's say to their desktop, out of version control to keep it simple), they send it to the client.
     5. Then the artist checks in all the changes they've made. As far as I can tell, this is when the changeset is updated.
     So I want to be able to "see the future" and get that changeset number before the checkin begins. That's where my idea of reserving a changeset number came in. Maybe I'm doing something strange that could be done better? Should this be done in a checkin trigger? (That would be tricky for this case but might be possible.) One rough approach I'm experimenting with is sketched below. thanks!
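     For what it's worth, the approach I'm experimenting with is to ask the server for the highest changeset number currently in the repository and assume the next checkin will get max + 1. That guess only holds while nobody else checks in between the query and my checkin, and the cm query syntax below is from memory, so treat it as an assumption and adjust it to whatever your cm version accepts.

        import subprocess

        def predict_next_changeset(repo_name):
            """Guess the changeset number the next checkin will receive.

            Assumes changeset ids only ever grow and that nobody else checks in
            before we do. The 'cm find' query is an assumption about the syntax;
            adjust it for your cm version.
            """
            result = subprocess.run(
                ["cm", "find", "changesets",
                 "--format={changesetid}",
                 "on", "repository", repo_name],
                capture_output=True, text=True, check=True)
            ids = [int(tok) for tok in result.stdout.split() if tok.isdigit()]
            if not ids:
                raise RuntimeError("query returned no changeset ids; check the syntax")
            return max(ids) + 1

        # Example: stamp this value into the Photoshop text layer before checkin.
        print(predict_next_changeset("myrepo"))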