
Jenkins Plastic Cloud Checkout Intermittent Failures


ahudelson


Hello,

I have a small AWS-hosted CentOS VM that runs Jenkins; it uses the Plastic plugin to run jobs defined in a Jenkinsfile checked into our Plastic Cloud repository.

Every few days the Jenkins host gets into a bad state for one of our jobs that runs daily: it becomes unable to update the Plastic workspace. Restarting Jenkins resolves the issue, but only for a few days.

The Jenkins job is configured with "Standard Cleanup" for the workspace. Here are the logs from a failed run:
 

[Project] $ cm setselector --file=/var/lib/jenkins/workspace/Project/selector8810853426325782878.txt /var/lib/jenkins/workspace/Project
Searching for changed items in the workspace...
Cannot perform the switch to branch/label/changeset/shelve since there are pending changes. Please review the pending changes and retry the operation again.
[Project] $ cm setselector --file=/var/lib/jenkins/workspace/Project/selector8810853426325782878.txt /var/lib/jenkins/workspace/Project
Searching for changed items in the workspace...
Cannot perform the switch to branch/label/changeset/shelve since there are pending changes. Please review the pending changes and retry the operation again.
FATAL: The cm command 'cm setselector --file=/var/lib/jenkins/workspace/Project/selector8810853426325782878.txt /var/lib/jenkins/workspace/Project' failed after 3 retries
FATAL: The cm command 'cm setselector --file=/var/lib/jenkins/workspace/Project/selector8810853426325782878.txt /var/lib/jenkins/workspace/Project' failed after 3 retries
[Pipeline] }


I am using PlasticSCM plugin version 3.6.

Could this possibly be a bug in the Jenkins plugin?


I am not renaming the branch. The job takes a branch name as a parameter, but we have only been running it on the "main" branch.

Here is our Selector:
 

repository "Project@FrostGiantStudios@cloud"
  path "/"
    smartbranch "${BRANCH}"


The "BRANCH" parameter that gets passed in each time the job runs is set to "/main"
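The exact parameter declaration is elided from the Jenkinsfile below, but it looks roughly like this (a simplified sketch; the description text here is an assumption):

```groovy
// Simplified sketch of the BRANCH parameter declaration.
// We only ever run the job with the default value "/main".
parameters {
    string(name: 'BRANCH',
           defaultValue: '/main',
           description: 'Plastic branch to build, substituted into the selector')
}
```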


The best repro I have currently is to create a Jenkins pipeline job using a Jenkinsfile checked into a Plastic Cloud repository and wait. The checkout of latest fails on the Jenkins master itself, so I don't believe it has anything to do with the specific contents of the repository; the master is not running any jobs directly, so it should not have anything to do with the specifics of the job execution either.

There seems to be a correlation between the time since the previous run and how likely the issue is to occur (waiting longer between runs = more new changesets committed to SCM = more likely the project gets into this broken state). Once the project has entered this state it remains broken until Jenkins is restarted, which seems odd, because I don't see how restarting Jenkins would clear out any pending changes in the workspace.


Hi,

I think something must be triggering it. Do you have an external tool or script that could be making changes in the workspace?

Could you enable the "cm" log so that the next time the issue happens we can debug it better?

https://www.plasticscm.com/documentation/technical-articles/kb-enabling-logging-for-plastic-scm-part-i

Regards,

Carlos.


I followed the instructions provided, but when I run the Jenkins job it does not output any logs to "${HOME}/.plastic4/cm.log.txt" on the CentOS Jenkins master. If I manually run a cm command over SSH (to create and pull down a workspace, for example), it emits logs properly. Is there something I can do to force it to emit logs even when run through Jenkins?
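In case it helps narrow this down, a quick check along these lines (run as a shell step inside the job; this sketch assumes the plugin runs cm as the same user the job's shell steps run as) should show which user and HOME a Jenkins-run process actually sees, since the service account's HOME may not be /var/lib/jenkins:

```shell
# Diagnostic: show which user and HOME Jenkins-run processes see.
# If HOME differs from /var/lib/jenkins, cm may be writing its log elsewhere.
whoami
echo "HOME=${HOME}"
# Check whether a .plastic4 directory (and log file) exists under that HOME.
ls -la "${HOME}/.plastic4/" 2>/dev/null || echo "no .plastic4 directory under ${HOME}"
```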

I don't think there are any scripts running that would touch the workspace. All the "heavy lifting" for the job occurs on Windows worker nodes, not on the Linux master itself; technically, I don't think the master should even have to pull down the entire workspace, since it only needs the Jenkinsfile. Here is a stripped-down version of our Jenkinsfile; you can see that all of the stages that execute any custom code run on a Windows agent:
 

pipeline {
    agent any
	
    tools {
        nodejs "node"
    }

    parameters {
        ...
    }
	
    environment {
        ...
    }

    stages {
        stage('BuildAndStage') {
            agent {
                label 'windows && buider'
            }
            steps {
               ...
            }
        }
        stage('DeployServerPool') {
            agent {
                label 'windows && game_server_host'
            }
            steps {
                ...
            }
        }
    }

    post {
        always {
            slackSend message: ...
        }
    }
}

 


Hi,

- My guess is that the logging problem is that the user running the Jenkins service doesn't have permissions on the output path. If you are a paying customer, we can arrange a meeting to take a look.

- If the main problem is that the workspace has local changes you don't know the origin of, you could run the following command before the build to undo them:

cm unco --all
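For example, in a declarative pipeline that could be an early step, something like this (the workspace path here is an assumption; point it at your actual master workspace):

```groovy
// Sketch: drop any stray pending changes before the build proper starts.
// The workspace path is an assumption; adjust it to the real one.
stage('UndoPendingChanges') {
    steps {
        sh 'cm unco --all /var/lib/jenkins/workspace/Project'
    }
}
```

Note that if the failure happens during the initial Jenkinsfile checkout itself, a step inside the pipeline runs too late; in that case the command would need to run outside the job, for example from a cron entry on the master.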

Regards,

Carlos.

 

 

