Based in Nantes, France CET/CEST (UTC+1, UTC+2)
SRE maintains two Python projects deployed by scap which use Docker containers to build the wheels for the target distribution. One can take inspiration from them:
Those settings are for the Puppet roles. Since roles are solely for production, the Hiera lookup hierarchy on WMCS does not include them (T120165). All those settings are thus missing.
Arturo reviewed the change https://gerrit.wikimedia.org/r/#/c/labs/toollabs/+/558722/ and indicated the final strike is for @Bstorm :)
Should be fixed by frontend-maven-plugin 1.9.0
That was for the wikimedia.biterg.io bot which crawls Gerrit. We have since upgraded the hardware and allocated way more memory to the Gerrit JVM, so that is not really needed anymore.
```
--- a/zuul/reporter/__init__.py
+++ b/zuul/reporter/__init__.py
@@ -128,8 +128,13 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
     def _formatItemReportStart(self, item, with_jobs=True):
         status_url = get_default(self.connection.sched.config,
                                  'web', 'status_url', '')
-        return item.pipeline.start_message.format(pipeline=item.pipeline,
-                                                  status_url=status_url)
+        if status_url:
+            status_url = item.formatUrlPattern(status_url)
+
+        return item.pipeline.start_message.format(
+            pipeline=item.pipeline.getSafeAttributes(),
+            change=item.change.getSafeAttributes(),
+            status_url=status_url)
```
Thanks! I have deleted the job.
T235865 is a similar need but requires Elasticsearch.
Saving Guillaume's diagram here for later reference.
The extra lseek syscall comes from file_put_contents() being invoked with the FILE_APPEND flag.
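For illustration, the two append strategies can be sketched with Python's os module: the seek-then-write variant mirrors the extra lseek syscall, while O_APPEND lets the kernel position at end-of-file itself. This is a sketch of the syscall pattern, not PHP's actual implementation.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")

# Explicit seek-to-end then write: an lseek syscall followed by a write,
# similar to what file_put_contents(..., FILE_APPEND) ends up doing.
fd = os.open(path, os.O_WRONLY | os.O_CREAT)
os.lseek(fd, 0, os.SEEK_END)
os.write(fd, b"first\n")
os.close(fd)

# With O_APPEND the kernel atomically positions at end-of-file on every
# write, so no separate lseek is needed.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"second\n")
os.close(fd)

with open(path) as f:
    data = f.read()
print(data)
```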
Pending review of https://gerrit.wikimedia.org/r/#/c/labs/toollabs/+/558722/ by cloud-services-team :)
We still need some Jessie instances, notably due to a long tail of Jenkins jobs that still have to be migrated, and a CPU performance regression I have yet to reproduce using VMs on my local machine.
@mmodell from the build history of https://integration.wikimedia.org/ci/job/phabricator-jessie-diffs/ the last few builds are from March 2019 for D1144 and D1145. That is for rPHEX.
Seems like there is no PHP opcache cleaner on the instances. I know nothing about that mechanism though :-\
For mediawiki/core there is T237477: Redis: Add support for TLS.
```
puppet/modules/admin/data (production u=) $ ./matrix.py brennen
grp/users            brennen
contint-admins       OK
contint-docker       OK
contint-roots        OK   <-----
deployment           OK
gerrit-admin         OK
releasers-mediawiki  OK   <----
```
In addition to REQUESTS_CA_BUNDLE, one can also set CURL_CA_BUNDLE if you have curl with the bundled certificates, or SSL_CERT_FILE which is for OpenSSL ones. I don't think any of them are available by default on Mac OS X, which comes with its own security store.
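Each tool reads its own variable (requests, curl, OpenSSL respectively); a wrapper script that wants a single answer could implement the precedence like this. `ca_bundle_from_env` and the bundle path are illustrative, not a real API.

```python
import os

def ca_bundle_from_env(environ=os.environ):
    """Return the first CA bundle variable that is set, in the order
    listed above. Illustrative helper, not a real library function."""
    for var in ("REQUESTS_CA_BUNDLE", "CURL_CA_BUNDLE", "SSL_CERT_FILE"):
        path = environ.get(var)
        if path:
            return var, path
    return None, None

# Hypothetical bundle path, e.g. one exported from the system store.
env = {"CURL_CA_BUNDLE": "/usr/local/share/ca-bundle.pem"}
print(ca_bundle_from_env(env))
```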
I just felt it would give extra information when one has a fleet of Jessie, Stretch, and Buster servers. That might give a lead as to why a machine has an old/new package. But I understand it is a design choice; I guess the OS field can even be entirely removed now :)
hi, then I guess it is an issue with Brave?
Paired on the upgrade with @brennen. All four Jenkins instances have been upgraded.
TLDR: I thought about using \internal as an alias, but it suppresses everything until the end of the code block...
We have upgraded to Doxygen 1.8.16 and it still happens. I have edited the task description with the latest example.
I have retriggered all builds. For the tags, using:
The git plugin no longer has any history about what it has built:
I have nuked the build history on contint1001 in /srv/jenkins/builds/mediawiki-core-doxygen-docker
And we should upgrade the collapsible sections plugin for T236222 :)
tldr: try to throttle libupgrader a bit ;)
No out-of-memory messages for December 12th or 13th. The ones I have found in the Gerrit log roughly happened at:
```
[2019-12-14 10:49:05,536]
[2019-12-14 10:49:05,536]
[2019-12-14 10:49:05,542]
[2019-12-14 13:12:08,827]
[2019-12-14 13:12:08,827]
[2019-12-14 15:56:35,167]
[2019-12-14 16:02:30,185]
[2019-12-14 16:11:28,080]
[2019-12-14 16:07:25,808]
[2019-12-14 16:14:29,865]
[2019-12-14 16:27:21,074]
[2019-12-14 16:46:06,978]
[2019-12-14 16:45:08,071]
[2019-12-14 16:43:10,741]
[2019-12-14 16:58:58,182]
[2019-12-14 16:48:06,235]
[2019-12-14 17:24:34,533]
[2019-12-14 17:30:27,134]
[2019-12-14 18:03:58,780]
[2019-12-14 18:24:41,986]
[2019-12-14 18:56:05,281]
[2019-12-14 19:51:11,527]
[2019-12-14 19:47:15,898]
[2019-12-14 19:40:26,615]
[2019-12-14 19:31:29,080]
[2019-12-14 19:30:30,505]
[2019-12-14 19:23:32,859]
[2019-12-14 20:53:35,223]
[2019-12-14 20:41:50,299]
[2019-12-14 20:21:41,008]
[2019-12-14 19:54:08,065]
```
[2019-12-14 16:07:25,808] [HTTP-87043]
WARN org.eclipse.jetty.servlet.ServletHandler : Error for /r/mediawiki/extensions/DataTransfer.git/info/refs
java.lang.OutOfMemoryError: Java heap space
mediawiki/core tags are being regenerated.
To regenerate the doc for mediawiki/core tags, on contint1001 I am issuing:
TAG=1.23.17 bash -c 'zuul enqueue-ref --trigger gerrit --pipeline publish --project mediawiki/core --ref $TAG --newrev $TAG'
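Replaying that for every tag can be scripted by templating the command over a tag list. A minimal sketch; the tag list here is illustrative, not the actual set of mediawiki/core tags that were re-enqueued.

```python
# Sketch: print the zuul enqueue-ref invocation for each tag to replay.
# The tag list is illustrative only.
TAGS = ["1.23.17", "1.27.7", "1.31.6"]

TEMPLATE = (
    "zuul enqueue-ref --trigger gerrit --pipeline publish "
    "--project mediawiki/core --ref {tag} --newrev {tag}"
)

commands = [TEMPLATE.format(tag=tag) for tag in TAGS]
for command in commands:
    print(command)
```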
That was very specific to the jenkins-debian-glue jobs, which somehow rely on the local state of branches. That got fixed by setting the GIT_COMMIT and GIT_BRANCH environment variables, which jenkins-debian-glue uses to fetch the branch. https://gerrit.wikimedia.org/r/#/c/integration/config/+/301791/2/jjb/operations-debs.yaml
CI now runs Doxygen 1.8.16. The master branch has been regenerated with it already https://doc.wikimedia.org/Wikibase/master/php/deprecated.html
We are now using Doxygen 1.8.16 and the documentation for the master branch has been regenerated. I took the example from this task, https://doc.wikimedia.org/mediawiki-core/master/php/classWikimedia_1_1Rdbms_1_1DatabaseMysqli.html , and it shows a nice drop-down list:
The example given works now! https://doc.wikimedia.org/mediawiki-core/master/php/search.php?query=ObjectCache
The polling job for mediawiki/core https://integration.wikimedia.org/ci/job/mediawiki-core-doxygen-docker/ fails:
And all jobs have been updated to use 1.8.16!
2019-12-13 15:33:08,737 [docker-pkg-build] INFO - Successfully tagged docker-registry.discovery.wmnet/releng/doxygen:0.6.0 (image.py:179)
@hashar Sorry for the delay, I wasn't aware of the procedure and issues you described in your comment.
@jcrespo yes I lack write access :]