Commit messages
|
|
|
|
|
There are currently no automated e-mails to that list, so this allows
some communication to backports users.
|
|
|
|
Otherwise the Puppet object names are used, which isn't correct.
|
|
|
|
Also, reinstate the http://start.mageia.org/ redirect that was
accidentally deleted in the previous commit.
|
|
|
|
|
|
|
|
|
false means we don't want to add a Reply-To: header, which is Sympa's
default, but writing this section causes a warning message:
Bad entry "other_email false" for key "other_email", paragraph "reply_to_header"
when running "sympa upgrade".
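For reference, a sketch of a `reply_to_header` paragraph that Sympa accepts without complaint; an `other_email` line is only valid when `value` is set to `other_email` (the values below are illustrative, not the actual list settings):

```
reply_to_header
value sender
apply respect
```

Omitting the paragraph entirely gives the same default behaviour without triggering the "Bad entry" warning.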
|
|
|
|
|
This complements the one for ml.mageia.org and is needed for some
automated e-mails to get through. There should be no senders from this
domain other than on Mageia infrastructure.
|
|
|
|
|
|
There were 2 bugs:
- getting the architecture was broken due to a missing %
- the name of the binary package was used instead of the src package;
  sometimes they are different
|
|
|
|
|
|
|
When subversion update messages were changed to come from a new address
in commit f3f49a26, not all the mailing lists they were sent to were
updated to accept the new sender. This change whitelists it for all the
remaining subversion destination lists. It also adds root@mageia.org to
some lists, which were likely missing some other automated messages.
|
|
|
Use of https was disabled there as unnecessary in commit 56079da1.
|
|
|
|
This time, use a YAML mapping, as I suspect that's what the code is
looking for instead of a plain string.
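Assuming the value feeds SVN::Notify's ticket_map option, which expects a hash of regex => sprintf-style URL, a YAML mapping would look something like this (the full URL is illustrative; the regex is the one quoted in the svnlook error elsewhere in this log):

```yaml
# A YAML mapping deserializes to the Perl hashref that SVN::Notify's
# ticket_map expects; a plain "regex=url" string does not.
ticket_map:
  '\bmga#(\d+)\b': 'https://bugs.mageia.org/%s'
```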
|
|
|
|
|
|
|
It resulted in the svnlook error:
Can't use string ("\bmga#(\d+)\b=https://bugs.magei"...) as a HASH
ref while "strict refs" in use at
/usr/share/perl5/vendor_perl/SVN/Notify.pm line 1789.
|
|
|
|
These mails should be delivered more reliably because this domain has an
SPF record.
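For illustration only (not the real record): an SPF record is just a TXT entry on the sending domain listing which hosts may send mail for it, along the lines of:

```
; hypothetical zone-file entry; the real mechanisms depend on
; which hosts are allowed to send for the domain
mageia.org.  IN  TXT  "v=spf1 mx -all"
```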
|
|
|
|
|
- Add the path to the subject
- Add a link to Bugzilla for bugs
- Add a link to svnweb
|
|
|
|
|
|
|
|
|
It's not perfect (logs for requests that are rejected by the server are
still buffered) but at least normal requests can immediately be seen
in the logs now. A better approach would be to use a logging function
that flushes automatically.
Also, wait for the queue thread to complete before exiting so things are
left in a clean state.
|
|
|
|
|
|
|
The forum's certificate expired so the reverse proxy was returning 500
errors. But since the forums server is running in a VM (friteuse)
on the proxy server (sucuk), data never leaves the machine so there is
no need to encrypt it.
|
|
|
|
This should resolve wiki e-mail delivery issues as b9c41d85 did for
Bugzilla.
|
|
|
|
|
|
binrepo is 93% full but that's fine (133G free) so alert at 95%.
www is 97% full (3.5G free), but the only part that grows is the
autobuild logs, and only while it runs.
|
|
|
|
This still needs to be enabled once it's checked.
|
|
|
|
|
|
This is currently the only subdomain with an SPF record and is therefore
the only one from which some mail providers will accept e-mails these
days. Having _noreply in the name makes it more obvious that a reply
will go nowhere.
|
|
|
|
If the process info can't be read, just skip it instead of displaying
"[: : integer expression expected"
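The error comes from `[` being handed an empty string where it expects an integer. A minimal sketch of the guard, assuming the info is read with ps (the PID and the rss field are hypothetical, not the script's actual ones):

```shell
pid=4194304                               # hypothetical, likely-dead PID
mem=$(ps -o rss= -p "$pid" 2>/dev/null | tr -d ' ')
if [ -z "$mem" ]; then
    # Process info unreadable (e.g. the process exited): skip it
    # instead of letting an empty value reach the numeric test and
    # print "[: : integer expression expected".
    echo "skipped $pid"
elif [ "$mem" -gt 0 ]; then
    echo "pid $pid: ${mem} kB resident"
fi
```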
|
|
|
|
|
Remove errors.log or the report won't be generated. Also, log stderr to
the journal or else the normal stderr logs will cause a spurious cron
e-mail to be sent.
|
|
|
Apache doesn't support comments here.
|
|
|
|
|
|
All these point to valid https: resources, but there is a small chance
that some unusual interaction will cause it not to work. Some of these
changes also won't take effect until the server is restarted, so we'll
need to keep this in mind if failures occur long from now.
|
|
|
These are for use by humans only.
|
|
|
|
The report will be available at
https://pkgsubmit.mageia.org/spec-rpm-mismatch.html
|
|
|
|
|
|
The test jobs seem to be working as desired, so make them actually
start deleting the old files every 4 hours. Use -ignore_readdir_race in
all of them to avoid errors when schedbot cleans the files in the middle
of a run (an unlikely situation because of the clean-up times involved).
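The effect of the flag can be seen in a self-contained sketch (the path and age test are illustrative; the real jobs use the build-tree paths and longer retention):

```shell
tmp=$(mktemp -d)
touch "$tmp/old-build.log"
# -ignore_readdir_race (GNU find): if another process removes an entry
# between find reading the directory and examining the file, ignore it
# instead of reporting an error.
# -mmin -5 lets the freshly created file match for this demonstration.
find "$tmp" -ignore_readdir_race -type f -mmin -5 -delete
ls -A "$tmp"        # prints nothing: the file was deleted
rmdir "$tmp"
```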
|
|
|
Follow-up to 82f3da50
|
|
|
|
|
|
|
Even if 9/nonfree/backports_testing/ hasn't had any builds recently, it
shouldn't be deleted. Only delete directories deeper in the hierarchy.
Follow-up to f7e017e8
|
|
|
|
|
|
|
|
|
|
|
|
|
It's now confirmed that tidy() has been creating huge (2.6 GiB) state
files that the Puppet agent loads before every agent run, which causes
runs to take up to 4 days each and use all of the RAM on the server.
Cleaning files using find is more straightforward and efficient and
avoids this problem.
The tidy() functions are disabled here and the cron jobs aren't actually
deleting files yet, so a follow-up commit will enable deleting
imminently, once testing shows it will work.
Follow-up to 59d57245
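A sketch of the shape such a cron job could take as a Puppet resource; the name, path, and retention are illustrative, only the 4-hour cadence is from the log:

```puppet
# Illustrative only: clean old files with find from cron instead of
# Puppet's tidy(), which built huge resource state files.
cron { 'clean_old_build_files':
    command => 'find /var/lib/build/old -ignore_readdir_race -type f -mtime +3 -delete',
    user    => 'root',
    minute  => 0,
    hour    => '*/4',
}
```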
|
|
|
This hasn't been used in over a decade (removed in commit 18854eb0).
|
|
|
|
No code changes are needed. Also, drop support for releases older than
mga7, since the two nodes that need this are running mga7.
|
|
|
|
|
There are no template substitutions needed in these files, so allowing
them opens the danger of substitutions happening unknowingly with future
changes to these files.
|
|
|
|
|
It now also passes pytype and flake8 checks.
Also, improve logging in the case of errors.
|
|
|
|
|
|
Puppet complained with "Files must be fully qualified" using the file()
function.
Follow-up to 0ea383bf2
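On this Puppet version the file() function only accepts absolute paths, so the fix is along these lines (module name and paths are hypothetical):

```puppet
# Fails on older Puppet with "Files must be fully qualified":
#   content => file('mymodule/example.conf')
# Works, because the path is absolute:
file { '/etc/example.conf':
    content => file('/etc/puppet/modules/mymodule/files/example.conf'),
}
```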
|
|
|
|
|
|
The Puppet method for doing this seems to be what's causing its memory
usage to reach the size of the physical RAM in the system and take up to
4 days to complete a Puppet run. The test will show the files being
deleted and how long it takes, but won't actually delete them.
|
|
|
The service doesn't support TLS.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This depends on a newer version of pybugz that's Python 3 compatible
(tested with 0.14) and git_multimail.py, which has already been updated.
Replace token support with API key support, as per the latest pybugz
(and Bugzilla). If an API key is found it will be used; if not, it will
fall back to username/password. No attempt is made to create an API key
in the same way that a token was minted before. Use a
different file name for an API key for coexistence with a token, which
is still used by other programs.
Add a debug flag for enabling more logging to better see when things go
wrong.
Create variables for configuration items.
Log a message when an i18n e-mail is sent.
Do a few little code cleanups.
|
|
|
|
|
Don't try to perform template variable substitution because there are
strings in the file that look like substitutions but aren't.
|
|
|
|
|
This file had what looked like an ERB delimiter eaten by Puppet so git
didn't see the right percent placeholder to get the address.