From 1be510f9529cb082f802408b472a77d074b394c0 Mon Sep 17 00:00:00 2001 From: Nicolas Vigier Date: Sun, 14 Apr 2013 13:46:12 +0000 Subject: Add zarb MLs html archives --- zarb-ml/mageia-dev/20100926/000268.html | 129 ++++++++++++++++++++++++++++++++ 1 file changed, 129 insertions(+) create mode 100644 zarb-ml/mageia-dev/20100926/000268.html (limited to 'zarb-ml/mageia-dev/20100926/000268.html') diff --git a/zarb-ml/mageia-dev/20100926/000268.html b/zarb-ml/mageia-dev/20100926/000268.html new file mode 100644 index 000000000..1091cd489 --- /dev/null +++ b/zarb-ml/mageia-dev/20100926/000268.html @@ -0,0 +1,129 @@ + + + + [Mageia-dev] Will this work for a build system? + + + + + + + + + +

[Mageia-dev] Will this work for a build system?

+ Michael Scherer + misc at zarb.org +
+ Sun Sep 26 18:04:16 CEST 2010 +

+
+ +
On Sunday, 26 September 2010 at 17:04 +0200, joris dedieu wrote:
+> 2010/9/26 Olivier Blin <mageia at blino.org>:
+> > R James <upsnag2 at gmail.com> writes:
+> >
+> >>> BTW, I once calculated (test plus extrapolation) how long it would take
+> >>> to rebuild every package in Mandriva on a low end 2 GHz Celeron server
+> >>> that I had available and it came to about 80 days.
+> >
+> > With a reasonably good machine, we used to be able to rebuild most of
+> > "main" in about one day.
+> >
+> >> Perhaps I was naive in thinking that compiling the distro could be
+> >> done with distcc or even a simple queuing system that distributes
+> >> SRPMs to nodes in the community swarm.  As each node returns its
+> >> completed binary package, the queuing system could send it another
+> >> SRPM to build.
+> >>
+> >> It would be cool if it could be done that way.  Why pay for data
+> >> center space, hardware, electricity and big bandwidth when you could
+> >> have a community-provided "cloud" for free? :o)
+> >
+> > Because there are some authentication and integrity issues which are not
+> > simple to solve: we have to be sure that the binary packages really come
+> > from the unmodified SRPM (so that it does not contain malware).
+> 
+> This can be avoided by
+> - building every package twice (also useful for integrity check)
+
+What if a package has changed between the first build and the second in
+such a way that it impacts the compilation?
+
+This would either require resubmitting the packages (which would
+quickly get annoying), or it would require a 3rd compilation, maybe a
+4th one.
+
+What if the binary includes the hostname, build date and so on?
+
+Then the builds will be seen as different no matter what you do (ie,
+compare md5 or sha1 digests), because they will have different contents.
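The mismatch Michael describes is easy to demonstrate. A minimal sketch (file names are hypothetical) that hashes two build artifacts and compares the digests, the way a double-build verification scheme would:

```python
import hashlib

def sha1_of(path):
    """Return the hex SHA-1 digest of a file's contents."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        # Read in chunks so large packages do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Two rebuilds of the same SRPM (hypothetical paths): if the binary
# embeds the hostname or build date, the digests differ even though
# the source was identical.
# sha1_of("build-a/foo.rpm") == sha1_of("build-b/foo.rpm")
```

Any embedded timestamp flips the digest, so byte-for-byte comparison only works once the build itself is deterministic.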
+
+
+> - randomize build order
+> - timedout jobs
+> 
+> It's not a trivial problem, but imho the advantages of distributed
+> tools (price, scalability, availability ...) should be seriously
+> considered, as a single build system in a single datacenter would be
+> a single point of failure.
+
+There will be a single point of failure no matter what you do:
+
+There is a reference vcs, and a single job dispatcher. We could maybe
+duplicate them or work around issues, but this would add complexity
+that may not really compensate for a potential datacenter problem.
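The "single job dispatcher" pattern under discussion can be sketched as a trivial queue that hands SRPMs to worker nodes and collects the packages they return. This is an illustrative toy (the `build` callback stands in for a real compile step), not Mageia's actual infrastructure:

```python
import queue
import threading

def dispatch(srpms, num_workers, build):
    """Toy dispatcher: each free worker pulls the next SRPM from a
    shared queue, builds it, and the results are collected centrally.
    `build` is a stand-in for the real compile step (hypothetical)."""
    jobs = queue.Queue()
    for srpm in srpms:
        jobs.put(srpm)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                srpm = jobs.get_nowait()
            except queue.Empty:
                return  # no jobs left, this node is done
            binary = build(srpm)
            with lock:
                results.append(binary)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The queue and the result list live in one process here, which is exactly the single point of failure the thread is debating: lose the dispatcher and every node goes idle.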
+
+Fedora was compromised once and had to shut down its infrastructure,
+and had to move servers earlier this year; they coped with the
+downtime.
+
+Debian has had problems too (like security.debian.org, which burned in
+2002, or the famous openssl problem in 2008), without lasting trouble.
+
+Launchpad is often down for database upgrades, and still, Ubuntu is
+there.
+
+-- 
+Michael Scherer
+
+
+ + + + + + + + +
+

+ +
+More information about the Mageia-dev +mailing list
+ -- cgit v1.2.1