Diffstat (limited to 'zarb-ml/mageia-dev/attachments/20100927/d8e0e4e8/attachment-0001.html')
-rw-r--r-- | zarb-ml/mageia-dev/attachments/20100927/d8e0e4e8/attachment-0001.html | 14 |
1 files changed, 14 insertions, 0 deletions
diff --git a/zarb-ml/mageia-dev/attachments/20100927/d8e0e4e8/attachment-0001.html b/zarb-ml/mageia-dev/attachments/20100927/d8e0e4e8/attachment-0001.html new file mode 100644 index 000000000..a20232427 --- /dev/null +++ b/zarb-ml/mageia-dev/attachments/20100927/d8e0e4e8/attachment-0001.html @@ -0,0 +1,14 @@ +<br>2010/9/27 herman <span dir="ltr"><<a href="mailto:herman@aeronetworks.ca">herman@aeronetworks.ca</a>></span><br><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> +On Sun, 2010-09-26 at 18:32 -0700, Frank Griffin wrote:<br> +<div class="im">> Giuseppe Ghibò wrote:<br> +> > IMHO one of the building problems was not massive automatic rebuilding<br> +> > but avoiding bottlenecks for the users when building goes wrong.<br> +> I really like the concept of a distributed build system.<br> +<br> +</div>The problem with a distributed system is the enormous increase in<br> +complexity. As long as a single big server with about 24 cores can<br> +compile the lot in one day, then a distributed system is not really<br> +needed.<br> +<div><div></div><div class="h5"><br></div></div></blockquote><div><br>I agree there would be an ENORMOUS increase in complexity, but you are forgetting that the "24 cores" someone cited were not 24 cores, but a dual hexacore machine (I guess a dual Xeon 5650), i.e. 12 cores with hyperthreading, which is not exactly the same as 24 native cores. Such cores with 12GB of memory are just "peanuts" in an environment with plenty of developers. Half of that machine (e.g. with a Core i7 985X, or an AMD 1055T) is actually a medium/top PC which you could build at home.<br> +<br>So you have to distinguish the automatic rebuilding of the distro, which operates on "working" packages from the svn, from new packages being built for the first time. 
The first task, a massive rebuild, is an automated job which can be done sequentially (but 1 day is only for main; then there is contrib, then you have 2 archs, 32 and 64 bits, and backports); the second task instead involves a lot of stop and go. If, for instance, a build goes wrong because a packager couldn't test a parallel build on his own development machine (or because the number of cores is different and you get race-condition problems), and he has to redo the work, but in the meantime the build system is busy or has other kinds of problems (waiting for a library to propagate, etc.), then he has to spend a lot of time fighting the system and babysitting a package rather than concentrating on packaging or developing. And for sure he would fly away (packagers are not made of iron with infinite patience).<br> +<br>Also consider that the LZMA compression phase of the RPM build won't operate in parallel.<br><br>Bye<br>Giuseppe.<br><br></div></div>