path: root/zarb-ml/mageia-sysadm/2012-April/004368.html
Diffstat (limited to 'zarb-ml/mageia-sysadm/2012-April/004368.html')
-rw-r--r--  zarb-ml/mageia-sysadm/2012-April/004368.html  147
1 files changed, 147 insertions, 0 deletions
diff --git a/zarb-ml/mageia-sysadm/2012-April/004368.html b/zarb-ml/mageia-sysadm/2012-April/004368.html
new file mode 100644
index 000000000..accfe5f63
--- /dev/null
+++ b/zarb-ml/mageia-sysadm/2012-April/004368.html
@@ -0,0 +1,147 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
+<HTML>
+ <HEAD>
+ <TITLE> [Mageia-sysadm] questions about our infrastructure setup &amp; costs
+ </TITLE>
+ <LINK REL="Index" HREF="index.html" >
+ <LINK REL="made" HREF="mailto:mageia-sysadm%40mageia.org?Subject=Re%3A%20%5BMageia-sysadm%5D%20questions%20about%20our%20infrastructure%20setup%20%26%09costs&In-Reply-To=%3C20120402194156.GC21938%40mars-attacks.org%3E">
+ <META NAME="robots" CONTENT="index,nofollow">
+ <META http-equiv="Content-Type" content="text/html; charset=us-ascii">
+ <LINK REL="Previous" HREF="004362.html">
+ <LINK REL="Next" HREF="004363.html">
+ </HEAD>
+ <BODY BGCOLOR="#ffffff">
+ <H1>[Mageia-sysadm] questions about our infrastructure setup &amp; costs</H1>
+ <B>nicolas vigier</B>
+ <A HREF="mailto:mageia-sysadm%40mageia.org?Subject=Re%3A%20%5BMageia-sysadm%5D%20questions%20about%20our%20infrastructure%20setup%20%26%09costs&In-Reply-To=%3C20120402194156.GC21938%40mars-attacks.org%3E"
+ TITLE="[Mageia-sysadm] questions about our infrastructure setup &amp; costs">boklm at mars-attacks.org
+ </A><BR>
+ <I>Mon Apr 2 21:41:56 CEST 2012</I>
+ <P><UL>
+ <LI>Previous message: <A HREF="004362.html">[Mageia-sysadm] questions about our infrastructure setup &amp; costs
+</A></li>
+ <LI>Next message: <A HREF="004363.html">[Mageia-sysadm] perl modules shipped by mageia - the web site!
+</A></li>
+ <LI> <B>Messages sorted by:</B>
+ <a href="date.html#4368">[ date ]</a>
+ <a href="thread.html#4368">[ thread ]</a>
+ <a href="subject.html#4368">[ subject ]</a>
+ <a href="author.html#4368">[ author ]</a>
+ </LI>
+ </UL>
+ <HR>
+<!--beginarticle-->
+<PRE>On Mon, 02 Apr 2012, Romain d'Alverny wrote:
+
+&gt;<i> On Mon, Apr 2, 2012 at 17:49, nicolas vigier &lt;<A HREF="https://www.mageia.org/mailman/listinfo/mageia-sysadm">boklm at mars-attacks.org</A>&gt; wrote:
+</I>&gt;<i> &gt; Using paid hosting will not remove problems like bad RJ45 or switch
+</I>&gt;<i> &gt; that stop working. If we want good availability, we need more servers
+</I>&gt;<i> &gt; in different places.
+</I>&gt;<i>
+</I>&gt;<i> In paid hosting, (physical) server and link failure is to be directly
+</I>&gt;<i> handled by people that have a financial incentive to have it work. I
+</I>&gt;<i> expect (but may be wrong) that the availability will be higher than
+</I>&gt;<i> what we have today, and that it is still affordable for _some_
+</I>&gt;<i> services. It's not about going full speed to paid services or to spend
+</I>&gt;<i> unnecessarily money, it's about using what we can (it includes money)
+</I>&gt;<i> to improve our systems availability.
+</I>
+It doesn't matter whether it's paid hosting or not: if a switch stops
+working on a Friday evening, and there's nobody available to go to the
+datacenter to replace it, then everything will be offline until one of
+us has time to go there. We can pay for very expensive datacenter
+hosting, but they won't replace our switch if it stops working.
+
+We can also pay for expensive datacenter hosting and still have power
+outages, network problems because of a flood or some other reason,
+air-conditioning problems, etc.
+
+And we can also pay for expensive hosting at EC2 and have two days of downtime:
+<A HREF="http://www.pcworld.com/businesscenter/article/226327/what_your_business_can_learn_from_the_amazon_cloud_outage.html">http://www.pcworld.com/businesscenter/article/226327/what_your_business_can_learn_from_the_amazon_cloud_outage.html</A>
+
+In one year we had only one major unexpected downtime on our servers,
+because of a bad network cable on a Friday evening, and fortunately this
+kind of problem does not happen very often. Before that we had more
+downtime on the servers hosted at Gandi, because of problems on their
+storage servers that affected all their customers.
+
+&gt;<i>
+</I>&gt;<i> The point is that: I don't know and I don't have the data to get an
+</I>&gt;<i> idea about that; and I'm not even sure the data needed is compiled
+</I>&gt;<i> somewhere at this time. And I suspect I'm not alone in this case. If I
+</I>&gt;<i> don't ask, someone else will later. Or even worse than that, won't
+</I>&gt;<i> dare to ask.
+</I>&gt;<i>
+</I>&gt;<i> That's why I'm asking for this for those two purposes: explaining more
+</I>&gt;<i> how it works, understanding how it could work.
+</I>&gt;<i> - functional split list =&gt; your skills/job
+</I>&gt;<i> - needs per functional unit =&gt; same
+</I>&gt;<i> - dependencies between units =&gt; same [1]
+</I>&gt;<i> - cost per unit in different contexts =&gt; can be spread around
+</I>
+We don't have a lot of servers, so there is no need for a complex
+dependency graph to see that all of our servers are critical, and that
+downtime of any one of them will cause problems somewhere. If we want
+to reduce the risk of having many services down at the same time, then
+we need more servers, hosted in different places.
+
+&gt;<i>
+</I>&gt;<i> And yes, it may be too expensive. Or it may not. But I suspect we
+</I>&gt;<i> don't know, or it's not obvious enough. On the other hand, having one,
+</I>&gt;<i> or several server downtime like this for 2/3 days also costs a lot to
+</I>&gt;<i> the project (loss of time, and reputation shift).
+</I>
+If we can't afford two days of downtime, then we should probably stop
+everything now and do something else.
+
+Projects with more money and more machines than us also have unexpected
+server downtime.
+
+Fedora had almost one day of downtime on their build system in December:
+<A HREF="http://lists.fedoraproject.org/pipermail/devel-announce/2011-December/000867.html">http://lists.fedoraproject.org/pipermail/devel-announce/2011-December/000867.html</A>
+And if we read their mailing list archives we can see two hours of downtime
+on many services in January 2012, one hour for the build system in
+February 2012, another two hours in January 2012, etc.
+
+In April 2010 Debian had their buildd.debian.org server go down on a Friday
+and restored on Monday, wiki.debian.org down for one day, and
+forums.debian.org down for a few days:
+<A HREF="http://lists.debian.org/debian-infrastructure-announce/2010/04/msg00001.html">http://lists.debian.org/debian-infrastructure-announce/2010/04/msg00001.html</A>
+The wiki was down for an unknown time in January 2010:
+<A HREF="http://lists.debian.org/debian-infrastructure-announce/2010/01/msg00001.html">http://lists.debian.org/debian-infrastructure-announce/2010/01/msg00001.html</A>
+And ftp-master in January 2011:
+<A HREF="http://lists.debian.org/debian-infrastructure-announce/2011/01/msg00000.html">http://lists.debian.org/debian-infrastructure-announce/2011/01/msg00000.html</A>
+
+And I think it's the same for most projects.
+
+</PRE>
+
+
+
+
+
+
+
+
+
+
+<!--endarticle-->
+ <HR>
+ <P><UL>
+ <!--threads-->
+ <LI>Previous message: <A HREF="004362.html">[Mageia-sysadm] questions about our infrastructure setup &amp; costs
+</A></li>
+ <LI>Next message: <A HREF="004363.html">[Mageia-sysadm] perl modules shipped by mageia - the web site!
+</A></li>
+ <LI> <B>Messages sorted by:</B>
+ <a href="date.html#4368">[ date ]</a>
+ <a href="thread.html#4368">[ thread ]</a>
+ <a href="subject.html#4368">[ subject ]</a>
+ <a href="author.html#4368">[ author ]</a>
+ </LI>
+ </UL>
+
+<hr>
+<a href="https://www.mageia.org/mailman/listinfo/mageia-sysadm">More information about the Mageia-sysadm
+mailing list</a><br>
+</body></html>