
[no subject]



Storing all your log files on a central server is an excellent idea.  I 
would store them as text files if you can.  What I would suggest for 
better reporting is to decide what needs reporting and write a perl script 
(most efficient afaik for text processing) to parse your files and store only 
*aggregate* data in a database.  If you're storing aggregate data, you 
can even afford to store redundant data and still get fast results on 
your queries.  Use the perl script in combination with something like 
logtail (a small program that comes with logwatch) to keep track 
of the last time it looked at a file and what line it was on, so you can 
pick up where you left off and not have to parse the whole file again.

MySQL can certainly handle 100,000+ inserts per day.  If you do use a 
database, make absolutely sure you know what you want from your reporting, 
design your data structure for it, write good queries, and make 
full use of indexing -- or you will be dealing with a horribly slow mess.  
Also, I would suggest using a round-robin approach so that you query 
against only a very small amount of data in the tables holding the full 
log lines -- i.e., keep, say, a day's worth of data in your full-log tables 
before archiving it out to text somewhere.  You store aggregate data in other 
tables, where most of the querying is done, and then archive your logs.  
Depending on the database you use, you can do this all internally as a row 
is inserted -- use triggers to set up functions that parse the row on 
insert and store the aggregate data right away.  This will slow down 
your inserts, but probably not enough that you'll feel it at 100,000 
inserts/day if you've got enough machine.  Older MySQL can't do this, and 
I don't know if the newer versions can, but I wouldn't trust them.  I believe 
PostgreSQL can do triggers and functions, and I've heard it has improved its 
speed.
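For the trigger route, a PostgreSQL sketch might look like the following.  The 
table and column names (raw_log, agg_log, host, program, logged_at) are 
hypothetical -- the point is just that an AFTER INSERT trigger can bump a 
per-day counter in the aggregate table as each raw row arrives.

```sql
-- Assumed tables: raw_log(host, program, msg, logged_at)
--                 agg_log(host, program, day, n)
CREATE FUNCTION roll_up() RETURNS trigger AS $$
BEGIN
    -- Bump the counter for this host/program/day...
    UPDATE agg_log SET n = n + 1
     WHERE host = NEW.host AND program = NEW.program
       AND day = date_trunc('day', NEW.logged_at);
    -- ...or create the summary row if it doesn't exist yet.
    IF NOT FOUND THEN
        INSERT INTO agg_log (host, program, day, n)
        VALUES (NEW.host, NEW.program,
                date_trunc('day', NEW.logged_at), 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER roll_up_on_insert
    AFTER INSERT ON raw_log
    FOR EACH ROW EXECUTE PROCEDURE roll_up();
```

Reports then hit the small agg_log table, and raw_log can be truncated and 
archived to text on whatever schedule you like.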

Regardless of what you do, the more processing you can do before you store 
your data in a database the better off you are for querying.

HTH
jenn



On Sat, 14 Feb 2004, Jonathan Glass wrote:

> I'm thinking of having all my server logs (including NT/2K/2K3) record
> into a MySQL db for better reporting.  Does anyone know if MySQL can
> handle 100,000+ inserts per day?  Also, if it is handling this many inserts,
> how good is the SELECT performance for the automated reporting?  Any
> thoughts, pointers?
> 
> Thanks!
> 



</pre>
<!--X-Body-of-Message-End-->
<!--X-MsgBody-End-->
<!--X-Follow-Ups-->
<hr>
<ul><li><strong>Follow-Ups</strong>:
<ul>
<li><strong><a name="00530" href="msg00530.html">[ale] Any MySQL Experts around?</a></strong>
<ul><li><em>From:</em> jimpop at yahoo.com (Jim Popovitch)</li></ul></li>
<li><strong><a name="00554" href="msg00554.html">[ale] Any MySQL Experts around?</a></strong>
<ul><li><em>From:</em> ron at Opus1.COM (Ronald Chmara)</li></ul></li>
</ul></li></ul>
<!--X-Follow-Ups-End-->
<!--X-References-->
<ul><li><strong>References</strong>:
<ul>
<li><strong><a name="00500" href="msg00500.html">[ale] Any MySQL Experts around?</a></strong>
<ul><li><em>From:</em> jonathan.glass at ibb.gatech.edu (Jonathan Glass)</li></ul></li>
</ul></li></ul>
<!--X-References-End-->
<!--X-BotPNI-->
<ul>
<li>Prev by Date:
<strong><a href="msg00516.html">[ale] OT: One-time use digital cameras</a></strong>
</li>
<li>Next by Date:
<strong><a href="msg00518.html">[ale] OT: MS disabling features and changing old standards.</a></strong>
</li>
<li>Previous by thread:
<strong><a href="msg00509.html">[ale] Any MySQL Experts around?</a></strong>
</li>
<li>Next by thread:
<strong><a href="msg00530.html">[ale] Any MySQL Experts around?</a></strong>
</li>
<li>Index(es):
<ul>
<li><a href="maillist.html#00517"><strong>Date</strong></a></li>
<li><a href="threads.html#00517"><strong>Thread</strong></a></li>
</ul>
</li>
</ul>

<!--X-BotPNI-End-->
<!--X-User-Footer-->
<!--X-User-Footer-End-->
</body>
</html>