Wednesday, December 16, 2009

Passwordless SSH login

ssh-keygen -t rsa
# Save the key in the default path (/root/.ssh/id_rsa)
# Leave the passphrase empty: just press Enter at the prompt,
# then press Enter again to confirm

Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
02:2c:81:1b:6d:0b:32:c0:23:3c:18:6d:fb:63:b5:d2 root@testserver
The key's randomart image is:
+--[ RSA 2048]----+
|B=. |
|O==o |
|oB+oo |
|. o. .. |
| . o..S |
| = E. |
| . o |
| |
| |
+-----------------+

Copy id_rsa.pub to remote server's /root/.ssh/authorized_keys
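The copy step can be done with ssh-copy-id (where available) or manually; a sketch, assuming the remote host is reachable as root@remote ("remote" is a placeholder hostname):

```shell
# Option 1: ssh-copy-id appends the key and fixes permissions in one step
ssh-copy-id -i /root/.ssh/id_rsa.pub root@remote

# Option 2: manual append, with the permissions sshd insists on
cat /root/.ssh/id_rsa.pub | ssh root@remote \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```

Test with `ssh root@remote`: it should log in without asking for a password.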

Done!

Wednesday, December 9, 2009

Network Teaming

Teaming combines multiple network interfaces into a single logical interface, so that both NICs work as one device.

Linux binds multiple network interfaces into a single channel/NIC using a special kernel module called bonding.

Create a bond0 configuration file

vi /etc/sysconfig/network-scripts/ifcfg-bond0

Append the following lines to it:

DEVICE=bond0
IPADDR=192.168.100.20
NETWORK=192.168.100.0
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes


Modify eth0 and eth1 config files:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none


vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Add bond driver/module


vi /etc/modprobe.conf

Append the following two lines:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100


# modprobe bonding

Restart Network
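The load-and-restart step, plus a quick verification (a sketch; paths are the standard RHEL/CentOS ones used above):

```shell
modprobe bonding           # load the bonding driver
service network restart    # re-read the ifcfg-* files and bring up bond0

# The bonding driver reports its mode, MII status and both slaves here:
cat /proc/net/bonding/bond0
grep "MII Status" /proc/net/bonding/bond0   # expect "up" for bond0, eth0 and eth1
```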

Done!

Friday, November 13, 2009

Increase Swap Space Using Swap File

1. Using dd, create a zeroed file for the swap:

dd if=/dev/zero of=/swapfile bs=1048576 count=1000

2. Format the file as a swapfile:

mkswap /swapfile

3. Activate the swapfile:

swapon /swapfile

4. Verify that our swapfile has been activated

swapon -s
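To make the swap file survive a reboot, add it to /etc/fstab; a sketch (the ~1000 MiB size comes from bs=1048576 × count=1000 in step 1):

```shell
chmod 600 /swapfile                                    # swap files should not be world-readable
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab  # activate automatically at boot
swapoff /swapfile && swapon -a                         # re-activate via fstab to test the entry
swapon -s                                              # /swapfile should be listed again
```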

Recover MySQL Root Password

1. Stop MySQL Server Process
(/etc/init.d/mysql stop)

2. Start the MySQL (mysqld) server/daemon process with the --skip-grant-tables option so that it will not prompt for a password
(mysqld_safe --skip-grant-tables &)

3. Connect to mysql server as the root user
(mysql -u root)

4. Setup new root password
(mysql> use mysql;

mysql> update user set password=PASSWORD('NEW-ROOT-PASSWORD') where User='root';

mysql> flush privileges;

mysql> quit)

5. Exit and restart MySQL server
(/etc/init.d/mysql stop
/etc/init.d/mysql start

mysql -u root -p )

Linux Runlevels

Mode Directory Description
0 /etc/rc.d/rc0.d Halt
1 /etc/rc.d/rc1.d Single-user mode
2 /etc/rc.d/rc2.d Not used (user-definable)
3 /etc/rc.d/rc3.d Full multi-user mode (no GUI interface)
4 /etc/rc.d/rc4.d Not used (user-definable)
5 /etc/rc.d/rc5.d Full multi-user mode (with GUI interface)
6 /etc/rc.d/rc6.d Reboot
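Checking and switching runlevels (when you switch, the K* and S* scripts in the matching rc*.d directory are run):

```shell
runlevel                        # prints previous and current runlevel, e.g. "N 3"
grep initdefault /etc/inittab   # the default runlevel used at boot
init 5                          # switch to runlevel 5 (full multi-user with GUI)
```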

Thursday, November 12, 2009

NFS Server

Package - nfs-utils
Service - portmap and nfs
Port No. - 111

Configuration Files
/etc/exports
/etc/hosts.allow
/etc/hosts.deny


Shared NFS directories are listed in /etc/exports; this file controls the shared directories.

Example
#/etc/exports
/data/files *(ro,sync)
/home 192.168.1.0/24(rw,sync)
/data/test *.my-site.com(rw,sync)

/data/database 192.168.1.203/32(rw,sync)


showmount queries the mount daemon on a remote host for information
about the state of the NFS server on that machine.
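After editing /etc/exports, re-export and check from both sides; "nfsserver" below is a placeholder hostname:

```shell
exportfs -ra             # re-read /etc/exports without restarting nfs
exportfs -v              # list what this server currently exports, with options
showmount -e nfsserver   # from a client: list the exports offered by nfsserver
```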

Note:
  1. Only export directories beneath the / directory.
  2. Do not export a subdirectory of a directory that has already been exported. The exception being when the subdirectory is on a different physical device. Likewise, do not export the parent of a subdirectory unless it is on a separate device.
  3. Only export local filesystems.

Keep in mind that when you mount any filesystem on a directory, the original contents of the directory are ignored, or obscured, in favor of the files in the mounted filesystem. When the filesystem is unmounted, then the original files in the directory reappear unchanged.
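A client-side mount illustrating this, using the /data/files export from the example above ("nfsserver" is a placeholder):

```shell
mkdir -p /mnt/files
mount -t nfs nfsserver:/data/files /mnt/files   # anything already in /mnt/files is now hidden
ls /mnt/files                                   # shows the NFS server's files
umount /mnt/files                               # the original local contents reappear
```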








Thursday, October 22, 2009

Tape Backup With mt And tar Command

Rewind tape drive:
# mt -f /dev/st0 rewind

Backup directory /www and /home with tar command (z - compressed):
# tar -czf /dev/st0 /www /home

Find out what block you are at with mt command:
# mt -f /dev/st0 tell

Display list of files on tape drive:
# tar -tzf /dev/st0

Restore /www directory:
# cd /
# mt -f /dev/st0 rewind
# tar -xzf /dev/st0 www

Unload the tape:
# mt -f /dev/st0 offline

Display status information about the tape unit:
# mt -f /dev/st0 status

Erase the tape:
# mt -f /dev/st0 erase

Move BACKWARD or FORWARD on the tape with the mt command itself:
(a) Go to end of data:
# mt -f /dev/nst0 eod

(b) Go to the previous record:
# mt -f /dev/nst0 bsfm 1

(c) Forward one record:
# mt -f /dev/nst0 fsf 1

To restore tape in case of data loss or hard disk failure:
# tar -xlpMzvf /dev/st0 /home

EXIM Mail Server Commands

To send mail from a server,

mail -v emailid
Type the message, then "." on a line by itself to finish

Check the status from
tail -f /var/log/exim_mainlog

####################
# Queues information
####################
Queues information

Print a count of the messages in the queue:
root@localhost# exim -bpc

Print a listing of the messages in the queue (time queued, size,
message-id, sender, recipient):
root@localhost# exim -bp

Print a summary of messages in the queue (count, volume, oldest, newest,
domain, and totals):
root@localhost# exim -bp | exiqsumm

Generate and display Exim stats from a logfile:
root@localhost# eximstats /path/to/exim_mainlog

Generate and display Exim stats from a logfile, with less verbose
output:
root@localhost# eximstats -ne -nr -nt /path/to/exim_mainlog

Generate and display Exim stats from a logfile, for one particular day:
root@localhost# fgrep 2007-02-16 /path/to/exim_mainlog | eximstats

Print what Exim is doing right now:
root@localhost# exiwhat

To delete frozen emails:
exim -bp | awk '$6~"frozen" { print $3 }' | xargs exim -Mrm

To deliver emails forcefully:
exim -qff -v -C /etc/exim.conf &

To delete nobody mails:
exim -bp | grep nobody | awk '{print $3}' | xargs exim -Mrm

To delete all mails:
exim -bp | awk '{print $3}' | xargs exim -Mrm

#####################
# Searching the queue
#####################
Searching the queue

Exim includes a utility that is quite nice for grepping through the
queue, called exiqgrep. Learn it. Know it. Live it. If you're not using
this, and if you're not familiar with the various flags it uses, you're
probably doing things the hard way, like piping `exim -bp` into awk,
grep, cut, or `wc -l`.
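For comparison, here is what the hard way looks like: parsing `exim -bp` output with awk. The sample listing below is made up, but follows the real format (age, size, message-id, sender on the summary line; recipients indented underneath):

```shell
# Hypothetical `exim -bp` output captured in a variable
queue_listing='25m  2.9K 1a2B3c-4d5E6f-7g <alice@example.com>
          bob@example.org

80m  1.1K 9z8Y7x-6w5V4u-3t <> *** frozen ***
          carol@example.net'

# Summary lines carry the sender in angle brackets as the 4th field;
# print the 3rd field (the message-id) of those lines only.
echo "$queue_listing" | awk 'NF >= 4 && $4 ~ /^</ { print $3 }'
```

With exiqgrep, the equivalent is simply `exiqgrep -i`.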

Search the queue for messages from a specific sender:
root@localhost# exiqgrep -f [luser]@domain

Search the queue for messages for a specific recipient/domain:
root@localhost# exiqgrep -r [luser]@domain

Print just the message-id as a result of one of the above two searches:
root@localhost# exiqgrep -i [ -r | -f ] ...

Print a count of messages matching one of the above searches:
root@localhost# exiqgrep -c [ -r | -f ] ...

Print just the message-id of the entire queue:
root@localhost# exiqgrep -i

Managing the queue

Start a queue run:
root@localhost# exim -q -v

Start a queue run for just local deliveries:
root@localhost# exim -ql -v

Remove a message from the queue:
root@localhost# exim -Mrm [ ... ]

Freeze a message:
root@localhost# exim -Mf [ ... ]

Thaw a message:
root@localhost# exim -Mt [ ... ]

Deliver a specific message:
root@localhost# exim -M [ ... ]

Force a message to fail and bounce:
root@localhost# exim -Mg [ ... ]

Remove all frozen messages:
root@localhost# exiqgrep -z -i | xargs exim -Mrm

Remove all messages older than five days (86400 * 5 = 432000 seconds):
root@localhost# exiqgrep -o 432000 -i | xargs exim -Mrm

Freeze all queued mail from a given sender:
root@localhost# exiqgrep -i -f luser@example.tld | xargs exim -Mf

View a message's headers:
root@localhost# exim -Mvh [ ... ]

View a message's body:
root@localhost# exim -Mvb [ ... ]

View a message's logs:
root@localhost# exim -Mvl [ ... ]
--------------------------------------------------------------------------------
#############################
# Message-IDs and spool files
#############################
Message-IDs and spool files

The message-IDs that Exim uses to refer to messages in its queue are
mixed-case alpha-numeric, and take the form of: XXXXXX-YYYYYY-ZZ. Most
commands related to managing the queue and logging use these
message-ids.
There are three -- count 'em, THREE -- files for each message in the
spool directory. If you're dealing with these files by hand, instead of
using the appropriate exim commands as detailed above, make sure you get
them all, and don't leave Exim with remnants of messages in the queue.

Files in /var/spool/exim/msglog contain logging information for each
message and are named the same as the message-id.

Files in /var/spool/exim/input are named after the message-id, plus a
suffix denoting whether it is the envelope header (-H) or message data
(-D).

These directories may contain further hashed subdirectories to deal with
larger mail queues, so don't expect everything to always appear directly
on the top /var/spool/exim/input or /var/spool/exim/msglog directories;
any searches or greps will need to be recursive. See if there is a
proper way to do what you're doing before working directly on the spool
files.
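A recursive search is the safe way to locate all three files for a message; the message-id below is hypothetical:

```shell
ID=1a2B3c-4d5E6f-7g    # hypothetical message-id of the form XXXXXX-YYYYYY-ZZ
find /var/spool/exim/msglog /var/spool/exim/input -name "${ID}*"
# Expect three hits: msglog/<id>, input/<id>-H (envelope) and input/<id>-D (data)
```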
---------------------------------------------------------------------------------------------------------------------------------------------
###################
# Setting Exim Mail
###################
Setting Exim Mail
One common problem people have is an incorrectly setup mail system. Here
is a list of rules that must be followed:

1) hostname must not match any domain that is being used on the system.
Example, if you have a domain called domain.com and you want to recieve
mail on user@domain.com, you must *not* set your hostname to domain.com.
We recommend using server.domain.com instead. You must make sure that
you add the A record for server.domain.com so that it resolves.

2) The hostname must be in the /etc/virtual/domains file.

3) The hostname must *not* be in the /etc/virtual/domainowners file.

4) The hostname must resolve. If not, add the required A records to the
dns zone such that it does.

5) The directory /etc/virtual/hostname must exist
(eg: /etc/virtual/server.domain.com). It must not contain any files.

6) Any domains that you want to use for email (eg: domain.com) must be
in both the /etc/virtual/domains file and the /etc/virtual/domainowners
file. The directory /etc/virtual/domain.com must exist and the
files /etc/virtual/domain.com/passwd and /etc/virtual/domain.com/aliases
must also exist.

7) File permissions for virtual pop inboxes should be:
/var/spool/virtual/domain.com 770 username:mail
/var/spool/virtual/domain.com/* 660 username:mail

If you've made any changes to your /etc/exim.conf file and require a
fresh copy, you can retrieve one by running:
wget -O /etc/exim.conf http://files.directadmin.com/services/exim4.conf
A restart of exim is required after installing a new exim.conf file.

8) Ensure your hostname does not contain any upper case letters.

9) Make sure that your main server IP has a reverse lookup on it.
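Rules 2-5 and 8 can be checked mechanically; a rough sketch, assuming the DirectAdmin file layout described above and a colon-separated domainowners file:

```shell
H=$(hostname)
grep -qx "$H" /etc/virtual/domains        || echo "rule 2: $H missing from domains"
grep -q  "^$H:" /etc/virtual/domainowners && echo "rule 3: $H must NOT be in domainowners"
host "$H" > /dev/null                     || echo "rule 4: $H does not resolve"
[ -d "/etc/virtual/$H" ]                  || echo "rule 5: /etc/virtual/$H missing"
echo "$H" | grep -q '[A-Z]'               && echo "rule 8: hostname contains upper case"
```

Silence means those rules pass; each echoed line points at the rule to fix.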
---------------------------------------------------------------------------------------------------------------------------------------------

###################
# Exim Error Fixing
###################
Exim Error Fixing
550-Verification failed for user@email.com

This error will occur if exim cannot verify the sending email address.
This might be because the domain doesn't return an MX record, or the
email account itself doesn't exist.

To disable the check, edit your /etc/exim.conf and change

require verify = sender

to

#require verify = sender
And then restart exim.
---------------------------------------------------------------------------------------------------------------------------------------------

550 Sender verify failed

If we face the 550 Sender verify failed error message while sending or
receiving mails for a domain, the possible reasons can be:

1) Reverse DNS entries for MX records
2) Acceptance of the postmaster address in DNS Report.

Simply check the "/etc/valiases/domain.com" file and replace "fail" or
"blackhole" with a valid email account on that domain, then restart
exim on the server.

This will fix the error in "Acceptance of postmaster address" on DNS
Report page.
---------------------------------------------------------------------------------------------------------------------------------------------

550 Relaying denied

We do not see the sendmail configurations removed. Please take a minute
to read the sendmail documentation at:
http://www.sendmail.org/~ca/email/relayingdenied.html

There are two cases in which you may get an undesired 550 Relaying
denied:

1. local sender to external recipient: Then the relay (relay=host.domain
[IP.ADD.RE.SS]) must be in the relevant classes or maps:
* 8.9-8.12: access map: RELAY, class R, or class m
* HACKs for 8.8: HACK(use_ip), HACK(use_names), or _LOCAL_IP_MAP_

What you are trying to do is what is described above as "local sender to
external recipient", and you are getting a relaying denied. The only
solution is to add your IP to the relay list, as advised by the sendmail
team themselves. Since they have clearly stated it as such, it is clear
that sendmail is configured as above. Hence you can either add an IP
range to relay, or add each IP manually every time.

Here are the telnet results to port 25:

[root@niKx ~]# telnet 72.249.32.237 25
Trying 72.249.32.237...
Connected to sqatest.com (72.249.32.237).
Escape character is '^]'.
220 server.rozinskiy.com ESMTP Sendmail 8.13.1/8.13.1; Mon, 30 Apr 2007
00:04:16 -0500
helo yahoo.com
250 server.rozinskiy.com Hello [59.93.40.156], pleased to meet you
mail from: sqatest@odessit.com
250 2.1.0 sqatest@odessit.com... Sender ok
rcpt to: mailzoom@gmail.com
550 5.7.1 mailzoom@gmail.com... Relaying denied. IP name lookup failed
[IP]

[root@niKx ~]# telnet 72.249.32.237 25
Trying 72.249.32.237...
Connected to rozinskiy.com (72.249.32.237).
Escape character is '^]'.
220 server.rozinskiy.com ESMTP Sendmail 8.13.1/8.13.1; Mon, 30 Apr 2007
00:11:55 -0500
helo odessit.com
250 server.rozinskiy.com Hello [59.93.40.156], pleased to meet you
mail from: odessit@odessit.com
250 2.1.0 odessit@odessit.com... Sender ok
rcpt to: odessit@odessit.com
250 2.1.5 odessit@odessit.com... Recipient ok

As you can see, the second trial to a mail inside the server is
accepted. This clearly means, as you have identified earlier that your
IP address should be at the relay list to have the mails sent.

The Solution: The only solution in this case is to implement a script by
means of which an IP who logs into the server for pop is also allowed to
relay through the server dynamically. Since, sendmail by default does
not provide a solution to this, you can implement something that is
described here: http://lena.franken.de/linux/daemons.html ; the part
under "How to make SMTP after POP run without modifying daemons". We are
not sure of the functionality, but it is sure worth a try.
----------------------------------------------------------------------
##############
# Horde Issues
##############
Horde Issues
Horde scripts

/scripts/fullhordereset
/scripts/resethorde
---------------------------------------------------------------------------------------------------------------------------------------------

If there is a problem, say you cannot login to horde, try this; you
just need to cut and paste the following at the shell's mysql prompt.
Before doing so, check whether a database named horde already exists.
If it exists, delete it first, then run the statements below.


create database horde;
use horde;
CREATE TABLE horde_users (
user_uid VARCHAR(255) NOT NULL,
user_pass VARCHAR(255) NOT NULL,
user_soft_expiration_date INT,
user_hard_expiration_date INT,

PRIMARY KEY (user_uid)
);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_users TO horde@localhost;

CREATE TABLE horde_prefs (
pref_uid VARCHAR(200) NOT NULL,
pref_scope VARCHAR(16) NOT NULL DEFAULT '',
pref_name VARCHAR(32) NOT NULL,
pref_value LONGTEXT NULL,

PRIMARY KEY (pref_uid, pref_scope, pref_name)
);

CREATE INDEX pref_uid_idx ON horde_prefs (pref_uid);
CREATE INDEX pref_scope_idx ON horde_prefs (pref_scope);
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_prefs TO horde@localhost;

CREATE TABLE horde_datatree (
datatree_id INT NOT NULL,
group_uid VARCHAR(255) NOT NULL,
user_uid VARCHAR(255) NOT NULL,
datatree_name VARCHAR(255) NOT NULL,
datatree_parents VARCHAR(255) NOT NULL,
datatree_order INT,
datatree_data TEXT,
datatree_serialized SMALLINT DEFAULT 0 NOT NULL,

PRIMARY KEY (datatree_id)
);

CREATE INDEX datatree_datatree_name_idx ON horde_datatree
(datatree_name);
CREATE INDEX datatree_group_idx ON horde_datatree (group_uid);
CREATE INDEX datatree_user_idx ON horde_datatree (user_uid);
CREATE INDEX datatree_serialized_idx ON horde_datatree
(datatree_serialized);

CREATE TABLE horde_datatree_attributes (
datatree_id INT NOT NULL,
attribute_name VARCHAR(255) NOT NULL,
attribute_key VARCHAR(255) DEFAULT '' NOT NULL,
attribute_value TEXT
);

CREATE INDEX datatree_attribute_idx ON horde_datatree_attributes
(datatree_id);
CREATE INDEX datatree_attribute_name_idx ON horde_datatree_attributes
(attribute_name);
CREATE INDEX datatree_attribute_key_idx ON horde_datatree_attributes
(attribute_key);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_datatree TO
horde@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_datatree_attributes TO
horde@localhost;

CREATE TABLE horde_tokens (
token_address VARCHAR(100) NOT NULL,
token_id VARCHAR(32) NOT NULL,
token_timestamp BIGINT NOT NULL,

PRIMARY KEY (token_address, token_id)
);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_tokens TO horde@localhost;

CREATE TABLE horde_vfs (
vfs_id BIGINT NOT NULL,
vfs_type SMALLINT NOT NULL,
vfs_path VARCHAR(255) NOT NULL,
vfs_name VARCHAR(255) NOT NULL,
vfs_modified BIGINT NOT NULL,
vfs_owner VARCHAR(255) NOT NULL,
vfs_data LONGBLOB,

PRIMARY KEY (vfs_id)
);

CREATE INDEX vfs_path_idx ON horde_vfs (vfs_path);
CREATE INDEX vfs_name_idx ON horde_vfs (vfs_name);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_vfs TO horde@localhost;

CREATE TABLE horde_histories (
history_id BIGINT NOT NULL,
object_uid VARCHAR(255) NOT NULL,
history_action VARCHAR(32) NOT NULL,
history_ts BIGINT NOT NULL,
history_desc TEXT,
history_who VARCHAR(255),
history_extra TEXT,

PRIMARY KEY (history_id)
);

CREATE TABLE horde_histories_seq (
id int(10) unsigned NOT NULL auto_increment,
PRIMARY KEY (id)
);

CREATE TABLE horde_datatree_seq (
id int(10) unsigned NOT NULL auto_increment,
PRIMARY KEY (id)
);


CREATE INDEX history_action_idx ON horde_histories (history_action);
CREATE INDEX history_ts_idx ON horde_histories (history_ts);
CREATE INDEX history_uid_idx ON horde_histories (object_uid);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_histories TO
horde@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_histories_seq TO
horde@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_datatree_seq TO
horde@localhost;

CREATE TABLE horde_sessionhandler (
session_id VARCHAR(32) NOT NULL,
session_lastmodified INT NOT NULL,
session_data LONGBLOB,

PRIMARY KEY (session_id)
) ENGINE = InnoDB;

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_sessionhandler TO
horde@localhost;

FLUSH PRIVILEGES;
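The statements above can also be saved to a file (the name horde.sql is arbitrary) and fed to the mysql client in one go:

```shell
mysql -u root -p < horde.sql                    # horde.sql holds the statements above
mysql -u root -p -e 'SHOW TABLES FROM horde;'   # should list horde_users, horde_prefs, ...
```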
---------------------------------------------------------------------------------------------------------------------------------------------

###############
# Squirrel Mail
###############
Squirrel Mail
If you have below mentioned error
Warning: main(../config/config.php): failed to open stream: No such file
or directory
in /usr/local/cpanel/base/3rdparty/squirrelmail/functions/global.php on
line 18

Fatal error: main(): Failed opening required
'../config/config.php' (include_path='/usr/local/cpanel/3rdparty/lib/php/:.') in /usr/local/cpanel/base/3rdparty/squirrelmail/functions/global.php on line 18

then run

/scripts/fixwebmail and if you get something like this:

chown: failed to get attributes of
'/usr/local/etc/cpanel/base/webmail/data': No such file or directory
chmod: failed to get attributes of
'/usr/local/etc/cpanel/base/webmail/data': No such file or directory

then execute via SSH:
cp -p /usr/local/cpanel/base/3rdparty/squirrelmail/config/config_default.php /usr/local/cpanel/base/3rdparty/squirrelmail/config/config.php

That's it. Refresh your squirrelmail inbox page now and you won't see
those errors there.
---------------------------------------------------------------------------------------------------------------------------------------------

If you are getting this error in your inbox of your squirrel mail
account

ERROR:
ERROR: Could not complete request.
Query: SELECT "INBOX.Drafts"
Reason Given: Unable to open this mailbox.

ERROR:
ERROR: Could not complete request.
Query: SELECT "INBOX.Sent"
Reason Given: Unable to open this mailbox.

then it simply means that there is some problem with the Sent and
Drafts folders in your email account. Create them at the proper
location, assign the proper permissions to both folders, and check that
the other configuration files are in the particular format needed by
the mail service you are using on the server.

For example, if IMAP is configured for the account, you can do this:

Go to

cd /home/UserName/mail/Domainname.com/emailIDUsername/.Sent

mkdir new; mkdir cur; mkdir tmp
chown username.mail *
---------------------------------------------------------------------------------------------------------------------------------------------

#################################
# Converting To maildir on Server
#################################
Converting To maildir on Server
Don't worry about converting to maildir on the server; just follow this:

A) /scripts/convert2maildir

choose option 1 (Backup all mail folders on this server), then
option 3 (Start maildir conversion process)

B) /scripts/courierup --force

C) /scripts/eximup --force

D) /scripts/upcp --force

E) /scripts/convert2maildir

choose option 3 to convert partially converted mail accounts

#################
# exim optimizing
#################
----------
For variables which already exist (cPanel servers), change their values to:

smtp_receive_timeout = 100s
smtp_connect_backlog = 12
smtp_accept_max = 12

For variables which do not exist and need to be added, add them below
'smtp_accept_max' (or at any location within the main section):



smtp_accept_max_per_connection = 3
smtp_accept_max_per_host = 5
smtp_accept_queue = 12
smtp_accept_keepalive = false
queue_only_load = 3
----------

Explaining the variables is beyond the scope of this note. This will
considerably reduce the volume of simultaneous incoming mails. On busy
servers you may see a delay in receiving mail, but it will not cause
permanent failure. If a client complains of delays in receiving mail,
you can increase the values (find the exact variable and increase it).
The above are optimized values which can be used on most servers (used
in Bobby's servers).


After changing the values, restart exim and check the number of exim
processes; there should be only 3 or 4 processes on average.
######
# spam
######
Stop PHP nobody Spammers

Update: May 25, 2005:
- Added Logrotation details
- Added Sample Log Output

PHP and Apache have a history of not being able to track which users are sending out mail through the PHP mail() function as the nobody user, allowing leaks in formmail scripts and malicious users to spam from your server without you knowing who or where.

Watching your exim_mainlog doesn't exactly help: you see the email going out but you can't track which user or script is sending it. This is a quick and dirty way to get around the nobody spam problem on your Linux server.

If you check your php.ini file you'll notice that your mail program is set to /usr/sbin/sendmail, and 99.99% of PHP scripts just use the built-in mail() function, so everything goes through /usr/sbin/sendmail =)

Requirements:
We assume you're using Apache 1.3x, PHP 4.3x and Exim. This may work on other systems but we've only tested it on a Cpanel/WHM Red Hat Enterprise system.

Time:
10 Minutes, Root access required.

Step 1)
Login to your server and su - to root.

Article provided by WebHostGear.com

Step 2)
Turn off exim while we do this so it doesn't freak out.
/etc/init.d/exim stop

Step 3)
Backup your original /usr/sbin/sendmail file. On systems using Exim MTA, the sendmail file is just basically a pointer to Exim itself.
mv /usr/sbin/sendmail /usr/sbin/sendmail.hidden

Step 4)
Create the spam monitoring script for the new sendmail.
pico /usr/sbin/sendmail

Paste in the following:


#!/usr/local/bin/perl

# use strict;
use Env;
my $date = `date`;
chomp $date;
open (INFO, ">>/var/log/spam_log") || die "Failed to open file ::$!";
my $uid = $>;                 # effective UID of whoever invoked sendmail
my @info = getpwuid($uid);
if ($REMOTE_ADDR) {           # called from a web request: log the caller
    print INFO "$date - $REMOTE_ADDR ran $SCRIPT_NAME at $SERVER_NAME\n";
}
else {                        # called from the shell/cron: log cwd and user
    print INFO "$date - $PWD - @info\n";
}
my $mailprog = '/usr/sbin/sendmail.hidden';
my $arg = '';
foreach (@ARGV) {
    $arg = "$arg" . " $_";    # pass the original sendmail arguments through
}

open (MAIL, "|$mailprog $arg") || die "cannot open $mailprog: $!\n";
while (<STDIN>) {             # relay the message body to the real sendmail
    print MAIL;
}
close (INFO);
close (MAIL);


Step 5)
Change the new sendmail permissions
chmod +x /usr/sbin/sendmail

Step 6)
Create a new log file to keep a history of all mail going out of the server using web scripts
touch /var/log/spam_log

chmod 0777 /var/log/spam_log

Step 7)
Start Exim up again.
/etc/init.d/exim start

Step 8)
Monitor your spam_log file for spam, try using any formmail or script that uses a mail function - a message board, a contact script.
tail -f /var/log/spam_log

Sample Log Output

Mon Apr 11 07:12:21 EDT 2005 - /home/username/public_html/directory/subdirectory - nobody x 99 99 Nobody / /sbin/nologin

Log Rotation Details
Your spam_log file isn't set to be rotated so it might get to be very large quickly. Keep an eye on it and consider adding it to your logrotation.

pico /etc/logrotate.conf

FIND:
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
rotate 1
}

ADD BELOW:

# SPAM LOG rotation
/var/log/spam_log {
monthly
create 0777 root root
rotate 1
}
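logrotate can be dry-run to confirm the new stanza parses before the monthly rotation fires:

```shell
logrotate -d /etc/logrotate.conf   # debug mode: prints what would be rotated, changes nothing
logrotate -f /etc/logrotate.conf   # optional: force a rotation now to test the stanza
```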

EXIM Mail Server Commands

To send mail from a server,

mail -v emailid
Press "." till finishes

Check the status from
tail -f /var/log/exim_mainlog

####################
# Queues information
####################
Queues information

Print a count of the messages in the queue:
Quote:
root@localhost# exim -bpc

Print a listing of the messages in the queue (time queued, size,
message-id, sender, recipient):
Quote:
root@localhost# exim -bp

Print a summary of messages in the queue (count, volume, oldest, newest,
domain, and totals):
Quote:
root@localhost# exim -bp | exiqsumm

Generate and display Exim stats from a logfile:
Quote:
root@localhost# eximstats /path/to/exim_mainlog

Generate and display Exim stats from a logfile, with less verbose
output:
Quote:
root@localhost# eximstats -ne -nr -nt /path/to/exim_mainlog

Generate and display Exim stats from a logfile, for one particular day:
Quote:
root@localhost# fgrep 2007-02-16 /path/to/exim_mainlog | eximstats

Print what Exim is doing right now:
Quote:
root@localhost# exiwhat

To delete frozen emails
Quote:
exim -bp | awk '$6~"frozen" { print $3 }' | xargs exim -Mrm

To deliver emails forcefully
Quote:
exim -qff -v -C /etc/exim.conf &

To delete nobody mails
Quote:
exim -bp | grep nobody | awk '{print $3}' | xargs exim -Mrm
delete all mails

exim -bp | awk '{print $3}' | xargs exim -Mrm

#####################
# Searching the queue
#####################
Searching the queue

Exim includes a utility that is quite nice for grepping through the
queue, called exiqgrep. Learn it. Know it. Live it. If you're not using
this, and if you're not familiar with the various flags it uses, you're
probably doing things the hard way, like piping `exim -bp` into awk,
grep, cut, or `wc -l`.

Search the queue for messages from a specific sender:
Quote:
root@localhost# exiqgrep -f [luser]@domain

Search the queue for messages for a specific recipient/domain:
Quote:
root@localhost# exiqgrep -r [luser]@domain

Print just the message-id as a result of one of the above two searches:
Quote:
root@localhost# exiqgrep -i [ -r | -f ] ...

Print a count of messages matching one of the above searches:
Quote:
root@localhost# exiqgrep -c [ -r | -f ] ...

Print just the message-id of the entire queue:
Quote:
root@localhost# exiqgrep -i

Managing the queue, Start a queue run:
Quote:
root@localhost# exim -q -v

Start a queue run for just local deliveries:
Quote:
root@localhost# exim -ql -v

Remove a message from the queue:
Quote:
root@localhost# exim -Mrm [ ... ]

Freeze a message:
Quote:
root@localhost# exim -Mf [ ... ]

Thaw a message:
Quote:
root@localhost# exim -Mt [ ... ]

Deliver a specific message:
Quote:
root@localhost# exim -M [ ... ]

Force a message to fail and bounce:
Quote:
root@localhost# exim -Mg [ ... ]

Remove all frozen messages:
Quote:
root@localhost# exiqgrep -z -i | xargs exim -Mrm

Remove all messages older than five days (86400 * 5 = 432000 seconds):
Quote:
root@localhost# exiqgrep -o 1296000 -i | xargs exim -Mrm

Freeze all queued mail from a given sender:
Quote:
root@localhost# exiqgrep -i -f luser@example.tld | xargs exim -Mf

View a message's headers:
Quote:
root@localhost# exim -Mvh

View a message's body:
Quote:
root@localhost# exim -Mvb

View a message's logs:
Quote:
root@localhost# exim -Mvl
--------------------------------------------------------------------------------
#############################
# Message-IDs and spool files
#############################
Message-IDs and spool files
Message-IDs and spool files

The message-IDs that Exim uses to refer to messages in its queue are
mixed-case alpha-numeric, and take the form of: XXXXXX-YYYYYY-ZZ. Most
commands related to managing the queue and logging use these
message-ids.
There are three -- count 'em, THREE -- files for each message in the
spool directory. If you're dealing with these files by hand, instead of
using he appropriate exim commands as detailed below, make sure you get
them all, and don't leave Exim with remnants of messages in the queue.

Files in /var/spool/exim/msglog contain logging information for each
message and are named the same as the message-id.

Files in /var/spool/exim/input are named after the message-id, plus a
suffix denoting whether it is the envelope header (-H) or message data
(-D).

These directories may contain further hashed subdirectories to deal with
larger mail queues, so don't expect everything to always appear directly
on the top /var/spool/exim/input or /var/spool/exim/msglog directories;
any searches or greps will need to be recursive. See if there is a
proper way to do what you're doing before working directly on the spool
files.
---------------------------------------------------------------------------------------------------------------------------------------------
###################
# Setting Exim Mail
###################
Setting Exim Mail
One common problem people have is an incorrectly setup mail system. Here
is a list of rules that must be followed:

1) hostname must not match any domain that is being used on the system.
Example, if you have a domain called domain.com and you want to recieve
mail on user@domain.com, you must *not* set your hostname to domain.com.
We recommend using server.domain.com instead. You must make sure that
you add the A record for server.domain.com so that it resolves.

2) The hostname must be in the /etc/virtual/domains file.

3) The hostname must *not* be in the /etc/virtual/domainowners file.

4) The hostname must resolve. If not, add the required A records to the
dns zone such that it does.

5) The directory /etc/virtual/hostname must exist
(eg: /etc/virtual/server.domain.com). It must not contain any files.

6) Any domains that you want to use for email (eg: domain.com) must be
in both the /etc/virtual/domains file and the /etc/virtual/domainowners
file. The directory /etc/virtual/domain.com must exist, as must the
files /etc/virtual/domain.com/passwd and
/etc/virtual/domain.com/aliases.

7) File permissions for virtual pop inboxes should be:
Quote:
/var/spool/virtual/domain.com 770 username:mail
/var/spool/virtual/domain.com/* 660 username:mail
If you've made any changes to your /etc/exim.conf file and require a
fresh copy, you can retrieve one by running
Quote:
wget -O /etc/exim.conf http://files.directadmin.com/services/exim4.conf
A restart of exim is required after installing a new exim.conf file.

8) Ensure your hostname does not contain any upper case letters.

9) Make sure that your main server IP has a reverse lookup on it.
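Several of the rules above can be checked mechanically. This is a rough
sketch only; `check_host` is a helper name made up here, and the two
files are passed in as arguments so the checks can be tried against
copies:

```shell
# Rough self-check for rules 1, 2, 3 and 8 against the /etc/virtual files.
check_host() {
    host="$1"; domains="$2"; owners="$3"
    if grep -qx "$host" "$domains"; then
        echo "OK: hostname is in domains"
    else
        echo "FAIL: hostname missing from domains"
    fi
    if grep -q "^$host" "$owners"; then
        echo "FAIL: hostname must not be in domainowners"
    else
        echo "OK: hostname not in domainowners"
    fi
    case "$host" in
        *[A-Z]*) echo "FAIL: hostname contains upper case" ;;
        *)       echo "OK: hostname is all lower case" ;;
    esac
}
```

Run it as `check_host "$(hostname)" /etc/virtual/domains
/etc/virtual/domainowners`. DNS resolution (rules 4 and 9) still has to
be checked separately with host or dig.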
---------------------------------------------------------------------------------------------------------------------------------------------

###################
# Exim Error Fixing
###################
Exim Error Fixing
550-Verification failed for user@email.com

This error will occur if exim cannot verify the sending email address.
This might be because the domain doesn't return an MX record, or the
email account itself doesn't exist.

To disable the check, edit your /etc/exim.conf and change

Quote:
require verify = sender
to
#require verify = sender

And then restart exim.
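The same edit can be scripted. A minimal sketch that comments the line
out after taking a backup (`disable_sender_verify` is a name made up
here; it assumes the stock cPanel /etc/exim.conf layout and GNU sed):

```shell
# Comment out the sender-verify requirement in an exim.conf, with backup.
disable_sender_verify() {
    conf="$1"
    cp -p "$conf" "$conf.bak"     # keep a backup first
    sed -i 's/^require verify = sender/#require verify = sender/' "$conf"
}
```

Usage: `disable_sender_verify /etc/exim.conf`, then restart exim.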
---------------------------------------------------------------------------------------------------------------------------------------------

550 Sender verify failed

If you face the 550 Sender verify failed error message while sending or
receiving mail for a domain, the possible causes are:

1) Reverse DNS entries for MX records
2) Acceptance of postmaster address in DNS Report.

Simply, check the "/etc/valiases/domain.com" file and replace "fail" or
"blackhole" with a valid email account on that domain.
Then restart exim on the server.

This will fix the error in "Acceptance of postmaster address" on DNS
Report page.
---------------------------------------------------------------------------------------------------------------------------------------------

550 Relaying denied

In this case the sendmail configuration has not been removed. Please
take a minute to read the sendmail documentation at:
http://www.sendmail.org/~ca/email/relayingdenied.html

There are two cases in which you may get an undesired 550 Relaying
denied:

1. local sender to external recipient: Then the relay (relay=host.domain
[IP.ADD.RE.SS]) must be in the relevant classes or maps:
* 8.9-8.12: access map: RELAY, class R, or class m
* HACKs for 8.8: HACK(use_ip), HACK(use_names), or _LOCAL_IP_MAP_

What you are trying to do is the "local sender to external recipient"
case described above, and you are getting a relaying denied. The only
solution is to add your IP to the relay map, as advised by the sendmail
team themselves. You can either add a whole IP range to the relay list,
or add each address manually every time.

Here are the telnet results to port 25:

[root@niKx ~]# telnet 72.249.32.237 25
Trying 72.249.32.237...
Connected to sqatest.com (72.249.32.237).
Escape character is '^]'.
220 server.rozinskiy.com ESMTP Sendmail 8.13.1/8.13.1; Mon, 30 Apr 2007
00:04:16 -0500
helo yahoo.com
250 server.rozinskiy.com Hello [59.93.40.156], pleased to meet you
mail from: sqatest@odessit.com
250 2.1.0 sqatest@odessit.com... Sender ok
rcpt to: mailzoom@gmail.com
550 5.7.1 mailzoom@gmail.com... Relaying denied. IP name lookup failed
[IP]

[root@niKx ~]# telnet 72.249.32.237 25
Trying 72.249.32.237...
Connected to rozinskiy.com (72.249.32.237).
Escape character is '^]'.
220 server.rozinskiy.com ESMTP Sendmail 8.13.1/8.13.1; Mon, 30 Apr 2007
00:11:55 -0500
helo odessit.com
250 server.rozinskiy.com Hello [59.93.40.156], pleased to meet you
mail from: odessit@odessit.com
250 2.1.0 odessit@odessit.com... Sender ok
rcpt to: odessit@odessit.com
250 2.1.5 odessit@odessit.com... Recipient ok

As you can see, the second attempt, to an address hosted on the server
itself, is accepted. This confirms that your IP address must be in the
relay list for outbound mail to be sent.

The Solution: implement a script so that an IP which logs into the
server for POP is also allowed to relay through the server dynamically
(SMTP-after-POP). Since sendmail does not provide this by default, you
can implement the approach described at
http://lena.franken.de/linux/daemons.html under "How to make SMTP after
POP run without modifying daemons". We have not verified its
functionality, but it is worth a try.
----------------------------------------------------------------------
##############
# Horde Issues
##############
Horde Issues
Horde scripts

/scripts/fullhordereset
/scripts/resethorde
---------------------------------------------------------------------------------------------------------------------------------------------

If there is a problem, say you cannot log in to Horde, try the
following: paste the statements below at the mysql prompt. First check
whether a database named horde already exists; if it does, drop it
before running the statements below.


create database horde;
use horde;
CREATE TABLE horde_users (
user_uid VARCHAR(255) NOT NULL,
user_pass VARCHAR(255) NOT NULL,
user_soft_expiration_date INT,
user_hard_expiration_date INT,

PRIMARY KEY (user_uid)
);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_users TO horde@localhost;

CREATE TABLE horde_prefs (
pref_uid VARCHAR(200) NOT NULL,
pref_scope VARCHAR(16) NOT NULL DEFAULT '',
pref_name VARCHAR(32) NOT NULL,
pref_value LONGTEXT NULL,

PRIMARY KEY (pref_uid, pref_scope, pref_name)
);

CREATE INDEX pref_uid_idx ON horde_prefs (pref_uid);
CREATE INDEX pref_scope_idx ON horde_prefs (pref_scope);
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_prefs TO horde@localhost;

CREATE TABLE horde_datatree (
datatree_id INT NOT NULL,
group_uid VARCHAR(255) NOT NULL,
user_uid VARCHAR(255) NOT NULL,
datatree_name VARCHAR(255) NOT NULL,
datatree_parents VARCHAR(255) NOT NULL,
datatree_order INT,
datatree_data TEXT,
datatree_serialized SMALLINT DEFAULT 0 NOT NULL,

PRIMARY KEY (datatree_id)
);

CREATE INDEX datatree_datatree_name_idx ON horde_datatree
(datatree_name);
CREATE INDEX datatree_group_idx ON horde_datatree (group_uid);
CREATE INDEX datatree_user_idx ON horde_datatree (user_uid);
CREATE INDEX datatree_serialized_idx ON horde_datatree
(datatree_serialized);

CREATE TABLE horde_datatree_attributes (
datatree_id INT NOT NULL,
attribute_name VARCHAR(255) NOT NULL,
attribute_key VARCHAR(255) DEFAULT '' NOT NULL,
attribute_value TEXT
);

CREATE INDEX datatree_attribute_idx ON horde_datatree_attributes
(datatree_id);
CREATE INDEX datatree_attribute_name_idx ON horde_datatree_attributes
(attribute_name);
CREATE INDEX datatree_attribute_key_idx ON horde_datatree_attributes
(attribute_key);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_datatree TO
horde@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_datatree_attributes TO
horde@localhost;

CREATE TABLE horde_tokens (
token_address VARCHAR(100) NOT NULL,
token_id VARCHAR(32) NOT NULL,
token_timestamp BIGINT NOT NULL,

PRIMARY KEY (token_address, token_id)
);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_tokens TO horde@localhost;

CREATE TABLE horde_vfs (
vfs_id BIGINT NOT NULL,
vfs_type SMALLINT NOT NULL,
vfs_path VARCHAR(255) NOT NULL,
vfs_name VARCHAR(255) NOT NULL,
vfs_modified BIGINT NOT NULL,
vfs_owner VARCHAR(255) NOT NULL,
vfs_data LONGBLOB,

PRIMARY KEY (vfs_id)
);

CREATE INDEX vfs_path_idx ON horde_vfs (vfs_path);
CREATE INDEX vfs_name_idx ON horde_vfs (vfs_name);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_vfs TO horde@localhost;

CREATE TABLE horde_histories (
history_id BIGINT NOT NULL,
object_uid VARCHAR(255) NOT NULL,
history_action VARCHAR(32) NOT NULL,
history_ts BIGINT NOT NULL,
history_desc TEXT,
history_who VARCHAR(255),
history_extra TEXT,

PRIMARY KEY (history_id)
);

CREATE TABLE horde_histories_seq (
id int(10) unsigned NOT NULL auto_increment,
PRIMARY KEY (id)
);

CREATE TABLE horde_datatree_seq (
id int(10) unsigned NOT NULL auto_increment,
PRIMARY KEY (id)
);


CREATE INDEX history_action_idx ON horde_histories (history_action);
CREATE INDEX history_ts_idx ON horde_histories (history_ts);
CREATE INDEX history_uid_idx ON horde_histories (object_uid);

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_histories TO
horde@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_histories_seq TO
horde@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON horde_datatree_seq TO
horde@localhost;

CREATE TABLE horde_sessionhandler (
session_id VARCHAR(32) NOT NULL,
session_lastmodified INT NOT NULL,
session_data LONGBLOB,

PRIMARY KEY (session_id)
) ENGINE = InnoDB;

GRANT SELECT, INSERT, UPDATE, DELETE ON horde_sessionhandler TO
horde@localhost;

FLUSH PRIVILEGES;
---------------------------------------------------------------------------------------------------------------------------------------------

###############
# Squirrel Mail
###############
Squirrel Mail
If you get the error below:
Warning: main(../config/config.php): failed to open stream: No such file
or directory
in /usr/local/cpanel/base/3rdparty/squirrelmail/functions/global.php on
line 18

Fatal error: main(): Failed opening required
'../config/config.php' (include_path='/usr/local/cpanel/3rdparty/lib/php/:.') in /usr/local/cpanel/base/3rdparty/squirrelmail/functions/global.php on line 18

then run

/scripts/fixwebmail. If you then get something like this:

chown: failed to get attributes of
'/usr/local/etc/cpanel/base/webmail/data': No such file or directory
chmod: failed to get attributes of
'/usr/local/etc/cpanel/base/webmail/data': No such file or directory

then execute via SSH:

cp -p /usr/local/cpanel/base/3rdparty/squirrelmail/config/config_default.php /usr/local/cpanel/base/3rdparty/squirrelmail/config/config.php

That's it. Refresh your SquirrelMail inbox page and you won't see those
errors any more.
---------------------------------------------------------------------------------------------------------------------------------------------

If you are getting this error in the inbox of your SquirrelMail
account:

ERROR:
ERROR: Could not complete request.
Query: SELECT "INBOX.Drafts"
Reason Given: Unable to open this mailbox.

ERROR:
ERROR: Could not complete request.
Query: SELECT "INBOX.Sent"
Reason Given: Unable to open this mailbox.

then it simply means there is a problem with the Sent and Drafts
folders in that email account. Create them in the correct location,
assign the proper permissions to both folders, and make sure they
follow the maildir layout expected by the mail services you are using
on the server.

For example, if the account is served over IMAP, you can do this:

Go to

cd /home/UserName/mail/Domainname.com/emailIDUsername/.Sent

mkdir new; mkdir cur; mkdir tmp
chown username:mail *
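The same repair, generalized for both folders, might look like this
sketch (`fix_maildir_folders` is a helper name made up here; the
account path is a placeholder, and the chown afterwards needs root):

```shell
# Create the new/cur/tmp subdirectories a maildir IMAP server expects
# inside the .Sent and .Drafts folders of one mail account.
fix_maildir_folders() {
    account_dir="$1"   # e.g. /home/UserName/mail/Domainname.com/emailIDUsername
    for box in .Sent .Drafts; do
        for sub in new cur tmp; do
            mkdir -p "$account_dir/$box/$sub"
        done
    done
}
```

Afterwards set ownership on the new directories, e.g.
`chown -R username:mail /home/UserName/mail/Domainname.com/emailIDUsername/.Sent`.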
---------------------------------------------------------------------------------------------------------------------------------------------

#################################
# Converting To maildir on Server
#################################
Converting To maildir on Server
Don't worry about converting to maildir on the server; just follow
this:

A) /scripts/convert2maildir

choose option 1 (Backup all mail folders on this server), then
option 3 (Start the maildir conversion process)

B) /scripts/courierup --force

C) /scripts/eximup --force

D) /scripts/upcp --force

E) /scripts/convert2maildir

choose option 3 to convert any partially converted mail accounts

#################
# exim optimizing
#################
----------
For variables which already exist (on cPanel servers), change their
values to:

smtp_receive_timeout = 100s
smtp_connect_backlog = 12
smtp_accept_max = 12

For variables which do not exist yet, add them below
'smtp_accept_max' (or at any location within the main section):



smtp_accept_max_per_connection = 3
smtp_accept_max_per_host = 5
smtp_accept_queue = 12
smtp_accept_keepalive = false
queue_only_load = 3
----------

Explaining the variables is beyond the scope of this note. These
settings considerably reduce the volume of simultaneous incoming mail.
On busy servers you may see delays in receiving mail, but no permanent
failures. If a client complains about delayed mail, find the relevant
variable and increase its value. The above are optimized values which
can be used on most servers (they are used on Bobby's servers).


After changing the values, restart exim and check the number of exim
processes; there should be only 3 or 4 on average.
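A quick way to count them (the bracketed pattern stops grep from
matching its own process):

```shell
# Count running exim processes; after the tuning above expect 3-4 on average.
exim_count=$(ps ax | grep '[e]xim' | wc -l)
echo "$exim_count"
```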
######
# spam
######
Stop PHP nobody Spammers

Update: May 25, 2005:
- Added Logrotation details
- Added Sample Log Output

PHP and Apache have a history of not being able to track which users
are sending out mail through the PHP mail() function as the nobody
user, allowing leaky formmail scripts and malicious users to spam from
your server without you knowing who or where.

Watching your exim_mainlog doesn't exactly help: you see the email
going out, but you can't track which user or script is sending it. This
is a quick and dirty way to get around the nobody spam problem on your
Linux server.

If you check your php.ini file you'll notice that your mail program is
set to /usr/sbin/sendmail, and 99.99% of PHP scripts just use the
built-in mail() function, so everything goes through /usr/sbin/sendmail.

Requirements:
We assume you're using Apache 1.3x, PHP 4.3x and Exim. This may work
on other systems but we've only tested it on a cPanel/WHM Red Hat
Enterprise system.

Time:
10 Minutes, Root access required.

Step 1)
Login to your server and su - to root.

Article provided by WebHostGear.com

Step 2)
Turn off exim while we do this so it doesn't freak out.
/etc/init.d/exim stop

Step 3)
Backup your original /usr/sbin/sendmail file. On systems using Exim MTA, the sendmail file is just basically a pointer to Exim itself.
mv /usr/sbin/sendmail /usr/sbin/sendmail.hidden

Step 4)
Create the spam monitoring script for the new sendmail.
pico /usr/sbin/sendmail

Paste in the following:


#!/usr/local/bin/perl

# use strict;
use Env;
my $date = `date`;
chomp $date;
open (INFO, ">>/var/log/spam_log") || die "Failed to open file ::$!";
my $uid = $>;
my @info = getpwuid($uid);
if ($REMOTE_ADDR) {
    # called from a web script: log who ran what, and where
    print INFO "$date - $REMOTE_ADDR ran $SCRIPT_NAME at $SERVER_NAME\n";
}
else {
    # called from the shell or cron: log the working directory and user
    print INFO "$date - $PWD - @info\n";
}
my $mailprog = '/usr/sbin/sendmail.hidden';
my $arg = '';
foreach (@ARGV) {
    $arg = "$arg" . " $_";
}

# pass the message on to the real sendmail binary, unchanged
open (MAIL, "|$mailprog $arg") || die "cannot open $mailprog: $!\n";
while (<STDIN>) {
    print MAIL;
}
close (INFO);
close (MAIL);


Step 5)
Change the new sendmail permissions
chmod +x /usr/sbin/sendmail

Step 6)
Create a new log file to keep a history of all mail going out of the server using web scripts
touch /var/log/spam_log

chmod 0777 /var/log/spam_log

Step 7)
Start Exim up again.
/etc/init.d/exim start

Step 8)
Monitor your spam_log file for spam, try using any formmail or script that uses a mail function - a message board, a contact script.
tail -f /var/log/spam_log

Sample Log Output

Mon Apr 11 07:12:21 EDT 2005 - /home/username/public_html/directory/subdirectory - nobody x 99 99 Nobody / /sbin/nologin
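With entries like the one above accumulating, a quick awk summary shows
which directories (and therefore which accounts) send the most mail.
The field layout assumed here is the `date - directory - user info`
form from the sample line; `summarize_spam_log` is a helper name made
up for this sketch:

```shell
# Summarize spam_log: count logged sends per originating directory.
summarize_spam_log() {
    awk -F' - ' '{print $2}' "$1" | sort | uniq -c | sort -rn
}
```

Usage: `summarize_spam_log /var/log/spam_log | head`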

Log Rotation Details
Your spam_log file isn't rotated by default, so it can grow very large
quickly. Keep an eye on it and consider adding it to your log rotation.

pico /etc/logrotate.conf

FIND:
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
rotate 1
}

ADD BELOW:

# SPAM LOG rotation
/var/log/spam_log {
monthly
create 0777 root root
rotate 1
}

MySQL Commands

To log in (from the unix shell). Use -h only if needed.

[mysql dir]/bin/mysql -h hostname -u root -p

Create a database on the sql server.

create database [databasename];

List all databases on the sql server.

show databases;

Switch to a database.

use [db name];

To see all the tables in the db.

show tables;

To see a table's field formats.

describe [table name];

To delete a db.

drop database [database name];

To delete a table.

drop table [table name];

Show all data in a table.

SELECT * FROM [table name];

Returns the columns and column information pertaining to the designated table.

show columns from [table name];

Show certain selected rows with the value "whatever".

SELECT * FROM [table name] WHERE [field name] = "whatever";

Show all records containing the name "Bob" AND the phone number '3444444'.

SELECT * FROM [table name] WHERE name = "Bob" AND phone_number = '3444444';

Show all records not containing the name "Bob" AND the phone number '3444444' order by the phone_number field.

SELECT * FROM [table name] WHERE name != "Bob" AND phone_number = '3444444' order by phone_number;

Show all records starting with the letters 'bob' AND the phone number '3444444'.

SELECT * FROM [table name] WHERE name like "Bob%" AND phone_number = '3444444';

Use a regular expression to find records. Use "REGEXP BINARY" to force case-sensitivity. This finds any record beginning with a.

SELECT * FROM [table name] WHERE rec RLIKE "^a";

Show unique records.

SELECT DISTINCT [column name] FROM [table name];

Show selected records sorted in ascending (asc) or descending (desc) order.

SELECT [col1],[col2] FROM [table name] ORDER BY [col2] DESC;

Return number of rows.

SELECT COUNT(*) FROM [table name];


Sum column.

SELECT SUM([column name]) FROM [table name];

Join tables on common columns.

SELECT lookup.illustrationid, lookup.personid, person.birthday FROM lookup
LEFT JOIN person ON lookup.personid = person.personid;

This joins the birthday in the person table to the lookup table's primary illustration id.

Switch to the mysql db. Create a new user.

INSERT INTO [table name] (Host,User,Password) VALUES('%','user',PASSWORD('password'));

Change a user's password (from the unix shell).

[mysql dir]/bin/mysqladmin -u root -h hostname.blah.org -p password 'new-password'

Change a user's password (from the MySQL prompt).

SET PASSWORD FOR 'user'@'hostname' = PASSWORD('passwordhere');

Allow the user "bob" to connect to the server from localhost using the password "passwd"

grant usage on *.* to bob@localhost identified by 'passwd';

Switch to the mysql db. Give a user privileges for a db.

INSERT INTO [table name] (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv) VALUES ('%','databasename','username','Y','Y','Y','Y','Y','N');

or

grant all privileges on databasename.* to username@localhost;

To update info already in a table.

UPDATE [table name] SET Select_priv = 'Y',Insert_priv = 'Y',Update_priv = 'Y' where [field name] = 'user';

Delete a row(s) from a table.

DELETE from [table name] where [field name] = 'whatever';

Update database permissions/privileges.

FLUSH PRIVILEGES;

Delete a column.

alter table [table name] drop column [column name];

Add a new column to db.

alter table [table name] add column [new column name] varchar (20);

Change column name.

alter table [table name] change [old column name] [new column name] varchar (50);

Make a unique column so you get no dupes.

alter table [table name] add unique ([column name]);

Make a column bigger (here, widening it to 100 characters).

alter table [table name] modify [column name] VARCHAR(100);

Delete unique from table.

alter table [table name] drop index [column name];

Load a CSV file into a table.

LOAD DATA INFILE '/tmp/filename.csv' replace INTO TABLE [table name] FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (field1,field2,field3);

Dump all databases for backup. Backup file is sql commands to recreate all db's.

[mysql dir]/bin/mysqldump -u root -ppassword --opt >/tmp/alldatabases.sql

Dump one database for backup.

[mysql dir]/bin/mysqldump -u username -ppassword --databases databasename >/tmp/databasename.sql

Dump a table from a database.

[mysql dir]/bin/mysqldump -c -u username -ppassword databasename tablename > /tmp/databasename.tablename.sql


Restore database (or database table) from backup.

[mysql dir]/bin/mysql -u username -ppassword databasename < /tmp/databasename.sql

Create Table Example 1.

CREATE TABLE [table name] (firstname VARCHAR(20), middleinitial VARCHAR(3), lastname VARCHAR(35), suffix VARCHAR(3), officeid VARCHAR(10), userid VARCHAR(15), username VARCHAR(8), email VARCHAR(35), phone VARCHAR(25), groups VARCHAR(15), datestamp DATE, timestamp time, pgpemail VARCHAR(255));

Create Table Example 2.

create table [table name] (personid int(50) not null auto_increment primary key, firstname varchar(35), middlename varchar(50), lastname varchar(50) default 'bato');

dump

From the SSH command line:

mysqldump -h localhost DATABASE > DATABASE.sql

MySQL password file

/root/.my.cnf

db restore

mysql d77bcom_M < forumbackup-New-1.sql

grant all privileges on *.* to 'd77bcom_M'@'localhost' identified by 'M' with grant option;

flush privileges;

Maintaining Log Files

Almost any computer system has log files. Fortunately, many of these systems have a facility in place that rotates or trims these log files. Sometimes, a specific application can even rotate or manage its log files internally.

That having been said, there are many reasons why you, as an administrator, might need to be concerned with log files. For instance, if your system does not rotate its own logs, you will have to set up your own log rotation system. Likewise, you may have applications (custom or third party) on your system that have continuously growing log files and do not manage these logs in any way. If you are lucky, the log rotation facility provided with your system can be extended to handle additional log files. If it can't, you may have to add a custom rotation system just to handle any extra log files on your system.

If you do have to add a new rotation system, you may want to use that system for all of your log rotation needs. Doing so can simplify your life and add more power and flexibility to your log rotation possibilities. A custom rotation system can also be applied to multiple platforms and can support the exact style of log rotation you require.

Fortunately, there are several log rotation programs available that you can deploy on almost any system. I will present a few of the more popular ones in this section. If you are already using (or are thinking about using) GNU cfengine, it can also take care of log rotations for you, thus eliminating the need for an additional program.

Red Hat's logrotate
Red Hat has a log rotation system called logrotate. It has shipped with Red Hat Linux for as long as I can remember and has slowly evolved over the years. It is written in C and can be directly compiled on or easily ported to almost any system. Its source can be found at ftp://ftp.redhat.com/pub/redhat/linux/code/logrotate/.

The program is simple enough. The actual executable is typically installed as /usr/sbin/logrotate. The main configuration file is usually /etc/logrotate.conf (but you can specify any file on the command line). The program also maintains the data file /var/lib/logrotate.status (which can also be specified on the command line with the -s option).

On Red Hat Linux, logrotate is set to execute once per day through the cron daemon. You can specify any number of configuration files on the command line, but only /etc/logrotate.conf is specified in the cron job. That configuration file contains global settings and includes the directory /etc/logrotate.d/, which causes the configuration files in that directory to be read as well. This directory can contain any number of files, usually one file per application or set of log files.

Here is the main portion of Red Hat's default /etc/logrotate.conf:

# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
#compress

# RPM packages drop log rotation information into this directory
include /etc/logrotate.d

These global options tell logrotate to rotate each log file once per week and to keep four rotated files. logrotate will create a new empty file once the rotation has taken place and can optionally compress the rotated files. Finally, the include directive is used to include additional configuration files from the /etc/logrotate.d/ directory. The rest of this file follows:

/var/log/wtmp {
monthly
create 0664 root utmp
rotate 1
}

This block of code rotates the file /var/log/wtmp. Everything within the braces applies to this file only. In this case, the global weekly rotation setting is changed to monthly and the rotation limit is reduced to one. Once the file has been rotated, it is replaced with an empty file with permissions of 0664, an owner of root, and a group of utmp.

There are actually about 40 commands you can use within the configuration files. These commands allow you to rotate files based on their size, mail the log entries, and execute pre and post rotate scripts. They are all discussed in the man page provided with the program. I am not going to discuss all of these commands in this section, but I will provide one more example that is a bit more complicated:

/var/log/samba/*.log {
notifempty
missingok
sharedscripts
copytruncate
postrotate
/bin/kill -HUP `cat /var/run/samba/smbd.pid \
/var/run/samba/nmbd.pid 2> /dev/null` 2> /dev/null || true
endscript
}

This configuration file is placed in the /etc/logrotate.d/ directory and is part of the Samba package. The first thing you should notice is that a wildcard is used in the log file specification. This helps make the configuration files much less complex. A file will not be rotated if it is empty (notifempty). If no files are found, logrotate will not complain (missingok). Any scripts within this block are run once after all files have been rotated (sharedscripts) and not after each individual rotation. Finally, when the files are rotated, they are copied to a new filename and then the original file is truncated (copytruncate).

This example also has a postrotate script. This script sends the HUP signal to the two Samba processes after the rotation has taken place. As is the case with many daemons, this signal causes them to close and reopen any log files. If this is not done, they will keep writing to the log file after it has been truncated, causing file corruption.

For more information on the commands available within the logrotate configuration files, see the man page provided with the program.
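Putting the common commands together, a drop-in file for a hypothetical
application (say /etc/logrotate.d/myapp; both the path and the log
location are made up for illustration) would follow the same pattern as
the Red Hat examples above:

```
# /etc/logrotate.d/myapp -- hypothetical custom application logs
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    create 0640 root root
}
```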

Rotating Logs with Spinlogs
The Spinlogs program is a very simple, yet useful, file rotation script that can run on just about any system. Its only requirement, apart from the standard UNIX commands such as egrep, mv, and cp, is the Korn shell (ksh).

You can download Spinlogs from

http://isle.wumpus.org/cgi-bin/pikie?SpinLogs. Its archive contains the shell script, a sample configuration file, and its license file (the "Artistic License"). Here is an example configuration file:

# log_filename owner:group mode num size time flags pid_file signal
/var/log/messages root:root 0600 14 * D Z

This simply says to rotate /var/log/messages every time the program is executed (because no maximum size is specified). The meanings for each field are as follows:

log_filename: The file to be rotated, surprisingly enough. No wildcards are allowed.

owner:group: The ownership to be applied to the rotated log files as well as the new blank log file.

mode: The permissions for the rotated and original log files.

num: How many rotated log files to keep. In this case, if the program is being run daily, two weeks' worth of logs will be stored. If the program were run hourly, you'd only have 14 hours of log archives.

size: The maximum size of a file, in kilobytes. A file will only be rotated if it is larger than this limit. If set to *, the file will always be rotated.

time: This is only here to maintain the same file format as FreeBSD's newsyslog log rotation utility. The Spinlogs program does not support time-based rotations.

flags: This field can contain any number of the following flags:

D: Do not create an empty log file in its place after rotation.

Z: Use this flag to compress files after rotation.

0: Don't compress the .0 rotated log (the most recent archive).

B: This says the log file is binary and a rotation message should not be placed in the file.

pid_file: If specified, this file should contain a PID for a program to signal after the rotation has completed.

signal: This is the signal to be sent to the process (such as HUP).
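Putting the fields together, a size-based entry for a hypothetical
daemon log might read as follows (myapp and its pid file are made up;
this rotates the log once it passes 1024 KB, keeps 7 compressed
archives, and sends HUP to the daemon afterwards):

```
# log_filename owner:group mode num size time flags pid_file signal
/var/log/myapp.log root:root 0640 7 1024 * Z /var/run/myapp.pid HUP
```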

Once you have created your configuration file, you should run the spinlogs command on a regular basis (usually daily) from your system's cron daemon. A typical invocation would be:

% /usr/local/sbin/spinlogs \
    -c /usr/local/etc/spinlogs.conf \
    -p /var/run/syslogd.pid

This specifies a configuration file of /usr/local/etc/spinlogs.conf and identifies /var/run/syslogd.pid as the file that contains the process ID for the syslog program. The HUP signal will be sent to this process ID after Spinlogs has completed execution.

Log Rotation with cfengine
If you have read the discussions of GNU cfengine thus far (particularly section 6.4), you should realize that cfengine can do just about anything on your systems automatically. When it comes to log rotation, it can get the job done just about as well as any other program, and it can do so on any of your systems.

Unfortunately, the section of cfengine that can perform these rotations—the disable section—is not exactly obvious. The disable section can be placed in cfagent.conf along with the rest of your configuration directives. Like many other sections, the disable section was originally created to delete or rename unwanted files, but it is flexible enough to handle other tasks, such as log rotation.

Like many log rotation systems, you can rotate based on time or file size, and you can store as many rotated files as you desire. This first example will cause cfengine to rotate your web server's access log every Sunday:

disable:
Sunday::
/var/log/httpd/access_log rotate=52

The Sunday class will only be defined on Sundays, so the rotation will only take place on that day. The rotate argument specifies that (at most) 52 rotated files should be present. Here, since the rotation is performed once per week, one year's worth of logs will be stored (52 weeks). One problem with this section is that it assumes cfagent is only executed one time on each Sunday. If it is executed more than once, the rotation will be performed multiple times for that week (and some rotated files will contain less than a day's worth of data).

Another (perhaps better) option is to rotate based on the log file's size. Here is another example:

disable:
/var/log/httpd/access_log size=>100mb rotate=4

This only keeps four previous log files, and the rotation only takes place if the file is greater than 100Mb. It is important to remember that, if you run cfagent only once per day, the file will only be rotated at most once per day. The rotated file can, however, be much larger than 100Mb if your daily traffic is significant. Even if the file grows to one gigabyte in a day, it will only be rotated the next time cfagent executes.

You can also perform certain actions (like sending the HUP signal to a daemon) by defining a class when a rotation occurs. For example:


disable:
/var/log/httpd/access_log size=>100mb rotate=4 define=http_rotated

processes:
http_rotated::
"httpd" signal=hup

Whenever the log file is rotated, the http_rotated class is defined. Then, in the processes section, the HUP signal is sent to all httpd processes only if this class is defined.

Synchronizing Time

If you are running distributed applications, consistent time across the various systems may be critical. When using networked filesystems (such as NFS), you will have problems if the client's clock is not close to the server's: files written by a system whose clock is ahead will have access and modification times in the future, which confuses applications that rely on those timestamps to make decisions. Other network services, such as NIS+, also depend on reasonably synchronized system clocks.

Setting the Time Zone
The first thing you must do is make sure the time zone on your systems is correct. The time zone is important because time synchronization usually uses Coordinated Universal Time (UTC), which is not the time you are used to seeing on your system. How you set the time zone is, unfortunately, completely system dependent. Here are some pointers for some popular systems:

Red Hat Linux: /etc/sysconfig/clock (or run timeconfig)

Debian GNU/Linux: /etc/timezone

Solaris: /etc/TIMEZONE (symbolic link to /etc/default/init)

AIX: /etc/environment and /etc/profile

FreeBSD: /etc/localtime

HP-UX: /etc/TIMEZONE

Tru64: /etc/svid3_tz (or run timezone)
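Whichever file your platform uses, you can sanity-check the result from the shell: the date command honors the TZ environment variable, so you can compare the configured zone against an explicit one. (The zone names below assume a system with a standard zoneinfo database installed.)

```shell
# Print the current time in the system's configured time zone
date

# %Z prints the time zone abbreviation; forcing TZ lets you compare
TZ=UTC date +%Z                                # prints: UTC
TZ=America/New_York date "+%Y-%m-%d %H:%M %Z"
```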

Synchronizing Your Clocks
On most systems, you have two ways to synchronize time. One involves using the rdate or ntpdate commands to synchronize your clocks on a regular basis (hourly or daily from a cron job). The other option involves running a Network Time Protocol (NTP) time daemon (called xntpd on some systems and ntpd on others).

The rdate Command
Most UNIX variants come with the rdate command. This command can quickly synchronize a system's clock to a well-known timeserver using the RFC 868 protocol. Here is an example on a Linux system:

% rdate -s time.nist.gov

Although used in this example, the -s switch is not standard and should not be used on most systems (on Solaris, for example, the command rdate time.nist.gov would accomplish the same task). The server specified here (time.nist.gov) is run by the National Institute of Standards and Technology (NIST) Laboratories in Boulder, Colorado. You can find plenty of other public timeservers using some quick web searches.

You can set up your own server easily since most versions of inetd internally support this protocol. The protocol is not very accurate (i.e., the times might be off by a few milliseconds), but it is more than suitable for most situations. You would need to run this command from cron on a regular basis (usually at least once per day, or even hourly).
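A crontab entry for an hourly synchronization might look like the following sketch; the path, the -s switch (Linux-specific, as noted above), and the server name are assumptions to adapt to your platform:

```
# Synchronize the clock at ten past every hour, discarding output
10 * * * * /usr/bin/rdate -s time.nist.gov > /dev/null 2>&1
```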

Using the Network Time Protocol
NTP has gone through several revisions and has several associated Request for Comments documents (RFCs are published Internet standards documents). It provides very accurate system time updates (to the millisecond). NTP is usually used with a daemon that continuously updates the system clock and may also provide services to other clients. The software is included by default on many systems and can be installed on most others.

If you like the simplicity of the rdate command but would rather use NTP, you can simply pick an appropriate server and use the ntpdate command the same way you used rdate in the previous section. This is the easiest, but least accurate and efficient, way to use NTP on a system.

The NTP software suite allows for much more advanced uses that are beyond the scope of this book. The daemon (either ntpd or xntpd) can determine the time from a variety of sources including one or more remote systems or a local device such as a GPS. The same daemon can allow other clients to synchronize their clocks from the system. It also continuously monitors any time drift present in the internal clock, and then regularly updates the clock, accounting for known drift, even if time servers are not currently available. This also reduces the chance of the system clock having sudden significant jumps in time. Quite a bit of information about the NTP protocol can be found at http://www.ntp.org.
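As a starting point, a minimal client-only configuration (typically /etc/ntp.conf) might look like this sketch; the server names are placeholders, and the driftfile path varies by system:

```
# Upstream servers to synchronize against (placeholders; use nearby
# public servers or your organization's internal time servers)
server ntp1.example.com
server ntp2.example.com

# File where the daemon records the clock's measured drift so it can
# compensate across restarts
driftfile /var/lib/ntp/drift
```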

One thing to keep in mind: NTP will not update the system clock unless it is already reasonably close to the server's time. For this reason, you should use rdate or ntpdate to bring the clock close first, and then start your NTP daemon. You might want to perform this rdate every time the system boots to make sure it starts out with a fairly accurate value.
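On a typical SysV-style system, that boot-time ordering could be captured in a startup-script fragment like the following sketch; the binary paths and the server name are assumptions for your environment:

```
# Step the clock close to the correct time before the daemon starts;
# ntpd will not synchronize if the initial offset is too large
/usr/sbin/ntpdate ntp1.example.com

# Now start the NTP daemon, which keeps the clock disciplined from here on
/usr/sbin/ntpd
```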

Updating the Hardware Clock
On some systems (such as x86-based systems running Linux), an updated time within the operating system kernel only lasts until the next reboot. On these types of systems, you should set your hardware clock whenever you set the time. Alternatively, setting the hardware clock once per day from cron would be adequate. Implementing one of these methods prevents the system from booting with the incorrect time.

I can't profess to know the details of all hardware platforms and operating systems. The only system on which I have personally experienced this is x86-based Linux. On these systems, the command /sbin/hwclock --systohc copies the time as stored in the kernel into the hardware (BIOS) clock.
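On such a Linux system, a cron entry like this sketch keeps the hardware clock in step once per day (the time of day chosen here is arbitrary):

```
# Copy the kernel's current time into the hardware clock nightly
30 1 * * * /sbin/hwclock --systohc
```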

Alternatively, you could execute your time synchronization command during system startup as well as on a regular basis so that your system always starts with the correct time.

The Basics of Using SSH

If you are already familiar with the basic use of SSH, you might want to just skim this section. If, on the other hand, you are an SSH novice, you are in for quite a surprise. SSH is very easy and efficient to use and can help with a wide variety of tasks.

The commands in this section work fine without any setup (assuming you have the SSH daemon running on the remote host). If nothing has been configured, all of these commands use password authentication just like Telnet, except that with SSH, the password (and all traffic) is sent over an encrypted connection.

To initiate a connection to any machine as any user and to start an interactive shell, use this command:

% ssh user@host

In addition to connecting to remote hosts, I often use SSH to log in as root on my local machine because it is quicker when using ssh-agent, as discussed later in this chapter.

You can also execute any command in lieu of starting an interactive shell. This displays memory usage information on the remote host:

% ssh user@host free
             total       used       free     shared    buffers     cached
Mem:        126644     122480       4164       1164      29904      36300
-/+ buffers/cache:      56276      70368
Swap:       514072      10556     503516

Finally, there is the scp command that allows you to copy files between hosts using the SSH protocol. The syntax is very similar to the standard cp command, but if a filename contains a colon, it is a remote file instead of a local file. Like the standard ssh command, if no username is specified on the command line, your current username is used. If no path is specified after the colon, the user's home directory is used as the source or destination directory. Here are a few examples:

% scp local_file user@host:/tmp/remote_file
% scp user@host:/tmp/remote_file local_file
% scp user1@host1:file user2@host2:

The last example copies the file named file from user1's home directory on host1 directly into user2's home directory on host2. Since no filename is given in the second argument, the original filename is used (file, in this case).